| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
6832622 | pes2o/s2orc | v3-fos-license | Socioeconomic Impact of Cancer in Member Countries of the Association of Southeast Asian Nations (ASEAN): the ACTION Study Protocol
Introduction
Cancer has been cited as the leading cause of mortality globally, accounting for 13% (or 7.4 million) of all deaths annually with 70% of these occurring in low and middle income countries (WHO, 2010). It is projected that mortality from cancer will increase significantly over the coming years with ~13 million deaths per year worldwide expected by 2030. The trend is even more striking in Asia where the number of deaths per year in 2002 of 3.5 million is expected to increase to 8.1 million by 2020 (Lancet, 2010).
As the availability of medical technologies and treatments expands across regions, the economic burden of cancer treatments, not only to health systems but to individuals and their households, will inevitably become more pronounced. These impacts will be felt most strongly in socioeconomically disadvantaged groups, particularly (although not exclusively) those in low- and middle-income countries where social safety nets, such as universal health insurance, are less likely to be present. A consequence of this is that such illness, particularly through the costs associated with its treatment and its impact on people's ability to work, can be a major cause of poverty. The ACTION (Asean CosTs In ONcology) study will examine such economic impact of cancer on households in the Association of Southeast Asian Nations (ASEAN) region. The ASEAN is a geopolitical and economic organization of ten independent countries located in Southeast Asia: Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Viet Nam. This region contains more than half a billion people, almost 9% of the world population, spread over highly diverse countries, from economic powerhouses like Singapore to poorer economies such as Laos, Cambodia and Myanmar.
Figure 1. Interview Flowchart
The ACTION study will assess the incidence of financial catastrophe and economic hardship associated with cancer. In addition it will examine the impact of cancer on quality of life, and the variations in the way in which patients within and across ASEAN countries are managed. Findings can raise awareness of the extent of the cancer problem, identify priorities for further research and catalyze political action to put in place effective cancer control health care policies.
Overview
The ACTION study is a longitudinal study of 10,000 hospital patients with a first time diagnosis of cancer. Patients will be followed throughout the first year after their cancer diagnosis. The primary aim is to assess the impact of cancer on household income and quality of life.
Study population
Men and women aged 18 years and older will be eligible to participate in this study if they fulfill the following criteria:
-A first time cancer diagnosis received in hospital in the last 6 weeks;
-Aware of their new cancer diagnosis;
-Conscious and with sufficient cognitive capacity to give consent and complete an interview;
-Willing to participate in the baseline and two follow-up interviews.
Men and women will be excluded if they are participating in a clinical trial.
Sites
A cross-section of public and private hospitals as well as cancer centers across 8 ASEAN countries (Cambodia, Laos, Indonesia, Malaysia, Myanmar, Philippines, Thailand and Viet Nam) will participate.
Recruitment and consent
At each site, consecutive patients receiving a new diagnosis of cancer and fulfilling the inclusion criteria will be approached to participate in the study. The treating physician will identify eligible patients and provide each patient with the patient information sheet and informed consent form. The research officer will then contact the patient and seek his or her consent for participation. This entails participation in the baseline interview and two follow-up interviews (at approximately 3 and 12 months after the baseline interview) and consent to examine their individual patient files (see Figure 1).
A screening log will be completed by the research officer at each participating site, with details of all patients approached to participate. Age, sex, type of cancer and area code of home town (optional) will be collected from non-responders.
Sample size
The initial target for patient recruitment is between 1000 and 2500 per country, with a maximum of 10,000 patients in total. Countries will recruit a sample of consecutive patients diagnosed with cancer. A sample of 10,000 patients allows us to reliably estimate (within a maximum of ± 1% error) the prevalence of financial catastrophe, illness induced poverty, clinically relevant decrease in quality of life, depression and anxiety, across the region.
Similarly, for the country-specific analyses, a sample of at least 1000 patients per country allows us to estimate the prevalence of financial catastrophe and all secondary outcome measures with acceptable errors (i.e. a maximum of ± 3% error).
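These error bounds follow from the standard normal-approximation margin of error for a proportion. The short sketch below reproduces them under assumptions the protocol does not state explicitly: a 95% confidence level (z = 1.96) and the worst-case prevalence p = 0.5.

```python
# Margin of error for a prevalence estimate (normal approximation).
# Assumptions not stated in the protocol: 95% confidence (z = 1.96)
# and worst-case prevalence p = 0.5.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 10,000: +/- {margin_of_error(10_000):.3%}")  # ~ +/- 0.98%, i.e. max +/- 1%
print(f"n =  1,000: +/- {margin_of_error(1_000):.3%}")   # ~ +/- 3.1%, i.e. about +/- 3%
```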
Primary outcome
Incidence of financial catastrophe following treatment for cancer: financial catastrophe is defined as out-of-pocket direct health care expenditure at 12 months exceeding 30% of household income, as assessed over the 12 months of follow-up.
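As a hedged illustration of how the two income-based outcomes can be operationalized at analysis time, the sketch below encodes the 30% out-of-pocket threshold for the primary outcome and the poverty-line crossing used for the illness-induced poverty secondary outcome defined below. The field names and the example record are hypothetical, not taken from the protocol.

```python
# Illustrative classification of the primary outcome (financial catastrophe)
# and the illness-induced poverty secondary outcome defined below.
# Field names and the example record are hypothetical, not from the protocol.

def financial_catastrophe(oop_expenditure, household_income, threshold=0.30):
    """Out-of-pocket health spending over 12 months exceeding 30% of household income."""
    return oop_expenditure > threshold * household_income

def illness_induced_poverty(income_baseline, income_12m, poverty_line):
    """Household above the country-specific poverty line at baseline, below it at 12 months."""
    return income_baseline > poverty_line and income_12m < poverty_line

record = {"oop": 1500.0, "income": 4000.0, "income_12m": 2200.0, "poverty_line": 2500.0}
print(financial_catastrophe(record["oop"], record["income"]))        # True
print(illness_induced_poverty(record["income"], record["income_12m"],
                              record["poverty_line"]))               # True
```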
Secondary outcomes
-Illness-induced poverty: This will be assessed by a reported change in household income which brings a household from above the prevailing (country-specific) poverty line at baseline to below that line at 12 months.
-Quality of life (QoL) (generic): This will be assessed on the basis of change in health utility over a 12-month period, as measured by the EQ-5D.
-Quality of life (QoL) (cancer specific): This will be assessed by a change in quality of life over a 12-month period, as assessed by the EORTC QLQ-C30.
-Psychological distress: The presence of psychological distress (anxiety and depression) at baseline, 3 and 12 months will be assessed using the HADS.
-Hospital costs: These are the costs of hospitalization and hospital treatment incurred by patients in the 12 months after primary diagnosis. These costs will be assessed by examining the patient's medical file at 3 and 12 months, as well as information provided by the patient in the follow-up interviews.
-Non-hospital health care costs: These are the health care costs incurred by patients outside of hospital in the 12 months after primary diagnosis. These costs will be assessed during the interviews. The patient is given a cost diary that can assist in answering the health care utilization questions.
-Out-of-pocket costs: These represent the hospital and non-hospital health care costs which are directly incurred by patients at the point of delivery and not reimbursed by insurance. These costs will be assessed during the interviews. The patient is given a cost diary that can assist in estimating out-of-pocket expenses.
-Indirect costs: This is a measure of the change in household income in the 12 months of the study, as assessed in the suite of questionnaires.
-Economic hardship: This is the inability to make necessary household payments, such as housing, energy, food, and health care costs, as assessed in the suite of questionnaires.
-Disease status: Response to treatment (i.e. complete response, partial response, stable disease or progressive disease) is assessed at 12 months.
-Survival status: Vital status of the patient will be collected at both follow-up assessments. When a patient has died the cause of death will be determined, if possible.
Questionnaires
A suite of questionnaires will be interviewer-administered at baseline and at both follow-up visits. All questions have been translated into local languages. Table 1 provides the domains of the questionnaires, the sources from which the questions are drawn, and when they will be administered.
Household economic hardship is to be determined by a series of questions about failure to make household payments and whether any organization or individual provided help to meet these payments. Similar questions have been successfully used in several studies investigating economic hardship (Heeley et al., 2009; Wei et al., 2010; Essue et al., 2011; Hackett et al., 2011).
Health care utilization and out-of-pocket costs will be assessed using a questionnaire developed within the study. Treatment costs will be assessed by abstracting data from consenting participants' medical files.
Quality of life is to be assessed by the EORTC QLQ-C30 from the European Organisation for Research and Treatment of Cancer and the EQ-5D by the EuroQol group. The EORTC QLQ-C30 is a self-administered questionnaire specifically developed to assess the quality of life of cancer patients. The questionnaire consists of 30 items. After transformation, the EORTC QLQ-C30 has several multi-item functional subscales (e.g. physical, emotional functioning), multi-item symptom scales (e.g. fatigue, pain), a global health subscale, and single items to assess symptoms (e.g. sleep disturbance). Scores on the functional and global health scales range from 0 to 100, where a higher scale score represents a higher level of functioning (Aaronson et al., 1993).
The EuroQol (EQ-5D) is a short self-reported generic health-related quality of life instrument that consists of two parts: a self-classifier and a Visual Analogue Scale (VAS) (EuroQoL Group 1990). The self-classifier comprises five items relating to problems in the following domains: mobility, self-care, usual activities, pain/discomfort and anxiety/depression. Each domain has three levels, namely, no problems, some problems and severe problems. Combinations of these categories define a total of 243 health states. An EQ-5D health state can be converted to a single summary index by applying a formula that essentially attaches weights to each of the levels in each dimension. Index scores generally range between 0 (death) and 1 (full health) (Tsuchiya et al., 2002;Tongsiri and Cairns, 2011).
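To make the index conversion concrete, the sketch below subtracts dimension-level decrements from full health for a five-digit EQ-5D-3L state. The decrement values and the constant term are hypothetical placeholders for illustration only; a real analysis would apply a published country value set (for example, the Thai value set of Tongsiri and Cairns, 2011).

```python
# Converting an EQ-5D-3L health state to a single index by subtracting
# dimension/level decrements from full health. The decrement values and the
# constant term below are hypothetical placeholders; real analyses apply a
# published country value set (e.g. Tongsiri and Cairns, 2011 for Thailand).

DECREMENTS = {1: 0.0, 2: 0.05, 3: 0.20}  # hypothetical weights per problem level
CONSTANT = 0.081                          # hypothetical term for any non-11111 state

def eq5d_index(state: str) -> float:
    """state: five digits (mobility, self-care, usual activities, pain/discomfort,
    anxiety/depression), each 1-3, giving 3**5 = 243 possible health states."""
    levels = [int(c) for c in state]
    assert len(levels) == 5 and all(1 <= lv <= 3 for lv in levels)
    if levels == [1, 1, 1, 1, 1]:
        return 1.0  # full health
    return 1.0 - CONSTANT - sum(DECREMENTS[lv] for lv in levels)

print(eq5d_index("11111"))            # 1.0
print(round(eq5d_index("11223"), 3))  # 1 - 0.081 - (0.05 + 0.05 + 0.20) = 0.619
```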
The Hospital Anxiety and Depression Scale (HADS) is to be used to collect information on psychological distress and more specifically the presence of depression and anxiety. The HADS is a self-report instrument designed for use with medically ill patients. Scores of 8 (possible range 0-21) or more on the depression or anxiety subscales are classified as 'depressed' or 'anxious', respectively (Zigmond and Snaith 1983).
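Scoring the HADS reduces to summing two 7-item subscales and applying the cutoff of 8. A minimal sketch, with hypothetical item responses:

```python
# HADS scoring sketch: two 7-item subscales (anxiety, depression), each item 0-3,
# giving subscale ranges of 0-21; scores of 8 or more are classified as cases
# (Zigmond and Snaith, 1983). The item responses below are hypothetical.

def hads_subscale(items, cutoff=8):
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    return total, total >= cutoff

anxiety_items = [1, 2, 1, 0, 2, 1, 2]      # hypothetical responses
depression_items = [0, 1, 0, 1, 1, 0, 1]

print(hads_subscale(anxiety_items))        # (9, True)  -> classified 'anxious'
print(hads_subscale(depression_items))     # (4, False)
```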
Reporting of study outcomes and deaths
Information about the occurrence of all study outcomes as defined above, as well as study withdrawal and death, will be sought at each of the scheduled visits and captured on the participant's case record form. Cause of death will be determined by the research nurse or physician.
Statistical analysis
Initial descriptive analyses will be produced with outcomes reported for each country. Analyses will be undertaken to investigate associations between demographic, socioeconomic and cancer specific factors and each of the key outcomes. Country-specific and pooled analyses will be undertaken.
The analyses will allow us to provide evidence in the ASEAN countries of:
-the impact of different cancer types on quality of life and household economic and social outcomes across countries;
-the influence of insurance status, hospital type, region and socioeconomic status on these outcomes;
-the variations in costs and treatment for cancer across hospitals and countries;
-the non-hospital direct costs, non-health care costs, indirect costs and out-of-pocket costs incurred by patients with cancer.
Data collection and data entry
Data will be collected through structured interviews conducted by a trained interviewer at a place convenient to the participant. The interviews are to be face to face (or, where necessary, by telephone); the interviewer will read out all the questions and note the participants' answers on the paper forms. The first interview will be held at the hospital, after the participant has given informed consent to participate in the study, and before the start of treatment. In addition, the participant is provided with a cost diary that is kept for the duration of the study, assisting in capturing health service use and out-of-pocket costs. The interviewer may be an investigator, nurse, or other health care professional. Follow-up interviews will be carried out at 3 and 12 months after the baseline interview. To minimize loss to follow-up and optimize the validity of responses, the 3- and 12-month interviews will be held face to face at either: 1) the participant's home; 2) the clinic during a follow-up visit; or 3) a location convenient to the participant. If a face to face interview is not feasible, a telephone interview will be conducted. If, due to disease progression, a patient is unable to undergo the interview, another member of the household, identified by the participant at baseline, may do the interview on their behalf. In this case, the quality of life questions will be left out of the assessment.
Site staff will attend a two-day training prior to the start of the study. This training will cover the following topics: aim and rationale of the ACTION study, general research methods, participant recruitment, and data collection and entry. The training will enable individuals to recruit study participants, employ the research tools to conduct interviews and to manage the data collection and data storage processes in their country. The overall goal is to capture reliable, unbiased data, which truly represents the overall population.
Site staff are required to enter participants' responses onto case report forms (CRFs) and then enter the data into a secure centralized web based database. Data will be entered once, with several automated quality checks incorporated in the database system.
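As an illustration of the kind of automated quality checks such a centralized database can apply at entry, the sketch below validates a few fields against range rules. The rules and field names are hypothetical examples, not the study's actual edit checks.

```python
# A minimal sketch of automated data-entry quality checks for a web-based study
# database; the rules and field names shown are hypothetical examples.

RULES = {
    "age":              lambda v: 18 <= v <= 110,   # inclusion criterion: adults only
    "eq5d_mobility":    lambda v: v in (1, 2, 3),   # EQ-5D-3L levels
    "hads_anxiety":     lambda v: 0 <= v <= 21,     # HADS subscale range
    "household_income": lambda v: v >= 0,
}

def validate(record):
    """Return the names of fields whose values violate a rule."""
    return [field for field, ok in RULES.items()
            if field in record and not ok(record[field])]

print(validate({"age": 17, "hads_anxiety": 25, "household_income": 1200}))
# ['age', 'hads_anxiety']
```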
Ethics
This study will be conducted in accordance with all relevant local, national and international regulations. Each of the participating sites reviewed a copy of the research protocol and provided written approval and agreement to participate in this study. This study has also been approved by the University of Sydney's Human Research Ethics Committee.
Discussion
Each year, more than 700,000 new cases of cancer occur in the countries of ASEAN (Ferlay et al., 2010) and this number is expected to increase. Cancer has a severe impact on individuals and communities. Not only does it lead to disability and death, its treatment costs and associated loss of income can quickly undermine family finances. Consequently, globally, as well as for the ASEAN region, cancer has negative implications for poverty reduction and economic development. Policy and funding priorities in each of these countries must include plans to strengthen their health systems to cope with projected increases in cancer prevention, treatment and management needs (Farmer et al., 2010). Information on the socioeconomic impact of cancer can be an important advocacy tool for health policy. The ACTION study will provide novel and valuable information about the impact of cancer on quality of life and the economic circumstances of patients and their households. | 2018-04-03T03:34:54.280Z | 2012-01-01T00:00:00.000 | {
"year": 2012,
"sha1": "319d52babdf6862c945462d8d981751e291eb193",
"oa_license": "CCBY",
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201218552488944&method=download",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1dc48fb4a47ae3b25134275b27746584ff60fdfa",
"s2fieldsofstudy": [
"Economics",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221257364 | pes2o/s2orc | v3-fos-license | Straight and Bent Bars Buckling Considered as the Axial Displacement of One Bar End
A new approach has been taken to the problem of straight and bent bar buckling, where bar buckling is considered as a function of axial displacement of one end. It was assumed that the length of a bar being buckled at any instant of buckling is the same as that of a straight bar, regardless of the size of axial displacement of one end of the bar. Based on energy equations, a formula was derived for the value of axial displacement of one bar end or buckling amplitude in the middle of bar length as a function of compressive force. The established relationships were confirmed by simulation tests using the finite element software Midas NFX and by experimental tests.
INTRODUCTION
Buckling is a classic mechanical problem, the concept of which was introduced by Euler (Euler, 1744). Over the years, buckling analysis has found wide application, including in civil engineering (Śledziewski & Górecki, 2020; Toledo et al., 2020), mechanical engineering (Czechowski et al., 2020; Kubit et al., 2019) and the maritime industry (Corigliano et al., 2019; Shen et al., 2020). Currently, there is renewed interest in buckling, in particular in applications to composite structures (Rozylo et al., 2020; Schilling & Mittelstedt, 2020; Xu & Wu, 2008) and micro- and nanotechnologies (Barretta et al., 2019; Chandra et al., 2020). The large number of industrial applications and scientific studies (Li & Batra, 2013; Nistor et al., 2017) of buckling analysis has resulted in the development of buckling modeling methods. In (Harvey & Cain, 2020) the authors investigated column behavior when member imperfection and load eccentricity are simultaneously present. They analyzed pinned members, assuming linearly elastic, slender, uniform and inextensible columns. To compare the relative significance of member imperfections and load eccentricity on the deflected shape, a linear analysis based on Euler-Bernoulli theory was performed. The obtained results were experimentally validated using additively manufactured specimens; additive manufacturing enabled accurate seeding of imperfections to control the specimens' buckling characteristics. A series of initially imperfect specimens with eccentric load application points were tested, exhibiting imperfection amplification and cancellation, and good agreement between the theoretical predictions and the experimental results was observed. In (Zhu et al., 2017) the authors presented an analytical study of the buckling problem of nonlocal Euler-Bernoulli beams using Eringen's two-phase nonlocal integral model (Eringen, 2002). They deduced the exact characteristic equation for the buckling loads by the reduction method. In addition, simple and explicit expressions of the buckling loads for four types of boundary conditions were developed. The authors showed that the adopted nonlocal integral model has a consistent softening effect, in contrast to the nonlocal differential model, and that, in comparison with the differential model and the pure nonlocal model, the integral model has the advantages of well-posedness and self-consistency. The established analytical formulae may also be useful in providing guidelines for designing structures with nonlocal effects, since they contain the nonlocal parameter explicitly. In (Su et al., 2019) the authors presented a finite prebuckling deformation (FPD) buckling theory to analyze the FPD buckling behavior of beams with coupled bending, twist and stretch/compression. To verify the correctness of the proposed theory, it was compared with various analytical and numerical methods for modeling the transverse buckling of a three-point bending beam, the lateral buckling of pure beam bending and Euler buckling. It was found that the proposed FPD buckling theory for beams gives a good prediction, while the conventional buckling theory (Timoshenko & Gere, 2009) and conventional numerical methods (Dassault-Systèmes, 2010) yield unacceptable results (in some cases with 70% error for a three-point-bending beam). In (Nikolić & Šalinić, 2017) the authors presented a method of buckling analysis of non-prismatic columns based on the rigid element method.
The authors derived a general form of the characteristic equation, which enabled buckling analysis of columns with continuously varying, doubly symmetric cross-sections and of multiple-stepped columns under different boundary conditions. The proposed method was verified through numerical examples, and the authors concluded that the results obtained with it converge well to other results from the literature. Summing up this review of buckling analysis methods, it can be stated that although buckling is a classic problem of mechanics, researchers are constantly proposing new approaches to solve it and thereby determine the quantities characterizing buckling. One such approach is proposed in this article. The presented approach deals with the problem of determining the specific buckling amplitude of a straight or bent bar, where buckling is considered as a function of the axial displacement of one end of the bar. The main novelty is the assumption that the length of a buckled bar at any instant of buckling is the same as that of the straight bar, regardless of the size of the axial displacement of one end of the bar. A formula for the value of the axial displacement of one bar end, or the buckling amplitude in the middle of the bar length, as a function of the compressive force was derived based on energy equations. The proposed method was validated for bars with different cross-section dimensions by comparing the results obtained with it against finite element model results and experimental tests. The structure of the article is as follows: in Section 2, the proposed method of determining the axial displacement of one bar end, or the buckling amplitude in the middle of the bar length, is presented. Next, the results obtained with the presented method are compared to finite element model and experimental test results. In Section 4, a discussion of the obtained results is provided. Section 5 contains the final conclusions summarizing the most important achievements of the article.
METHODOLOGY OF RESEARCH
Euler bar buckling
Bending of a bar caused by exceeding a critical value of axial compressive force is called buckling. The value of this critical force was determined by Euler (Bedford & Liechti, 2020; Euler, 1744; Gere & Goodno, 2009; Timoshenko & Gere, 2009). He considered the equilibrium of a bent bar (Fig. 1).
Fig. 1 Equilibrium of a bent bar; E - Young's modulus, J - axial moment of inertia of the bar cross-section, L0 - initial length of the straight bar
Solving the differential equation, Euler derived a well-known formula for the value of the critical force:

$P_{kr} = \frac{\pi^2 E J}{L_0^2}$ (1)

The bent axis of the bar is a sinusoid:

$y = A \sin\frac{\pi x}{L_0}$ (2)

Amplitude A at the bar half-length can be written:

$A = y\left(\frac{L_0}{2}\right)$ (3)

However, a specific value of amplitude A cannot be determined with such an approach to buckling.
Model of straight bar buckling under axial displacements of one end of the bar
The article aims to determine the course of axial displacement of one bar end (or amplitude A of buckling, measured at half-length of the bar) as the function of axial compressive force P. A model of buckling due to axial displacements of one bar end is presented (Fig. 2). It can be compared to a bar placed in the closing jaws of a vice. Axial forces occur then as reaction forces in the supports (jaws).
Fig. 2 Buckling of a bar with a specific initial length L0 at preset axial displacement δ of one end
It was assumed that the bent bar has the shape of a sinusoid with amplitude A, at a length between the supports shortened by displacement δ:

$y = A \sin\frac{\pi x}{L_0 - \delta}$ (4)

To determine the relationship between amplitude A and a preset axial displacement δ, we need to know the formula for the length of the sinusoid. The sinusoid length for each value of δ must be equal to the initial straight bar length L0.
The length of the sinusoid cannot be determined by elementary functions. A precise formula for the length of the sinusoid:

$L_0 = 2 b \, E(e)$ (5)

where E is a complete elliptic integral of the second kind and e is the eccentricity of the ellipse. Elliptic integrals were encountered while calculating the ellipse circumference, hence their name. The term refers to integrals that cannot be expressed by elementary functions. We can imagine a sinusoid as the development of an ellipse formed by the intersection of a cylinder with a plane set at a certain angle to the axis and passing through the diameter of this cylinder (Fig. 3a) (Czechowski et al., 2020; Kubit et al., 2019). Half of the ellipse circumference is the sinusoid length. An approximate formula for the ellipse circumference L:

$L \approx \pi\left[\frac{3}{2}(a + b) - \sqrt{a b}\right]$ (6)

where a and b are the semi-axes of the ellipse. It follows from Figure 3b that a is the smaller (minor) semi-axis, equal to the radius of the cylinder from which the ellipse is developed, and γ is the dihedral angle between the ellipse plane and a plane perpendicular to the cylinder axis.
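The geometric identity underlying Equation (5), namely that the length of one half-wave of the sinusoid equals half the circumference of the corresponding ellipse, can be checked numerically. The sketch below compares a direct arc-length quadrature against 2bE(e) for arbitrary assumed dimensions; note that scipy's ellipe takes the parameter m = e².

```python
# Numerical check of the geometric identity used above: the length of one
# half-wave of the sinusoid equals half the circumference of the corresponding
# ellipse, 2*b*E(e). The span and amplitude are arbitrary assumed values.
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipe

span, A = 1000.0, 40.0            # half-wave span (L0 - delta) and amplitude, mm
a = span / np.pi                  # minor semi-axis = radius of the cylinder
b = np.sqrt(a**2 + A**2)          # major semi-axis, since A = sqrt(b**2 - a**2)
m = 1.0 - (a / b) ** 2            # squared eccentricity; scipy's ellipe takes m = e**2

arc, _ = quad(lambda x: np.sqrt(1.0 + (A * np.pi / span * np.cos(np.pi * x / span)) ** 2),
              0.0, span)
print(arc, 2.0 * b * ellipe(m))   # both ~1003.93 mm: the two lengths agree
```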
The major semi-axis b results from the initial length L0 of the bar, equal to half the circumference of the ellipse:

$L_0 = \frac{\pi}{2}\left[\frac{3}{2}(a + b) - \sqrt{a b}\right]$

In further steps, the attention will be focused on the determination of the major semi-axis b from the above equation. After simple transformations:

$\frac{3}{2} b - \sqrt{a}\sqrt{b} + \frac{3}{2} a - \frac{2 L_0}{\pi} = 0$

we temporarily adopt a constant value D:

$D = \frac{3}{2} a - \frac{2 L_0}{\pi}$

to get a square trinomial in $t = \sqrt{b}$:

$\frac{3}{2} t^2 - \sqrt{a}\, t + D = 0$

The discriminant of the trinomial:

$\Delta = a - 6 D = \frac{12 L_0}{\pi} - 8 a$

The roots of the equation:

$t_{1,2} = \frac{\sqrt{a} \pm \sqrt{a - 6 D}}{3}$

After the rejection of the second, unrealistic solution (the minus sign), the only admissible solution is the root with the plus sign. Replacing D by the previous value, we get

$b = t^2 = \frac{1}{9}\left[\sqrt{a} + \sqrt{\frac{12 L_0}{\pi} - 8 a}\right]^2$

We substitute for a: $a = (L_0 - \delta)/\pi$, introduce the least common denominator, and obtain the final formula for the major semi-axis of the ellipse as the function of one variable δ:

$b = \frac{\left[\sqrt{L_0 - \delta} + 2\sqrt{L_0 + 2\delta}\right]^2}{9\pi}$

If δ = 0 then b = L0/π, i.e. the two semi-axes a and b are equal and the ellipse becomes a circle. The other root of the square trinomial would give b = L0/(9π), which is unrealistic, as the major semi-axis b would be smaller than the minor semi-axis a.
The sinusoid amplitude is expressed by the formula (Fig. 3b):

$A = \sqrt{b^2 - a^2}$

The ultimate equation of the sinusoid:

$y = \sqrt{b^2 - a^2}\, \sin\frac{\pi x}{L_0 - \delta}$

By knowing the sinusoid equation, we can determine the elastic energy US accumulated in the bar (Shen et al., 2020). The elastic energy US in the bar caused by bending:

$U_S = \frac{E J}{2} \int_0^{L_0 - \delta} \left(\frac{d^2 y}{d x^2}\right)^2 dx = \frac{E J \pi^4 A^2}{4 (L_0 - \delta)^3}$

An increment of elastic energy ΔUS accumulated in the bar along with an increment of displacement Δδ is equal to the work ΔUp of the external compressive force P over the axial displacement increment Δδ:

$\Delta U_S = P\, \Delta\delta, \qquad P = \frac{\Delta U_S}{\Delta\delta}$

The calculations of the previously discussed quantities a, b, A, US and P were made depending on the stepwise changing value of the axial displacement δ. At the start of the compression, the bar was perfectly straight. The results of the calculations are contained in Table 1. The diagram of axial displacement δ as the function of axial compressive force P for this case is presented in Figure 4a. This is a vertical line intersecting the horizontal axis (axis of force) at the critical force value according to Euler. The graph of amplitude A at half-length of the bar, perpendicular to the bar axis, as the function of the axial compressive force P for this case is presented in Figure 4b.
Fig. 4 Axial displacements δ of bar end (a) and the graph of amplitudes A = A(δ) of the bar buckling at the bar half-length (b) as the function of axial compression force P
It is also a vertical line intersecting the horizontal axis at the identical value of the critical force according to Euler. Analytically, the value of this asymptote can easily be obtained by substituting into the formula for force P a small value of displacement, e.g. δ = 0.01 mm = 1·10⁻⁵·L0 (40). The minor semi-axis is then $a = (L_0 - \delta)/\pi$ and the major semi-axis $b = \left[\sqrt{L_0 - \delta} + 2\sqrt{L_0 + 2\delta}\right]^2/(9\pi)$; the resulting value of P is identical to the value of the critical force determined by the Euler formula. Therefore, a graph of axial displacement δ of one bar end was obtained as the function of axial compression force P, as well as a graph of buckling amplitude A at bar half-length as the function of the axial compression force P, which was the objective of this article.
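The stepwise calculation described above is easy to reproduce numerically from the reconstructed formulas: evaluate A(δ) and US(δ) on a grid, differentiate to get P = dUS/dδ, and compare with Euler's critical force. The bar data below (L0, E, J) are assumptions for illustration; they are not the dimensions used in the paper.

```python
# Numerical sketch of the tabulated procedure: step the axial displacement delta,
# evaluate a, b, A and the bending energy US from the reconstructed formulas,
# recover the force as P = dUS/d(delta), and compare with Euler's critical force.
# The bar data (L0, E, J) are assumptions for illustration, not the paper's bar.
import numpy as np

L0 = 1000.0               # mm, initial bar length (assumed)
E = 2.1e5                 # MPa, Young's modulus of steel (assumed)
J = 20.0 * 4.0**3 / 12.0  # mm^4, e.g. a 20 x 4 mm rectangular section (assumed)

def amplitude(d):
    """Buckling amplitude A(delta) = sqrt(b^2 - a^2) at the bar half-length."""
    a = (L0 - d) / np.pi
    b = (np.sqrt(L0 - d) + 2.0 * np.sqrt(L0 + 2.0 * d)) ** 2 / (9.0 * np.pi)
    return np.sqrt(b**2 - a**2)

def bending_energy(d):
    """US = E*J*pi^4*A^2 / (4*(L0 - delta)^3)."""
    return E * J * np.pi**4 * amplitude(d) ** 2 / (4.0 * (L0 - d) ** 3)

deltas = np.linspace(0.01, 5.0, 500)
P = np.gradient(bending_energy(deltas), deltas)   # P = dUS/d(delta)
P_euler = np.pi**2 * E * J / L0**2

print(P[0] / P_euler, P[-1] / P_euler)  # ~1.00 and ~1.03: P stays near the Euler force
```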
Model of bent bar buckling under axial displacements of one end of the bar
If we assume that the bar is initially bent with amplitude A(δ0), which can be regarded as an initial displacement of the bar end by the value δ0 (Fig. 5) (we omit elastic strains caused by compressive stresses), then the elastic energy of the strain US caused by further movement of the bar end must be expressed by the difference of amplitudes A = A(δ) - A(δ0) (Fig. 5).
The elastic energy is then

$U_S = \frac{E J \pi^4 \left[A(\delta) - A(\delta_0)\right]^2}{4 (L_0 - \delta)^3}$

and the value of the compressive force P:

$P = \frac{\Delta U_S}{\Delta\delta}$

This case of buckling was calculated for the initial values of the bar end displacement δ0 according to Table 2. The results of the calculations for four initial displacements δ0 of the bar end according to Table 2 are presented in Figures 9 and 10.
A simplified buckling model of a bent bar
In this approach only buckling amplitudes are considered. This method of determining the critical force is commonly used in lab classes on buckling (Buczkowski & Banaszek, 2006). The amplitude increment A = A(δ) - A(δ0) in a bar initially bent at its half-length by a value A(δ0) is determined with sufficient accuracy by means of the approximate formula (Buczkowski & Banaszek, 2006):

$A(\delta) - A(\delta_0) = \frac{A(\delta_0)\, P}{P_{kr} - P}$

The graph of this approximate amplitude increment A = A(δ) - A(δ0) as the function of axial compressive force P is presented for A(δ0) = 0.59 mm in Figure 8a.
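Rearranging this approximate relation gives the classical Southwell construction by which lab classes extract the critical force without loading the bar to full buckling; the algebra below assumes the increment formula as reconstructed above.

```latex
% Southwell plot: multiply out the increment relation and divide by P
\left(A(\delta)-A(\delta_0)\right)\left(P_{kr}-P\right) = A(\delta_0)\,P
\quad\Longrightarrow\quad
\frac{A(\delta)-A(\delta_0)}{P}
  = \frac{1}{P_{kr}}\left(A(\delta)-A(\delta_0)\right) + \frac{A(\delta_0)}{P_{kr}}
```

so plotting the measured increment divided by P against the increment itself yields a straight line whose slope is 1/Pkr.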
Finite element model
Simulation calculations were made using the Midas NFX 2018 R1 preprocessor (Midas Information Technology Co. Ltd., Seongnam, Korea) (Midas, 2011), a computer program based on the finite element method. The modeled bar had the dimensions given in the previous section. One-dimensional finite beam elements (CBEAM) were used. The finite element model was composed of 81 nodes and 80 elements. The bar ends were modeled as hinged supports, and one end had a preset axial displacement δ = −1 mm (Fig. 6).
Fig. 6 Diagram of the computer model of a compressed bar
The non-linear static module (SOL 106) was used for the calculations. Further in this study, the initial axial displacement δ was calculated according to Table 2.
Experimental tests
Experimental buckling tests were conducted for a steel bar with the dimensions given in the previous section. The initial amplitude A0 was 0.59 mm. The value P of the axial force compressing the bar was recorded, and the amplitude A of buckling was measured with a micrometer at bar half-length, perpendicular to the bar axis (Fig. 7).
Fig. 7 A bar examined for buckling on a strength tester FM 2500
The experiment results are given in Figure 8.
DISCUSSION
Summarizing the research presented in this article, the studies show that:
• In a wide range of axial displacements δ of the straight bar end (Fig. 4a) and amplitudes A (Fig. 4b), the compressive force P is constant and equal to Euler's critical force (Euler, 1744; Timoshenko & Gere, 2009);
• The more the bar is bent at the initial compression stage, the more flattened are the graphs illustrating axial displacement δ (Fig. 10) and buckling amplitude A (Fig. 9);
• The asymptotic character of the axial displacement δ of the straight bar end (Fig. 10) and of its amplitude A (Fig. 9) as functions of the axial compressive force is evident for compressive forces close to the value of Euler's critical force;
• The amplitude graphs differ only slightly between the four methods: analytical, simulation, approximate and experimental (Fig. 8a);
• The analytical and simulation graphs of the axial displacements δ of the bar end (Fig. 3a) differ significantly in the first phase of compression, but they have a joint asymptote;
• The conformity of the amplitude A graphs (Fig. 9) as a function of the axial compressive force obtained by three methods confirms the correctness of the approach to the bar buckling problem through the axial displacement of one end of the bar.
CONCLUSIONS
The presented article deals with the problem of straight and bent bar buckling, where bar buckling is considered as a function of the axial displacement of one end. Owing to the derived formula, it was possible to determine a specific value of the amplitude A, which is not possible using the Euler method. The obtained results (buckling amplitude and axial displacement of the bar end for bars with different cross-section dimensions) show high agreement with the FEM analysis and experimental results. | 2020-08-24T13:14:51.953Z | 2020-08-20T00:00:00.000 | {
"year": 2020,
"sha1": "c8a6dad964d14807b81fe21afee15f80c0017ccc",
"oa_license": null,
"oa_url": "https://doi.org/10.2478/mape-2020-0005",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0c25b83db9f985c08083eabc8760f76bba60d96e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
219044207 | pes2o/s2orc | v3-fos-license | Development of Resource Effective and Cleaner Technologies Using the Waste of Plant Raw Materials
The expediency of implementing a particular technology was determined by selecting the best ones from the point of view of environmental, economic and social aspects. Waste re-use is the second most acceptable technology, after waste prevention or minimization during production. This scientific work is dedicated to waste re-use in food production. The article presents the results of the research and development of technologies involving the use of organic waste - the pomace from juice production from Chaenomeles fruits. It was determined that vegetable waste occupies a significant place in the total amount of food waste. Ways of using the waste from the production of Chaenomeles juice were developed and analyzed, which involve obtaining an extract, a gelling agent and a powder, and ways of using them in the technology of flour-based products were proposed. It was determined that the use of the powder from the pomace of Chaenomeles is the most effective, as it allows not only the maximum use of raw materials, but also improvement of the technology of production of flour-based products, shortening of the fermentation process, extension of the shelf life of the finished products and an increase in their biological value.
INTRODUCTION
An increase in population and economic growth are provoking unprecedented changes on the planet as they drive increasing demand for the energy, land and water necessary to meet the needs of mankind. According to the international organization Global Footprint Network, over the last 50 years the environmental footprint (a natural resource consumption indicator) has increased by about 190% (Global Footprint Network, 2018; Cristian, 2010). Accordingly, creating a sustainable system of providing the population with goods, including food, requires significant changes in production, supply and consumption, first of all by finding and implementing efficient resource-saving and cleaner technologies, thus reducing the anthropogenic pressure on the environment.
Such technologies include low-waste and waste-free production. The problem of waste is one of the key environmental issues and it is even more significant in terms of resources (Andreichenko, 2011). Finding a solution to this problem is of great importance for solving the issues of energy and resource independence of a state and saving natural material and energy resources, and it constitutes an urgent strategic task (priority) of the state policy of each country.
The concept of waste-free technology was formed due to the need to protect the environment from the anthropogenic impact of material production. In solving this problem, along with the introduction of means of treating solid waste, wastewater and gas emissions, improvements in existing industries and the creation of new eco-friendly ones have become of great importance. This trend is known as waste-free technology (Characteristics of the impact, 2019).
The problem of converting the processing of agricultural raw materials into a waste-free production cycle has two interrelated aspects - economic and environmental. The first aspect is related to the expansion of resources through deeper, integrated processing of agricultural raw materials and the application of unused waste as a source for obtaining food, feed and fertilizers. In the food industry, during the production of basic (target) products a significant amount of waste and by-products containing hundreds of thousands of tons of sugar, protein, oil, vitamins and other valuable substances are produced in specialized enterprises. It has been substantiated and proven by science that with the complex use of raw materials, it is possible to use almost all the waste and by-products of the food industry with high efficiency (Petruk et al. 2019).
Another aspect of the problem is closely connected to the environmental factors. The development of the processing industries is accompanied by a continuous increase in the impact of production on the environment. The anthropogenic loads on the biosphere should have reasonable limits, the excess of which leads to the disturbance of equilibrium in nature and to imbalance in ecological systems. Therefore, it is now of particular importance to assess the impact of the food production technologies on the environment. The main way to solve the problem is to develop the waste-free industries (Petruk et al. 2019).
The basis of waste-free production is the complex processing of raw materials with all its components, since the production waste is unused or underused part of the raw material.
The basic principles of waste management are: compliance with the principles of a closed-cycle economy; the waste management hierarchy; an integrated waste management information system; systematization and planning; extended producer responsibility (EPR); integration into the EU waste market and the European waste management system. Implementation of these principles will help to prevent waste generation and to contribute to maximum recycling, as envisaged by the five-step waste management hierarchy (Stessel, 2012; Zaitseva et al., 2018). One of the ecological problems is providing humanity with clean, quality food rich in biologically active components. Chaenomeles is a new raw material on the Ukrainian market. During juice production, 50% of the processed material is pomace, which in turn contains more than 5% organic acids, approximately 2% tannins, a high content of ascorbic acid, vitamins B1 and B2, as well as substances of P-vitamin activity and a large amount of pectins. In addition, it includes phosphorus, potassium and calcium. Chaenomeles does not contain fat or sodium, so it is useful in the diet, and it is rich in dietary fiber and copper (Fedulova et al., 2009). It is expedient to use the secondary products of Chaenomeles processing in the production of flour-based products for accelerating fermentation and stabilizing the quality indicators of the finished product, which will make it possible to eliminate chemical food additives and to obtain cleaner food products (Levchenko et al., 2016).
Thus, taking into account the considerable amount of Chaenomeles pomace in the juice production and its biological value, the research and development of new resource effective and cleaner technologies are of current interest.
MATERIALS AND METHODS
The materials of the research were the Chaenomeles fruits and wastes of the juice production as well as ready-made flour products from the yeast-containing dough. The determination of the main physicochemical parameters of the raw material was performed according to standardized methods.
The identification of the phenolic substances contained in the Chaenomeles extracts was performed by high performance liquid chromatography on an Agilent Technologies chromatograph (model 1100), equipped with a G1379A flow vacuum degasser, a G1313A automatic injector, a G13116A column thermostat, and a G1316A diode matrix detector. A chromatographic column of 2.1×150 mm, filled with octadecylsilyl sorbent with 3.5 µm granulation («ZORBAX-SB C-18»), was used for the analysis.
The detection parameters were set as follows: wavelength 313 nm (for phenolic acids and their derivatives), 350 nm (for glycosides of flavones), 371 nm (for flavones), 525 nm (for anthocyanins); for the fluorescence detector, excitation 280 nm and emission 320 nm for catechin and epicatechin; measurement scale 1.0; scanning time 2 s. Spectra were recorded for each peak over 190-600 nm. The identification of phenolic compounds was performed by the retention times of the standards and by spectral characteristics (compared with literature data from high-performance liquid chromatography studies of berries and juices).
The content of organic acids and sugars was determined by means of high performance liquid chromatography on an Agilent Technologies chromatograph (model 1100). A carbohydrate chromatographic column with a size of 7.8×300 mm, "Supelcogel-C610H", was used for the analysis. A step-by-step mode of chromatography was established: feed rate of the mobile phase 0.5 ml/min; eluent: aqueous 0.1% solution of H3PO4; working pressure of the eluent 33-36 kPa; column thermostat temperature 30°C; sample volume 5 μl.
The parameters of the spectrophotometric detection were set as follows: wavelength 210 nm; slit width 8 nm; scan time 0.5-1.0 s. The identification of organic acids was carried out by the retention times of the relevant standards. The sample preparation for the analysis of organic acids in pomace involved weighing approximately 5 g of the pomace, accurate to 0.1 mg, into a 10 cm³ test tube and adjusting to the mark with water. After 30 min of exposure in the ultrasonic bath, the solution was filtered through a membrane Teflon filter with a pore size of 0.45 μm into the vial for analysis.
The NIST07 and WILEY 2007 mass spectra libraries were used to identify the components, with a total of more than 470000 spectra combined with the AMDIS and NIST identification programs. The method of internal standard was used for the quantitative calculations (Levchenko et al. 2016).
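For context, internal-standard quantification scales the analyte-to-internal-standard peak-area ratio by a response factor from a calibration run. A minimal sketch with hypothetical peak areas and concentrations:

```python
# Internal-standard quantification sketch: the analyte concentration is obtained
# from the ratio of analyte to internal-standard (IS) peak areas, scaled by a
# response factor determined from a calibration standard. Values are hypothetical.

def response_factor(area_std, conc_std, area_is, conc_is):
    """RF from a calibration run containing both the analyte standard and the IS."""
    return (area_std / conc_std) / (area_is / conc_is)

def analyte_conc(area_x, area_is, conc_is, rf):
    """Concentration of the analyte in the sample run."""
    return (area_x / area_is) * conc_is / rf

rf = response_factor(area_std=5200, conc_std=10.0, area_is=4100, conc_is=8.0)
print(analyte_conc(area_x=2600, area_is=4000, conc_is=8.0, rf=rf))  # ~5.1 (e.g. mg/L)
```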
RESULTS AND DISCUSSION
The Chaenomeles pomace is a compacted mass consisting of peel, seeds and pulp residues. The quality indices of the pomace differ only slightly from those of the fresh raw material. The comparative characteristics of the chemical composition of fresh fruits and pomace are given in Table 1.
It has been identified (Table 1) that, in comparison with the raw material, pomace is a valuable source of biologically active substances (BADs). The pomace contains a considerable amount of pectic substances and a sufficient content of L-ascorbic acid and phenolic substances, which confirms the prospects for their further processing.
A significant proportion of the biologically active substances of Chaenomeles are in the bound state; only a part of them is in the cellular juice, and during processing they move into the soluble part. The main carbohydrates that make up the primary cell wall are cellulose, hemicellulose and pectin. Cellulose microfibers are bound through hemicellulose bridges, forming a cellulose-hemicellulose network that is surrounded by a pectin matrix. The phenolic compounds are preferentially localized in the skin and in the cell walls of the pulp of the raw material (Tkach et al., 2014).
The studies on the fractional composition of organic acids in the extracts showed the presence of malic (16.70 g/100 g DS), citric (0.54 g/100 g DS), quinic (5.22 g/100 g DS) and succinic (0.36 g/100 g DS) acids. Their content is 50-60% lower than in the juice, but it is sufficient to use them as a vegetable additive in the production of food products, in particular products from yeast-containing dough. The presence of malic and succinic acid in the pomace makes it possible to recommend it for use in flour technology in order to intensify the fermentation process and improve the organoleptic properties of the finished product. However, the pomace should not be applied in raw form, because raw pomace will adversely affect the organoleptic properties of the finished products.
It was found that the pomace obtained after the extraction of juice from the Chaenomeles fruit contains phenolic substances, the fractional composition of which is shown in Table 2. It was established (Table 2) that the content of phenolic substances in the extract is higher in comparison with the juice. In particular, the content of procyanidins exceeds that in the juice by 40%, flavan-3-ols by 70%, hydroxycinnamic acids by 66% and flavones by 75%.
Different ways of further processing the waste of Chaenomeles juice production were investigated: extraction, drying and obtaining a gelling juice. The expediency of using water extracts, powder and gelling juice for further processing into foodstuffs, considering their valuable chemical composition (Table 3), has been determined.
Pomace is a raw material in which the maximum number of cells is destroyed during the pretreatment of the raw material before pressing and during the juice extraction process. As a result of the conducted studies, it was found that the best physicochemical parameters for the extraction of the biologically active complex from the untreated pomace of Chaenomeles were obtained at a hydromodule (HM, solid-to-liquid ratio) of 1:3, an extraction temperature of 50°C and an extraction time of 120 min. In the case of extraction of dry Chaenomeles pomace with water, the best results are obtained under the following conditions: hydromodule 1:10; temperature 50°C; duration 120 min.
A rational way of recycling fruit and vegetable raw materials involves drying with the purpose of producing fruit and vegetable powders. Powders from plant raw materials allow expanding the food resources and the assortment of food products, because they contain all the components of the raw materials in a concentrated form. Since the moisture content of the dried product is 5-8%, the biochemical reactions in it are almost completely stopped, allowing long-term storage (Petruk et al., 2019). The Chaenomeles pomace remaining after juice extraction was used to investigate the dynamics of the extraction process using water as the extractant.
As a result of the conducted research, the optimal parameters for drying the Chaenomeles pomace in the convection steamer were established: temperature 60°C, duration 2 hours, thickness of the pomace layer during drying 1.5-2 cm. The pomace was ground after drying. The powder obtained was an unstable mass, heterogeneous in size, with the color, taste and aroma characteristic of Chaenomeles. Particles up to 160 microns in size predominated in the ground batch. The third direction for processing the Chaenomeles pomace is the production of gelling agents on its basis.
The resulting secondary products of recycling wastes of Chaenomeles juice production can be used in the food industry and restaurant industry as a dietary supplement in the manufacturing of flour-based products, including the products from the yeast-containing dough. Depending on their chemical composition and colloidal state, they may have different effects on the properties of the yeast-containing dough.
The results of the organoleptic evaluation showed that the yeast products with the addition of 1.5% powder, 30% extract or 10% gelling agent received the highest scores in the tasting evaluation. In the production of yeast-containing dough, a number of interconnected processes take place which together form a yeast-containing dough with characteristic properties. There are a number of factors influencing the dough fermentation process, the main one being the raw materials.
At the stage of maturation of the dough, there are profound changes in the carbohydrate-amylase and protein-proteinase complexes of the flour. As a result, the dough acquires a certain elasticity, springiness, viscosity and plasticity, and it also accumulates the substances that form the taste and aroma of the finished products (Poliakova et al., 2009; Khomych and Horobets, 2016).
When determining the effect of the additives on the intensity of accumulation of yeast cells in the dough during fermentation (Figure 1), an increase in their number was identified in the samples with the addition of 1.5% of the powder, which is explained by the chemical composition of the additive, which promotes the active reproduction and accumulation of yeast cells.
In order to conduct a complex assessment of the impact of the secondary products of Chaenomeles processing on the carbohydrate-amylase complex of flour and the processes which occur during the maturation of the dough, gas formation (Figure 2a) and acid accumulation (Figure 2b) during fermentation were studied.
After three hours of dough fermentation, the best results were found in the samples with 1.5% of powder addition, with the shortest time for the dough ball to rise and the lifting power of the yeast increasing by 25%. The products of Chaenomeles processing, due to the peculiarities of their chemical composition, can influence the life activity of acid-forming bacteria and ensure the production of high quality products.
The value of the titrated acidity in the experimental samples with the addition of the secondary products of Chaenomeles processing increases by an average of 15% in comparison with the control. Thus, the samples with the addition of 1.5% of the Chaenomeles powder after 120 minutes of fermentation had a titrated acidity index which indicates the dough's ripeness for further processing. The conducted research allowed the development of an accelerated method of making yeast-containing dough with the use of the products of Chaenomeles processing, which was implemented as the basis of flour-based food production.
CONCLUSION
The obtained experimental results indicate the feasibility of using the Chaenomeles waste (pomace from juice production) in the technology of obtaining food products, such as flour-based products from yeast-containing dough, which will reduce the negative impact of organic waste on the environment and will allow using the full potential of the raw materials.
The use of a powder of Chaenomeles pomace in the technology of producing the flour-based foods is the most efficient and rational, because it enables the maximum use of secondary raw materials in the food production.
The introduction of 1.5% Chaenomeles powder into the yeast-containing dough recipe had a positive effect on the carbohydrate-amylase complex of the flour: it increased the gas-forming capacity by 19% and the titratable acidity index by 15%, and the powder raised the level of organic acids, pectic substances and minerals by 45%, which contributed to the intensification of the process of gas formation and created the conditions for reducing the total duration of fermentation.
It was determined that when the powder is introduced, the porosity index increases by 10%, the formability by 16%, and the specific volume by 18%. The flour-based foods with the addition of the powder become stale more slowly and retain their properties for 5 days. The use of powders from the Chaenomeles pomace inhibits the development of the potato disease and reduces the overall microbiological contamination of the finished flour-based products.
Thus, the use of the powder from the Chaenomeles pomace in the production of flour-based products makes it possible to raise the biological value and consumer properties of the end product with maximum waste use and reduced anthropogenic pressure on the environment. | 2020-04-23T09:15:23.792Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "d2d2f2bdc294de5892b04e39c57e3ef5f1e95dad",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12911/22998993/119814",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7ef479b1e7748ac51e81a37ac395c46d642f8669",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
182607274 | pes2o/s2orc | v3-fos-license | Multilayered composite coatings of titanium dioxide nanotubes decorated with zinc oxide and hydroxyapatite nanoparticles: controlled release of Zn and antimicrobial properties against Staphylococcus aureus
Purpose: This study aimed to decorate the surface of TiO2 nanotubes (TiO2 NTs) grown on medical grade Ti-6Al-4V alloy with an antimicrobial layer of nano zinc oxide particles (nZnO) and then determine if the antimicrobial properties were maintained with a final layer of nano-hydroxyapatite (HA) on the composite. Methods: The additions of nZnO were attempted at three different annealing temperatures: 350, 450 and 550 °C. Of these temperatures, 350°C provided the most uniform and nanoporous coating and was selected for antimicrobial testing. Results: The LIVE/DEAD assay showed that ZnCl2 and nZnO alone were >90% biocidal to the attached bacteria, and nZnO as a coating on the nanotubes resulted in around 70% biocidal activity. The lactate production assay agreed with the LIVE/DEAD assay. The concentrations of lactate produced by the attached bacteria on the surface of nZnO-coated TiO2 NTs and ZnO/HA-coated TiO2 NTs were 0.13±0.03 mM and 0.37±0.1 mM, respectively, which was significantly lower than that produced by the bacteria on TiO2 NTs alone, 1.09±0.30 mM (Kruskal–Wallis, P<0.05, n=6). These biochemical measurements were correlated with electron micrographs of cell morphology and cell coverage on the coatings. Conclusion: nZnO on TiO2 NTs was a stable and antimicrobial coating, and most of the biocidal properties remained in the presence of nano-HA on the coating.
Introduction
Medical implants used in orthopedics or dentistry should be sufficiently durable with mechanical properties that mimic the intended tissue. 1 They must also be safe for the patients in the long term and ideally show some antimicrobial properties to minimize the infection risk right after surgery. Unfortunately, there is no single material with all these desirable properties, and in recent years attention has turned toward enhancing the properties of implants with coatings of nanomaterials. 2,3 In orthopedic and dental implants, several types of nanocomposite coatings are employed including diamond-like carbon coatings on Co/Cr alloy, 4 nanocollagen and calcium phosphate, 5 hydroxyapatite (HA) nanoparticles and polycaprolactone, 6 and carbon nanotubes (CNTs) reinforced with HA. 7 The purpose of such coatings has been mainly to improve biocompatibility and/or strengthen the respective implant material, rather than address antimicrobial properties.
Titanium dioxide nanotubes (TiO2 NTs) have shown promise in such fields in the past. The nanotubes are readily grown on medical grade titanium, and they can resist mechanical stresses similar to those faced by bone. 8 They have also been shown to be biocompatible with bone cells, partly because they mimic the surface morphology of bone. 9 However, TiO2 NTs alone are not antimicrobial, and the development of infection around bone implants is a clinical concern. Indeed, the failure of two-thirds of implants postsurgery is attributed to infection. 10 Staphylococcus aureus is one of the most common causes of infection in both polymer [11][12][13] and metallic implants. 14 To enhance the antimicrobial properties of titanium implants, attempts have been made to coat TiO2 NTs with antibiotics such as gentamicin 15 or vancomycin. 16 However, infections related to implants are normally caused by a consortium of microbes. 17 Individual antibiotics inevitably target only a few of the organisms present. There is also the concern that antibiotic resistance can develop during the treatment. 18 Alternatively, dissolved metallic elements such as silver, copper, and zinc have been known for their antimicrobial properties for centuries. Their solubility and biological reactivity have restricted their applications to simple disinfectants in the past, but now nanoparticulate forms of these metals are available. Of these metals, silver nanoparticles are arguably the strongest biocide, with minimum inhibitory concentrations (MICs) of 3.25 mg/L against Streptococcus mutans, 19 and silver nanoparticles are also toxic to S. aureus when silver is presented as a filler in chitosan 20,21 or a coating on medical grade titanium alloy. 22 However, from a clinical safety perspective, silver remains a nonessential toxic element that should not normally be present in the human body. It is, therefore, more desirable to use a nutritionally required metal, such as zinc, that is easily handled and excreted by the human body, but at the same time antimicrobial. Zinc oxide nanoparticles (nZnO) have antibacterial properties against both Gram-positive and Gram-negative bacteria. For example, nZnO was found to be an effective bactericide against Escherichia coli, as measured by Varaprasad et al. 23 In the latter study, the inhibition zone for the nano-zinc oxide containing fibers was between 2.1 and 3.6 mm in an agar diffusion plate test. A minimum of 100 µg/mL of nZnO in suspension was found to be antibacterial against both Gram-positive bacteria (S. mutans and S. pyogenes) and Gram-negative bacteria (Vibrio cholerae, Shigella flexneri, and Salmonella typhii), as measured by MIC assay after 12 hrs exposure in Muller-Hinton broth. 24 The effect of particle shape and size on toxicity is still being debated. 25 Apparently, it is the method of synthesis that determines the initial shape, size and morphology of zinc-containing nanoparticles. 26 There are several techniques for growing nZnO on the surface of TiO2 NTs. These include a hydrothermal method, electrodeposition, pyrolysis deposition, atomic layer deposition, self-assembled monolayers, and others. 27,28 These methods give rise to nZnO of different shapes and dimensions, such as flower-like, hexagonal rod-like and spherical-like particles. 26 All of the various shapes have been shown to have antibacterial properties, with the smallest sizes generally exhibiting the highest antibacterial properties.
However, the challenge is to firmly attach the nZnO to the surface of the TiO 2 NTs such that the integrity of the composite is not compromised and the antimicrobial activity persists. In some cases, nZnO particles are formed with uneven coverage on the surface of the nanotubes. 29,30 Other researchers have achieved uniformly distributed nZnO particles on the surface of TiO 2 NTs. 31,32 Annealing can also affect the size of the nZnO particles on the nanotubes, 32 and higher annealing temperatures tend to give improved stoichiometry of the nZnO relative to other components in the composite. 33 The biocompatibility of the external surface of the composite also needs to be considered in the context of the fibroblasts involved in wound healing and the osteoblasts that are critical to the osseointegration of the implant into the surrounding bone. There is evidence that nZnO can also have some toxicity to mammalian cells (epithelial cells), 34,35 and so it may be desirable to moderate any direct contact of the nZnO with human tissue. HA is a bioceramic material with a structure similar to bone and is well known as a biocompatible material that promotes osseointegration. 7,36 Nanoscale forms of HA are available for this purpose. 37 This study aimed to develop a process to decorate TiO 2 NTs grown on Ti-6Al-4V alloy discs with a uniform coating of nZnO. The synthesis of the nZnO coating was optimized by exploring different annealing temperatures. The composite was then completed with a nano-HA top coat. To demonstrate the antimicrobial properties, the resulting composite coatings were tested against S. aureus. This microbe is considered to be one of the main causes of infection in orthopedic implants 38 and was hence used for testing the biocidal properties of the nZnO coatings. For these latter studies, the approaches included counting the proportions of live and dead bacteria on the coatings, monitoring microbial activity with a lactate production assay, and electron microscopy to observe coating integrity and the presence of any bacteria.
Materials and methods
The material fabrication process involved the synthesis of TiO 2 NTs by anodizing the surface of a medical grade titanium alloy, which was subsequently doped with nZnO to impart antibacterial properties. The nZnO was incorporated as crystals grown directly on the surface of the TiO 2 NTs. A final HA mineral layer was then added to form a composite coating. The composite coatings were characterized and then tested for their antibacterial properties against S. aureus.
Growth of TiO 2 NTs with nZnO and HA coating
A sheet of medical grade Ti-6Al-4V alloy of 1 mm thickness (William Gregor Ltd, London, UK) was initially laser cut into 15 mm discs (Laser Industries Ltd, Saltash, UK). The alloy was then polished with #400, #800, and #1,200 grit silicon carbide paper (Elektron Technology Ltd, Torquay, UK). Subsequently, the discs were further polished with 6 micron and 1 micron diamond paste (Agar Scientific, Stansted, UK), after which they were cleaned by ultrasonication (12 MHz) for 10 mins in a mixture of NaOH (1 mol/L), NaHCO 3 (1 mol/L), and NaC 6 H 7 O 7 (1.5 mol/L) in a 1:1:1.5 ratio. The TiO 2 NTs were then grown on the cleaned surface of the Ti-6Al-4V alloy and characterized following the optimized protocol described in the previous paper. 39 Briefly, TiO 2 NTs of an external diameter of 116.2±6.4 nm (mean ± SEM, n=54) were grown on the surface of the Ti-6Al-4V alloy by anodizing. This involved immersing the alloy for 1 hr in a mixture of 1 mol/L NH 4 HPO 4 and 5 g/L NH 4 F (0.5 g of NH 4 F in 100 mL of ammonia solution). The solution was adjusted to pH 4 with 1 mol/L phosphoric acid. The samples were anodized at a voltage of 20 V, with an initial sweep rate of 0.5 V/s, using a dual-output programmable power supply (Metrix Electronics Limited, Tadley, UK). The Ti-6Al-4V discs with the freshly grown TiO 2 NTs were then annealed at 350°C for 2 hrs in a furnace (Carbolite RWF 1200, Carbolite Engineering Services, Hope Valley, UK). Care was taken to provide a gradual increase in temperature and a gradual decrease back to room temperature during the annealing to ensure the final crystalline phase of the nanotubes was anatase. 40 Afterward, the TiO 2 NTs were functionalized with -OH groups by treating them with 2 mol/L NaOH at 50°C for 2 mins. 41 This provided a reactive surface for the next steps in the synthesis of the composite material.
A modified version of the protocol by Liu et al 42 was used for the synthesis of nZnO on the TiO 2 NTs. Pilot trials were performed to determine the appropriate concentrations of chemicals required to grow the nZnO (Figure S1), culminating in the following procedure. The Ti-6Al-4V discs with the functionalized TiO 2 NTs were immersed in a 1:2 mixture of 0.075 mol/L analytical grade Zn(NO 3 ) 2 (prepared in ultrapure deionized water) and 0.1 mol/L hexamethylenetetramine (prepared in dilute ammonia), with 2 mg of analytical grade citric acid. The mixture was subsequently heated to 80°C, with continuous stirring on a magnetic hot plate. After 2 hrs in the mixture, the alloy discs, now bearing TiO 2 NTs decorated with nZnO, were sonicated in deionized water for 10 mins to wash the coatings and remove any loosely bound materials and dissolved zinc.
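As a quick check on the bath composition, mixing the two stocks dilutes each by its volume fraction. The sketch below works through the arithmetic, assuming the 1:2 ratio is Zn(NO 3 ) 2 to hexamethylenetetramine by volume (the text does not state the basis of the ratio explicitly):

```python
# Final concentrations after mixing stock solutions 1:2 by volume (assumed).
ZN_STOCK = 0.075    # mol/L Zn(NO3)2 stock
HMT_STOCK = 0.100   # mol/L hexamethylenetetramine stock
v_zn, v_hmt = 1.0, 2.0
total = v_zn + v_hmt
print(f"Zn(NO3)2 in bath: {ZN_STOCK * v_zn / total:.3f} mol/L")   # 0.025 mol/L
print(f"HMT in bath: {HMT_STOCK * v_hmt / total:.3f} mol/L")      # 0.067 mol/L
```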
The next step involved stabilizing the crystalline structure of the nZnO onto the TiO 2 NTs (hereafter, called TiO 2 -ZnO).
Little is known about the formation of nZnO crystals on the surface of novel structures such as TiO 2 NTs, and so this step was performed at three different annealing temperatures (350, 450, and 550°C) in order to explore the resulting material morphology, surface roughness, and chemical composition. The annealing was performed in triplicate, by gradual heating of the samples to the required temperature in a furnace (Carbolite RWF 1200). The samples were maintained at the desired final temperature for 1 hr, before being allowed to cool gradually to room temperature. The resulting coatings are hereafter termed TiO 2 -ZnO/350, TiO 2 -ZnO/450, and TiO 2 -ZnO/550 in relation to the annealing temperatures of 350, 450, and 550°C, respectively. Unheated TiO 2 -ZnO discs were included for comparison as a control for the annealing treatment. The resulting discs were examined for morphology and elemental composition of the surfaces (in triplicate) by scanning electron microscopy (JEOL7001F SEM) coupled with energy-dispersive X-ray spectroscopy (EDS). The EDS composition was described using the AZtec analysis software supplied with the EDS attachment (Oxford Instruments, Oxford, UK). In addition, surface roughness was measured using an Olympus Laser Microscope LEXT OLS3100. The characterization of all of the TiO 2 -ZnO composites at the end of this step of the synthesis is shown in Figure 1.
The final step in the overall synthesis of the composite coating was to add HA. Each of the nZnO-coated materials from the step above (from all annealing temperatures) was separately immersed overnight in a simulated body fluid at three times the normal concentration (3SBF), prepared as a concentrated version of the Kokubo recipe. The spent media were retained in order to determine any losses of Zn from the discs and the expected decrease of Ca and P in the media during this final step of the HA synthesis. The spent 3SBF media were acidified with 1-2 drops of 70% nitric acid and stored until required for trace metal analysis (see below).
Characterization of the coatings
The morphology and chemical composition of the TiO 2 at each step of the synthesis (ie, addition of nZnO and then HA) were determined by SEM with EDS, as shown in Figures 1 and 2. Figure 1 shows the surface morphology prior to the HA additions. The growth of the TiO 2 NTs gave generally good coverage of the alloy. The material is known to consist of two different phases, the alpha phase (α, the majority of the coating) and the beta phase (β, the depressions), which cause uneven TiO 2 NT growth rates. 39 The additions of nZnO, regardless of the annealing temperature, gave complete coverage (Figure 1B-E), although there were some differences in surface roughness due to the heat treatment temperature. Figure 1A illustrates the evenly distributed TiO 2 (before nZnO is added), and the EDS analysis confirmed the chemical composition as mainly Ti and O, which is consistent with the presence of a majority of TiO 2 . As shown in Figure 1B, TiO 2 -ZnO had a nano-needle structure, with a length of about 100 nm and a width on the order of 10 nm, uniformly distributed over the surface of the nanotubes. The presence of zinc and oxygen in the EDS analysis confirmed the attachment of nZnO to the surface, although the nanostructure of the underlying TiO 2 NTs was still visually discernible. Figure 1C illustrates the uniformity of the coating on TiO 2 -ZnO/350, with the nano-needle structure still present, but denser than that on TiO 2 -ZnO. The EDS analysis showed a higher amount of Zn present on TiO 2 -ZnO/350 than on TiO 2 -ZnO. Similar observations were made for TiO 2 -ZnO/450 (Figure 1D). However, at the highest annealing temperature of 550°C, although the coverage was good (>90%), a few gaps were observed in the TiO 2 -ZnO/550 coating (Figure 1E). The nZnO layer was denser than the other coatings and the underlying nanotubes were not visible. The amount of zinc present on the surface of TiO 2 -ZnO/550 was similar to that on TiO 2 -ZnO/450 by EDS (Figure 1D). The final completed composite with the HA added is shown in the lower row of Figure 2 (panels A-E). The HA formed on TiO 2 was present as micron-scale globules with nanostructured surfaces, as shown in Figure 2A. TiO 2 -ZnO, TiO 2 -ZnO/350, TiO 2 -ZnO/450, and TiO 2 -ZnO/550 had HA grown on them after the overnight immersion in 3SBF, as seen in Figure 2B-E. With increasing annealing temperature used for the nZnO, the HA coating became denser. TiO 2 -ZnO-HA/550 (Figure 2E) had fewer gaps in the coating than TiO 2 -ZnO-HA/350 (Figure 2C). The HA coating on TiO 2 -ZnO provided full coverage, so that the underlying nZnO was not exposed.
One of the concerns regarding the incubation of the partially made composite in 3SBF was that, while a HA layer might evolve, this could come at the expense of considerable Zn leaching from the material surface. This was not the case, as shown in Figure 3. In Figure 3A, the EDS measurements of the composite before and after incubation in the 3SBF media are shown. While there was a loss of some Zn from the surface as measured by EDS, this was only about one-fifth of the total Zn present, regardless of the previous annealing temperature. In terms of total Zn metal lost to the external medium (Figure 3B; one-way ANOVA, P<0.05), there was a clear relationship with the annealing temperature in the nZnO addition step, with the highest temperatures resulting in the least Zn leaching. The 3SBF media showed the expected trend of decreasing Ca and P concentrations following the incubation (Figure 3C), consistent with ion adsorption to the surface during HA formation on the composite. The samples at the highest annealing temperature for the nZnO coating addition resulted in the greatest decreases in Ca concentration in the 3SBF media (one-way ANOVA, P<0.05).
The visual observations of the final surface morphology in Figure 1 were confirmed by surface roughness measurements (Figure 3D). The presence of nZnO on the coating increased the roughness, compared to the TiO 2 nanotube coating alone (one-way ANOVA, P<0.05). The annealing temperature at the nZnO addition step of the synthesis also influenced the final surface roughness, with the greatest roughness values associated with the highest annealing temperatures (one-way ANOVA, P<0.05). However, the final step of HA addition tended to decrease the surface roughness of each composite (one-way ANOVA, P<0.05; Figure 3D). In this study, only the coatings were characterized, with the aim of testing their antimicrobial activity. The effect of the coatings on the base alloy was not investigated. However, the total coating thickness is below 1 µm, so no significant effect on mechanical properties is expected.
For the logistics of biological testing, one "best" composite had to be selected for experimental work. After considering all the characterization information, TiO 2 -ZnO/350 and TiO 2 -ZnO-HA/350 were chosen as the coated samples to be taken forward for further testing. These were selected on the basis that the nZnO coating was uniformly structured and covered the whole surface, and, while the deposition of HA was also good, the gaps in the HA would allow some direct access to the biocidal nZnO coating. Subsequently, further batches of Ti-6Al-4V discs coated with the composite using the 350°C annealing temperature were prepared. The composites were then sterilized under 36.42-40.72 kGy gamma radiation (Becton, Dickinson and Company, Swindon, UK), as we have done previously with nanocoated Ti-6Al-4V alloys. 19
Dialysis experiment and the release of dissolved metal
This experiment was conducted to aid the interpretation of the biological experiments with respect to toxicity due to the presence of dissolved Zn, and also to inform on the stability of the coatings in the SBF. The dialysis experiments were conducted according to Besinis et al, 44 with the SBF adjusted to pH 7.4 with a few drops of 1 mol/L HCl. Experiments were conducted in triplicate at room temperature in glassware previously acid washed (5% nitric acid) and rinsed with deionized water. Dialysis tubing (MW cutoff, 12,000 Da; Sigma Aldrich, UK) was cut into 7 cm×2.5 cm lengths, sealed at one end using a Mediclip, and then loaded with one of the coated discs, as appropriate, together with 7 mL of SBF. The dialysis bag was closed with another Mediclip and suspended in a 500 mL pyrex beaker containing 243 mL of SBF (ie, a total volume of 250 mL). The beakers were gently stirred throughout and maintained at 37°C, and 4 mL aliquots of the SBF were collected from the external compartment of the beaker at 0, 0.5, 1, 2, 3, 4, 6, 8, and 24 hrs. The SBF samples were acidified with a drop of 70 wt% nitric acid and stored for metal analysis (see below). At the end of the 24 hrs, the dialysis bags were also carefully opened and 4 mL of the fluid therein was collected for metal analysis. Dialysis curves were plotted using SigmaPlot 13.0 (Systat Software, Inc.), after deducting the background ionic concentrations of the SBF. A first-order rectangular hyperbola function was fitted to the raw data, and the maximum initial slope of each curve gave the maximum apparent dissolution rate of each substance.
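The curve fitting described here is a two-parameter nonlinear regression. A minimal sketch of the same analysis in Python is given below (the study itself used SigmaPlot; the concentration values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbola(t, c_max, k):
    """First-order rectangular hyperbola: plateau c_max, half-time k (hrs)."""
    return c_max * t / (k + t)

t = np.array([0, 0.5, 1, 2, 3, 4, 6, 8, 24.0])     # sampling times (hrs)
zn = np.array([0, 4.1, 7.0, 10.8, 13.2, 14.9,      # background-corrected Zn
               17.3, 18.6, 21.9])                  # data (µg/L), hypothetical

(c_max, k), _ = curve_fit(hyperbola, t, zn, p0=[20.0, 2.0])
initial_slope = c_max / k   # µg/L per hr: the maximum apparent dissolution rate
print(f"Cmax={c_max:.1f} µg/L, K={k:.2f} hrs, slope={initial_slope:.2f} µg/L/hr")
```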
Plate preparation and exposure to S. aureus
The experimental design involved exposing S. aureus to the coated samples of TiO 2 -ZnO/350 and TiO 2 -ZnO-HA/350 in 24-well, flat-bottom sterile polystyrene plates (Thermo Fisher Scientific, Loughborough, UK). TiO 2 NT-coated discs were used as a control for the coating effect. Zinc chloride was used as a metal salt control for any possible dissolved zinc effect from the nZnO. S. aureus was also allowed to grow on its own as a negative control (ie, no biomaterials present). Nine replicate runs were conducted for each type of coated sample and the various controls (n=6 for biochemical assays and n=3 for SEM). Following the approach of Besinis et al, 44 the materials were exposed to S. aureus for 24 hrs and the proportion of live to dead cells and the amount of lactate produced were evaluated (see below). The concentration of total dissolved zinc, calcium, and phosphorus released from the coating into the SBF was also measured (see metal analysis, below). S. aureus was chosen as it is considered to be one of the main causes of infection in orthopedic and dental implants. 38,45 S. aureus was cultured in brain heart infusion (BHI) broth (Lab M Ltd, Bury, UK) at 37°C. A bacterial suspension with an optical density of 0.018 at 595 nm (Genesys 20 spectrophotometer, Fisher Scientific), corresponding to a concentration of 1×10 7 cells/mL, was prepared in the BHI broth. For the experiments, 2 mL of the bacterial culture was pipetted into each well of a 24-well plate containing TiO 2 NTs, TiO 2 -ZnO/350, TiO 2 -ZnO-HA/350, ZnCl 2 (1 mmol/L), or nZnO dispersed in ultrapure deionized water on its own (n=9 replicates of each). A zinc concentration of 1 mmol/L was used for the positive controls, as this reflected the maximum amount of zinc released from the coatings. The 24-well microplates were then incubated at 37°C on a shaking table. At the end of the overnight exposure, six of the replicate plates were used for biochemistry. An aliquot (1 mL) of the supernatant from each well was collected for the LIVE/DEAD® kit and lactate production assays (see below). The remaining supernatant was acidified with 70 wt% HNO 3 and used for metal determination. Then, the remaining adherent bacterial pellets were collected using the same protocol as Besinis et al, 19 whereby the samples from the wells were sonicated (12 MHz) for 60 s in 2 mL of sterile saline to remove the attached bacteria from the discs. Then, 1 mL of the resulting suspension was allowed to grow in 5 mL of BHI broth for 5 hrs at 37°C on a shaking table, with the aim of increasing the number of live cells so that they could be readily measured with the LIVE/DEAD assay. The viability of the cells and the amount of lactate in the suspension were also assessed, followed by measurement of the ionic composition of the suspension. For the remaining three replicates, the supernatant was removed and the samples were prepared for electron microscopy (see below).
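The inoculum preparation relies on a calibration between optical density and cell density (OD 0.018 at 595 nm ≈ 1×10 7 cells/mL). A small sketch of that arithmetic, assuming OD scales linearly with cell density in this range:

```python
OD_TARGET = 0.018              # target OD at 595 nm
CELLS_AT_TARGET = 1e7          # cells/mL at that OD (calibration from the text)

def dilution_factor(od_overnight: float) -> float:
    """Fold-dilution to bring an overnight culture down to the target OD."""
    return od_overnight / OD_TARGET

def cells_per_ml(od: float) -> float:
    """Approximate cell density, assuming OD is proportional to density."""
    return od / OD_TARGET * CELLS_AT_TARGET

print(dilution_factor(1.8))            # hypothetical overnight OD -> 100-fold
print(f"{cells_per_ml(0.018):.0e}")    # 1e+07 cells/mL at the target OD
```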
Cell viability
The cell viability of S. aureus in both the supernatant and the incubated cell suspension from all of the relevant treatments and controls was assessed using the L7012 LIVE/DEAD® BacLight™ Kit (Invitrogen Ltd, Paisley, UK). Briefly, 100 μL of the supernatant and 100 μL of the incubated homogenate from each replicate of the different treatments were transferred to V-bottom 96-well microplates (Corning, UK). The microplates were centrifuged at 4,000 rpm for 10 mins in a 2,040 Rotors microplate centrifuge (Centurion Scientific Ltd, Chichester, UK) to pellet the bacteria, after which the pellets in each well were washed with 1 mL of sterile NaCl saline. The pellets were centrifuged again at 4,000 rpm for another 10 mins, and the final washed pellets were resuspended in 1 mL of saline. Then, 100 μL of the final suspension from each well was pipetted into a flat-bottom 96-well microplate for fluorimetry. Next, 100 μL of the freshly prepared dyes from the LIVE/DEAD kit was added to those wells and mixed thoroughly. The microplate was incubated in the dark at room temperature for 15 mins, after which the fluorescence of the wells was immediately measured on a Cytofluor II fluorescence plate reader (PerSeptive Biosystems, Inc., Framingham, MA, USA) at an excitation wavelength of 485 nm and emission wavelengths of 530 nm and 645 nm. The readings at 530 nm were divided by the readings at 645 nm in order to obtain the proportion of live to dead cells in the supernatant and the incubated cell suspension from the different samples and controls, according to the kit instructions.
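The readout arithmetic is a simple ratio of green to red fluorescence; the kit instructions convert this ratio to a percentage of live cells via a standard curve of known live:dead mixtures. A minimal sketch with hypothetical calibration and well values (the L7012 kit pairs SYTO 9, green/live, with propidium iodide, red/dead):

```python
import numpy as np

def green_red_ratio(f530, f645):
    """Ratio of 530 nm (green, live) to 645 nm (red, dead) fluorescence."""
    return np.asarray(f530) / np.asarray(f645)

def percent_live(ratio, std_ratios, std_percent):
    """Interpolate % live cells from a calibration curve of known mixtures."""
    return np.interp(ratio, std_ratios, std_percent)

# Hypothetical calibration from 0%, 50%, and 100% live standards:
std_ratios = np.array([0.2, 1.0, 2.5])
std_percent = np.array([0.0, 50.0, 100.0])

wells_530 = [1200.0, 300.0]   # hypothetical green readings
wells_645 = [600.0, 900.0]    # hypothetical red readings
print(percent_live(green_red_ratio(wells_530, wells_645), std_ratios, std_percent))
```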
Lactate production
The metabolic activity of S. aureus was assessed by measuring the amount of lactate present in both the supernatant and the incubated cell suspension from the treatments and appropriate controls in the experiment (six replicates of each), using the approach of Besinis et al. 44 The presence of lactate indicates metabolically active bacterial cells. The lactate assay reagent was prepared by pipetting 1 μL of 1,000 units/mL lactate dehydrogenase (Sigma-Aldrich Ltd, St. Louis, MO, USA) into the wells of a flat-bottom 96-well plate, followed by 10 μL of 40 mmol/L nicotinamide adenine dinucleotide (NAD) (Melford Laboratories Ltd, Ipswich, UK) and 200 μL of 0.4 mol/L hydrazine prepared in a glycine buffer of pH 9. Then, 100 μL of the supernatant, or 100 μL of the incubated homogenate as appropriate, was transferred to a V-bottom 96-well microplate and centrifuged at 2,000 rpm for 10 mins to generate a clean supernatant that could be measured for total lactate. Next, 10 μL of these supernatants was added to the 211 μL of lactate assay reagent mixture in the flat-bottom 96-well plate described above. The microplate was then placed in an incubator at 37°C for 2 hrs to allow the lactate dehydrogenase reaction to proceed, and the absorbance was then read at 340 nm.
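In this assay, the lactate dehydrogenase (with excess NAD+ and hydrazine trapping the pyruvate) converts lactate stoichiometrically to NADH, which absorbs at 340 nm, so Beer-Lambert with the literature NADH extinction coefficient recovers the concentration. A sketch, where the effective path length for a 221 µL well is an assumption:

```python
EPSILON_NADH = 6220.0    # L mol^-1 cm^-1 at 340 nm (literature value for NADH)
PATH_CM = 0.6            # effective path for ~221 µL in a 96-well (assumed)
DILUTION = 221.0 / 10.0  # 10 µL sample into a 221 µL total reaction volume

def lactate_mmol_per_l(a340_sample: float, a340_blank: float) -> float:
    """Lactate in the original sample (mmol/L), assuming 1:1 NADH:lactate."""
    nadh_mol_per_l = (a340_sample - a340_blank) / (EPSILON_NADH * PATH_CM)
    return nadh_mol_per_l * DILUTION * 1000.0

print(f"{lactate_mmol_per_l(0.45, 0.05):.2f} mmol/L")  # hypothetical readings
```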
Metal analysis following S. aureus exposure
The exposed broth and the detached bacteria were analyzed for zinc, calcium, and phosphorus composition. After the exposure to S. aureus, 1 mL of the broth or the detached bacteria was diluted with Milli-Q water to a final volume of 5 mL. Subsequently, the samples were acidified with two drops of 70 wt% nitric acid to prevent Zn adsorption to the test tubes during storage. Total Zn concentrations were determined by inductively coupled plasma mass spectrometry (ICP-MS, Thermo Scientific XSeries 2, Hemel Hempstead, UK), whereas the total Ca and P concentrations were analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES, Varian 725-ES, Melbourne, Australia). Calibrations for both ICP-OES and ICP-MS were performed with matrix-matched analytical grade standards. For ICP-MS, the standards and samples contained internal references (0.5%, 0.25%, and 1% of iridium) for SBF, broth, and any homogenates made from bacteria. In the complex matrix of broth and SBF, the detection limit was around 0.003 µg/L for zinc by ICP-MS, and 5 µg/L for calcium and 40 µg/L for phosphorus by ICP-OES.
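Measurements near these detection limits need a censoring rule before statistics. A small helper illustrating one common convention (substituting LOD/2 for sub-LOD values; this is an illustrative choice, not a method stated in the text):

```python
LOD = {"Zn": 0.003, "Ca": 5.0, "P": 40.0}   # detection limits (µg/L), from above

def censor(values, element):
    """Replace measurements below the detection limit with LOD/2."""
    lod = LOD[element]
    return [v if v >= lod else lod / 2.0 for v in values]

print(censor([0.001, 0.05, 0.12], "Zn"))    # hypothetical Zn results (µg/L)
```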
Imaging of the attached S. aureus
The remaining three replicates of the control, TiO 2 , TiO 2 -ZnO/350, TiO 2 -ZnO-HA/350, ZnCl 2 , and nZnO-alone treatments were used for imaging under high-resolution SEM, with the aim of visually confirming the attachment of S. aureus to the different surfaces. After the 24-hr exposure to S. aureus, the supernatants from the 24-well plates were removed, after which the plates were washed twice with sterile saline (0.85 wt% NaCl). Then, 2 mL of 3 wt% glutaraldehyde in 0.1 mol/L cacodylate buffer was added to each well and left overnight at 4°C. The next day, the glutaraldehyde was removed and the samples were washed with 0.1 mol/L cacodylate buffer. An ascending ethanol series (30%, 50%, 70%, 90%, and 100%) was used for serial dehydration of the samples as appropriate. The samples were then coated with carbon for viewing under a JEOL7001F SEM. Once in the microscope vacuum chamber, each sample was viewed at three different random locations (ie, 3 images of each specimen × 3 replicate samples). Care was taken to systematically photograph the specimens at the same magnifications, and a ×1,000 magnification was used to explore the extent of coverage of the surface with S. aureus.
Statistical analysis
The data from the cell viability assay, the lactate production assay, and the ionic concentration measurements were analyzed using Statgraphics Centurion XVII (StatPoint Technologies, Inc.). After descriptive statistics, data were checked for normality and for equal variances (Levene's test). Parametric data were analyzed for treatment or time effects using one-way ANOVA with Fisher's least significant difference (LSD) test post hoc. In cases of unequal variances, the data were transformed before analysis, and where the data remained nonparametric, the Kruskal-Wallis test was used. Data are presented as mean ± SEM unless otherwise stated. The default 95% confidence level was used for all statistics.
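The same decision flow is easy to reproduce with scipy.stats, as sketched below (the study used Statgraphics; Fisher's LSD is not built into scipy and would be done as pairwise t-tests on the ANOVA's pooled error; the data are hypothetical):

```python
from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Equal-variance check, then one-way ANOVA or Kruskal-Wallis fallback."""
    _, p_levene = stats.levene(*groups)       # Levene's test for equal variances
    pooled = [x for g in groups for x in g]
    _, p_norm = stats.shapiro(pooled)         # simplified normality check
    if p_levene > alpha and p_norm > alpha:
        stat, p = stats.f_oneway(*groups)     # parametric route
        return "one-way ANOVA", stat, p
    stat, p = stats.kruskal(*groups)          # nonparametric route
    return "Kruskal-Wallis", stat, p

# Hypothetical viability data (%) for three treatments, n=6 each:
a = [100, 98, 103, 97, 101, 99]
b = [63, 60, 66, 61, 65, 62]
c = [6, 4, 8, 5, 7, 6]
print(compare_groups(a, b, c))
```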
Dialysis experiment and the stability of coatings
Prior to the analysis of the release of zinc from the coatings in the presence of S. aureus (see below), the dissolution of apparent total Zn from the coatings was analyzed in the presence of SBF, to aid the interpretation of the bioassays and to inform on the stability of the coatings. The results are reported in Figure 4. The total concentration of Zn in the beakers from the samples without any added zinc was minimal, as expected. For the coatings containing Zn, there was an exponential rise in the total Zn concentration in the external compartment of the dialysis bag, with significantly less total Zn released from the TiO 2 -ZnO-HA/350 coating (a maximum of 22.6±0.9 µg/L) than from the TiO 2 -ZnO/350 coating (Kruskal-Wallis, P<0.05; n=6). This suggests the HA coating is impeding the release of total Zn into the media, but nonetheless, this was still enough to be biocidal (see Cell morphology and survival section).
Cell morphology and survival
Specimens from the controls and treatments were examined for the abundance and morphology of the bacteria by electron microscopy at the end of the experiment, with representative images shown in Figure 6. The bacteria in the wells without any discs (ie, a control grown directly on the plastic of the culture plate) survived and grew over the whole surface of each well, as expected (Figure 6A). The bacteria cultured on the TiO 2 NT discs also grew over the whole surface of the discs, although slightly less densely than on the plastic control (Figure 6B). In contrast, the treatments with either the zinc salt alone (Figure 6C) or nZnO alone (Figure 6D) showed very few bacteria, indicating that both treatments were strongly biocidal. The discs coated with TiO 2 -ZnO/350 or TiO 2 -ZnO-HA/350 also had much reduced coverage of attached bacterial cells compared to the discs coated with TiO 2 NTs alone, showing that the composites retained antimicrobial properties with or without the HA present. Electron microscopy alone can only determine the presence of bacteria, not whether the organisms are alive or dead. The viability of S. aureus was therefore analyzed using the L7012 LIVE/DEAD® BacLight™ Kit after 24 hrs of exposure to the composite materials or the appropriate controls. Bacterial cell viability was determined for the bacteria firmly attached to the substrate (Figure 7A) and those still present in the overlying broth (Figure 7B). The lactate production by the bacteria was also measured in homogenates from the biofilm (Figure 7C) and from bacteria in the overlying media (Figure 7D) to confirm that the cells had some metabolic activity. Overall, the results of the LIVE/DEAD assay and lactate production (Figure 7) reflected the morphological observations (Figure 6). In the unexposed control, as expected for plastic culture wells and in keeping with the electron microscopy observations, the attached biofilm had excellent viability of 100±3% (mean ± SEM, n=6), with lower viability among the cells in the overlying broth (72±3%; Kruskal-Wallis, P<0.05, n=6). Both the attached cells and those remaining suspended in the broth showed readily measurable lactate production (Figure 7C and D), with more in the attached microbes, as expected (Kruskal-Wallis, P<0.05, n=6). Bacteria attached to the TiO 2 NTs (63±3%), or in the broth overlying the TiO 2 NTs (38±2%), showed slightly lower viability than the plastic plate controls (statistically significant for each; Kruskal-Wallis, P<0.05, n=6), but both showed lactate production similar to their respective controls. This indicated that the cells observed on the TiO 2 NTs (Figure 6) were mostly alive and metabolically active. In contrast, most of the bacteria in the overlying broth, from either the zinc salt or the zinc oxide nanoparticles alone, were dead (6±2% and 2±0% alive, respectively), and this was reflected in low lactate production (Figure 7D). The attached cells from the ZnCl 2 treatment fared better, with 50±2% survival, but those from the nZnO treatment did not (only 1% survival; Figure 7A and C). Indeed, in terms of lactate production by either the attached biofilm or the overlying broth, the TiO 2 -ZnO/350 discs were as effective as nZnO alone (Kruskal-Wallis, P>0.05, n=6).
The addition of HA was slightly less effective, with the TiO 2 -ZnO-HA/350 treatment showing a little more lactate production in the attached biofilm, although this remained strongly inhibited compared to either the unexposed controls or the TiO 2 NTs alone (Figure 7C).
Discussion
Overall, this study aimed to make a biomaterial that was durable and capable of being decorated with nano-HA to impart potential biocompatibility with human bone. Both of these features were considered with the safety requirements of medical implants in mind. The material was also designed to offer antimicrobial properties through the addition of zinc. This was achieved by using TiO 2 NTs as a scaffold on which to grow nZnO, with the subsequent annealing ensuring that the nZnO particles remained attached. The material, with or without HA present, showed a slow, beneficial dissolution of Zn in SBF that was also biocidal to S. aureus, one of the pathogens known to be a concern during implant surgery. The biocidal nature was confirmed by the poor coverage of bacteria on the biomaterial, reduced bacterial survival, and low lactate production by those microbes remaining.
Advantages and stability of ZnO nanocoatings
Although TiO 2 NTs on the surface of titanium alloy have been successfully used as a platform for the attachment of antibiotics to deliver some antimicrobial properties to bone implant material, 46 this approach is problematic because the antibiotics, as organic compounds, will inevitably be degraded and thus offer only transient protection. There is also the concern of antibiotic resistance. The approach here, using Zn as a biocide, therefore offers some advantages. An initial concentration of 0.075 mol/L zinc nitrate as the source of zinc and 0.1 mol/L hexamethylenetetramine successfully yielded nZnO particles on the surface of the TiO 2 NTs (Figure 1). The morphology of the TiO 2 NTs and the decoration with nZnO were similar to those reported by Liu et al 42 using the same concentration of zinc nitrate. In the latter study, the resulting nZnO had a thinner structure compared to that in the present study, where the annealing modified the morphology of the as-grown ZnO nanocoating to a more spherical shape and altered the size of the nZnO (Figure 1). The ability of annealing temperatures to alter the size of nZnO has been reported previously 33 and was associated with alteration of the crystal size and a reduced number of zinc vacancies in the annealed zinc oxide. The annealing process is intended to improve the bonding of the nZnO (and any subsequent HA) to the relevant substrate, as well as to influence the size of the resulting crystals. Increasing the temperature has also been shown to improve the stoichiometry of the nZnO crystals. 33 In this study, temperatures higher than 350°C caused the crystals of nZnO to merge with each other, reducing the porous structure of the coating (Figure 1). This change, in turn, resulted in bigger crystals of HA forming on the surface of the nZnO (Figure 2). This might be regarded as clinically beneficial for a bone implant material, as better HA coverage is known to increase biocompatibility with osteoblasts. 47 There was also some loss of Ca and P from the 3× concentrated SBF in the presence of the coatings (Figure 3), and this is likely due to adsorption to the coating surface and may, in the case of HA, also contribute to crystal growth. However, the Ca and P were also labile, and in the dialysis experiments with freshly prepared SBF, some Ca and P were leached into the medium (Figure 4). Regardless of the detailed mechanisms involved in crystal formation, the TiO 2 -ZnO and TiO 2 -ZnO-HA combinations made with an annealing temperature of 350°C were selected for antimicrobial testing, partly on the basis of the results from the incubations in SBF (Figure 3). These experiments confirmed that µg/L concentrations of total Zn were released from the coating in SBF and that, of the annealing temperatures used, 350°C achieved the best release (Figure 3B), but with much of the original Zn remaining on the coating, as measured by EDS (Figure 3A). This indicated a slow release of zinc into a biologically relevant medium, whilst maintaining the coating integrity in terms of elemental composition and surface roughness. The cause of the apparent total Zn release into the SBF could, in theory, be either slight erosion of the coating such that intact nZnO particles were released into the media, or, more likely, the release of dissolved Zn ions by dissolution of the nZnO particles attached to the TiO 2 NTs. The dialysis experiments (Figure 4) confirmed the latter.
The dialysis tubing (measured pore size <2 nm) enabled only the apparent soluble Zn fraction to diffuse into the external compartment of the beakers, and this achieved equilibrium following a rectangular hyperbola, as expected for solutes (Figure 4). The dissolution of nZnO particles in biological fluids is well known, and nZnO is sparingly soluble, such that usually a few µg/L of Zn 2+ ions are released over 24 hrs, depending on the details of the material synthesis and the media composition. 48 Similarly, in SBF, µg/L concentrations of apparent total dissolved Zn were released during the dialysis experiments (Figure 4). Less Zn was released by dissolution from the TiO 2 -ZnO-HA/350 coating than from the TiO 2 -ZnO/350 coating, likely because the HA top coat limited the area of nZnO accessible to the media. Nonetheless, even in the presence of HA on the coating, the maximum release rate of total Zn was 2.6 µg/hr, broadly comparable to other biocidal nanomaterials such as Ag NPs. 44 This release of Zn was biocidal to S. aureus (see Antibacterial properties section).
Antibacterial properties
TiO 2 NTs on medical grade titanium alloy provide a nanoscale surface that may better support the osseointegration of bone implants compared to titanium alloy alone, but the TiO 2 NTs do not have any inherent antibacterial properties. 42,49 This was also the case in the present study: although S. aureus grew less well on the TiO 2 NTs compared to directly on the cell culture plate, there was still >80% viability of the bacteria, with no effects on lactate production (Figure 7). In contrast, both the positive control of 1 mmol/L ZnCl 2 and the nZnO particles alone effectively killed the bacteria (Figure 7). The toxicity of ZnCl 2 to S. aureus is expected, with 2 mmol/L of Zn or much less reported to cause complete growth inhibition, depending on the strain of organism used, 50 and zinc is generally biocidal to microbes in the 1-10 mmol/L range, depending on salinity and temperature. 51 Zinc oxide nanoparticles have also been shown to effectively inhibit the growth of microbes such as E. coli and S. mutans. 23,24,42 There are fewer reports of nZnO toxicity to S. aureus, although concentrations of around 100 mg/L of nZnO are reported to cause growth inhibition. 52 Nonetheless, under the experimental conditions used here, the measured 354 µg/L of Zn as nZnO in the broth was an effective biocide (Figures 6 and 7) and was more effective than the ZnCl 2 treatment (Figure 7). The mechanism by which nZnO causes toxicity to microbes is not fully understood, but it could involve free ion toxicity derived from dissolution of the metal, or direct contact toxicity of the particle on the exterior surface of the microbe, as has been suggested for Ag NPs. 53 The dialysis experiments confirmed the dissolution of Zn (Figure 4), and hence Zn was detected in the SBF (Figure 3).
However, the key concern was whether or not the nZnO was toxic to the microbes when present in the coating. Both the TiO 2 -ZnO/350 and TiO 2 -ZnO-HA/350 coatings caused growth inhibition of S. aureus. There was less coverage of bacteria on these coatings (Figure 6) and fewer live bacteria present (Figure 7) compared to the TiO 2 NT coating alone. About 80% of the bacterial cells died in the presence of TiO 2 -ZnO/350 (Figure 7B). This is in agreement with the findings of Liu et al 42 and Roguska et al 29 in similar studies with nZnO coatings. However, in the present study, the distribution of the nZnO was more uniform and more stable due to the annealing process, arguably making the coating more efficient. Regardless, the addition of nano-HA over the nZnO slightly reduced the antibacterial effect of the coating. This might be expected, as the HA provides a barrier that could prevent direct contact of the bacteria with the underlying nZnO. For example, the gaps in the HA coverage were of the order of 200-300 nm at most (Figure 2), and yet the bacteria are around 2 µm long (Figure 6). The addition of HA did reduce the Zn dissolution from the coating (Figure 4), perhaps because the exposed surface area of nZnO was reduced.
The chemical reaction between HA and nZnO also has to be taken into consideration, as zinc can attach or be adsorbed onto the HA particles, and hence decrease the apparent Zn leaching from the coating. 54 However, the small gaps in the HA structure did allow some Zn dissolution (Figure 4), at a maximum dissolution rate of 4.35 µg/hr. The latter would roughly equate to the release of about 100 µg of Zn over the first day into a small volume (a few milliliters) around the point of surgery in a patient. Given that the MIC for dissolved Zn is around 1 mmol/L (or 65 µg/mL), this would represent a desirable slow release of antibacterial Zn in the patient during and immediately after surgery, when the infection risk is greatest.
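A worked check of this arithmetic, comparing the cumulative release at the maximum dissolution rate with the quoted MIC in a few hypothetical local fluid volumes (ignoring clearance, which is an assumption):

```python
RATE_UG_PER_HR = 4.35    # maximum dissolution rate from the dialysis data
MIC_UG_PER_ML = 65.0     # ~1 mmol/L dissolved Zn (Zn molar mass ~65.4 g/mol)

released_24h = RATE_UG_PER_HR * 24            # ≈ 104 µg, ie "about 100 µg"
for volume_ml in (1.0, 3.0, 5.0):             # hypothetical local volumes (mL)
    conc = released_24h / volume_ml           # µg/mL if none is cleared
    print(f"{volume_ml:.0f} mL: {conc:.0f} µg/mL ({conc / MIC_UG_PER_ML:.1f}x MIC)")
```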
However, the coating containing nZnO did not completely kill the bacteria; the TiO 2 -ZnO-HA/350 caused about 60% mortality of the bacteria present on the coating (Figure 7A). Other cations in the media, such as calcium, can compete with dissolved zinc for uptake into cells, 55 and the relatively high cation concentrations in the SBF may have offered the bacteria some protection from Zn exposure. The nZnO coating was also much less effective as a biocide than Ag NPs attached to TiO 2 NTs. For example, on the same TiO 2 NTs used here, Gunputh et al 56 showed that the addition of Ag NPs killed >90% of the S. aureus present. This is not surprising: the exceptional toxicity of Ag is associated with its ability to bind to -SH groups in proteins, 53 and, silver being a non-essential metal, there is no endogenous ion transport system in microbes to regulate intracellular concentrations of silver or prevent bioaccumulation. These same features of Ag make it less desirable for use in patients compared to zinc. Clearly, the moderate biocidal properties of nZnO coatings need to be weighed alongside the benefits of Zn as an essential metal that is inherently less hazardous than silver for humans in the long term.
Conclusions
In conclusion, a composite coating was successfully synthesized with a uniform distribution of nZnO on TiO 2 NTs. The coating appeared stable in SBF over 24 hrs, and the dialysis experiments showed a slow, beneficial release of dissolved Zn. The addition of nano-HA maintained the roughness and nanostructure of the coating, but still enabled an antimicrobial Zn release from the material, which was effective at killing around 60% of the S. aureus attached to the coating. Further work is needed to confirm the antibacterial properties of the coatings against other common causes of orthopedic and dental implant infections, such as Streptococcus mutans and E. coli, to understand the biocompatibility of the coating with human osteoblasts, and to establish whether less HA can be used to further improve the antimicrobial performance.
Effect of Kegel Pelvic Floor Muscle Exercise Combined with Clean Intermittent Self-catheterization on Urinary Retention after Radical Hysterectomy for Cervical Cancer
Objectives: To investigate the effect of Kegel pelvic floor muscle training combined with clean intermittent self-catheterization in patients with cervical cancer, and to analyze the risk factors affecting urinary retention. Methods: A total of 166 patients with cervical cancer admitted to our hospital from October 2016 to December 2019, all of whom received radical resection of cervical cancer, were divided into two groups according to the random number table method: the observation group and the control group, with 83 cases in each group. The control group underwent clean intermittent self-catheterization, while the observation group underwent Kegel pelvic floor muscle exercise combined with clean intermittent self-catheterization. The catheter replacement rate, bladder residual urine volume, self-perceived burden (SPB) scale score, Kolcaba general comfort questionnaire (GCQ) score, incidence of urinary tract infection, and urinary retention after catheter removal were compared between the two groups. Logistic regression analysis was used to identify the risk factors affecting urinary retention. Results: The incidence of catheter replacement, urinary retention, and dysuria and the bladder residual urine volume in the observation group were significantly lower than those in the control group (P<0.05). Postoperative SPB scores of both groups decreased significantly, while the GCQ scores increased significantly. The postoperative SPB score of the observation group was significantly lower than that of the control group, while the GCQ score was significantly higher (P<0.05). Statistically significant differences were observed between the two groups in catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin (P<0.05). Logistic regression analysis showed that catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin were independent risk factors affecting urinary retention (P<0.05). Conclusions: Catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin are risk factors for postoperative urinary retention in patients with cervical cancer. With Kegel pelvic floor muscle exercise combined with clean intermittent self-catheterization, a variety of benefits can be realized, such as improved bladder function, a reduced incidence of urinary tract infections and urinary retention, and increased patient comfort.
INTRODUCTION
Cervical cancer, a common gynecological malignant tumor, is second only to breast cancer in the incidence of gynecological tumors. Early cervical cancer presents no typical symptoms and is easily overlooked, leading to invasion and metastasis. 1,2 Currently, radical resection of cervical cancer is the principal means of treating early cervical cancer (stage IA to IIA). During surgical resection, the pelvic autonomic nerve can be damaged and part of the nerve supply to the bladder cut off, leading to bladder contraction and sensory dysfunction, as well as urination disorders and urinary retention. 3 After the urinary catheter is removed, the patient may be unable to urinate successfully, which can lead to urinary tract infection and, in severe cases, renal insufficiency. In the process of clinical nursing, patients are usually instructed by nurses to carry out intermittent clamping training about three days prior to catheter removal to ensure smooth urination after catheter removal. However, practice shows that intermittent clamping training alone is not effective, with unsatisfactory bladder recovery for patients. For this reason, new nursing methods are urgently needed in the clinic to address bladder function recovery and urinary retention after radical resection of cervical cancer. 3,4 With clean intermittent self-catheterization, patients can remain relatively free of an indwelling catheter and maintain a moderate intravesical pressure, which is conducive to reducing the risk of complications and to the recovery of bladder function; this method is therefore widely applied in clinical practice. The pelvic floor muscles surround the urethra, vagina, etc., support the pelvic and abdominal organs, and have a close bearing on urination. Some studies have observed that pelvic floor muscle exercise offers various benefits, such as improving pelvic floor muscle contraction and diastolic tension, enhancing urinary continence, and accelerating the recovery of bladder function. 5,6 There are few reports on the combined effect of pelvic floor muscle exercise and clean intermittent self-catheterization in patients with cervical cancer after surgery. The purpose of this study was to investigate the effects of Kegel pelvic floor exercise combined with clean intermittent self-catheterization on postoperative patients with cervical cancer, and to elucidate the risk factors for urinary retention.
METHODS
A total of 166 patients with cervical cancer admitted to our hospital from October 2016 to December 2019, all of whom received radical resection of cervical cancer, were divided into two groups according to the random number table method: the observation group and the control group, with 83 cases in each group.
Inclusion criteria:
• Patients who underwent radical resection of cervical cancer, with postoperative catheter indwelling of no less than three days;
• Patients with reading, writing, and cognitive capabilities, able to independently complete the postoperative questionnaire.
Exclusion criteria:
• Patients with preoperative urinary system infections, bladder tumors, urinary calculi, or other urinary system diseases;
• Patients with dysfunction of the heart, kidney, liver, or other important organs;
• Patients who received radiotherapy or chemotherapy after surgery.
Nursing methods: Patients in the control group received clean intermittent self-catheterization. The specific process was as follows. Patients cleaned their hands and urethral orifice under running water according to standard procedures, then picked up the rear end of the catheter and moistened the area near the tip with warm boiled water. Patients then slowly inserted the catheter into the urethra with the thumb and forefinger until urine flowed out, and continued to insert it a further 1-2 cm. The bladder was gently pressed after the outflow stopped, and the catheter was then slowly withdrawn after confirming that no more urine flowed out. After the procedure, the catheter was rinsed with cold water, dried in the shade, and stored in a clean storage box. The time interval for catheterization was set according to the patient's residual urine volume, usually every 4-6 hours, 4-6 times a day. Patients were instructed to lie on their sides, turn over, and get out of bed regularly, and received education on urinary incontinence, urinary retention, and other related complications, covering the replacement of the drainage bag, the maintenance of catheter patency, drinking instructions, and clamping and catheter removal times. At the same time, the responsible nurses were also required to provide psychological care and guidance to the patients and to pay close attention to any abnormal psychological states (depression, anxiety, etc.).
Patients in the observation group underwent Kegel pelvic floor muscle exercise in addition to clean intermittent self-catheterization; that is, patients were instructed to perform contraction training of the abdominal muscles, vulva, and pelvic floor muscles. Patients were instructed in the procedure of Kegel pelvic floor muscle exercise three days before surgery, and diastolic and contractile exercises of the vagina, urethra, and anal sphincter were performed from the fourth day after surgery while lying in bed. Patients lay supine with legs flexed and apart. When inhaling, the perineum and anus were contracted forcibly for about 10 s, and when exhaling, they were relaxed for about 10 s. These actions were repeated for about 20 minutes, with an interval of 5-10 s between repetitions, three times a day. During hospitalization, the responsible nurse provided regular guidance and training. After discharge, regular telephone follow-up was conducted until 14 days after surgery.
Self-perceived burden:
The self-perceived burden (SPB) scale 8 was adopted for evaluation; the higher the score, the more serious the self-perceived burden. The Kolcaba general comfort questionnaire (GCQ) 9 was also adopted, which covers four dimensions: physiological, psychospiritual, sociocultural, and environmental. Items were scored on a 4-point Likert scale from 1 to 4 points, where 4 points indicates "strongly agree" and 1 point indicates "strongly disagree"; the higher the score, the greater the comfort.
Four hours after the catheter was removed, the patient was instructed to empty the bladder, and the residual urine volume of the patient's bladder was measured by three-dimensional color ultrasound. The occurrence of complications such as urinary retention, urinary incontinence, urinary catheter reinsertion, and urinary tract infection was recorded. In this study, urinary retention was defined as a bladder residual urine volume greater than 100 mL after forced urination 4 hours after catheter removal. If the residual urine exceeded 100 mL on the day of extubation, the catheter was reinserted; otherwise, the extubation was considered successful. Statistical analysis: All the data of this study were statistically analyzed with SPSS 20.0 software. Count data were expressed as the number of cases (percentage) and analyzed using the χ 2 test; measurement data were expressed as mean ± standard deviation and analyzed using the t test, and data at different time points were compared using repeated-measures analysis of variance. In the univariate analysis, the statistically significant factors were included in the multivariate logistic regression analysis. P<0.05 indicates a statistically significant difference.
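For readers who want to mirror the modeling step outside SPSS, the sketch below shows an equivalent logistic regression in Python with statsmodels. The data are simulated purely so the example runs end-to-end; they are not the study data, and the variable names are hypothetical codings of the four predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 166   # same sample size as the study
df = pd.DataFrame({
    "catheter_days": rng.integers(4, 14, n),      # catheter indwelling time
    "uti": rng.integers(0, 2, n),                 # urinary tract infection
    "incision_infection": rng.integers(0, 2, n),  # surgical incision infection
    "margin_wide": rng.integers(0, 2, n),         # wide surgical margin
})
# Simulated outcome (1 = urinary retention) so the fit is well defined:
lin = (-4 + 0.3 * df["catheter_days"] + 1.2 * df["uti"]
       + 1.0 * df["incision_infection"] + 0.8 * df["margin_wide"])
df["retention"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(df[["catheter_days", "uti", "incision_infection", "margin_wide"]])
fit = sm.Logit(df["retention"], X).fit(disp=False)
print(np.exp(fit.params))   # odds ratios for each candidate risk factor
```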
RESULTS
No significant differences were found in baseline information such as age, duration of surgery, and length of hospital stay between the two groups (P>0.05; Table-I). The incidence of catheter replacement, urinary retention, and dysuria and the bladder residual urine volume in the observation group were significantly lower than those in the control group (P<0.05). Postoperative SPB scores of both groups decreased significantly, while the GCQ scores increased significantly. The postoperative SPB score of the observation group was significantly lower than that of the control group, while the GCQ score was significantly higher (P<0.05; Table-III). Thirty-nine patients developed urinary retention after surgery, and the patients were accordingly divided into two groups: the urinary retention group and the normal group. Statistically significant differences were observed between these two groups in catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin (P<0.05), whereas no significant differences were observed in general data such as age, duration of surgery, and intraoperative blood loss (Table-IV). Logistic regression analysis, taking urinary retention as the dependent variable (1=Yes, 0=No) and catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin as the independent variables, showed that catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin were independent risk factors for urinary retention (P<0.05; Table-V).
DISCUSSION
Radical resection of cervical cancer is prone to various complications due to the large anatomical variation in the surgical site and the large surgical wound area, among which urinary retention is common. For this reason, improving intraoperative skills, reducing nerve damage, and strengthening postoperative nursing care are of great importance for reducing the risk of complications. [10][11][12] Clean intermittent catheterization, as a scientific method of urinary tract management, has been widely applied in clinical practice. When performing clean intermittent catheterization, a urinary catheter is passed through the urethra into the bladder at regular intervals under clean conditions, so as to empty the bladder regularly. 13,14 With clean intermittent self-catheterization, the self-care ability of patients can be improved, and the psychological and physical stress caused by an indwelling catheter can be avoided, which is conducive to the recovery of bladder function and the reduction of the risk of complications (such as infection and urinary incontinence).
When performing clean intermittent self-catheterization, patients are required to strictly control the cleanliness of the environment before catheterization, clean their hands, wash the perineum, and strictly implement a drinking plan to ensure that the bladder is regularly filled and emptied, which is conducive to reducing or preventing urinary tract infections. 15,16 Nevertheless, prolonged postoperative indwelling of the urethral catheter may affect the function of the urethral sphincter, weaken bladder tension and detrusor contraction, increase the pressure of the bladder and urethra during urination, and affect normal urination, thereby increasing the incidence of complications such as urinary retention and dysuria. 17 This study showed that the incidence of catheter replacement, urinary retention, and dysuria in the observation group was significantly lower than that in the control group, indicating that Kegel pelvic floor muscle exercise added to clean intermittent self-catheterization can effectively reduce the risk of complications, which is consistent with the results of previous studies. Catheter replacement and prolonged indwelling of the catheter can have a series of adverse effects on patients, such as undermining patients' confidence in and sensation of voluntary urination, as well as delaying the recovery of bladder function. Kegel pelvic floor muscle exercise, through repeated and autonomous relaxation and contraction of the pelvic floor muscles, can bring about a variety of benefits, such as promoting pelvic floor blood circulation; enhancing pelvic floor muscle tension and the contractile and diastolic ability of the levator ani muscle and distal urethral sphincter; improving urinary continence to avoid the risk of urinary incontinence; and accelerating the recovery of bladder function. 18 The results of this study also show that the bladder residual urine volume in the control group was significantly higher than that in the observation group. The postoperative SPB score of the observation group was significantly lower than that of the control group, while the GCQ score was significantly higher, indicating that Kegel pelvic floor muscle exercise combined with clean intermittent self-catheterization can reduce the mental and psychological burden on patients, prevent the urinary system complications of cervical cancer surgery, and address bladder dysfunction, so as to avoid infection, promote bladder function recovery, protect kidney function, and improve patients' quality of life and nursing satisfaction.
Previous studies have also found that Kegel exercise can significantly improve bladder function, pelvic floor strength, and quality of life in patients with urinary incontinence. 19 Radical surgery for cervical cancer involves important organs such as the ureter, pelvic cavity, and bladder, and may easily damage these organs and their connecting nerves, resulting in urinary retention and seriously affecting the quality of life of patients. Logistic regression analysis in this study showed that catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin are independent risk factors affecting postoperative urinary retention in patients with cervical cancer (P<0.05). The extent of surgical resection has a bearing on neurogenic bladder dysfunction. Extensive hysterectomy requires complete removal of the ligaments around the cervix, resulting in a large surgical margin and a damaged pelvic autonomic nerve. As a result, the innervation of the bladder is affected, the elastic properties of the muscle fibers of the bladder wall are reduced, the sphincter is relaxed, and the bladder may become paralyzed, 20 thus increasing the risk of urinary retention. Prolonged catheter indwelling can not only affect bladder tension and urinary continence, but also increase the risk of urinary tract infection and aggravate dysuria, thus increasing the risk of urinary retention. Surgical incision infection can lead to urinary tract infection, which in turn causes an inflammatory reaction of the detrusor muscle, leading to edema and weakening the tension of the detrusor, thereby easily causing urinary retention. 21 Urinary retention is caused by a variety of factors, so targeted nursing interventions are needed to reduce the risk of its occurrence. First, Kegel pelvic floor muscle exercises, abdominal muscle exercises, etc., can be carried out to enhance the contractile ability of the pelvic floor muscle groups and urinary continence. Second, clean intermittent catheterization care should be performed after surgery, strengthening the cleaning and care of the perineum and urethral orifice. Soft urethral catheters should be selected to avoid urethral mucosal injury, smooth drainage should be maintained, and urinary bags should be replaced regularly. In addition, patients should drink more water to increase urinary excretion. All of these interventions can reduce the risk of urethral infections. Based on the pathophysiological state of the patient, comprehensive nursing can reduce the risk of postoperative urinary retention by selecting the appropriate timing of catheterization and extubation, improving the operative skills of physicians, reducing the resection margin, adopting nerve-sparing resection, and so on.
Limitations of this study:
The number of subjects included in this study is limited, so the conclusions drawn carry limited statistical weight. In addition, there was no statistically significant difference between the two groups in the incidence of urinary incontinence after catheter removal, which may be due to the small number of cases. Larger samples and further data collection will be needed in the future to reach stronger conclusions.
CONCLUSION
Catheter indwelling time, urinary tract infection, surgical incision infection, and surgical margin are risk factors for postoperative urinary retention in patients with cervical cancer. Kegel pelvic floor muscle exercise combined with clean intermittent self-catheterization can deliver a variety of benefits, including improved bladder function, a reduced incidence of urinary tract infections and urinary retention, and increased patient comfort. | 2022-01-21T16:32:54.200Z | 2022-01-19T00:00:00.000 | {
"year": 2022,
"sha1": "4fcb5d9a9a5b896742679f2e0cae681e08639b31",
"oa_license": "CCBY",
"oa_url": "https://pjms.org.pk/index.php/pjms/article/download/4495/1256",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ebaabbe209903e86bddde9b8216541797752d43e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244783306 | pes2o/s2orc | v3-fos-license | Artificial intelligence integrated smartphone fundus camera for screening the glaucomatous optic disc
Dear Editor, In the absence of more definite signs, an increase in the vertical cup disc ratio (VCDR) or its asymmetry is used to screen for suspected glaucoma. However, owing to its subjective nature, VCDR estimation on fundus photography has the inherent disadvantage of interobserver variability, especially when the assessment is done by inexperienced observers. For these reasons, nonmydriatic monoscopic fundus photography (NMFP) of the optic disc has shown a wide range of sensitivity and specificity for detection of glaucomatous cupping, varying from 41% to 97%. [1][2][3] Automated estimation of VCDR by artificial intelligence (AI) can be a solution to this problem.
While there are software packages and algorithms for VCDR assessment from photographs obtained by currently available handheld fundus cameras, none have a VCDR measurement integrated into the device itself. [4][5][6] In this study, we aimed to determine the efficacy of a smartphone-based fundus camera with an integrated, offline, cloud-synced, AI-based assessment of VCDR (Remidio's Fundus on Phone [FOP] NM-10, Bengaluru, India). [7] The study was approved by our institutional ethics committee and followed the tenets of the Declaration of Helsinki. Fifty eyes of 25 consecutive subjects (either normal, glaucoma suspects, or previously diagnosed glaucoma patients) presenting to a glaucoma clinic were evaluated by a single examiner using 90 D slit-lamp biomicroscopy (SLB).
Eyes with media opacities were excluded. VCDR was assessed in three ways: on slit-lamp biomicroscopy, with the help of the inbuilt reticule, by a single (blinded) glaucomatologist; by the integrated AI, using nonmydriatic fundus photos taken on the FOP device; and with the inbuilt software of a tabletop SS-OCT device (Topcon DRI OCT Triton, Topcon Corporation, Tokyo, Japan). The VCDR measurements were compared using Bland-Altman analysis and the intraclass correlation coefficient (ICC). All analyses were performed using a statistical software package (SPSS for Windows, v. 26.0, SPSS Inc., Chicago, IL).
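For readers unfamiliar with these agreement statistics, the sketch below shows how a Bland-Altman bias with limits of agreement and a two-way ICC can be computed for paired VCDR readings. The letter reports using SPSS; the Python code, the pingouin library, and the example measurement values here are purely illustrative assumptions.

```python
# Illustrative computation of Bland-Altman bias/limits of agreement and a
# two-way ICC for paired VCDR readings. The arrays are made-up examples.
import numpy as np
import pandas as pd
import pingouin as pg

vcdr_fop = np.array([0.45, 0.60, 0.72, 0.55, 0.80])  # AI on FOP device
vcdr_oct = np.array([0.55, 0.70, 0.85, 0.70, 0.95])  # SS-OCT software

diff = vcdr_oct - vcdr_fop
bias = diff.mean()                    # mean difference (OCT minus FOP)
half_width = 1.96 * diff.std(ddof=1)  # half-width of 95% limits of agreement
print(f"bias = {bias:.2f}, limits of agreement = "
      f"({bias - half_width:.2f}, {bias + half_width:.2f})")

# The ICC requires long-format data: one row per (eye, device) rating
n = len(vcdr_fop)
long = pd.DataFrame({
    "eye": list(range(n)) * 2,
    "device": ["FOP"] * n + ["OCT"] * n,
    "vcdr": np.concatenate([vcdr_fop, vcdr_oct]),
})
icc = pg.intraclass_corr(data=long, targets="eye", raters="device", ratings="vcdr")
print(icc[["Type", "ICC"]])
```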
Of the 25 subjects, seven were healthy, four were glaucoma suspects, and 14 were confirmed glaucoma patients. Adequate distancing was maintained between the examiner and patients during the procedure in view of the ongoing social distancing norms of the COVID-19 pandemic [Fig. 1a]. The FOP device produced a fundus field of view of 40° and generated the VCDR report in less than 10 seconds. The resolution of the images obtained (3024 × 4032 pixels) was higher than that of currently used handheld fundus cameras and comparable to that of the OCT device [Fig. 1b and c]. [2,3] There was a good correlation between the two devices, with an ICC of 0.86 (Pearson's correlation coefficient 0.76; P < 0.001); however, the OCT estimations of the VCDR were on average higher by 0.14 (CI: 0.04 to −0.32) [Table 1 and Fig. 2].
In studies by Snyder et al. [4] and Muramatsu et al., [6] automated estimation of VCDR from fundus photographs had only moderate agreement with the reference VCDR assessed by expert ophthalmologists. Further, in areas of peripapillary atrophy, the disc margins were overestimated by the automated method. In contrast, we found the AI-mediated VCDR assessment to be more accurate, showing good agreement with the OCT-estimated VCDR. OCT devices are known to provide a higher estimation of the CDR, probably because they utilize Bruch's membrane opening to define the border of the optic disc margin. [8] However, the FOP device correlated better with the VCDR assessed clinically, with an ICC of 0.93 [Table 1].
The use of AI-based VCDR assessment, integrated within the FOP device, obviates the need for external image-based software. Further, being an offline system, this device can be used for screening in remote areas where an active Internet connection is unavailable, especially in developing countries. The cloud syncing feature allows the device to update its database as and when it is connected to the Internet. Apart from its relatively low cost, other advantages of the device include examination of children under anesthesia, instant digital transfer of patients' disc photographs for record-keeping, teleconsultation, and use as a teaching tool. Limitations of this pilot study were the small sample size and the lack of a direct comparison with other handheld fundus cameras. Notwithstanding these, we believe this particular handheld fundus camera can be used for evaluation of the disc for glaucoma in outpatient clinics, especially in pandemic situations.
Financial support and sponsorship
Dr. Divya Rao is being funded by the Remidio Innovative Solutions Pvt Ltd.
Conflicts of interest
There are no conflicts of interest.
Impact of COVID-19 pandemic on the spectrum of ocular trauma during Diwali at a tertiary eye care center of Western India
Dear Editor, The coronavirus disease 2019 (COVID-19) pandemic has been an unprecedented challenge to the healthcare services, with a great impact on the management of ocular emergencies, especially during Diwali, an annual Indian festival traditionally celebrated by lighting lamps, bursting firecrackers (FC), and socializing. [1,2] During the pandemic, people were expected to have muted festive celebrations with social distancing due to the fear of getting infected by the virus and various restrictions on travel and use of FC imposed by the Indian Government. [3] This study evaluated the impact of the COVID-19 pandemic on the demographic and clinical spectrum of ocular trauma presenting during the festival of Diwali at a tertiary eye care center in western India. The retrospective comparative study included patients with a history of noninfectious ocular trauma presenting during the five consecutive days of Diwali | 2021-11-28T06:16:28.600Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "ebe81a7ac1054dc3c9b0e1f97f17ff5f109fbea5",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/ijo.ijo_1831_21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e775fad628be228c74266212de167eecd32a048c",
"s2fieldsofstudy": [
"Medicine",
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221323755 | pes2o/s2orc | v3-fos-license | COVID-19 pneumonia in an HIV-positive woman on antiretroviral therapy and undetectable viral load in Porto Alegre, Brazil
The COVID-19 pandemic has been a worldwide problem. It is important to identify people at risk of progressing to severe complications and to investigate whether some existing antivirals have any action against SARS-CoV-2. In this context, HIV-infected individuals and antiretroviral drugs, respectively, deserve attention. Herein we present the case of a 63-year-old HIV-infected woman with undetectable viral load, on dolutegravir, tenofovir and lamivudine, who was hospitalized due to COVID-19 pneumonia. In spite of having some clinical markers of severity on admission, the patient improved and was discharged after a week. To our knowledge, this is the first report of severe SARS-CoV-2 infection in an HIV-infected individual in Brazil.
Introduction
COVID-19, caused by the severe acute respiratory syndrome coronavirus type-2 (SARS-CoV-2), was declared a pandemic in March 2020, following the first case reported in Wuhan, China, in December 2019. Some patients, such as those suffering from cardiovascular disease, diabetes mellitus and obesity, were soon identified as being at greater risk of worse clinical outcomes. Nevertheless, there is still a lot to be discovered regarding this disease. In this context, the risk of COVID-19 complications among people living with HIV (PLWH)
remains uncertain, and the potential protective benefits of antiretroviral (ARV) drugs are also unclear.
Despite more than 12 million cases of COVID-19 worldwide as of early July, reports of HIV/SARS-CoV-2 co-infection are still uncommon, 1 with the first known case in the USA being reported only on May 22, 2020. 2 People living with HIV accounted for only 1% of 16,749 patients hospitalized with COVID-19 in the United Kingdom in a large prospective observational cohort study, with HIV having no impact on survival. 3 The prognosis of PLWH after a diagnosis of COVID-19 has also been a subject of debate. 4 While some researchers believe that HIV immunosuppression could result in greater susceptibility to SARS-CoV-2 infection, 5 others believe that these patients would be at lower risk of complications, since impairment in cellular immunity might be associated with less inflammation 6 and a lower likelihood of the cytokine storm that has been associated with more severe cases of COVID-19.
There is also some controversy about whether being on ARVs confers any advantage against SARS-CoV-2. 7 There is some in vitro evidence that these drugs could act against this virus. 8,9 Herein we present the case of an HIV-infected woman on antiretroviral therapy who was hospitalized at our center with COVID-19 pneumonia.
Case report
A 63-year-old woman was admitted to the COVID-19 Unit at Hospital de Clinicas de Porto Alegre (HCPA), in southern Brazil, in May 2020. The patient had been diagnosed with HIV infection in 2005 and had been on tenofovir (TDF), lamivudine (3TC), and dolutegravir (DTG) since November 2019 (previously she was on atazanavir/ritonavir, which was switched to DTG). Her viral load had been undetectable for a long time and her CD4+ cell count was 426 cells/mm3 (CD4/CD8 ratio 1.25). Systemic arterial hypertension (SAH), well controlled with hydrochlorothiazide and losartan, was her only other comorbidity.
At presentation, she complained of fever (39 °C), myalgia, inappetence, nausea, abdominal pain, diarrhea, hyposmia and hypogeusia, in addition to cough and dyspnea for a week. Upon admission, the white blood cell (WBC) count was 9250 cells/mm3 (76% neutrophils and 16% lymphocytes). She had elevated C-reactive protein (65.5 mg/L), total creatine kinase (307 U/L) and lactate dehydrogenase (316 U/L). Serum creatinine was 0.84 mg/dL and she had no abnormalities in clotting tests, including a D-dimer level of 0.42 µg/mL (our reference value being up to 0.5 µg/mL). The first arterial blood gas analysis (ABG) was performed with the patient on supplemental oxygen through a nasal cannula; she did not present hypoxemia, with a pO2 of 107 mmHg and a peripheral oxygen saturation (SpO2) of 98.5%. Chest X-ray exhibited opacities in the middle thirds of both lungs (Fig. 1). SARS-CoV-2 was detected by RT-PCR in a nasopharyngeal secretion swab.
The patient was treated with supportive measures that included oxygen via nasal cannula and received antibiotic therapy with amoxicillin/clavulanate for a total of seven days. After giving consent, she was randomized to one of the arms of a randomized clinical trial (Coalition-1 trial 10 ) and received hydroxychloroquine 400 mg bid plus azithromycin 500 mg orally according to the trial protocol, with no side effects related to any of these therapies during this period. Even though she presented some severity markers on admission, she had a favorable clinical evolution, with no need for ICU treatment or more invasive forms of oxygen therapy, and was discharged in good clinical condition seven days after hospitalization.
Conclusion
To our knowledge, this is the first reported case of COVID-19 disease in an HIV-infected individual in Brazil, and it remains the only one we have had the opportunity to care for. This finding is of special interest, as Porto Alegre, the capital of the southernmost state of Brazil (Rio Grande do Sul), is one of the cities with the highest HIV incidence in Brazil. 11 This report illustrates the several peculiarities of SARS-CoV-2 infection worldwide. Although the patient presented a severe course of COVID-19, the ARVs she was taking did not protect her from acquiring the infection.
Among the drugs used to treat HIV, lopinavir/ritonavir was the first to be assessed against SARS-CoV-2. Although studies during the 2003 SARS epidemic showed a decrease in mortality, intubation rates and unfavorable outcomes with the use of lopinavir/ritonavir, 12 a recent clinical trial showed no benefit of this medication for the treatment of severe cases of COVID-19. 13 Other combinations of protease inhibitors are under testing, with atazanavir/ritonavir showing the greatest inhibitory potential in vitro against SARS-CoV-2. 8 A clinical trial of darunavir/cobicistat for the treatment of COVID-19 is currently in progress in China. 14 Other ARVs have also shown some activity in vitro and in animal models. DTG, an integrase inhibitor, has demonstrated activity against SARS-CoV-2. 8,9,15,16 Likewise, TDF, a nucleotide analogue reverse transcriptase inhibitor widely used in the treatment of HIV and hepatitis B infections, has emerged as a new investigative agent against COVID-19. It is from the same class as remdesivir, a novel nucleotide analogue that has activity against SARS-CoV-2 in vitro 17 and has been recommended for hospitalized patients with severe COVID-19 in many guidelines. 18 However, no ARV has demonstrated clinical impact so far.
One of the first published case series, comprising 33 patients hospitalized for COVID-19 and followed up at HIV centers in Germany, suggested that although SARS-CoV-2 infection can occur during treatment with ARVs, these patients did not appear to be at risk of worse clinical outcomes when they developed symptomatic COVID-19. 19 A recent large Spanish cohort study of 77,590 HIV-infected people on ARVs identified similar risk factors for hospitalization, admission to ICU, and death when compared to the general population. 20 However, HIV-infected patients on TDF/FTC had a lower risk of COVID-19 and related hospitalization than those receiving other therapies. Whether the difference observed is due to the profile of patients using this ART or to a direct antiviral effect of the drug is still a matter of debate. 20 Currently, one trial combining tenofovir-alafenamide/emtricitabine and lopinavir/ritonavir to treat COVID-19 patients and another with tenofovir/emtricitabine as pre-exposure prophylaxis against COVID-19 in health care workers are in progress. 21 In our patient, these ARVs did not prevent SARS-CoV-2 infection. Still, much remains to be learned about this coronavirus infection, its clinical course in HIV-infected individuals, and the possible impact of ARVs. | 2020-08-27T09:08:36.439Z | 2020-08-26T00:00:00.000 | {
"year": 2020,
"sha1": "9f6ecfbd6dceec071976d8a08f40e932bd76ea8c",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.bjid.2020.07.009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a0622163e28197309896bbf74a98c079bbd996b",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
150635902 | pes2o/s2orc | v3-fos-license | Sexual health, sexual rights and sexual pleasure: meaningfully engaging the perfect triangle
Abstract To improve sexual health, even in this charged political moment, necessitates going beyond biomedical approaches, and requires meaningfully addressing sexual rights and sexual pleasure. A world where positive intersections between sexual health, sexual rights and sexual pleasure are reinforced in law, in programming and in advocacy, can strengthen health, wellbeing and the lived experience of people everywhere. This requires a clear understanding of what interconnection of these concepts means in practice, as well as conceptual, personal and systemic approaches that fully recognise and address the harms inflicted on people’s lives when these interactions are not fully taken into account. Bridging the conceptual and the pragmatic, this paper reviews current definitions, the influences and intersections of these concepts, and suggests where comprehensive attention can lead to stronger policy and programming through informed training and advocacy.
Introduction
Meaningful concern for sexual health requires attention to political currents, and social movements, within countries as well as at regional and global levels, as these influence health, legal and policy standards, and the impacts these all have on people's lived experience of their sexuality, sexual health, sexual rights and sexual pleasure. As inadequate support in any one area can have negative effects on the others, this paper takes as its starting premise that all efforts must be made to support a perfect triangle of sexual health, sexual rights and sexual pleasure for all people everywhere in the world. Given the current political moment with retrenchments occurring everywhere from the local to the global, increased conservatism in all parts of the world, let alone shrinking space for civil society, we move beyond drawing attention only to the negative and set out to highlight positive examples of how sexual health, sexual rights and sexual pleasure have been and can be jointly addressed. It is worth recalling that sexual health has been, and is, for almost all actors in both global and national spaces, a legitimising way to address sexual rights and sexual pleasure. Sexual health as an entry point allows engagement not only with the health sector but with programmers and policymakers who might not otherwise be immediately sympathetic to the importance of rights and pleasure. These terms are understood in different ways by the many technical and political actors engaged in this work. To ensure clarity, each is briefly discussed below, along with proposed definitions (see Box 1), as these will be relevant to the sections on policy, programming and advocacy that follow.
Box 1. Defining terms used in this article
Sexual Health: " … a state of physical, emotional, mental and social wellbeing in relation to sexuality; it is not merely the absence of disease, dysfunction or infirmity. Sexual health requires a positive and respectful approach to sexuality and sexual relationships, as well as the possibility of having pleasurable and safe sexual experiences, free of coercion, discrimination and violence. For sexual health to be attained and maintained, the sexual rights of all persons must be respected, protected and fulfilled." 1

Sexuality: " … a central aspect of being human throughout life encompasses sex, gender identities and roles, sexual orientation, eroticism, pleasure, intimacy and reproduction. Sexuality is experienced and expressed in thoughts, fantasies, desires, beliefs, attitudes, values, behaviours, practices, roles and relationships. While sexuality can include all of these dimensions, not all of them are always experienced or expressed. Sexuality is influenced by the interaction of biological, psychological, social, economic, political, cultural, legal, historical, religious and spiritual factors." 1

Sexual Rights: "The application of existing human rights to sexuality and sexual health constitute sexual rights. Sexual rights protect all people's rights to fulfil and express their sexuality and enjoy sexual health, with due regard for the rights of others and within a framework of protection against discrimination." 1

Sexual Pleasure: "Sexual pleasure is the physical and/or psychological satisfaction and enjoyment derived from solitary or shared erotic experiences, including thoughts, dreams and autoeroticism. Self-determination, consent, safety, privacy, confidence and the ability to communicate and negotiate sexual relations are key enabling factors for pleasure to contribute to sexual health and wellbeing. Sexual pleasure should be exercised within the context of sexual rights, particularly the rights to equality and nondiscrimination, autonomy and bodily integrity, the right to the highest attainable standard of health and freedom of expression. The experiences of human sexual pleasure are diverse and sexual rights ensure that pleasure is a positive experience for all concerned and not obtained by violating other people's human rights and wellbeing." 2

Sexual health was first defined rather vaguely by the World Health Organization (WHO) in a 1975 Technical Report as: "the integration of the somatic, emotional, intellectual and social aspects of sexual being, in ways that are positively enriching and that enhance personality, communication and love." 3 Twenty years later, the Programme of Action of the International Conference on Population and Development 4 included sexual health under the definition of reproductive health, indicating that its purpose is: "the enhancement of life and personal relations, and not merely counselling and care related to reproduction and sexually transmitted diseases." 4 This definition has since been widely used by global organisations including the World Health Organization (WHO) and non-governmental institutions such as the International Planned Parenthood Federation (IPPF), with important implications also for the approach to sexual health taken by the national level government and civil society actors.
With respect to sexual rights, arguably to this day no language has been as important as the political articulation in the 1995 Beijing Platform for Action from the Fourth World Conference on Women. 5 Paragraph 96, which sits within the health section of the document, states: "The human rights of women include their right to decide freely and responsibly on all matters related to their sexuality, free of coercion, discrimination and violence." 5 Essential in providing for the first time an international mandate to focus on, and invest in, women's reproductive and sexual health beyond the need to control women's fertility as part of a demographic agenda, and grounded within the internationally agreed legal human rights framework, this remains, despite its obvious limitations, the strongest statement agreed to by the governments of the world.
Beyond the rhetorical articulation of a comprehensive sexual and reproductive health and rights (SRHR) agenda in Cairo and Beijing, the reality within countries and at the global level has been a siloed approach, with reproductive health (and at times reproductive rights) higher on donor and policymaker agendas (whether driven by demographic trends, fertility control, ignorance or discrimination) even when the broader language is used. The limits to meaningful interaction between work to support sexual rights and reproductive rights are also embedded in movement politics; many movements, including the women's health and rights, disability rights and reproductive health and rights movements, have tended to neglect issues of sexuality and sexual pleasure. 6 Likewise, those working on sexual rights have more often than not stayed away from reproductive health and rights in their advocacy work 7 for substantive but also political reasons.
The HIV/AIDS movement, and the ways in which it brought focus to the rights of key populations, including their sexual and reproductive rights, has been the most useful in helping to catalyse greater engagement between movements/constituencies and those driving HIV prevention programmes to address sexual health and sexual rights more comprehensively, even as pleasure has rarely been a part of these conversations. 8 In addition, and in parallel, the global women's health and rights movement, LGBTIQ* movements, trans and intersex rights mobilisation, youth mobilising on sexual and reproductive health and rights, advocacy and programmatic work for sex workers' rights and the disability rights movement have started to force attention to taboos, stigma, discrimination and human rights violations and led to broader recognition of human rights related to sexuality and sexual health more broadly for all people. 9 The fact that this focus has been on harms, and not pleasure, has in some ways shaped the discourse. Nevertheless, the result has been an increased and comprehensive understanding of sexual rights as human rights relevant to sexuality and sexual health. This growing understanding has been reflected in the work of WHO 10 and other United Nations (UN) agencies, such as the UN Programme on HIV/AIDS (UNAIDS), the UN Population Fund (UNFPA), and the Office of the UN High Commissioner for Human Rights (OHCHR), including in their inter-agency statements (for example, on forced sterilisation, 11 elimination of discrimination in health care settings 12 and elimination of violence and discrimination against LGBTI people, 13 ) as well as the work of international non-governmental organisations such as IPPF. 14 The World Association for Sexual Health (WAS), whose work has contributed significantly to the understanding and acknowledgement of sexual rights internationally, issued a revised Declaration of Sexual Rights in 2014, 15 and an accompanying Technical Document 16 taking a comprehensive approach to sexual rights as human rights from a multi-disciplinary perspective and, importantly, with great attention to pleasure as an element of sexual health and sexual rights. 17 Civil society organisations internationally, regionally, and locally, such as CREA, 18 the Sexual Rights Initiative, 19 and The Egyptian Initiative for Personal Rights, 20 and scholarly initiatives with a strong focus on advocacy, such as Sexuality Policy Watch, 21 have contributed significantly to the advancement of sexual rights in the international, regional, national and local political spheres.
Sexual pleasure is the newest arrival to the sexual health and sexual rights policy landscape, the least developed and potentially the most open to interpretation. Outside the context of sexology, the study of sexual pleasure, when it has occurred, has generally had a narrow heteronormative bias, 22 addressing pleasure through a default focus on adults and on sex within marriage, including in medical textbooks, sexuality education, etc. 22,23 Sexual pleasure most frequently emerges in policy and programming as a consideration relevant to sexuality or sexual health, rather than as a topic in its own right. Rights-based operational definitions of sexual pleasure in the context of sexual health, and more broadly, have been sorely lacking.
The World Association for Sexual Health (WAS), noted above, has been perhaps the most forward-looking in recognising the linkages between sexual pleasure, rights and health. As a professional association with global relevance, as far back as 2008 it urged all governments, international agencies, the private sector, academic institutions and society at large to recognise sexual pleasure as a component of holistic health and wellbeing, 17 and it has since urged various actors to recognise the importance of sexual pleasure in research, policy and service delivery, connected to sexual rights not only from the "free from violence" perspective but from the perspective of positive sexuality. 15 It has also initiated expert consultations with the WHO and other relevant organisations, to urge them to adopt definitions of sexuality, sexual health and sexual rights which recognise pleasure as a central element of those definitions. Inspired by the WAS articulation, we adopt the working definition of sexual pleasure put forward by the Global Advisory Board for Sexual Health and Wellbeing 2 for purposes of this article (see Box 1), given its intent to highlight the interconnections with sexuality, sexual health, and sexual rights.

*In this context, LGBTIQ denotes the lesbian, gay, bisexual, transgender, intersex, and queer or questioning movements.
Linkages
The links between sexual pleasure and sexual health have long been understood. Sexual health is now also recognised to be closely associated with the extent to which people's human rights are protected. 24 To date, however, there has been insufficient attention to the ways in which people's experience of sexual pleasure is not only tied to their sexual health but dependent on the extent to which their sexual rights are respected, protected and fulfilled. 25 Undue weight seems still to be accorded to the health system and other related institutions as core enablers of individual pleasure. The pathways by which individuals seek and enjoy pleasure are often much more complex, and frequently the interface with the health system or facility happens only later, and only if there are health consequences (e.g. unintended pregnancy, infections, need for contraception, etc.). The failure to comprehensively approach sexual pleasure from its very roots and intersections with sexuality, sexual health and sexual rights has real implications for people's lives, including not only limits to protection from sexual violence and to the information and health services people can receive, but also constraints on how people can relate to their own bodies, establish relationships, and live in the world.
Using the definitions noted above to guide our analysis, below we provide some broad brushstrokes to explore the ways in which the intersections of sexual health, sexual rights and sexual pleasure are currently considered in policy and programming, and suggest some opportunities for advocacy to further strengthen these linkages.
Assessing laws and policies
Laws and policies matter because they set rules and frameworks for people's conduct in society and for programmatic interventions. They can contribute to or obstruct the development of programmatic and service delivery interventions on sexual health, deter or support people's experience of sexual pleasure, and enable or disable people to seek and receive the information they require to protect their sexual health and exercise their sexual rights. Legal frameworks can have content that respects and protects human rights, for example those that provide access to comprehensive sexuality education or give equal opportunities in all areas of life for people regardless of sexual orientation, gender identity and expression and age. On the other hand, laws may create limitations to sexual health and pleasure, such as those that do not allow adolescents or unmarried people to access sexual health services without parental or spousal consent. 10 Further, criminalisation of certain behaviours and identities (e.g. LGBT populations) has greatly limited the ways in which sexual pleasure has found expression in policy and programmes. In addition, the emergence of "adolescents" as a category of people requiring intervention has put a focus on the "prevention" of harms in legal frameworks (early marriage, early pregnancy and childbirth, etc.) and largely ignored positive attention to their pleasure, sexuality and sexual rights. The stigma associated with premarital sex and the "moral panic" that accompanies conversations about young people's, particularly young women's, sexuality is profound and deeply ingrained, and "policy" level advancements to address these issues often have to deal with backlash and opposition.
Social cultural taboos in relation to sexuality are often embedded in laws and policies with negative effects on sexual health and pleasure. For example, laws that penalise and criminalise sexual health related matters, such as same-sex sexual acts and behaviours, transgender expression, sex work, HIV transmission, possession of a condom as evidence of crime, and penalisation of the advertisement of contraception or abortion, codify a restricted approach to morality and reinforce power structures that control the bodies and behaviours of marginalised and discriminated against populations. The burden of unjustified use of criminal and punitive laws is significant: certain population groups, such as gay, lesbian and transgender people, women, adolescents, people engaged in sex work, and those living with HIV, can experience difficulties in accessing relevant services, let alone engaging in positive sexual experiences, with direct impacts on health and pleasure. 26 Controlling, undermining, restricting and medicalising the sexual needs and desires of certain populations can result in coercive laws, policies and practices. For example, the desires, pleasure and sexual health and rights needs of people living with disability, including with intellectual disabilities, are still often undermined, and controlled through forced sterilisation policies and practices, inaccessibility of sexuality, and sexual health information and services. 9 People with disabilities are often infantilised through laws and policies and held to be asexual 27 (or in some cases, hypersexual), their pleasure irrelevant at best, incapable of reproduction and unfit sexual/ marriage partners or parents. 28 For women, disability may mean legal exclusion from a life of partnership and active sexuality, and denial of opportunities for motherhood. 29 These laws, policies and controlling practices undermine the equal rights of people with disabilities, their need for access to information and services, and their desires for pleasure, reproduction and parenting.
While laws and policies should be set in such a way that they respect human rights and acknowledge sexual desire and pleasure as a basic human need, many laws that are set with the intention to protect can end up being discriminatory and coercive by, for example, not recognising non-binary gendered bodies, same-sex sexual practices, or the sexual desires of people under the age of 18 years old. For example, rape laws that do not recognise rape in marriage, or consider any sexual act to be rape if it occurs with a person under the age of 18 years old, or recognise only vaginal penetration by a penis as rape, exclude many people from the spectrum of protection. 16 Thus, application of the triangle approach to sexual health, rights and pleasure proposed here would require not only careful analysis but also amendment of laws and policies to ensure they do not inadvertently discriminate, and that they respect the rights-based definitions of sexuality, sexual health, and pleasure presented in Box 1. Some positive developments are occurring in this regard at both the international and national levels.
For example, WHO initiated changes to the International Classification of Diseases (ICD-10), 30 which is the lead international policy on classification of diseases and health conditions. The ICD greatly influences not only the utilisation of services and insurance codes, but the setting and implementation of national laws, standards of care, services, medical education and research. Accordingly, WHO elaborated a new chapter for ICD-11, called "Conditions related to sexual health," that brings a more holistic view to sexual health by connecting the body and mind in relation to sexual functions and dysfunctions, depsychopathologising gender expression, and eliminating any remaining codes related to sexual orientation. 31 All of these changes are not only justified from the sexology and sexual health perspective but also support sexual rights and different forms of pursuing consensual pleasure. 32 Human rights bodies, such as the UN Human Rights Committee, the Committee Against Torture, and the European Court of Human Rights, as well as national Constitutional Courts, have increasingly applied human rights standards, such as the rights to non-discrimination, freedom from inhuman and degrading treatment, human dignity, self-determination and bodily integrity, to various sexuality and sexual health related issues with positive implications for both health and pleasure. These linkages are visible in their decisions, though often related to the provision of health services, on such issues as forced sterilisation, involuntary surgeries on intersex and transgender people, mandatory HIV testing, and access to abortion, for example, in cases of rape. 16 These human rights standards, in turn, are increasingly reflected both in public health policies and programmes with positive implications for health, wellbeing and pleasure, with particular attention to the protection of sexuality from the privacy perspective. 16 Within countries, laws and policies that decriminalise consensual sexual behaviours, eliminate mandatory medical interventions, and provide people under 18 years old access to the sexual health education, information and services they require serve as good examples of attention to the kinds of policies needed to ensure health, wellbeing, rights and pleasure for all populations. 10

Assessing programming: within and beyond the health sector

Historically, sexual health and rights programmes have tended to focus on preventing negative consequences associated with sexuality, such as prevention of unintended pregnancies, prevention and treatment of HIV and sexually transmitted infections (STIs), and addressing sexual dysfunction. The importance of addressing the negative consequences of sexual health behaviours should not be minimised. In many cases, however, this approach has failed to recognise that some of the primary factors behind sexual health risk, and the need for sexual health information and services, are issues that relate to rights, pleasure and sexual desire and not to morbidities and mortalities. 33 The fact that some people don't seek the assistance of sexual health providers until they face a negative consequence related to their sexual activity (such as an STI, erectile dysfunction or contraceptive failure) can reinforce the notion that the health provider's role is only to resolve these negative issues.
The majority of health service providers are not prepared to address the complexity of sexual pleasure (including the dissociation of safety and pleasure) and the diverse ways in which it is experienced at different points of life (adolescence, adulthood and older age) and among different populations (for example, lesbians, gay men and transgender people, as well as people living with HIV, among others).
Sexuality, sexual desire and sexual pleasure remain subjects of shame and stigma in many parts of the world, and sexual health programmes that focus only on the unwanted consequences of sexual behaviours, on sexual morbidities and on "normalised" heterosexual sexual practices further contribute to this stigmatisation. Programmes that promote fear around the negative consequences of sexual activity are based on a risk approach, and leave important conversations about sexual health, sexual rights and sexual pleasure aside. Abstinence-only sexuality education is a particularly vivid example as, amongst other harms, it promotes the belief that premarital sex is "immoral," and reinforces traditional gender norms, such as the idea that it is unacceptable for women to express sexuality or sexual pleasure. 34 These programmes have had a lot of political support from conservative governments, such as the Trump presidency in the US.
Within the risk-based approach most common to programmatic work in this area, the importance of sexual pleasure in enabling sexual health and wellbeing is not well recognised, despite evidence that demonstrates its relevance. For example, promoting pleasure in male and female condom use, alongside safer sex messaging, has been found to increase the consistent use of condoms and the practice of safer sex. This "power of pleasure" approach 35 has been implemented with great success in several countries, such as Australia, Mozambique and Cambodia, among others. The triangle approach to sexual health programming proposed here similarly puts pleasure at the centre, as an element that is intrinsically linked to sexual health and sexual rights, acknowledging and tackling the various risks associated with sexuality without reinforcing fear or shame. 36 The triangle approach could also be considered a "sex-positive" approach to sexuality and sexual health: an approach that celebrates sexuality as a part of life that can enhance happiness, rather than focusing solely on preventing negative experiences.
Sexual health programmes, including those that address reproductive health and rights concerns, can cover a wide range of specific thematic areas and goals. They can focus on the provision of sexuality education and/or information (including capacity building programmes for service providers, peer educators or teachers) and the provision of a wide range of sexual health services, including sexuality counselling; HIV and STI prevention, testing and treatment; prevention of unwanted pregnancies; abortion; prevention, testing and treatment for human papillomavirus (HPV) and cervical cancer; prevention and treatment of testicular cancer; addressing sexual dysfunction; providing advice and services on sexuality; and so on.
The education of health providers is crucial for them to be able to deliver quality sexual (and reproductive) health services that incorporate rights and pleasure for all people: adolescents, adults, and the elderly, regardless of sexual identity or social or demographic characteristics. Gender stereotypes often shape health-care providers' interactions with clients, and providers' responses to adolescents seeking sexual health care can be similarly shaped by their own personal views about, and experiences of, young people. 37 For all of these reasons, health-care providers may promote interventions that are more in keeping with their own beliefs than with the needs, rights and desires of their clients. 38 Yet it is worth recognising that some of the reason for this is not stigma but lack of training. Aside from select psychologists, sexologists or sex therapists, health-care providers are often not encouraged or sufficiently trained to feel comfortable providing services which place pleasure or rights at the centre of their engagement with clients.
In many countries, both medical students and practicing physicians "receive variable, nonstandardized, or inadequate training in sexual history taking and sexual medicine assessment and treatment." 39 For example, Malhotra and colleagues conducted a nationwide telephone survey of 500 fourth-year medical students in the US and medical school curriculum offices in 2008. 40 They found that 44% of medical schools in the US lacked formal sexual health curricula, that 17.4% of medical students felt uncomfortable taking sexual histories from ten to fourteen year-olds and 23.8% from adults aged 75 and over. Also, in 2008, Shindel et al invited 2261 medical students from the US and Canada to participate in an Internet-based survey, in which they found that 53% of respondents "felt they had not received sufficient training in medical school to address sexual concerns clinically." 41 While a decade old, these studies show that medical education in sexual health in the US and Canada is lacking, with many students and providers reporting feeling unprepared to address sexual health issues with their clients. 42 Data for the rest of the world are sorely lacking.
In its working definition of "sexual pleasure," the Global Advisory Board for Sexual Health and Wellbeing identified six key enabling factors that link sexual health, sexual rights and sexual pleasure and that can be incorporated into programming and the delivery of services: self-determination, consent, safety, privacy, confidence, and the ability to communicate and negotiate sexual relations with partner(s). 2 SRHR programmes that incorporate the links between these three concepts recognise sexuality as a source of pleasure and wellbeing. To date, there are very few documented programmes and technical tools globally that have embraced and been successful in using pleasure alongside sexual health and sexual rights. Three examples follow:
Example 1:
One such programme is Love Matters, which has created a digital platform on sexuality and pleasure. 43 The project operates at the intersection of media and public health, specialising in media for social change, and talking about sexual pleasure is at the core of their engagement strategy. Rather than using secrecy, silence and shame to try and prevent people from having (risky) sex or focusing only on the negative, they use pleasure as the hook to have difficult conversations with millions of young men and women around the globe. Love Matters is intended as a reality check for these young men and women, offering sexual health and sexual rights information with a positive take on pleasure and relationship satisfaction. The programme demonstrates that the web, mobile and social media platforms give young people the facts they need to have safer, healthier and happier sex, and can deliver science and rights-based sexual health and rights information with a pleasure perspective directly into the hands of young people.
Love Matters worked with the UK-based Institute of Development Studies (IDS) and curated the research bulletin "Digital pathways to sex education," released in February 2017. 42 It found that across all their sites the "pleasure" pages are more than eight times more popular than the family planning pages, and that the "sexier" content serves as a gateway to other information resources, including risk reduction and disease prevention. This unique demand-led, pleasurepositive approach to sexual health and rights education is reaching people in large numbers. Many sexual and reproductive health organisations are now starting to use the same approach. Love Matters has been a transformative global digital platform that has diversified sources of sexuality information beyond schools and health facilities, and points to the way in which online platforms have created the possibility for people to access diverse information relating to sexuality and pleasure without needing to interface with the health system or with providers.
Example 2:
An example of a technical tool which incorporates the triangle approach linking sexual health, sexual rights and sexual pleasure to inform SRHR programmes is Fulfil! Guidance Document for the Implementation of Young People's Sexual Rights, 44 published by the International Planned Parenthood Federation (IPPF) and the World Association for Sexual Health (WAS) in 2016. Fulfil! was the result of a multidisciplinary effort, and it responded to the fact that the majority of programmes (and policies) regarding youth's SRHR frequently emphasised "disease, death, disability and violence associated to sex and sexuality." In its first section, Fulfil! outlines elements that are fundamental for young people's sexual rights to be implemented, the first one being "a comprehensive understanding of young people's sexuality with diversity and sexual wellbeing at the core." Based on the IPPF and WAS Declarations of Sexual Rights, 14,15 Fulfil! stresses the importance of young people's experiences of sexual pleasure, as they shape other experiences throughout the lifetime and have a direct impact on their overall health. Apart from providing specific guidance for programmes, laws and policies, Fulfil! also presents a case-by-case decision-making model to support service providers in the implementation of young people's sexual rights, in which ethical, practical and legal factors need to be balanced.
Example 3:
With respect to training and capacity building, the Global Advisory Board for Sexual Health and Wellbeing (GAB) has developed a training toolkit 45 for future health professionals to deliver services with the triangle approach. The training aims to create an understanding of the links between sexual health, sexual right and sexual pleasure, and to raise awareness on the importance of delivering services with a rights and pleasure approach. It also includes the Pleasuremeter, 43 which is a tool based on the GAB's working definition of sexual pleasure and motivational interviewing techniques (such as asking open-ended questions and using scales to address behavioural stages of change) to address the links between sexual health, sexual rights and sexual pleasure in the taking of sexual histories. This training was piloted in May 2017 at the World Congress for Sexual Health in Prague, Czech Republic, has been further refined, sent for peer review, finalised and is now freely downloadable. †
Rethinking advocacy
Achieving a world where intersections between sexual health, sexual rights and sexual pleasure are reinforced and positively influence people's lives will require not only strong policy and programming, but also comprehensive and linked global, national and local grassroots advocacy. Advocacy is needed to: support policy and legal change; demand equal opportunities, rights and conditions for all; promote investment in local and national rights-based sexual health services that address pleasure; demand quality of care and comprehensive sexuality education; and hold relevant stakeholders accountable. This is not the concern of just one group. If pleasure is to be understood and addressed in the context of sexual health and sexual rights, advocates must include civil society organisations, researchers and research institutions, service providers, and both the public and private sectors. Most importantly, it will also require solidarity and harmonisation of efforts.
Efforts are being made daily to weaken global solidarity and global principles and institutions, and this is a concern for all engaged in sexual health and rights work. Silo-isation and conflicting approaches to using rights discourse, even amongst movements that ought to be aligned, occur as a matter of course, with profound effects on sexual rights and pleasure. One example is the conflation of sex work with trafficking by various women's rights groups and feminist organisations. 46 Cross-movement alliance building is needed to enable and foster local-global connections, including with respect to differing perceptions of sexuality, pleasure, and rights: for example, finding common ground amongst those concerned with trafficking and those advocating for sex workers' rights, in working towards the elimination of all kinds of violence and exploitation.
Equality, non-discrimination and universalism are key to all social movements, and engaging these principles will help to build bridges across movements. For the triangle to take hold in these complicated times, there is a need to form and maintain broad coalitions: ones aimed at preserving our web of local and international partnerships, from the people we work with and the ways we work together, to how we understand the work we do within a framework of international solidarity. For those engaged with the intersection of sexual health, rights and pleasure, as individuals and as a collective group of human beings, we need to figure out how and when to act. This means that, across the range of topics we work on, we have to work together more actively to support people to achieve sexual pleasure and to be able to claim their rights, and to protect people who advocate in this area.
Understanding where progress has been made, and where backlash has occurred, is relevant to determining where and how sexual pleasure can best be brought into these efforts. Those of us engaged in advocacy to advance sexual rights and sexual pleasure will need to take into account current North/South, North/North and South/South politics at a governmental level, as well as what is happening with those opposed to sexual rights, let alone sexual pleasure, in all forums.
Conclusions
There is increased recognition and support for sexual health and rights, even as current policy, programming and advocacy efforts and discourse around sexuality have yet to engage fully with the interconnections between sexual rights, sexual health and sexual pleasure. Implementation of the "triangle approach" to sexual health and rights is more important now than ever in the current political climate, for every individual and especially for those who are most marginalised. 47 There is a need to learn from and move beyond conversations about pleasure focused only on certain populations, whether young people, women, and/or other marginalised populations, or specifically focused on sexual orientation, gender, gender expression, or sex characteristics, and to support the relevance of sexual health, sexual rights and pleasure as a universal demand for all people.

† The GAB's toolkit is available for free download at the following link: http://www.gab-shw.org/resources/training-toolkit/.
An intersectional, inter-disciplinary and multisectorial implementation is critical to ensure that programmes are endorsed, implemented, funded and maintained, locally and globally. A potential first step may be to begin mapping, in order to make explicit, cases in which sexual health, rights and pleasure have successfully been brought together conceptually and operationally, as well as those cases where gaps in relation to the needs and rights of certain populations are identified. Such a mapping would make it easier to analyse where it would help to work with partners to ensure consistency, and where it might be strategic to begin this work at a very basic conceptual level. Where these concepts have been brought together in operational terms, there is a clear need to document programming efforts, and most importantly the difference this has made to people's lives. Rigorous evaluation of what has been put in place can effectively be used to frame arguments that are more likely to be accepted by states and institutions of power globally and within countries, and in ways that can facilitate access to resources which can then further support direct impacts on people's lives. It is important to acknowledge, however, that recognition of the importance of sexual pleasure, when it does occur, still remains very much embedded within the public health realm. For this work to take root, attention will be needed to the diverse and contextual ways through which individuals exercise choice, receive information and enjoy sexual pleasure, as well as to more "eco system" work within institutions, communities, families, and so on, to create the enabling environment necessary for everyone to enjoy sexual rights and pleasure. Who is vulnerable or disadvantaged clearly will vary between countries and within countries, and so we need vigilance to ensure that a focus now and in the future results in better laws, policies and programmes to support sexual health, sexual rights and sexual pleasure for all people, and without distinction. | 2019-05-13T13:05:58.372Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "522b3b51596319e7c457a0e00c1ad92f99b0744a",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/26410397.2019.1593787?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e6c9b365486ea93da1db34315df9f2c7a17af8b7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
29927380 | pes2o/s2orc | v3-fos-license | Adjusting the Scott-Knott cluster analyses for unbalanced designs
The Scott-Knott cluster analysis is an alternative approach to mean comparisons with high power and no subset overlapping. It is well suited for the statistical challenges in agronomy associated with testing new cultivars, crop treatments, or methods. The original Scott-Knott test was developed to be used under balanced designs; therefore, the loss of a single plot can significantly increase the rate of type I error. In order to avoid type I error inflation from missing plots, we propose an adjustment that maintains power similar to the original test while adding error protection. The proposed adjustment was validated from more than 40 million simulated experiments following the Monte Carlo method. The results indicate a minimal loss of power with a satisfactory type I error control, while keeping the features of the original procedure. A user-friendly SAS macro is provided for this analysis.
INTRODUCTION
A common problem in plant breeding is the comparison of new genetic combinations. In order to detect significant differences among treatments, several Multiple Comparison Procedures (MCP) were developed: LSD (Fisher 1935), Tukey (1949), SNK (Student 1908, Newman 1939, Keuls 1952), Scheffé (1953), and Duncan (1955). Nonetheless, all these procedures can result in overlapping groups, where one treatment ends up belonging to two or more groups simultaneously (Calinski and Corsten 1985). This behavior usually prevents a clear division of the whole set into two or more groups of treatments and also leads to a more complex simultaneous analysis of multiple variables due to the presence of overlapping subsets. Thus, selection for advancement of new genetic combinations to the next step in the plant breeding program requires extra effort to overcome this statistical issue.
Cluster analysis is a promising solution to avoid the subset overlapping of widely-used MCP (O'Neill and Wetherill 1971, Plackett 1971). One example of an intuitive and satisfactory approach, avoiding subset overlapping, is the use of cluster analysis over the Mahalanobis generalized distance (Rao 1952). Additionally, clustering techniques can be applied for taxonomy purposes since they have high affinity with Hotelling's Principal Component Analysis and Fisher's Discriminant Analysis (Hotelling 1933, Fisher 1936, Edwards and Cavalli-Sforza 1965).
In 1974, Alastair J. Scott and Martin Knott publicized their idea of using the Maximum Likelihood (ML) ratio test to evaluate the significance of partitions from cluster analysis of sample treatment means in designs with an equal number of observations per treatment (Scott and Knott 1974). The first review of methods for Scott-Knott means separation suggesting their use in agronomics was provided several years afterward (Chew 1976). The Scott-Knott approach is an alternative to the MCP in situations in which two or more internally homogeneous subsets of sample treatment means are expected. It uses a univariate form of the divisive clustering procedure (Edwards and Cavalli-Sforza 1965) with a likelihood ratio test for determining when to stop the clustering process, creating non-overlapping, distinct, and exclusive subsets of sample treatment means. The process orders the treatment means to minimize the number of possible treatment mean partitions to be pondered (Fisher 1958) and then maximizes the sum of squares between clusters to determine the best partitioning. Despite a significant increase in the calculation volume for every additional treatment, even after the ordering of treatment means, the test is still feasible, even manually, if the number of partitions remains lower than 12 (Scott and Knott 1974). Indeed, the computations are more onerous than an MCP (Carmer and Walker 1985). Nevertheless, this should not be a problem for any modern computer (Gates and Bilbro 1978). Some procedures with the same idea of partitioning means into non-overlapping groups were published after Scott-Knott (1974). These procedures presented variations in the decision-making process and the clustering logic, ranging from agglomerative to divisive, hierarchical to non-hierarchical, but all of them ensure groups with no overlapping (Jolliffe 1975, Cox and Spjotvoll 1982, Calinski and Corsten 1985, Bozdogan 1986, Bautista et al. 1997, Di Rienzo et al. 2002, Ciampi et al. 2008).
Many researchers prefer cluster analysis in order to facilitate interpretation and presentation of results, since it produces non-overlapping, distinct, mutually exclusive groupings of the observed treatment means (Gates and Bilbro 1978, Carmer and Lin 1983, Calinski and Corsten 1985, Carmer and Walker 1985). This advantage is very clear when it is necessary to evaluate more than one variable simultaneously, because the test easily allows for a positive selection of primary traits and a negative selection for any traits remaining to be evaluated. It can be effortlessly performed over the clustered data with multiple variables by initially applying filters to keep only higher performance clusters for the most important trait (i.e., yield) and then by removing some clusters of lower performance in the variables of secondary importance (i.e., plant height, biomass, etc.). This procedure should result in a highly reduced subset of treatments that present higher performance for the top priority trait, with a desirable level for the secondary traits.
An early evaluation of the Scott-Knott test with agglomerative procedures, under scenarios where there is more than one true group of treatment means, or a partially true null hypothesis (p-H_0), exposed the lack of an appropriate experimentwise type I error control. The result of simulations suggested that the test should be used only when the experiment has been performed with great precision, and it may be unsuitable for experiments where use of MCP would be considered inappropriate, such as those whose design and purpose suggest meaningful, orthogonal, linear contrasts with a single degree of freedom among the treatment means. However, the Scott-Knott test exhibited a higher ability to correctly reject the null hypothesis (power) and detect small differences between treatments than even the LSD test (Willavise et al. 1980).
Moreover, the Scott-Knott test has the highest rate of correct decisions and an aptitude for improving performance as the number of treatments increases, in comparison with the SNK, Duncan, t-student, and Tukey tests (Silva et al. 1999, Borges and Ferreira 2003). The test exhibits a higher than nominal type I error rate when evaluated in simulated scenarios in which the null hypothesis (H_0) is false for some treatments (p-H_0), although for scenarios where the null hypothesis is true for all treatments, the empirical type I error rate is under nominal levels, even for the experimentwise type I error rate (Di Rienzo et al. 2002, Borges and Ferreira 2003).
The Scott-Knott test also provides higher robustness compared to the MCP tests for mean separation in non-Gaussian distributions (Borges and Ferreira 2003). Despite the lack of control of type I error, the test demonstrates much higher power than any MCP, although these two features, high robustness and power, are very common to most cluster analyses (Bautista et al. 1997, Silva et al. 1999, Di Rienzo et al. 2002, Borges and Ferreira 2003). The Scott-Knott test displays similar type I and type II errors in comparison to Bautista et al. (1997) and Di Rienzo et al. (2002). However, its performance is superior to that of Jolliffe (1975) (Di Rienzo et al. 2002).
Group homogeneity can be improved by changing the clustering approach from divisive to non-grouped treatment clustering (Bhering et al. 2008). This usually reduces the number of significantly different clusters, slightly increasing the number of treatments grouped in each one of the different clusters. In spite of this drawback, this consequence can be useful in plant breeding scenarios in which positive selection followed by retesting is applied, since it can shift a small number of treatments from an inferior cluster to a superior one.
Since most plant breeding designs are unbalanced, the objective of this research is to adjust and validate the Scott-Knott test in order to allow its use in experiments under partially balanced incomplete block designs or balanced designs with missing plots, since the non-adjusted procedure is only applicable to balanced designs. This paper proposes a novel solution for use of the Scott-Knott test under unbalanced designs, followed by its validation. In order to ease its use, a user-friendly macro for the SAS/STAT® software is also provided.
Description of the proposed adjustment procedure
The original Scott-Knott (1974) test begins by ranking all the k treatment means to be grouped and then calculating B_0 from the k treatments partitioned into two smaller subsets. The B_0 value is calculated for every one of the k − 1 possible partitions, and the partition with the highest value of B_0 is tested, using λ, as two distinct subsets of treatment means. The test uses the circumference constant π (= 3.14159…) and related adjustments to approximate the distribution of λ by the χ^2 distribution.
If the chi-square test with k/(π − 2) degrees of freedom rejects the null hypothesis, the process repeats; each one of these distinct subsets is, in turn, further subdivided until each of the final clusters is shown to be homogeneous by a likelihood ratio test on λ, given in equation (i). The statistic λ depends on B_0, which is the maximum value of the between-group sum of squares over all possible partitions of the k treatments into two groups, and on σ̂_0^2, which is the maximum likelihood estimator of σ^2 for treatments under the null hypothesis.
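For concreteness, the statistic being tested can be written out explicitly; the following is a reconstruction based on Scott and Knott (1974), with B_0, s^2, and ν as defined in the text, and should be checked against the original paper:

```latex
\lambda = \frac{\pi}{2(\pi-2)}\,\frac{B_0}{\hat{\sigma}_0^{2}},
\qquad
\hat{\sigma}_0^{2} = \frac{1}{k+\nu}\left[\sum_{i=1}^{k}\left(\bar{y}_i-\bar{y}\right)^{2}+\nu s^{2}\right],
```

with λ referred to a χ^2 distribution with k/(π − 2) degrees of freedom.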
Equation (ii) shows how νs^2 is used, where s^2 represents an unbiased estimator of σ^2 associated with ν degrees of freedom, ȳ_i is the mean of treatment i, and ȳ is the mean of all k treatments. The variable n is the number of replications, or the total number of blocks according to the experimental design. Moreover, equation (iv), used in a balanced experimental design, can be modified and expressed as equation (v), where a different number of observations for every treatment is also permitted. After the modification, the corrected unbiased estimator s_c^2 can change according to the SE_ȳi of the treatments in the partitioned set. Thus, in order to accommodate subsets of treatments with unequal and equal numbers of observations, s_c^2 should be calculated for every null hypothesis before testing the statistic λ against a χ^2 distribution with the associated ν degrees of freedom. Hence, for every clustering step, s_c^2 can change to adapt to the number of observations of each treatment in the current clustering process. Along with the correction of s_c^2, the raw treatment mean ȳ_i should be replaced by ŷ_i, which is the treatment mean adjusted for the effect of the unequal number of replications/blocks. The remaining changes to the original procedure are minimal and are disclosed in equation (vi). The notation λ_c should be used to identify the λ statistic while using the correction, even though the testing process against the χ^2 distribution remains the same as in the original procedure.
As expected, the correction increases the σ̂_0c^2 value as the number of observations per treatment decreases, lowering the final λ_c value. This leads to a lower probability of rejecting the null hypothesis, which protects the test from type I error. The unbalanced treatment adjustment maintains the same features and results as the original method in balanced treatment scenarios. Indeed, s_c^2 only changes for clusters in an unbalanced condition (i.e., missing plots). When clustering the same experiment, after partitioning all treatment means with missing plots, the remaining clusters should have the same s_c^2 value. It is important to keep in mind that, since the process follows a hierarchical clustering sequence, the very same subset of treatment means with an unequal number of observations can be partitioned multiple times before composing the final specific cluster. Indeed, the calculation of s_c^2 for every candidate partition that challenges the χ^2 distribution makes the adjustment hard to calculate manually, but it provides satisfactory protection to the original Scott-Knott test without a significant reduction in power.
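A minimal sketch of the adjusted statistic in Python may help fix ideas. The authors' implementation is the SAS macro described later; the function below is a hypothetical illustration that assumes the definitions of equations (i)-(v): s_c^2 is the mean of the squared standard errors MSE/n_i over the subset, and λ_c uses the corrected σ̂_0c^2.

```python
import numpy as np
from scipy.stats import chi2

def lambda_c(means, n_obs, mse, nu):
    """Adjusted Scott-Knott statistic for one candidate subset (a sketch).

    means : ordered treatment means in the subset (adjusted means in practice)
    n_obs : number of observations per treatment (may be unequal)
    mse   : mean square error from the ANOVA
    nu    : error degrees of freedom
    """
    means = np.asarray(means, dtype=float)
    k = len(means)
    # Corrected estimator s_c^2: mean of the squared standard errors of the
    # treatment means in the current subset (equations (iv)-(v)).
    s2_c = np.mean(mse / np.asarray(n_obs, dtype=float))

    # B0: maximum between-group sum of squares over the k-1 ordered splits.
    grand = means.mean()
    best_b0 = -np.inf
    for split in range(1, k):
        g1, g2 = means[:split], means[split:]
        b0 = len(g1) * (g1.mean() - grand) ** 2 + len(g2) * (g2.mean() - grand) ** 2
        best_b0 = max(best_b0, b0)

    # ML estimator of sigma^2 under H0, with the correction applied.
    sigma2_0c = (np.sum((means - grand) ** 2) + nu * s2_c) / (k + nu)
    lam = (np.pi / (2.0 * (np.pi - 2.0))) * best_b0 / sigma2_0c

    dof = k / (np.pi - 2.0)        # degrees of freedom used by Scott-Knott
    return lam, chi2.sf(lam, dof)  # statistic and its p-value
```

Because s_c^2 is recomputed for every candidate subset, the same treatment can contribute different standard errors at different stages of the divisive clustering, which is exactly the behavior described above.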
Validation of the proposed adjustment procedure
The s_c^2 deduction can indicate how the correction affects the Scott-Knott test; nevertheless, it is necessary to quantify and compare the power and type I error of the adjustment while using it. In order to validate the proposed adjustment, use of the Monte Carlo method (Metropolis and Ulam 1949) is a suitable option to simulate experiments with known parameters and then evaluate the results by comparing the original test against the adjusted solution for unbalanced designs (Carmer and Swanson 1971, Silva et al. 1999, Borges and Ferreira 2003). For that purpose, more than 40 million experiments were simulated for multiple unbalance levels combined with several α values. The simulation scheme is composed of three main branches: complete H_0 (μ_1 = μ_2 = μ_3 = ... = μ_I), partial H_0 (μ_1 = ... = μ_I/2 ≠ μ_(I/2+1) = ... = μ_I), and the complete alternative hypothesis H_1 (μ_1 ≠ μ_2 ≠ μ_3 ≠ ... ≠ μ_I). The first branch was used only to quantify type I error and the third only to measure power, while the second branch measures both type I error and power.
All three branches contained nine levels of α (0.01, 0.02, 0.05, 0.08, 0.10, 0.12, 0.15, 0.18, and 0.20). Within each α level, there were ten levels of missing data (0.00, 0.01, 0.02, 0.05, 0.08, 0.10, 0.12, 0.15, 0.18, and 0.20). Since the second and third branches were used to evaluate the test power, they also exhibited four levels (1, 2, 3, and 4) of δ (true difference between two treatment means). In order to improve the robustness of the study, 50,000 experiments were simulated for each of the 810 Monte Carlo simulation setups across all three branches, culminating in a total of 40.5 million simulated experiments. Furthermore, every simulated experiment was composed of a random number of blocks (3 to 20) and a random number of treatments (4 to 100). Experiments with a number of observations lower than 50 were replaced to avoid a small number of degrees of freedom after data removal at random to reach the required missing level. The numbers of both blocks and treatments were drawn from a uniform distribution. The effects of block and observation error were drawn from a normal distribution with a mean of zero and a standard deviation of one. The differences between subsets were defined as the product of δ and σ_x̄ (standard error of the mean). After each experiment was generated, some plot values were removed at random. As the simulation removed plots randomly with no restriction, the minimum number of plots was set to one per treatment to avoid treatments with no plots.
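For illustration, one experiment of this scheme could be generated as follows; this is a sketch in Python (the study itself used SAS/IML, and the function name and seed are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2017)

def simulate_experiment(delta=0.0, missing=0.10):
    """Simulate one randomized complete block experiment, then remove plots."""
    while True:
        n_blocks = rng.integers(3, 21)   # 3 to 20 blocks
        n_trt = rng.integers(4, 101)     # 4 to 100 treatments
        if n_blocks * n_trt >= 50:       # replace experiments that are too small
            break
    block_eff = rng.normal(0.0, 1.0, size=n_blocks)
    # Partial H0: second half of the treatments shifted by delta * sigma_xbar.
    sigma_xbar = 1.0 / np.sqrt(n_blocks)
    trt_eff = np.where(np.arange(n_trt) < n_trt // 2, 0.0, delta * sigma_xbar)
    y = (block_eff[:, None] + trt_eff[None, :]
         + rng.normal(0.0, 1.0, size=(n_blocks, n_trt)))
    # Remove plots at random, keeping at least one plot per treatment.
    mask = np.ones_like(y, dtype=bool)
    n_remove = int(round(missing * y.size))
    while n_remove > 0:
        i, j = rng.integers(n_blocks), rng.integers(n_trt)
        if mask[i, j] and mask[:, j].sum() > 1:
            mask[i, j] = False
            n_remove -= 1
    return np.where(mask, y, np.nan)   # NaN marks a missing plot
```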
Instead of measuring the type I error per comparison, it was measured per experiment, a situation in which a single incorrect rejection of a null hypothesis in an experiment scores as an experimentwise type I error. This approach is more severe and general because it does not consider the number of treatments in the experiment (i.e., a higher number of treatments promotes an even higher number of contrasts, which implies a higher probability of type I error). However, this approach should be able to make a better distinction between the original and adjusted procedures. Converging results were expected for both procedures (original and adjusted) under balanced designs. Thus, contrast can be observed only between balanced and unbalanced designs.
All 40.5 million experiments were simulated in SAS/IML® and analyzed with the SAS System for Windows 9.3 (SAS Institute 2011). The data were evaluated using the Generalized Linear Models Procedure (Proc GLM). Output of the adjusted means was grouped by a compiled macro. A recursive SAS local host multithread approach with isolated workspaces was used to speed up the simulation run time.
Stability of the process and the ability to suspend it were ensured by the use of macros capable of error handling, oriented to processing batches of 5,000 experiments and logging all the processing responses.
Regarding the accuracy of the estimated type I error rates from the Monte Carlo simulations, the exact binomial test was applied, contrasting the nominal significance level against the obtained empirical rate (Leemis and Trivedi 1996). In scenarios in which the exact binomial test rejected the null hypothesis (p < 0.01), the performance of the Scott-Knott test should be considered conservative when the empirical rate is lower than the nominal rate, and liberal if higher. In scenarios in which the exact binomial test did not reject the null hypothesis, the tests were classified as precise. The F-value was obtained using equation (vii), where y represents the number of experiments with at least one type I error, α is the nominal significance level, and N is the number of simulated experiments (50,000). The p-value was found using ν_1 = 2(N − y) and ν_2 = 2(y + 1) degrees of freedom (Santos et al. 2001).
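The conservative/precise/liberal classification can be reproduced with an off-the-shelf exact binomial test; a sketch using SciPy, equivalent in intent to the F-based form of equation (vii) though not the authors' code:

```python
from scipy.stats import binomtest

def classify_rate(y, n=50000, alpha=0.05, level=0.01):
    """Classify an empirical experimentwise type I error rate.

    y : number of experiments with at least one type I error
    n : number of simulated experiments per setup
    """
    p = binomtest(y, n, alpha).pvalue    # exact two-sided binomial test
    if p >= level:
        return "precise"
    return "conservative" if y / n < alpha else "liberal"
```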
RESULTS AND DISCUSSION
Table 1 summarizes the results of 4.5 million simulated experiments. These experiments were simulated under the complete H_0 hypothesis (no real difference among treatments). For experiments with a balanced design (no missing plots), as the nominal α level increased, the empirical experimentwise type I error became higher. This persisted under experiments with missing plots using the proposed Scott-Knott adjustment, but reduced when the level of imbalance increased. It can be observed that the empirical values obtained using the Monte Carlo method for a balanced design (0% of missing plots), in which the adjusted and non-adjusted procedures exhibit the same results, are below the nominal α level for values smaller than 0.05, but according to the exact binomial test, the difference is not significant. In contrast, the empirical value is significantly higher than the nominal value for some α levels higher than 0.10, which means that the original procedure should be considered liberal at these levels, since it does not properly control the type I error even under the complete H_0 hypothesis. The intermittent classification at the alpha levels of 0.12 and 0.15, as a trend for empirical rates to surpass nominal rates as the nominal alpha level increases, could be caused by the approximation to the χ^2 distribution used by Scott-Knott (1974), but this hypothesis should be evaluated in further studies, as it is beyond the scope of this study.
Moreover, in half of the simulated combinations, the experimentwise type I error was evaluated as significantly different from the nominal value by the exact binomial test. As expected, the adjustment led to a more conservative approach as the level of missing plots increased. This result suggested that, in order to use the proposed adjustment, the user must take into account the level of imbalance (either from the planned design or from random loss of plots) before selecting the nominal α level.
In contrast, the adjusted and non-adjusted (original) Scott-Knott tests exhibited a higher empirical experimentwise type I error rate than the nominal rate under p-H_0 (Table 2). They also showed a small increase in the experimentwise type I error rate when the level of missing plots became higher, but the magnitude of the experimentwise type I error rate reduced as the α level increased. This result validated the findings of Silva et al. (1999) and exposed the weakest point of the Scott-Knott test: the lack of control of the experimentwise type I error under a p-H_0.
Additionally, lower values of δ culminated in smaller differences in the experimentwise type I error rate between the adjusted and non-adjusted results of the Scott-Knott procedure (Figure 1). This trend persisted upon increasing the nominal α. Increasing α or δ led to a reduction in the difference in power between balanced and unbalanced experimental designs (Table 3). It is also important to keep in mind that a higher value of δ indicates larger differences among the treatment values. Hence, it is easier for both procedures to detect these differences and reject the null hypothesis at any level of imbalance. The adjusted and non-adjusted tests exhibited lower power for δ < 1. No significant differences in power between the adjusted and non-adjusted procedures were noticed for δ > 1. Additionally, the adjusted Scott-Knott test maintained very high power, even with a small α value, under a complete H_1 (Figure 2).
However, as the level of imbalance got higher, there was a small loss of power when using the proposed adjustment. This performance was expected, since missing information causes a lower ability to reject the null hypothesis due to the additional protection required to control type I error. The small loss of power is a suitable indicator of adjustment efficiency, which is very important since the Scott-Knott test is recognized for its high power, with superior performance over the LSD and other widely used MCP (Willavise et al. 1980, Silva et al. 1999, Borges and Ferreira 2003). In spite of that, there is a trend of power reduction as the number of members per cluster decreases. This has already been pointed out as a similarity between hierarchical and non-hierarchical procedures (Tasaki et al. 1987), but it should not be assumed to be common to all clustering procedures, since the clustering procedure of Bozdogan (1986) shows exactly the opposite response.
Although the loss of power lowers the total number of clusters, it is a tolerable deficiency for scenarios where the entries that are wrongly clustered together will be retested in the next stage of research. Since the retesting routine is often used in plant breeding programs, this error is preferable to the possibility of discarding an entry without a satisfactory level of confidence. Thus, as for the non-adjusted Scott-Knott procedure, it is necessary to understand the error tolerance of the experiment under evaluation before using the proposed adjustment.
It is noteworthy that, even using the proposed adjustment, the most common cause of type I error under p-H_0 for the Scott-Knott test is late compensation for incorrect partitioning in the previous step, as a consequence of divisive binary partitioning. This usually occurs in scenarios where the true number of clusters differs from powers of 2 or from the geometric sequences with common ratio 2 (data not shown). This unsatisfactory compensation is very noticeable when the true number of clusters is 3, which is a weakness common to various clustering procedures (Tasaki et al. 1987). If the gap between clusters is not clear enough, the maximum likelihood test may select a splitting point around the median by mistake. Then, in the next step, while it seeks the point that maximizes the likelihood, it has a chance to correctly split the subset between the first and second clusters. A clear demonstration of this is an experiment with 9 treatments
truly distributed in 3 clusters, for example ABC/DEF/GHI, in which the test incorrectly performs the first partitioning as ABCDE/FGHI and then differentiates the first true cluster from the rest of the subset, resulting in (ABC/DE)/FGHI. In the following step, for the same reason, the test can correctly discriminate the third true cluster from treatment F, culminating in 4 clusters: (ABC/DE)/[F/GHI]. Although the first and third clusters are correct, the second cluster is improperly divided, increasing the type I error rate. This type of result is a consequence of the adoption of a divisive hierarchical approach, which allows comparison against a critical value obtained by empirical approximation in order to declare the computed statistic significant or not (Carmer and Lin 1983). Some approaches avoiding hierarchical clustering have been published to avert this undesirable feature by simply allowing the creation of completely new clusters in every step of evaluation (Cox and Spjotvoll 1982, Calinski and Corsten 1985, Bozdogan 1986). Despite that, the divisive hierarchical approach is still used for clustering (Di Rienzo et al. 2002, Valdano and Di Rienzo 2007).
Within plant breeding applications, the use of non-overlapping, mutually exclusive subsets such as those of Scott-Knott creates a clear cutoff for the genotype advancement procedure, while results with multiple distinct subsets can help in financial management by assigning the right subset to an appropriate testing pipeline. Using the proposed adjustment procedure, this distinguishing feature is extended to experiments with missing data, which are very common in yield trials. For example, using cluster analysis on an unbalanced yield trial that results in 6 distinct subsets, the breeder would be able to submit solely the genotype subset partitioned in the highest category, "Group A", to be tested in the most accurate and expensive Pipeline I (the maximum number of locations in a randomized complete block design). Group B of genotypes could be placed in the intermediate Pipeline II (a smaller set of locations), and Groups C and D could be tested in the lower cost Pipeline III (augmented blocks in the same locations as Pipeline II), while discarding the genotypes in Groups E and F (which have inferior performance compared to the commercial checks, clustered in Group C). After harvesting, the breeder can choose to retest only the superior genotypes from Pipeline III together with the new entries to be tested in Pipeline II or I.
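Once each genotype carries a cluster label, the pipeline assignment in this example reduces to a lookup; a toy sketch in Python (group names and pipeline labels are hypothetical, mirroring the example above):

```python
# Map Scott-Knott cluster labels to testing pipelines, as in the example.
PIPELINE = {"A": "Pipeline I", "B": "Pipeline II",
            "C": "Pipeline III", "D": "Pipeline III",
            "E": "discard", "F": "discard"}

def assign(genotype_clusters):
    """genotype_clusters: dict mapping genotype name -> cluster label."""
    return {g: PIPELINE.get(c, "discard") for g, c in genotype_clusters.items()}

print(assign({"G1": "A", "G2": "C", "G3": "F"}))
```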
A small drawback to the use of the proposed adjustment procedure is the increased complexity and volume of calculations in comparison to the non-adjusted procedure. Thus, in order to promote better dissemination of the proposed adjustment, a free compiled SAS GLM macro was developed and can be downloaded at http://www.tconrado.com/sas/sk.zip. The compressed file also contains an example to provide a better understanding of the macro options and of how to use the software.
The proposed adjusted Scott-Knott procedure had performance similar to the original procedure under unbalanced experimental designs, with minimal loss of power, while maintaining satisfactory control of the experimentwise type I error and improved performance at α > 0.05. This adjustment increases the spectrum of use of the test, providing the researcher with an alternative to the MCP even under a significant loss of experimental data (missing plots), and it is readily available for use in SAS.
Since the model Mean Square Error (MSE) is a good measure of variance, it is used as a satisfactory term for the estimation of s^2. Equation (iii) shows the relation between the unbiased estimator s^2 and the Standard Error of the Mean SE_ȳ, where RMSE is the Root Mean Square Error. It is valid only under an equal number of observations for every treatment (n_1 = n_2 = ... = n_k). Additionally, under a balanced experimental design, SE_ȳ has the very same value for every treatment and leads to equation (iv), the base of the proposed adjustment, where the mean of the sum of the squares of SE_ȳ estimates s^2.
Figure 1. Empirical experimentwise error under the partial null hypothesis for combinations of three significance levels (α) with four levels of true difference between two treatment means (δ).
Figure 2. Empirical power under the complete H_1 hypothesis at nine significance levels (α) across ten unbalance levels.
Table 1. Empirical experimentwise type I error under no true difference between treatments.
Table 2. Empirical experimentwise type I error under a true difference between treatments of four standard errors of the mean (4σ_x̄).
Table 3. Power of the adjusted Scott-Knott test at several unbalance levels under the partial null hypothesis (H_0) and four levels of true difference between two treatment means (δ) | 2017-10-15T22:47:02.737Z | 2017-03-01T00:00:00.000 | {
"year": 2017,
"sha1": "177ace6c02c6248e5c647a17b2008b7768bce055",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/cbab/v17n1/1984-7033-cbab-17-01-00001.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "177ace6c02c6248e5c647a17b2008b7768bce055",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
119492596 | pes2o/s2orc | v3-fos-license | Forward Modeling of Space-borne Gravitational Wave Detectors
Planning is underway for several space-borne gravitational wave observatories to be built in the next ten to twenty years. Realistic and efficient forward modeling will play a key role in the design and operation of these observatories. Space-borne interferometric gravitational wave detectors operate very differently from their ground based counterparts. Complex orbital motion, virtual interferometry, and finite size effects complicate the description of space-based systems, while nonlinear control systems complicate the description of ground based systems. Here we explore the forward modeling of space-based gravitational wave detectors and introduce an adiabatic approximation to the detector response that significantly extends the range of the standard low frequency approximation. The adiabatic approximation will aid in the development of data analysis techniques, and improve the modeling of astrophysical parameter extraction.
I. INTRODUCTION
Gravitational wave astronomy can be broadly divided into high and low frequency bands, with the dividing line near one Hertz. Seismic and gravity gradient noise prevent ground based detectors from exploring the low frequency portion of the spectrum, making this source-rich region the sole preserve of space-based observatories.
Ground and space-based interferometric gravitational wave detectors operate according to the same general principles, but differ in their implementation. Ground-based detectors, such as the Laser Interferometer Gravitational Wave Observatory (LIGO) [1], operate in the low frequency limit, where the wavelength of the gravitational waves is considerably larger than the size of the detector, and most sources are only in-band for a fraction of a second. These considerations simplify the description of the detector response, which may be well approximated by a quadrupole antenna moving at constant velocity with respect to the gravitational wave source. However, ground-based interferometers employ quasi-fixed rather than freely moving test masses, and the output of the detector is given by the response of the control loop used to keep the interferometer on a dark fringe. This complicates forward modeling efforts for ground-based detectors [2] as it makes the detector response non-linear. The situation with space-borne detectors is completely the opposite. Space-based detectors, such as the proposed Laser Interferometer Space Antenna (LISA) [3], will be able to detect gravitational waves with wavelengths that range from many times larger than the interferometer to many times smaller, and most sources will be in-band for months or years, so that the detector's orbital motion will impart amplitude, frequency, and phase modulations. These effects give rise to a complicated, time dependent detector response function [4]. Space-borne detectors typically have large arm-lengths (5 × 10^9 m for LISA) that vary with time, which prevents them from operating as traditional interferometers. Instead, the interferometer signals are produced in software from phase differences measured in the detector using a procedure known as Time Delay Interferometry (TDI) [5]. Despite these complications, the detector response remains linear, which greatly simplifies forward modeling efforts.
Forward modeling plays a key role in the design of any new scientific instrument, and is especially important when the instrument is the first of its kind. Work is now underway to produce an end-to-end model of the LISA observatory [6]. Key ingredients include accurate modeling of the spacecraft orbits and photon trajectories (this includes the effects of gravitational waves), realistic simulations of the time delay interferometry used to cancel laser phase noise, and experimental characterization of the various noise contributions. A good end-to-end model can help to make design trade-offs, and to avoid costly mistakes. Forward modeling can also be used to develop and test data analysis strategies. While we focus our attention on LISA, our forward model can be used to study other proposals for space-borne gravitational wave detectors, such as the Big Bang Observatory [7].
Work on various elements of the LISA end-to-end model has been under development for some time. Modeling of the detector response has its roots in the Doppler tracking of spacecraft [8]. Results were initially derived for a static array with equal arm-lengths [9,10]. Following the discovery of Time Delay Interferometry [5], these results were extended to a static array with unequal arm-lengths [5,11,12]. The orbital motion of the array was first incorporated in the low frequency limit [13], and later extended to the full detector response [4]. With the full response function in hand, we have developed an open source software package called The LISA Simulator [14] that takes as its input an arbitrary gravitational wave and returns as its output the simulated response of the LISA observatory. The main purpose of The LISA Simulator is to aid in the development of data analysis tools [13,15,16,17], but its modular design allows it to be extended into a full end-to-end model. For example, the static modeling [18] of the TDI implementation could be incorporated into The LISA Simulator, as could more realistic spacecraft orbits and experimentally determined noise spectra.
The value of a realistic end-to-end model has already become apparent with the discovery of flaws in the initial TDI scheme caused by the rotation of the array [19], time dependence of the arm-lengths [20], and problems with clock synchronization in a moving array [21]. These difficulties require modification of the TDI variables [19,20,22] and/or changes in the mission design.
On the other hand, a highly realistic end-to-end simulation necessarily consumes a great deal of computer resources, and delivers a fidelity that exceeds the requirements of many data analysis efforts. Indeed, when searching a large parameter space, fidelity must be sacrificed in favor of speed. To this end we have developed an approximation to the full LISA response that extends the low frequency approximation by two decades. The motion of the array is stroboscopically rendered into a sequence of stationary states, yielding an adiabatic approximation to the full response. The adiabatic approximation allows us to write down a simple analytic expression for the response function in a mixed time/frequency representation. For sources with a few dominant harmonics, such as low eccentricity, low spin binary systems at second post-Newtonian order, the adiabatic approximation provides a fast and accurate method for calculating the LISA response.
The outline of this paper is as follows: In Sec. II we describe the orbits of the interferometer constellation and describe how various effects enter into the detector response. In Sec. III we review the expression for the complete response of a space-borne detector. (An alternative derivation of the full response is given in Appendix B). In Sec. IV we show some applications of the general formalism using The LISA Simulator. In Sec. V we explore the limitations of the low frequency approximation, and in Sec. VI we introduce the adiabatic approximation and demonstrate its utility. We finish with an application, using the adiabatic approximation to determine when LISA can detect the time evolution of a binary system. We work in natural units with G = c = h = 1, but report all frequencies in Hertz.
A. Orbital effects
The current design of the LISA mission calls for three identical spacecraft flying in an equilateral triangular formation about the Sun. The center of mass for the constellation, known as the guiding center, is in a circular orbit at 1 AU and 20° behind the Earth. In addition to the guiding center motion, the formation will cartwheel in a retrograde motion with a one year period (see Fig. 1). The detector motion introduces amplitude (AM), frequency (FM), and phase modulations (PM) into the gravitational wave signals [13,17]. The amplitude modulation is caused by the antenna pattern being swept across the sky. The phase modulation occurs when the differing responses to the two gravitational wave polarizations are combined together. The frequency (Doppler) modulation is due to the motion of the detector relative to the source. Since both the orbital and cartwheel motion have a period of one year, these modulations will show up as sidebands in the power spectrum separated from the instantaneous carrier frequency by integer values of the modulation frequency, f_m = 1/yr.
To describe the coordinates of the detector we work in a heliocentric, ecliptic coordinate system. In this system the Sun is placed at the origin, the x-axis points in the direction of the vernal equinox, the z-axis is parallel to the orbital angular momentum vector of the Earth, and the y-axis is placed in the ecliptic to complete the right handed coordinate system. Ignoring the influence from other solar system bodies, the individual LISA spacecraft will follow independent Keplerian orbits. The triangular formation comes about through the judicious selection of initial conditions. In Appendix A we derive the spacecraft positions as a function of time. To second order in the eccentricity, the Cartesian coordinates of the spacecraft are given by Eq. (1), where R = 1 AU is the radial distance to the guiding center, e is the eccentricity, α = 2πf_m t + κ is the orbital phase of the guiding center, and β = 2πn/3 + λ (n = 0, 1, 2) is the relative phase of the spacecraft within the constellation. The parameters κ and λ give the initial ecliptic longitude and orientation of the constellation.
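As a concrete illustration, the leading-order (first order in e) piece of these orbits is simple to code. The expressions below are the standard first-order Keplerian expansion used in the LISA literature; they omit the second-order terms derived in Appendix A, and the function name and constants are for illustration only:

```python
import numpy as np

R = 1.0                # guiding-center radius in AU
ECC = 0.00965          # orbital eccentricity quoted in the text
F_M = 1.0 / 3.15581e7  # modulation frequency, 1/yr in Hz

def spacecraft_position(t, n, kappa=0.0, lam=0.0):
    """First-order-in-e position of spacecraft n (n = 0, 1, 2), in AU."""
    alpha = 2.0 * np.pi * F_M * t + kappa   # guiding-center orbital phase
    beta = 2.0 * np.pi * n / 3.0 + lam      # phase within the constellation
    x = R * np.cos(alpha) + 0.5 * ECC * R * (np.cos(2.0 * alpha - beta)
                                             - 3.0 * np.cos(beta))
    y = R * np.sin(alpha) + 0.5 * ECC * R * (np.sin(2.0 * alpha - beta)
                                             - 3.0 * np.sin(beta))
    z = -np.sqrt(3.0) * ECC * R * np.cos(alpha - beta)
    return np.array([x, y, z])
```

At this order the spacecraft separations are constant at L = 2√3 Re, which is the rigidity property noted below.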
Using the above coordinates, the instantaneous separations between spacecraft are found to be given by Eq. (2), with L = 2√3 Re. From this it is seen that to linear order in the eccentricity the detector arms are rigid. By setting the mean arm-length equal to that of the LISA baseline, L = 5 × 10^9 m, the spacecraft orbits are found to have an eccentricity of e = 0.00965, which indicates that the second order effects are down by a factor of 100 relative to leading order.
B. Gravitational Wave Description
An arbitrary gravitational wave traveling in the k̂ direction can be written as the linear sum of two independent polarization states, Eq. (3), where the wave variable ξ = t − k̂·x gives the surfaces of constant phase. The polarization tensors, Eq. (4), are written in terms of the principal polarization angle ψ and the basis tensors e_+ and e_×, which are expressed in terms of two orthogonal unit vectors, Eq. (5). These vectors, along with the propagation direction of the gravitational wave, form an orthonormal triad, which may be expressed as a function of the source location on the celestial sphere (θ, φ):

u = cos θ cos φ x̂ + cos θ sin φ ŷ − sin θ ẑ
v = sin φ x̂ − cos φ ŷ
k = − sin θ cos φ x̂ − sin θ sin φ ŷ − cos θ ẑ .
The above basis set is defined with respect to the barycenter reference frame. For a binary system, the standard gravitational wave source in the LISA band, it is natural to introduce another basis that is aligned with the principal polarization axes, p̂ and q̂, of the gravitational radiation. The orientation of the principal directions is chosen such that there is a π/2 phase delay between the two polarization states. The connection between the two basis sets is a rotation by the principal polarization angle ψ about the shared propagation direction k̂.
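The triad and basis tensors translate directly into code; a sketch, using the standard outer-product forms e_+ = u⊗u − v⊗v and e_× = u⊗v + v⊗u implied by the construction above (the subsequent rotation by ψ is omitted here):

```python
import numpy as np

def polarization_basis(theta, phi):
    """Return (u, v, k) and the basis tensors e_plus, e_cross."""
    u = np.array([np.cos(theta) * np.cos(phi),
                  np.cos(theta) * np.sin(phi),
                  -np.sin(theta)])
    v = np.array([np.sin(phi), -np.cos(phi), 0.0])
    k = np.array([-np.sin(theta) * np.cos(phi),
                  -np.sin(theta) * np.sin(phi),
                  -np.cos(theta)])
    e_plus = np.outer(u, u) - np.outer(v, v)
    e_cross = np.outer(u, v) + np.outer(v, u)
    return u, v, k, e_plus, e_cross
```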
We model the gravitational waves from a binary system according to Eq. (7), where Ψ(ξ) is the orbital phase. The instantaneous frequency of the n th gravitational wave harmonic is given by Eq. (8). Unless the binary is highly eccentric or highly relativistic, the dominant emission will be quadrupolar, with frequency f(ξ) = f_2(ξ), and will be well described by the restricted post-Newtonian approximation, Eq. (9). Here M is the chirp mass, D_L is the luminosity distance, and ι is the inclination of the binary to the line of sight.
Higher post-Newtonian corrections, eccentricity of the orbit, and spin effects will introduce additional harmonics.
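For the dominant quadrupole harmonic, the leading (Newtonian) order of this description gives the familiar chirp; the sketch below, in geometric units (G = c = 1, with masses and distances expressed in seconds), is intended only as an illustration of the waveform inputs, not a substitute for the full restricted post-Newtonian expressions:

```python
import numpy as np

def chirp_frequency(t, t_c, mchirp):
    """Newtonian-order quadrupole frequency f(t) for coalescence time t_c.

    t, t_c in seconds; mchirp in seconds (chirp mass times G/c^3).
    """
    return (1.0 / np.pi) * (5.0 / (256.0 * (t_c - t))) ** 0.375 * mchirp ** -0.625

def strain_amplitude(f, mchirp, d_lum):
    """Leading-order amplitude 2 M^(5/3) (pi f)^(2/3) / D_L (all in seconds)."""
    return 2.0 * mchirp ** (5.0 / 3.0) * (np.pi * f) ** (2.0 / 3.0) / d_lum
```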
III. DETECTOR RESPONSE: ANALYTICAL
For two spatially separated test particles in free fall, the effect of a passing gravitational wave is to cause the proper distance between the masses to vary as a function of time. Finding the detector response reduces to solving for the appropriate timelike and null geodesics in the spacetime with the metric (10). In the above equation φ denotes the Newtonian potential set up by various bodies in the Solar system and h_ij denotes the time-varying metric perturbation due to gravitational waves described in the previous section. The relevant geodesics are those of the two spacecraft, x_1(τ_1), x_2(τ_2), and the photons sent from spacecraft 1 to 2, x_ν(λ). We need to find the path taken by the photon that leaves spacecraft 1 at time t_1 and arrives at spacecraft 2 at time t_2, which amounts to a classic pursuit problem in curved spacetime. The calculation must take into account a host of factors, some due to the Newtonian potential, and some due to the gravitational wave.
During the time taken for the photon to travel between the spacecraft, both effects are small and can be treated independently.
The Newtonian potential leads to a variety of effects, such as a Shapiro time delay ∆L/L ∼ M_⊙/R, gravitational redshift ∆ν/ν ∼ M_⊙L/R^2, deflection of light ∆θ ∼ M_⊙L/R^2, and tidal flexing ∆L/L ∼ M_⊙L^2/R^3. Each of these effects is considerably larger than any of the effects caused by the passage of the gravitational wave, and they have to be subtracted before the gravitational wave data analysis begins. The first step in the subtraction relies on us being able to accurately model the orbital phase shifts using the Solar System Ephemeris. The second step in the subtraction employs a high pass filter to remove the residuals from the orbital fit, which occur at harmonics of the modulation frequency f_m = 1/yr ≃ 3.2 × 10^−8 Hz. The orbital effects, and the procedure for their removal, should be included in the full end-to-end model, even though they do not directly affect the response of the detector to gravitational waves.
The effect of the gravitational wave on the phase shift can be found by setting φ = 0 in Eq. (10) and solving the geodesic equation for the photons and the spacecraft in the metric perturbed by the gravitational wave. There are two equivalent approaches for finding the phase shift. The first approach is to find the Doppler shift of the photon emitted by the first spacecraft and received by the second. The Doppler shift is then integrated with respect to time to give the phase shift. The Doppler derivation is given in Appendix B. The second approach is to integrate along the photon's trajectory to find the path length variation caused by the gravitational wave [4]. The expressions given in Appendix B are valid to all orders in the spacecraft velocity v, and to first order in the gravitational wave strain h. However, as we explained in Ref. [4], it is hard to justify keeping terms of order vh given that v ∼ 10^−4. It would take a phenomenally bright source, with a signal to noise ratio of ∼ 10^5, for the vh cross terms to be noticeable. Working to leading order in v and h, the path length variation for a photon propagating from spacecraft i to spacecraft j is given by Eq. (11), where r_ij(t) is the unit vector pointing from test mass i to mass j and h(ξ) is the gravitational wave tensor in the transverse-traceless gauge. The colon here denotes a double contraction, a : b = a_ij b_ij. Applying Eq. (11) to a pair of orbiting spacecraft requires the careful evaluation of the r_ij(t) unit vectors. This calculation is complicated by the motion of the spacecraft and the finite speed of light. For a photon emitted from spacecraft i at time t_i and received at spacecraft j at time t_j, the proper evaluation of the unit vectors is given by Eq. (12). The distance the photon travels between spacecraft is given implicitly through the relationship in Eq. (13). Here we have used the fact that the reception time is the emission time plus the time of flight for the photon. We can numerically estimate the magnitude of this point ahead effect by expanding the photon propagation distance in a v/c series, Eq. (14), where v_j(t_i) is the velocity of spacecraft j and ℓ_ij(t_i) is the instantaneous spacecraft separation. For the LISA mission with a mean arm-length of 5 × 10^9 m and spacecraft velocity v ≈ 2πf_m R ≈ 10^−4, pointing ahead gives a first order effect of approximately 10^5 m. For comparison, the orbital effects given in Eq. (2) impart a variation in the photon propagation distance of 10^7 m. An arbitrary gravitational wave can be decomposed into its frequency components, Eq. (15). Such a decomposition allows us to rewrite Eq. (11) in the form of Eq. (16), where the one-arm detector tensor is given by Eq. (17) and the transfer function by Eq. (18).
Here f*_ij = 1/(2πℓ_ij) is the transfer frequency for the ij-arm. The transfer functions arise from the interaction of the gravitational wave with the detector. For gravitational radiation whose frequency is greater than the transfer frequency, the wave period is less than the light propagation time between spacecraft, which leads to a self-cancellation effect accounted for by the transfer functions. Below the transfer frequency the transfer functions approach unity. This leads to a natural division of the LISA bandwidth into high and low frequency regions, which will be exploited in a later section when we approximate the response of the detector.
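The suppression encoded in the transfer function is easy to visualize from its one-arm magnitude. The sketch below assumes the standard sinc form (consistent with T → 1 for f ≪ f*_ij); the phase factors are omitted, so this is an illustration rather than the full Eq. (18):

```python
import numpy as np

def one_arm_transfer_mag(f, f_star, mu):
    """|T| for one photon path; mu = k . r_hat, f_star = 1/(2 pi L).

    Standard sinc suppression factor only; complex phases are dropped.
    """
    x = (f / (2.0 * f_star)) * (1.0 - mu)
    return np.abs(np.sinc(x / np.pi))  # np.sinc(u) = sin(pi u)/(pi u)
```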
The connection of Eq. (11) to what is actually measured depends on the design of the gravitational wave detector. The current proposal for LISA is to have each spacecraft measure two phase differences, one for each arm. The phase difference, Φ_ij(t_j), as measured on spacecraft j, is found by comparing the phase of the received signal from spacecraft i against the phase of the outgoing signal traveling back to spacecraft i. Inherent in the phase difference measurements are both the gravitational wave signal and noise contributions from laser phase noise C(t), shot noise n^s(t), and acceleration noise n^a(t), as collected in Eq. (19). Here the time t_i is implicitly found through t_i = t_j − ℓ_ij(t_i). The subscripts on the noise components indicate the directional dependence of that component: C_ij is the laser phase noise introduced by the laser on spacecraft j that is pointed toward spacecraft i, n^s_ij is the shot noise in the photodetector on spacecraft j that is receiving a signal from spacecraft i, and n^a_ij is the projected acceleration noise from the accelerometer on spacecraft j in the direction of spacecraft i. The position noise and path length variation are converted into a phase difference by multiplying by the angular frequency of the laser, 2πν_0.
Once the six phase differences are measured and telemetered down, the different interferometer signals can be synthesized. For example, the Michelson signal formed by using spacecraft 1 as the vertex craft is given by Eq. (20), where t_21 and t_31 are found from Eqs. (21) and (22). However, due to the relatively large laser phase noise, the Michelson signal will not be a viable option. Instead a number of so-called TDI signals will be used [5]. These signals are built by combining time-delayed Michelson signals in such a way as to reduce the overall laser phase noise down to a level that will not overwhelm the detector's output. A particular example of a TDI variable is the X signal [20], given by Eq. (23), where the new times t_12, t_13, t'_21, and t'_31 are defined through the implicit relationships in Eq. (24). By permutations of the indices, similar forms for the Y and Z signals can be constructed. By writing the response of the detector in a coordinate-free manner we are able to apply this formalism to an arbitrary space-based mission. All that has to be changed are the spacecraft orbits. It should also be emphasized that the response is calculated entirely in the time domain. In later sections we develop approximations to the full response by working in a hybrid time/frequency domain. This hybrid approach assumes extra information about the sources, which allows us to develop explicit expressions for the detector response.
IV. DETECTOR RESPONSE: NUMERICAL

A. Noiseless response
As an application of the equations presented in the previous section, we have simulated the response of the proposed LISA mission. The LISA Simulator [14] is designed to take an arbitrary gravitational waveform and output the full response of the detector. To apply the equations we have elected to work entirely in the heliocentric, ecliptic coordinate system. Therefore, all times are evaluated in terms of Solar System Barycentric (SSB) time. The conversion to the detector time is through the standard relationship dτ = √(1 − v^2(t)) dt, but since we only work to leading order in v the distinction is not made. (In practice there will be difficulties in synchronizing the clocks on the spacecraft [21], but they do not trouble the simulations.) The positions of the spacecraft are calculated to second order in the eccentricity, Eq. (1), which includes the leading order flexing motion of the array. Tidal effects, and third order terms in the eccentricity, are neglected for now.
One of the guaranteed sources for the LISA mission is the cataclysmic variable AM Canum Venaticorum. This binary star system is composed of a low mass helium white dwarf that is transferring material to a more massive white dwarf by way of Roche lobe overflow. AM CVn's orbital frequency of 0.972 mHz and close proximity to the Earth (∼100 pc) make it a good calibration binary for LISA. Shown in Fig. 2 is the simulated response to AM CVn expressed as a strain spectral density h_f(f). Note that the barycenter gravitational wave signal will be approximately monochromatic; however, the motion of LISA introduces modulations that cause the signal to spread over a range of frequencies [17].
Another LISA source, but one whose event rate is poorly known, is the merger of two super-massive black holes. Figure 3 shows the simulated response of LISA to two 10^6 M_⊙ black holes coalescing at a redshift of z = 1. The observation tracks the final year before coalescence.
B. Noise
Laser phase noise, photon shot noise, and acceleration noise are expected to be the dominant forms of noise in space-borne detectors. As previously discussed, Time Delay Interferometry is used to reduce the effects of the laser phase noise to a tolerable level. We assume that the TDI signal processing is properly implemented, and therefore neglect laser phase noise in our simulation.
The simulation of the noise is done in the time domain by drawing random numbers at each time step from a Gaussian distribution with unit variance and zero mean. For the white photon noise we then scale the random number by the shot noise spectral density defined in Ref. [23] (S_ps = 1.0 × 10^−22 m^2/Hz). For the colored acceleration noise an analogous procedure is used, with the random numbers scaled and filtered to match the acceleration noise spectral density. Comparing this graph (Fig. 4) to a standard LISA sensitivity curve [24], a number of differences are apparent. The most obvious one is the lack of rise in the high frequency region. This is because the standard sensitivity curve folds the average detector response into the noise curve. The Sensitivity Curve Generator includes the all-sky averaged and polarization averaged transfer function, which equals 3/5 at low frequencies and grows as f^2 above the transfer frequency. A secondary difference is in the overall normalization, as the Sensitivity Curve Generator scales the path length variations by the interferometer mean arm-length of L, while we scale the path length variations by the optical path length of 2L.
To arrive at a simulation of the X noise we combine the noise elements as dictated by Eq. (23). Doing so gives the results displayed in Fig. 5, which agree with the predicted results. To see this, we start with the analytical expression for the average Michelson noise curve shown in Fig. 4, Eq. (25), which is derived in the appendix of Ref. [25]. In the above f* = 1/(2πL) is the mean transfer frequency for an arm. Next, we note that the X signal is formed by differencing two Michelson signals, one time delayed by roughly twice the light travel time between spacecraft. Therefore, the noise will enter the X signal as in Eq. (26), which has the Fourier transform of Eq. (27) and the power spectral density of Eq. (28). The strain spectral density of the X noise is then given by Eq. (29). Shown in Fig. 6 is a plot of h^X_f(f) along with the average from Fig. 5. Although the derivation of the X noise strain spectral density assumed constant arm-lengths, we see that there is excellent agreement between the predicted results of Eq. (29) and the simulation, which included the variations in the arms.
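The chain from the Michelson noise spectrum to the X spectrum can be written in a few lines; the sketch below encodes only the differencing relation n_X(t) = n_M(t) − n_M(t − 2L) described above, taking the Michelson spectrum S_M(f) as an input rather than reproducing Eq. (25):

```python
import numpy as np

def x_noise_psd(f, s_michelson, f_star):
    """S_X(f) = |1 - exp(-2 i f/f_star)|^2 S_M(f) = 4 sin^2(f/f_star) S_M(f)."""
    return 4.0 * np.sin(f / f_star) ** 2 * s_michelson

def x_strain_spectral_density(f, s_michelson, f_star):
    """h_f for the X channel: square root of the one-sided PSD."""
    return np.sqrt(x_noise_psd(f, s_michelson, f_star))
```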
Although Eqs. (11), (20), and (23) give the full response of a space-borne detector, they are analytically difficult to handle and time consuming to evaluate. For this reason we will now explore some approximations to the full response that use information about the input waveforms and a simplified description of the detector. These approximations not only aid in the development of data analysis techniques, but also give a greater insight into the workings of the detector.
V. LOW FREQUENCY APPROXIMATION
In sections II and III we saw that the full response of a space-borne gravitational wave detector was complicated by the intrinsic arm-length fluctuations, pointing ahead, and the signal-cancellation accounted for in the transfer functions. As a first approximation to the response of LISA we will neglect all of these effects. That is, we will work to linear order in the spacecraft positions, evaluate all spacecraft locations at a common time, and set the transfer functions to unity. It should be noted that this approximation was originally worked out by Cutler [13] and can be viewed as an extension of the LIGO response to space-borne detectors. The transfer function T(f, t, k̂) can be set equal to unity when f ≪ f*. For the LISA mission, whose bandwidth is 10^−5 to 1 Hz, the transfer frequency has a mean value of f* = 0.00954 ≈ 10^−2 Hz.
In the limit f ≪ f* and f/ḟ ≫ L, the path length variation (11) reduces to Eq. (30). Working in terms of strains and neglecting noise, the Michelson signal from spacecraft 1 is given by Eq. (31); the last line follows from the condition f ≪ f*. Using (3), (7), and (30) the strain can be re-expressed as Eq. (32), in which the gravitational wave phase measured at spacecraft 1 is given by Eq. (33). The antenna beam pattern factors, F_+(t) and F_×(t), are given by Eq. (34), where

D_+(t) = [r_12(t) ⊗ r_12(t) − r_13(t) ⊗ r_13(t)] : e_+
D_×(t) = [r_12(t) ⊗ r_12(t) − r_13(t) ⊗ r_13(t)] : e_× . (35)

Working to linear order in the eccentricity, the Keplerian orbits given in (1) yield the explicit expressions (36) and (37). Equations (32) to (37) constitute the analytical formalism for the Low Frequency Approximation. These equations are numerically quick to evaluate and can be handled analytically. As a point of reference, the strain presented in Eq. (32) can be shown to be equivalent (most easily through a numerical comparison) to that derived by Cutler [13].
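The detector tensor contractions are straightforward numerically; a sketch building D_+ and D_× from the spacecraft positions and the basis tensors sketched earlier (the combination into F_+ and F_× with the polarization angle, Eq. (34), is omitted):

```python
import numpy as np

def detector_tensors(r1, r2, r3, e_plus, e_cross):
    """D_plus and D_cross for the vertex-1 Michelson, per Eq. (35)."""
    r12 = (r2 - r1) / np.linalg.norm(r2 - r1)
    r13 = (r3 - r1) / np.linalg.norm(r3 - r1)
    d = np.outer(r12, r12) - np.outer(r13, r13)
    # np.tensordot with default axes=2 performs the double contraction a : b.
    return np.tensordot(d, e_plus), np.tensordot(d, e_cross)
```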
To test the range of validity of this approximation we used The LISA Simulator (TLS) as a template to calculate the correlation between the full response and the Low Frequency Approximation (LFA), defined in Eq. (38). Using fixed random choices for the source location and orientation we systematically varied the gravitational wave frequency and calculated the correlation at each frequency. The results of this calculation are shown in Fig. 7. We found that the Low Frequency Approximation has a strong correlation with the true response for frequencies below 3 mHz, at which point the correlation drops to 95%. The steep turn down in the correlation as the transfer frequency is approached is to be expected, as the Low Frequency Approximation neglects the self-cancellation effects encoded in the transfer functions. The wiggles at higher frequencies are due to the transfer functions present in the full response template s_TLS. The precise structure of these oscillations depends on the source location through the k̂·r_ij(t) dependence in the transfer functions. However, the turn down at 3 mHz is location independent. The location dependence does not become strongly evident until the correlation value has dropped to roughly zero.
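The correlation itself is a normalized inner product of the two time series; a minimal sketch (any noise weighting is omitted):

```python
import numpy as np

def correlation(s1, s2):
    """Normalized time-domain correlation between two detector outputs."""
    return np.dot(s1, s2) / np.sqrt(np.dot(s1, s1) * np.dot(s2, s2))
```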
The significance of a particular correlation value is dependent on the signal-to-noise ratio of the source. For high S/N the effects neglected in the approximation will be detectable. Conversely, for a low S/N one may continue to use the approximation at higher frequencies as the difference would not be noticeable.
VI. RIGID ADIABATIC APPROXIMATION

A. Response formalism
The breakdown of the Low Frequency Approximation comes about through neglecting the transfer functions. As a second approximation to the LISA response we will now include the transfer functions, but continue to hold the detector rigid by working to leading order in the spacecraft positions and evaluating all spacecraft locations at the same instant of time. Such an approximation has been worked out before for the case of a stationary detector in [25,26], but here we extend it to include the motion of the detector.
Physically this approximation can be viewed in the following way. At an instant of time we hold the detector fixed and send photons up and back along the interferometer arms and calculate the phase difference. We then increment the time by a small amount, moving the rigid detector to its new position in space, and repeat the process. This sequence of stationary states is the origin of the term "Adiabatic" for describing the approximation.
For chirping sources the Adiabatic approximation requires that the frequency evolution $\dot f$ occurs on a timescale long compared to the light travel time in the interferometer: $f/\dot f \gg L$. When this condition does not hold, the Rigid Adiabatic Approximation is no longer valid and the full response should be used. In the limit $f/\dot f \gg L$, the path length variation (11) reduces to a contraction of a one-arm detector tensor with the wave, modulated by a one-arm transfer function. The Michelson signal follows by differencing the two arms, and may be expressed in terms of a round-trip detector tensor and a round-trip transfer function, together with the time dependent unit vectors $\hat a(t)$ and $\hat b(t)$ of Eq. (46). Collectively these equations are the analytical formalism for the Rigid Adiabatic Approximation. As with the Low Frequency Approximation, the expressions are computationally quick to evaluate and can be easily manipulated analytically.

Figure 8 shows the correlation between the full response and the Rigid Adiabatic Approximation for a monochromatic gravitational wave. Note that by including the transfer functions we are able to extend agreement with the full response two decades in frequency beyond where the Low Frequency Approximation broke down. The turn down at $\sim 0.5$ Hz comes about through neglecting the second order terms in the spacecraft positions. As we described in Sec. II A, the second order orbital effects are down by two orders of magnitude in comparison to the linear order. This shows up in the Rigid Adiabatic Approximation through the transfer frequencies, which are evaluated for a rigid detector. Normally the transfer frequencies are functions of the time-varying arm lengths, $f_*^{ij}(t) = 1/\big(2\pi L_{ij}(t)\big)$, but for a rigid detector this reduces to the static form $f_* = 1/(2\pi L)$. The extension to higher orders in the orbital eccentricity can be done. The trade-off is that the expressions become more complicated, since the transfer frequencies would then become functions of time. In turn, this would require that each transfer frequency be evaluated along each arm during each time step, rather than using one constant value throughout the entire calculation. Additionally, the normalization of the unit vectors in Eq. (46) would need to be evaluated at each step, since the arm-lengths would vary as a function of time via Eq. (2). Such an approach would appropriately be called the Flexing Adiabatic Approximation, since the arm-lengths would now oscillate in time about a mean value of $L$. Although the expressions would become analytically complicated, the numerical evaluation would not be significantly slower, since the additional steps are straightforward to evaluate.

Figure 9 compares the output of The LISA Simulator to the Rigid Adiabatic Approximation for a binary system of intermediate mass black holes, each of mass $5000\,M_\odot$ at a redshift of $z = 1$. The observation covers the final year before coalescence. The agreement is excellent.
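The bookkeeping of the Flexing Adiabatic variant can be sketched as follows; the ~1% sinusoidal arm-length modulation is purely illustrative, and the per-arm transfer frequency $f_*^{ij}(t) = c/\big(2\pi L_{ij}(t)\big)$ is the time-dependent generalization referred to above:

```python
import numpy as np

# Sketch: per-arm, time-dependent transfer frequencies for a flexing detector.
C = 2.99792458e8
L_MEAN = 5.0e9                       # mean arm length [m] (assumed)

def arm_length(t, amp=0.01, period=3.15e7, phase=0.0):
    # Illustrative oscillation of an arm length about its mean value;
    # the real variation would come from Eq. (2).
    return L_MEAN * (1.0 + amp * np.sin(2*np.pi*t/period + phase))

for ti in np.linspace(0.0, 3.15e7, 5):
    L12 = arm_length(ti)
    f_star_12 = C / (2*np.pi*L12)    # transfer frequency for this arm, this step
    print(f"t = {ti:.2e} s, L12 = {L12:.4e} m, f*_12 = {f_star_12:.5e} Hz")
```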
In a recent paper [27], Seto used a variant of the Rigid Adiabatic Approximation to calculate the effects of LISA's finite arm-lengths on the analysis of gravitational waves from chirping supermassive binary black holes. A comparison of our Rigid Adiabatic Approximation, which is derived from the path length variation, and Seto's approach, which is based on the Doppler shift formalism, is given in Appendix C.
B. Applications
Utilizing the speed of the Rigid Adiabatic Approximation, we may investigate various data analysis questions. Here we provide one concrete example by determining when the phase evolution of a binary system due to radiation reaction needs to be included in the source modeling.
For our calculations we used the restricted post-Newtonian approximation, whereby the gravitational wave amplitude is calculated to first order, while the phase evolution is calculated to second order [29]. The justification for this is that LISA will be far more sensitive to the phase than to the amplitude [13]. The lack of additional harmonics of the orbital period also simplifies the calculation, as we only have to calculate a single transfer function at each time step.
To quantify the importance of including the evolution of the gravitational wave phase, we calculated the correlation of a monochromatic Rigid Adiabatic Approximation with one in which the phase evolution is included. Figure 10 shows the correlation for three types of binaries expected to reside inside our own galaxy: a white dwarf binary with mass components $0.5\,M_\odot$, a neutron star binary with masses $1.4\,M_\odot$, and a $10\,M_\odot$ black hole with a neutron star companion.
What we found is that the frequency at which the monochromatic signal diverges from one that includes phase evolution depends on the masses of the binary components. The reason for this comes from the expression for $\dot f$, which contains a mass dependent coefficient. For the stellar mass binaries we studied, the drop in the correlation happened to coincide with the breakdown of the Low Frequency Approximation. Thus, most Milky Way sources can be modeled as monochromatic sources using the Low Frequency Approximation.
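The mass-dependent coefficient can be made explicit with the leading-order (quadrupole) piece of the frequency evolution, $\dot f = \tfrac{96}{5}\pi^{8/3}(G\mathcal{M}_c/c^3)^{5/3} f^{11/3}$ with chirp mass $\mathcal{M}_c = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$ — quoted here as a standard stand-in for the second-order expression actually used in the text:

```python
import math

G = 6.674e-11        # [m^3 kg^-1 s^-2]
C = 2.998e8          # [m/s]
MSUN = 1.989e30      # [kg]

def fdot(f, m1, m2):
    """Leading-order (quadrupole) frequency evolution of a circular binary.
    This is only the lowest-order piece of the post-Newtonian expansion."""
    mc = (m1*m2)**0.6 / (m1+m2)**0.2          # chirp mass [kg]
    x = G*mc/C**3                             # chirp mass in seconds
    return (96.0/5.0) * math.pi**(8.0/3.0) * x**(5.0/3.0) * f**(11.0/3.0)

# The three galactic binaries considered above, evaluated at f = 3 mHz:
for name, m1, m2 in [("WD-WD", 0.5, 0.5), ("NS-NS", 1.4, 1.4), ("BH-NS", 10.0, 1.4)]:
    print(name, fdot(3e-3, m1*MSUN, m2*MSUN), "Hz/s")
```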
Another way to represent the same data is to map the initial frequency to the time of coalescence. The results of this calculation are shown in Fig. 11. In this case we see that chirping becomes important for stellar mass sources within $\sim 10^5$ years of coalescence. As expected, the mapping to the new variable preserves the mass dependence seen in Fig. 10. A final way to represent this data is to set the independent variable equal to the change in the frequency scaled by a bin width, $\delta f = (f_f - f_i)/\Delta f$, where for one year of observation the bin width is $\Delta f = 1/\mathrm{yr} \simeq 3.2 \times 10^{-8}$ Hz. Such an approach is shown in Fig. 12. Unlike the other representations of the correlation between a monochromatic and a coalescing signal, the results of this calculation are independent of the system's masses. It is also interesting to note that this result implies that it will be possible to detect whether a source is coalescing well within a bin width. This fact is not in conflict with the Nyquist theorem, which states that the frequency resolution will not be better than the inverse of the observation time. The reason is that we have additional information, namely the functional form of the phase evolution, which is not assumed in deriving Nyquist's theorem.
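A hedged numerical sketch of the bin-drift bookkeeping, treating $\dot f$ as constant at its initial (leading-order) value — adequate far from coalescence:

```python
import math

G, C, MSUN, YEAR = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

def fdot(f, m1, m2):
    # Leading-order (quadrupole) frequency evolution, as in the sketch above
    mc = (m1*m2)**0.6 / (m1+m2)**0.2
    return (96/5) * math.pi**(8/3) * (G*mc/C**3)**(5/3) * f**(11/3)

df_bin = 1.0 / YEAR                            # bin width ~3.2e-8 Hz
f_i = 3e-3                                     # initial frequency [Hz]
drift = fdot(f_i, 0.5*MSUN, 0.5*MSUN) * YEAR   # crude f_f - f_i over one year
print(f"delta_f = {drift/df_bin:.3f} bins")    # WD-WD binary at 3 mHz
```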
VII. DISCUSSION
We have examined the forward modelling of space-borne gravitational wave detectors with special emphasis on the LISA observatory. Forward modelling will play two distinct roles in the development of space-borne observatories. The first is as part of a complete end-to-end model that takes into account every conceivable physical effect, and the second is as an intermediary between source simulation and data analysis. Here we have focused on the latter role, and to that end we have studied two simple approximations to the full response - the Low Frequency Approximation and the Rigid Adiabatic Approximation. We found that the Rigid Adiabatic Approximation could be used in place of the full response for a wide range of data analysis projects. For example, the relatively simple analytic form of the Rigid Adiabatic Approximation is well suited to the calculation of Fisher information matrices in studies of astrophysical parameter extraction. On the other hand, The LISA Simulator is available if we need to simulate the response to highly relativistic gravitational wave sources such as the merger of two black holes.

APPENDIX A: SPACECRAFT ORBITS

To obtain the spacecraft coordinates of Eq. (A1) as functions of time, we first note that the ecliptic longitude is related to the eccentric anomaly, $\psi$, by Eq. (A3), and that the eccentric anomaly is related to the orbital phase $\alpha(t) = 2\pi t/T + \kappa$ via Kepler's equation,

$$\psi - e\sin\psi = \alpha - \beta. \qquad (A4)$$

Assuming a small eccentricity, we may solve Eq. (A4) through an iterative process in which we treat the $e\sin\psi$ term as being of lower order than $\psi$. Through such a procedure we arrive at the series solution (A6). Substituting this result into Eq. (A3) and expanding to second order in the eccentricity gives an ecliptic longitude of

$$\gamma = (\alpha-\beta) + 2e\sin(\alpha-\beta) + \tfrac{5}{2}e^2\cos(\alpha-\beta)\sin(\alpha-\beta) + \cdots. \qquad (A7)$$
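The iterative solution described above is easy to verify numerically; the sketch below checks that two iterations reproduce the series (A6) through $\mathcal{O}(e^2)$. The eccentricity value is an assumed representative LISA figure:

```python
import math

# Iterative solution of Kepler's equation psi - e*sin(psi) = alpha - beta,
# treating e*sin(psi) as lower order: psi_{n+1} = (alpha - beta) + e*sin(psi_n).
def eccentric_anomaly(alpha_minus_beta, e, n_iter=3):
    psi = alpha_minus_beta               # zeroth-order seed
    for _ in range(n_iter):
        psi = alpha_minus_beta + e * math.sin(psi)
    return psi

m = 1.0          # a sample value of alpha - beta
e = 0.00965      # representative small eccentricity (assumed value)
series = m + e*math.sin(m) + e**2*math.sin(m)*math.cos(m)   # series (A6) to O(e^2)
print(eccentric_anomaly(m, e, 2), series)                   # agree to O(e^3)
```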
Substituting the ecliptic longitude series into Eq. (A1) and keeping terms up to order $e^2$ gives the Cartesian positions of the spacecraft as functions of time; to first order in the eccentricity these take the form

$$x(t) = R\cos\alpha + \tfrac{1}{2}eR\big(\cos(2\alpha-\beta) - 3\cos\beta\big) + \mathcal{O}(e^2),$$
$$y(t) = R\sin\alpha + \tfrac{1}{2}eR\big(\sin(2\alpha-\beta) - 3\sin\beta\big) + \mathcal{O}(e^2),$$

with analogous second-order corrections. These are the desired coordinates of each spacecraft as a function of time.
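A sketch of these orbits, using the first-order-in-$e$ expressions above; the $z$ component and the phase convention $\beta_n = 2\pi(n-1)/3 + \lambda$ are taken from the standard LISA orbit literature and should be treated as assumptions here. As a consistency check, the inter-spacecraft distance comes out near $2\sqrt{3}\,eR \approx 5\times10^9$ m:

```python
import numpy as np

AU = 1.495978707e11     # [m]
R = AU                  # guiding-centre radius (assumed 1 AU)
E = 0.00965             # orbital eccentricity (assumed LISA value)
T = 3.156e7             # orbital period of one year [s] (approximate)

def spacecraft_position(t, n, kappa=0.0, lam=0.0):
    """First-order-in-e Cartesian position of spacecraft n = 1, 2, 3.
    x, y follow the expressions reconstructed above; the z component is the
    standard first-order result from the LISA orbit literature (assumed)."""
    alpha = 2*np.pi*t/T + kappa            # orbital phase
    beta = 2*np.pi*(n-1)/3 + lam           # relative spacecraft phase (assumed)
    x = R*np.cos(alpha) + 0.5*E*R*(np.cos(2*alpha - beta) - 3*np.cos(beta))
    y = R*np.sin(alpha) + 0.5*E*R*(np.sin(2*alpha - beta) - 3*np.sin(beta))
    z = -np.sqrt(3)*E*R*np.cos(alpha - beta)
    return np.array([x, y, z])

# Arm length check: |x1 - x2| should stay near 2*sqrt(3)*e*R ~ 5e9 m
p1, p2 = spacecraft_position(0.0, 1), spacecraft_position(0.0, 2)
print(np.linalg.norm(p1 - p2))
```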
APPENDIX B: DOPPLER STYLE DERIVATION OF THE FULL RESPONSE
The Doppler shift of a photon emitted by spacecraft 1 and received by spacecraft 2 can be elegantly derived [8] using the symmetries of the spacetime (10). When $\phi = 0$ the spacetime admits three Killing vectors, $\vec\zeta_{(i)}$. These provide three constants of the motion, $\vec\zeta_{(i)} \cdot \vec U$, which, along with the normalization condition $\vec U \cdot \vec U = 0$ or $\vec U \cdot \vec U = -1$, fully specify $\vec U(\lambda)$ in terms of some initial four velocity $\vec U(0)$. Writing the metric as $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, we may express the photon propagation four-vector in terms of a null vector $\vec s$ in the unperturbed geometry: $s^\mu s^\nu \eta_{\mu\nu} = 0$. At the time of emission from spacecraft 1, $\vec s(t_1) = \vec s_0 + \delta\vec s_1$, while at the time of reception at spacecraft 2, $\vec s(t_2) = \vec s_0 + \delta\vec s_2$. Here $\vec s_0$ is parallel to the unit vector connecting the two spacecraft in the unperturbed spacetime, while $\delta\vec s_1$ and $\delta\vec s_2$ are perturbations to the path due to lensing by the gravitational wave. Defining the perturbed propagation vector $\sigma^\alpha$, together with the differences $\Delta s^\alpha = \delta s_2^\alpha - \delta s_1^\alpha$ and $\Delta h_{\alpha\beta} = h_{\alpha\beta}(t_2) - h_{\alpha\beta}(t_1)$, we have

$$\sigma^\alpha(t_2) = \sigma^\alpha(t_1) + \Delta s^\alpha - \tfrac{1}{2}\eta^{\alpha\beta}\Delta h_{\beta\gamma} s^\gamma, \qquad g_{\alpha\beta}(t_2) = g_{\alpha\beta}(t_1) + \Delta h_{\alpha\beta}, \qquad (B4)$$

which yields

$$2 s^\alpha \Delta s^\beta \eta_{\alpha\beta} = \sigma^\alpha(t_2)\sigma^\beta(t_2)\, g_{\alpha\beta}(t_2) - \sigma^\alpha(t_1)\sigma^\beta(t_1)\, g_{\alpha\beta}(t_1) = 0.$$
These can be solved to give, for example, Eq. (B6). Here $k \to (1, \hat k)$ is the null propagation vector for the gravitational wave. The frequencies of the emitted and received photons, as measured at spacecraft 1 and spacecraft 2, are given by $\nu_1 = -\vec U_1(t_1) \cdot \vec\sigma(t_1)$ and $\nu_2 = -\vec U_2(t_2) \cdot \vec\sigma(t_2)$, respectively. Here $\vec U_1$ and $\vec U_2$ are the four-velocities of the two spacecraft. Note that $\nu_1 = \nu_0$ is the operating frequency of the laser on board spacecraft 1.
The spacecraft four-velocities take the standard form (B8), where $U^t = \gamma = dt/d\tau$ and the $v^i$ are the ordinary three-velocities of the spacecraft. The spacecraft trajectories $\vec U$ may be expressed in terms of the unperturbed trajectories $\vec U_0$ according to Eq. (B9), where the $A^\alpha$ are constants set by the initial conditions at some time $t$. Once the initial conditions for the spacecraft have been set, equations (B6), (B8) and (B9) give the full Doppler shift $\nu_2 - \nu_1$ at any subsequent time.
The expressions simplify considerably if we drop terms of order $v^2$, $vh$ and higher. Converting the result into a fractional frequency shift, $\Delta\nu/\nu_0$, and using $\vec s_0 = \nu_0 \hat a$, where $\hat a$ is the unit vector connecting the two spacecraft in the background geometry, we have

$$\frac{\Delta\nu}{\nu_0} = \frac{\hat a \otimes \hat a : \Delta h}{2\big(1 - \hat k \cdot \hat a\big)}.$$

Integrating the above expression with respect to time yields the time delay described by Eq. (11).
APPENDIX C: RECONCILIATION BETWEEN ALTERNATIVE RIGID ADIABATIC FORMALISMS
According to Eq. (2.2) of Ref. [28], the relative frequency shift for a photon traveling from spacecraft 2 to spacecraft 1 can be expressed as

$$y_{31}(t) = \frac{1}{2}\big[A_+\cos(2\psi_{12}) + iA_\times\sin(2\psi_{12})\big]\big(1 - \cos\theta_{12}\big)\big[U(t, 1) - U(t - L, 2)\big], \qquad (C1)$$

where the function $U(t, i)$ gives the phase of the gravitational wave at spacecraft $i$, $\theta_{ij}$ is the angle between the source location on the sky, $-\hat k$, and the detector arm, and $\psi_{ij}$ is given through the relationship

$$\tan\psi_{ij} = \frac{\hat r_{ji} \cdot \hat q}{\hat r_{ji} \cdot \hat p}.$$
Here $\hat p$ and $\hat q$ are unit vectors along the principal polarization axes of the gravitational wave. The amplitude coefficients, $A_+$ and $A_\times$, are given as functions of the orbital inclination angle $\iota$ and the intrinsic amplitude $A$. The ratio of Seto's expression to the Rigid Adiabatic result can be shown to equal unity by writing the vector $\hat r_{ij}$ in terms of the orthonormal triad $\{\hat p, \hat q, \hat k\}$ according to

$$\hat r_{ij} = \sin\theta'\cos\phi'\,\hat p + \sin\theta'\sin\phi'\,\hat q + \cos\theta'\,\hat k.$$
| 2019-04-14T02:22:16.437Z | 2003-11-21T00:00:00.000 | {
"year": 2003,
"sha1": "a361fd0ccf81156788c6482b00954e88b9717dd0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/gr-qc/0311069",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a361fd0ccf81156788c6482b00954e88b9717dd0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
52966532 | pes2o/s2orc | v3-fos-license | Aortic arch cannulation with the guidance of transesophageal echocardiography for Stanford type A aortic dissection
Background Aortic arch cannulation for antegrade central perfusion during surgery for Stanford type A aortic dissection can be performed within the median sternotomy. We summarize the safety and convenience of this central cannulation strategy, performed under the guidance of transesophageal echocardiography (TEE), in comparison to the traditional femoral cannulation strategy. Methods Sixty-two patients with acute Stanford type A aortic dissection underwent aortic arch surgery in our hospital. All the patients were operated on by the same surgeon. Cannulation was performed in 33 patients through the aortic arch under the guidance of TEE (Group A) and in 29 patients through the femoral artery (Group F). Under moderate hypothermic circulatory arrest, the brain was continuously perfused in an antegrade manner through the brachiocephalic and left common carotid arteries. Preoperative characteristics and surgical information were collected for each patient. Additionally, the 30-day mortality rate and the incidence of temporary neurological dysfunction were recorded as the outcomes. To compare the categorical variables, we used the chi-squared test. Continuous variables were compared using the t-test. Results Preoperative characteristics were similar between the two groups. The mean operation time (7.33 ± 1.14 h vs. 8.93 ± 2.59 h, P = 0.002) and the mean cardiopulmonary bypass (CPB) time (260.97 ± 45.14 min vs. 298.28 ± 95.89 min, P = 0.024) were significantly shorter in Group A than in Group F. The 30-day mortality rates were 9.09 and 27.59% in Groups A and F, respectively (P = 0.057), and the incidences of temporary neurological dysfunction were 39.39 and 65.52% in Groups A and F, respectively (P = 0.040). Conclusions Aortic arch cannulation with the guidance of TEE during aortic arch surgery is a simple, fast, safe, and less invasive technique for establishing cardiopulmonary bypass for Stanford type A aortic dissection.
Keywords: Cannulation site, Aortic arch cannulation, Transesophageal echocardiography, Femoral cannulation, Stanford type A aortic dissection

Background

Stanford type A aortic dissection is a devastating event associated with major morbidity and mortality and requires immediate surgical repair. During surgery for type A aortic dissection, the choice of cannulation site is of great importance for improving the outcomes of the operation [1]. Over the past decades, various cannulation sites have been used. Femoral arterial cannulation (FC) has been used for cardiopulmonary bypass since the 1950s [2] and has been reported to be the standard cannulation site, but it carries a risk of distal re-entry, perfusion of the false lumen, malperfusion syndrome, and cerebral embolization because of retrograde perfusion in the dissected aorta [3,4]. Axillary arterial cannulation was first described by Villard et al. in 1976, but it was infrequently used for arterial inflow until 1995, when the Cleveland Clinic published positive results in 35 patients after axillary arterial cannulation [5]. It provides antegrade cerebral perfusion to reduce the risk of stroke and retrograde embolization. However, it also involves local complications such as injury of the artery or the brachial plexus, which can lead to arm ischemia, insufficient CPB flow, and atherosclerosis of the artery, and it often requires a side graft sewn to the vessel [1,6]. Subclavian artery cannulation is a more time-consuming procedure and provides a cumbersome antegrade cerebral perfusion (ACP), because selective ACP proceeds only through the right carotid artery during periods of systemic circulatory arrest [7]. Moreover, if the type A aortic dissection extends beyond the brachiocephalic artery, or if the patient has an incomplete circle of Willis, surgeons would choose not to cannulate via sites such as the subclavian artery, innominate artery, or axillary artery [4,8]. Transapical aortic cannulation is an old technique that was initially described in the early 1970s [9], but it is limited to patients with severely calcified ascending aortas and is prone to bleeding at the access site [10]. Recently, direct cannulation into the dissected ascending aorta has been reported by several surgeons [11][12][13]; it can be performed rapidly without an additional incision. In the early 2000s, several surgeons began to combine transesophageal echocardiography (TEE) with arterial cannulation to reduce the risk of cannulating into the false lumen [14,15]. These techniques have been described in numerous studies and have been widely used. However, the question of which cannulation site is optimal remains controversial.
The present study was undertaken to compare the experience and results in patients undergoing surgery for Stanford type A aortic dissection using two different cannulation sites: the aortic arch under the guidance of TEE and the femoral artery. We compare the two methods and try to provide helpful information regarding the selection of the cannulation method for aortic arch surgery.
Patient selection for surgery
This retrospective study was approved by the Institutional Review Board. Individual patient consent was not required. All patients with Stanford type A aortic dissection, irrespective of the dissection flap, underwent computed tomography angiography (CTA) and transthoracic echocardiography (TTE) in our hospital for diagnosis and operative planning. Cannulation sites were decided individually based on patient status and surgeon preference. From December 2015 to April 2017, 62 patients with acute Stanford type A aortic dissection underwent aortic arch surgery in our hospital. All the patients were operated on by the same surgeon. Cannulation was performed in 33 patients through the aortic arch with the guidance of TEE (Group A) and in 29 patients through the femoral artery (Group F). Almost all of the 33 patients in Group A were complicated cases, wherein other conventional cannulation methods were precluded because of the involvement of the axillary and femoral arteries by the dissection flap. The clinical backgrounds and preoperative clinical condition of the patients are presented in Table 1.
Surgical technique
After general anesthesia and intubation, standard median sternotomy was performed, and cardiopulmonary bypass (CPB) was instituted by cannulating either the aortic arch with the guidance of TEE (Group A) or the femoral artery (Group F).
Aortic arch cannulation technique
Group A received aortic arch cannulation with the guidance of TEE, wherein a TEE probe was inserted through the esophagus. Following the median sternotomy and systemic heparinization, the pericardium was opened slowly. A concentric pledget-reinforced purse-string suture was placed through the adventitial layer on the lesser curvature of the aortic arch. A modified Potts-Cournand-style 18-gauge needle was used to puncture the aorta inside the purse-string suture. Once pulsatile bleeding was confirmed, a 0.035-in. flexible guide wire was introduced through the needle. After TEE confirmed the presence of the guide wire in the true lumen of the descending aorta, the needle was taken out, and the cannula was advanced over the guide wire. TEE then confirmed the accurate positioning of the cannula in the true lumen. After double-stage cannulas were inserted into the superior and inferior venae cavae, CPB was established and the patient was cooled. TEE was performed by cardiologists (Fig. 1).
Femoral cannulation technique
Group F received femoral cannulation. The right or left femoral artery was surgically exposed and cannulated prior to sternotomy. Venous cannulation was performed with a double-stage cannula via the right atrium. CPB was then established, cooling was begun, and a standard median sternotomy was performed. We used selective antegrade cerebral perfusion whenever total arch replacement was required; the brain was selectively perfused in an antegrade manner at a rate of 5 ml/kg/min and a temperature of 25°C to 27°C. An ice pack was applied to the head to maintain cerebral hypothermia until CPB was restarted. Myocardial protection was obtained by means of an antegrade infusion of cold blood cardioplegia. The patients underwent David and Bentall operations, ascending aortic replacements, total aortic arch replacements, hemi-arch replacements, descending aortic stented elephant trunk implantation, or other operations (Table 2).
CPB time was defined as the cumulative time on full-body CPB, including moderate hypothermic circulatory arrest (MHCA). MHCA time was defined as the cumulative time of full-body circulatory arrest, which is equivalent to the brain perfusion time. Operation time was defined as the time from incision to closure. Cross time was defined as the time from clamping the aorta to opening the aorta. Stroke was defined as a new postoperative focal neurologic deficit or cerebral hemorrhage that persisted for more than 72 h, or a new focal lesion of the brain detected by a computed tomography scan. Temporary neurologic dysfunction was defined as a focal neurologic deficit lasting for less than 72 h, or postoperative delirium, agitation, confusion, or decreased level of consciousness without any new structural abnormality observed on imaging [16,17].
Statistical analysis
Patient data were analyzed using SPSS 22.0 for Windows. Categorical variables are presented as numbers and percentages, and continuous variables are presented as mean and standard deviation values. To compare the categorical variables, we used the chi-squared test. Continuous variables were compared using the t-test.
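As an illustration of the comparisons described above — not the authors' actual SPSS workflow — the same tests can be run with standard Python tooling; the contingency counts and simulated distributions below are placeholders chosen only to be consistent with the reported summary statistics:

```python
import numpy as np
from scipy import stats

# Chi-squared test for a categorical variable (e.g., smoking, Group A vs F).
# Counts are placeholders consistent with the reported proportions.
table = np.array([[23, 10],    # Group A: smokers / non-smokers
                  [11, 18]])   # Group F: smokers / non-smokers
chi2, p_cat, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_cat:.3f}")

# Independent-samples t-test for a continuous variable (e.g., CPB time, min).
cpb_a = np.random.normal(261.0, 45.1, 33)    # simulated Group A values
cpb_f = np.random.normal(298.3, 95.9, 29)    # simulated Group F values
t_stat, p_cont = stats.ttest_ind(cpb_a, cpb_f)
print(f"t = {t_stat:.2f}, p = {p_cont:.3f}")
```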
Results
Patient characteristics
A total of 62 patients were diagnosed with acute Stanford type A aortic dissection by contrast-enhanced computed tomography and echocardiography, and they underwent ascending aortic surgery from December 2015 to April 2017. Patient characteristics are presented in Table 1. No significant differences in age, gender, body mass index (BMI), Marfan's syndrome, hypertension, coronary heart disease, respiratory disease, liver dysfunction, renal dysfunction, or cardiac reoperation were found. However, the rates of patients who smoked (69.70% vs. 37.94%, P = 0.012) and drank alcohol (54.55% vs. 20.69%, P = 0.006) were higher in Group A than in Group F. Liver dysfunction was defined as abnormal laboratory indices of liver function. Renal dysfunction was defined as a creatinine value > 120 μmol/L.
Intraoperative parameters
We determined that the puncture and cannulation of the aortic arch were possible in 33 of the 62 patients, and none of them experienced intraoperative difficulties. Femoral cannulation was performed in the remaining 29 patients, except that two patients required an additional cannulation through the innominate artery because of malperfusion in their right arms when the ascending aortas were cross-clamped. The techniques used are shown in Table 2. All the patients underwent ascending aorta replacement. Hemiarch repair was performed in 6 patients (… vs. 24.14%, P = 0.080). Concomitant cardiac procedures included coronary artery bypass in 4.84% (n = 3) of the patients because the coronary arteries were affected by the dissection, mitral valve plastic surgery in 6.45% (n = 4), tricuspid valve plastic surgery in 8.06% (n = 5), repair of ruptured sinus of Valsalva aneurysm in 1.61% (n = 1), left vertebral artery reconstruction in 1.61% (n = 1), and repair of atrial septal defect in 1.61% (n = 1). In general, we concluded that the surgeries were more complicated in Group A. Surgical duration and intraoperative data are summarized in Table 3. After starting the extracorporeal circulation, the body temperature was cooled to 25°C to 27°C in all patients (nasopharyngeal temperature of 25.49°C ± 2.07°C vs. 26.05°C ± 2.78°C, P = 0.259; rectal temperature of 27.14°C ± 1.73°C vs. 27.36°C ± 2.64°C, P = 0.144). No significant differences in MHCA time, absence of circulatory arrest, Hct after CPB, minimum hemoglobin concentration, or maximum serum lactic acid concentration during the operations were found. However, the mean operation time (7.33 ± 1.14 h vs. 8.93 ± 2.59 h, P = 0.002) and the mean CPB time (260.97 ± 45.14 min vs. 298.28 ± 95.89 min, P = 0.024) were significantly shorter in Group A than in Group F. One patient required a second run of extracorporeal circulation to stop a hemorrhage after the termination of extracorporeal circulation.

Table 4 shows the postoperative parameters. The length of intensive care unit (ICU) stay (5.50 ± 3.35 vs. 4.62 ± 1.75, P = 0.200) and intubation time (43.54 ± 36.38 vs. 36.52 ± 27.54, P = 0.393) were similar in both groups, as was the need for tracheostomy (9.09% vs. 6.90%, p = 1.000), thoracentesis (30.30% vs. 44.83%, p = 0.237), and thoracic cavity closed-chest drainage (6.06% vs. 3.45%, p = 1.000). No significant intergroup differences existed in the frequency of hemorrhage requiring rethoracotomy, which occurred in only one patient in Group A; sepsis, which occurred in one patient (3.03%) in Group A and in one patient (3.45%) in Group F; renal failure, which occurred in two patients (6.06%) in Group A and in three patients (10.34%) in Group F; multiple organ failure, which occurred in two patients (6.06%) in Group A and in two patients (6.90%) in Group F; circulatory failure, which occurred in one patient (3.03%) in Group A and in four patients (13.79%) in Group F; intestinal ischemia, which occurred in one patient (3.45%) in Group F; limb ischemia, which occurred in one patient (3.45%) in Group F; or rehospitalization, which occurred in one patient (3.45%) in Group F only. The rate of temporary neurological dysfunction (TND) was significantly lower in Group A than in Group F (39.39% vs. 65.52%, p = 0.040), and the wake time was significantly shorter in Group A than in Group F (7.22 ± 3.78 vs. 12.35 ± 12.64, p = 0.046).
No statistically significant difference in in-hospital mortality was found between the two groups; however, a trend toward lower 30-day mortality (9.09% vs. 27.59%, p = 0.057) was observed in Group A.
Discussion
The optimal cannulation site for the repair of acute Stanford type A aortic dissection remains unknown. The most common site for cannulation in this setting was the femoral artery until the late 1990s [18]. However, femoral artery cannulation carries a risk of distal re-entry, false lumen perfusion, organ malperfusion, and cerebral embolization because of retrograde perfusion in the dissected aorta [3,4]. As an alternative cannulation technique, direct ascending aortic cannulation has been advocated by the Hannover group [19] and has been developed under the guidance of TEE by some surgeons [20,21]. Our center considered FC the standard cannulation technique in the repair of Stanford type A aortic dissection and began to use aortic arch cannulation with the guidance of TEE in 2015. Aortic arch cannulation with the guidance of TEE is easy, fast, and straightforward; it ensures antegrade flow in the aorta and can be advantageous compared with axillary and femoral cannulation. If the three branches of the aortic arch and the bilateral femoral arteries are all affected by the aortic dissection, cannulation through the aortic arch proves to be the best option. Opening another surgical field is not required, so establishing CPB becomes faster, which is highly beneficial for a patient with hemodynamic instability.
Moreover, no additional incisions are required and surgeons do not need to repair the cannulation site, thereby avoiding injuries to other peripheral arteries. With the guidance of TEE, we did not introduce perfusion of the false lumen, because the cannula was directly inserted into the true lumen. Surgeons can use a large-diameter cannula to provide sufficient perfusion during CPB and shorten the cooling time, cutting down the overall time of surgery. Furthermore, this cannulation technique tends to provide selective antegrade cerebral perfusion to protect the brain from edema, stroke, other neurological complications, and retrograde cerebral embolization. However, this technique has one potential drawback, which is the risk of aortic rupture at the cannulation site. Khaladj et al. [19] reported that only 1 of 122 patients (0.8%) had an aortic rupture caused by aortic cannulation in patients with Stanford type A aortic dissection. Hiroyuki et al. [18] reported no aortic ruptures after aortic cannulation in 82 patients over 20 years. Moreover, we did not cause any aortic rupture at the cannulation site in the 33 patients in this study. Therefore, the danger of aortic rupture at the cannulation site is extremely low. In Group A, aortic arch cannulation with the guidance of TEE was technically feasible and safe in all 33 patients. Using careful and safe cannulation techniques, we encountered no difficulties related to the cannulation procedure and did not need to convert to a different cannulation site. We did not observe any malperfusion phenomenon or problems directly related to aortic arch cannulation. However, in Group F, we found intraoperative malperfusion of the right upper limb, as evidenced by the decreased blood pressure of the right radial artery, and of the left brain hemisphere, as evidenced by the decreased cerebral oxygen saturation. This phenomenon may be caused by malperfusion of the innominate artery. We suspected that it may have arisen for the following reasons: (1) the diameter of the cannula, which was limited by the diameter of the femoral artery, was too small to provide enough blood for the upper limbs and the brain; and (2) some blood may have flowed into the false lumen after CPB was started, so the perfusion flow was lower than the value detected by the instrument. This phenomenon required no treatment other than an additional cannulation inserted into the innominate artery. Moreover, intestinal ischemia was detected on abdominal computed tomography (CT) in one of the two patients. The patients were fasted for more than 1 week and were given parenteral nutrition. Furthermore, no femoral arterial rupture was present in these patients. One patient's left dorsalis pedis artery pulsation was non-palpable during the first week post-operation, and muscle force was weaker in the left lower limb than in the right lower limb. The temperature was lower in the left lower limb than in other parts of the body. These findings may have been caused by malperfusion, because the cannula was inserted into the left femoral artery of this patient.
Mortality with acute Stanford type A aortic dissection remains high, with an average 30-day mortality rate of approximately 17%, which progressively increases to 25% in octogenarians [22]. In our single-center study, we reported a 30-day mortality rate of 17.74% in 62 patients. Our study shows that aortic arch cannulation with the guidance of TEE has a positive effect on 30-day mortality. Stefan and colleagues [23] found that the cannulation strategy used for the initial bypass has no impact on mortality, even though femoral cannulation is performed more often in sicker patients, as categorized by ASA classification. In another study, the risk of early mortality was driven by the preoperative clinical and hemodynamic status before the operation rather than by the cannulation technique [24]. In the retrospective study of Masahiro and colleagues [1], the mean operative time, mean CPB time, and interval between the start of the operation and the start of CPB were significantly shorter in the central cannulation group, and central cannulation had a positive effect on mortality (6.8% vs. 17.3%, p < 0.001). Their study showed that direct central cannulation through the ascending aorta is successful in repairing type A dissection and produced surgical results superior to those of femoral cannulation. Hiroyuki and colleagues [18] found a trend toward a reduced mortality rate in patients with aortic cannulation, although no statistical differences in postoperative mortality and morbidity between the aortic and femoral cannulation groups were present. The large German Registry for Acute Aortic Dissection (GERAADA) [25], with 2137 patients, did not show a significant influence of the cannulation site on any outcome parameter.
Preoperative and postoperative neurologic symptoms were present in approximately 7 and 20% of the patients, respectively [22]. In our study, preoperative neurologic symptoms did not differ between the two groups, but a significantly lower rate of TND was present in Group A than in Group F. Moreover, the patients who received aortic arch cannulation with the guidance of TEE tended to recover more quickly than those who received femoral cannulation. This technique may prevent the cerebral embolization caused by retrograde perfusion in the dissected aorta during femoral cannulation. The risk of stroke did not differ between the two groups. With adequate cerebral perfusion and cerebral monitoring using bilateral cerebral oxygen saturation, a moderate hypothermic arrest with temperatures between 25°C and 27°C is acceptable in both groups. In the study by Stefan and colleagues [23], no differences in neurologic symptoms with respect to the perfusion strategy were found. In another single-center study by Stefan and colleagues [24], the data showed a new neurologic event in 11% of all patients, which did not differ between femoral and central cannulation. In other studies [1,18,24], the rates of short-term postoperative neurologic complications in patients receiving different cannulation techniques did not differ.
Moreover, the mean operation time and mean CPB time were significantly shorter in Group A than in Group F. This result may be attributed to aortic arch cannulation with the guidance of TEE not requiring an additional incision, and to the flow through the cannula, which is larger than that achievable with femoral cannulation. In addition, TTE showed no LV asynergy, apical pseudoaneurysm, or aortic valve regurgitation in the early postoperative period.
Conclusion
Direct aortic arch cannulation using the Seldinger technique with the guidance of TEE may be a simple, accurate, fast, and safe cannulation technique to establish CPB during the surgery to treat type A aortic dissection. This technique is an appropriate approach for patients with peripheral arteries affected by the aortic dissection or those with hemodynamic instability.
Limitation
Limitations of the present study are the relatively small number of patients from a single institution and the non-randomized and retrospective study design. Moreover, the cannulation site was not randomly chosen but individually decided depending on patient status. Therefore, further studies with large patient populations are necessary.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Authors' contributions

HM, ZX, and YG conceived the study and drafted the manuscript; ZX, JS, and YG gave administrative support; JS, CQ, and LL provided the materials or patients of the study; HM collected and assembled the data; data were analyzed and interpreted by HM. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The review of these patients was approved by the Hospital Ethics Committee [(2016) West China Hospital of Sichuan University Biomedical Research Ethics Committee No.168] for human research, and the written informed consent was obtained from the participants.
Consent for publication
The informed consent for publication was obtained. | 2018-10-22T02:08:11.353Z | 2018-10-11T00:00:00.000 | {
"year": 2018,
"sha1": "1f5ce2676029c86835f2b1f70ea9a883818e4832",
"oa_license": "CCBY",
"oa_url": "https://cardiothoracicsurgery.biomedcentral.com/track/pdf/10.1186/s13019-018-0779-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f5ce2676029c86835f2b1f70ea9a883818e4832",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232423161 | pes2o/s2orc | v3-fos-license | LncRNAs: Architectural Scaffolds or More Potential Roles in Phase Separation
Biomolecules specifically aggregate in the cytoplasm and nucleus, driving liquid-liquid phase separation (LLPS) and diverse biological processes. Extensive studies have focused on revealing multiple functional membraneless organelles in both the nucleus and cytoplasm. The components of LLPS condensates, such as the proteins and RNAs that affect phase separation, have gradually been unveiled. LncRNAs, which possess abundant secondary structures, usually promote the formation of phase separation by providing architectural scaffolds for the interaction of diverse RNAs and proteins in both the nucleus and cytoplasm. Beyond scaffolds, lncRNAs may possess more diverse functions, such as functioning as enhancer RNAs or buffers. In this review, we summarize current studies on the function of phase separation and its related lncRNAs, mainly in the nucleus. This review will facilitate our understanding of the formation and function of phase separation, and of the role of lncRNAs in these processes and related biological activities. A deeper understanding of the formation and maintenance of phase separation will be beneficial for disease diagnosis and treatment.
INTRODUCTION
The assembly of liquid-liquid phase separation (LLPS) in cells mediates the formation of numerous membraneless compartments, such as stress granules (Wheeler et al., 2016;Gui et al., 2019), RNA-protein complexes termed ribonucleoprotein (RNP) granules (Murakami et al., 2015;Pitchiaya et al., 2019), PGL-1/3 granules, nuclear paraspeckles (Hupalowska et al., 2018;Yamazaki et al., 2018), and receptor clusters (Su et al., 2016). These compartments are involved in various physiological processes and pathological conditions. These two- and three-dimensional membraneless organelles have well-defined boundaries, allowing specific biomolecules, such as proteins and nucleic acids, to be concentrated within liquid droplets and exchanged with the surrounding microenvironment (Banani et al., 2017). By creating distinct physical and unique biochemical compartments, phase separation facilitates temporal and spatial control of signaling transduction and biochemical reactions (Nott et al., 2015;Chong and Forman-Kay, 2016;Su et al., 2016). The transition of phase separation from liquid to gel/solid states is implicated in various central nervous system diseases caused by aberrant protein aggregation, as is common in amyotrophic lateral sclerosis (ALS) (Kim et al., 2013;Gasset-Rosa et al., 2019) and frontotemporal dementia (FTD) (Murakami et al., 2015). Dynamic liquid droplets formed by LLPS are believed to be driven by multivalent interactions between biomacromolecules containing intrinsically disordered regions (IDRs)/prion-like domains (PrLDs) or RGG/RG sequences (Kim et al., 2013;Banani et al., 2017;Chong et al., 2018). These interactions typically include charge-charge, pi-pi, and cation-pi interactions. The PrLDs and RGG sequences of RNA binding proteins (RBPs) possess small polar residues and aromatic, positively charged amino acids, which are critical elements for intermolecular interactions (Maharana et al., 2018;Alberti et al., 2019). These RBPs contribute to the formation of RNP granules and nuclear paraspeckles through interaction with diverse RNAs in the manner of LLPS (Patel et al., 2015;Fox et al., 2018). In addition, LLPS is sensitive to its surrounding environment. The biophysical features of LLPS components (usually specific proteins and nucleic acids) and environmental factors (such as temperature, salt concentration, pH, co-solutes, the concentration of other macromolecules, and the modification of phase-separation-related components) have an enormous influence on the intermolecular interactions between RBPs and RNAs (Brangwynne et al., 2015;Nott et al., 2015;Reichheld et al., 2017;Franzmann and Alberti, 2019). Post-translational modifications (PTMs), such as phosphorylation (Larson et al., 2017;Zhang et al., 2018), methylation (Qamar et al., 2018;Ryan et al., 2018), ubiquitination (Dao et al., 2018), and SUMOylation (Jin, 2019;Qu et al., 2020) of proteins, as well as m6A modification of RNA (Ries et al., 2019), modulate LLPS formation by regulating protein-protein or protein-RNA interactions, which are affected by the net charge distribution of those molecules.
As a component of phase separation, RNA cooperates with protein partners to drive LLPS formation and modulates the properties of droplets (Huo et al., 2020). Emerging evidence indicates that RNA not only serves as a scaffold in phase separation owing to its abundant secondary structures (Jain and Vale, 2017;Fay and Anderson, 2018;Maharana et al., 2018), but can also decrease the viscosity of protein condensates and promote the diffusion of protein components (Elbaum-Garfinkle et al., 2015). Long non-coding RNAs (lncRNAs) are longer than 200 nt in length and do not encode proteins, yet they play critical roles in cell metabolism and tumor development, largely depending on their subcellular localization (Zhang et al., 2014). Nuclear lncRNAs regulate transcription, epigenetic modification, and the splicing of mRNAs (Schmitt and Chang, 2016;Tang et al., 2017). Evidence reveals that lncRNAs regulate mRNA translation and degradation by complementary base pairing and serve as RNA sponges by interacting with miRNAs in the cytosol (Yoon et al., 2013). Our previous studies have revealed that lncRNAs coordinate diverse signal transduction pathways, such as PIP3, HIF1-α, Hippo, Hedgehog, and NF-κB, to promote tumor development (Zheng et al., 2017;Sang et al., 2018). Compared to small RNAs, lncRNAs are more capable of providing binding sites for RBPs involved in phase separation (Chujo and Hirose, 2017;Fox et al., 2018;Yamazaki et al., 2018). The classical paraspeckles, which are mainly constituted by lncRNA NEAT1 and numerous RBPs, sequester component proteins and RNAs in the nucleus to mediate gene expression through extensive polymerization and multivalent interaction of LLPS components (Yamazaki et al., 2018). However, further investigation is needed to understand how lncRNAs coordinate phase separation at different subcellular localizations to contribute to diseases (such as degenerative diseases) and tumor development.
This review summarizes current advances in phase separation and its related lncRNAs in the nucleus and cytosol during numerous biological processes (Figure 1). We have also summarized the lncRNAs referred to in this review (Table 1). Finally, potential therapeutic targets among phase-separation-related lncRNAs and phase separation components during disease development are also summarized.
PHASE SEPARATION
The membrane-bound organelles in eukaryotic cells are well defined by their membrane boundaries, which provide relatively independent compartments for their specific functions (Hyman et al., 2014;Nott et al., 2015;Alberti et al., 2019). For example, the endoplasmic reticulum (ER) is involved in the processing of proteins and the synthesis of lipids; the Golgi apparatus participates in the processing, sorting, and transporting of proteins; lysosomes function as the cleaning machines for misfolded and pathological proteins; and mitochondria provide cellular fuel. However, how membraneless organelles assemble proteins, nucleic acids, and other molecular components into phase separation, and what roles these membraneless organelles play in biomolecular metabolic processes, stress sensing, signaling pathway transduction, and the regulation of gene expression, remain largely unknown. Since Hyman and Brangwynne first reported the formation of germline P granules by phase separation in worm embryo cells in 2009 (Brangwynne et al., 2009), the number of studies on phase separation touching myriad cellular functions has increased significantly.
The regulation of gene expression is a prominent event in healthy and diseased states and involves many factors (such as enhancers and coactivators). Recent studies suggested that gene regulation is always accompanied by phase separation assembled by numerous IDR proteins. Using live-cell super-resolution light-sheet imaging, a previous study found that the Mediator coactivator coordinates RNA polymerase II (RNA Pol II) to regulate the assembly of Mediator clusters at enhancers, thus activating gene expression (Cho et al., 2018). Typically, enhancers can activate promoters within the locus (Palstra et al., 2003). These phase-separation-mediated enhancers cause bursting gene expression. Transcriptional factors (TFs) MED1 and BRD4 condense at super-enhancer (SE) foci to coactivate gene transcription. This phase separation formed by SEs and TFs confers robust gene expression, which could explain from a new perspective why cancer cells acquire large SEs at driver oncogenes that result in bursting gene expression.

FIGURE 1 | The graphical abstract of phase-separation-related lncRNAs involved in cellular function. (A) LncRNA NORAD functions as a multivalent binding platform for PUM1/2 proteins in the cytoplasm; (B) LncRNA Xist mediates X chromosome silencing and subsequently drives the interaction between the inactivated X chromosome and the Lamin-B receptor (LBR); (C) LncRNA PNCTR sequesters PTBP1 in the perinucleolar compartment (PNC) and modulates the splicing regulation function of the PTBP1 protein; (D) DilncRNA is synthesized at DSB foci and coordinates DDR proteins to promote the formation of DDR foci in response to DSBs; (E) LncRNA NEAT1 functions as a scaffold to recruit CARM1, PSPC1, and p54nrb proteins to regulate cell differentiation and embryo development in paraspeckles; (F) LncRNA TNBL accumulates as a perinucleolar aggregate at NBL2 loci close to the SAM68 body and is involved in genome organization, splicing regulation, and mRNA stability, respectively; (G) LncRNA HSATIII is involved in the formation of two nuclear bodies, nSB-M and nSB-S, in response to thermal stress.

TABLE 1 | LncRNAs referred to in this review.

| LncRNA | Subcellular localization | Biological function | References |
| LINK-A | Cytoplasm | Hyperactivates AKT and HIF1-α signaling and downregulates antigen-presentation-related genes to promote drug resistance and immune escape and to remodel the glycolytic reprogramming of cancer cells. | Lin et al., 2016, 2017; Hu et al., 2019 |
| BRCA4 | Nuclear | Coordinates the Hippo and Hedgehog signaling pathways to aberrantly regulate glycolysis and advance breast cancer development. | Xing et al., 2014; Zheng et al., 2017 |
| CamK-A | Cytoplasm | Assists the Ca2+ signaling pathway to aberrantly regulate glycolysis and remodel the tumor microenvironment. | Sang et al., 2018 |
| NEAT1 | Nuclear | Functions as a scaffold for paraspeckle components and sequesters specific proteins (such as CARM1), promoting cell differentiation and embryo development. Attenuates activation of p53 and confers cancer cell drug resistance (LLPS). | Chen and Carmichael, 2009; Adriaens et al., 2016; Fox et al., 2018; Hupalowska et al., 2018; Yamazaki et al., 2018 |
| MAYA | Cytoplasm | Mediates heterodimerization of ROR1 and HER3 and promotes activation of YAP, thus facilitating breast cancer bone metastasis. | |
| HOTAIR | Nuclear | Assists recruitment of the PRC2 complex to histones and is responsible for the transcriptional silencing of the HOXD gene. | Rinn et al., 2007 |
| LincRNA | Nuclear | Binds to a series of chromatin-modifying proteins to maintain the pluripotent state of ESCs. | Guttman et al., 2011 |
| NORAD | Nuclear and cytoplasmic | Assembles a topoisomerase complex at targeted chromatin foci to stabilize the genome (nuclear). Functions as a multivalent binding platform for PUM1/2 proteins, thus maintaining genomic stability (LLPS). | |
| HSATIII | Nuclear | Functions as an architectural scaffold interacting with two hnRNPs to promote nuclear stress body formation upon thermal stress exposure (LLPS). | |
| PNCTR | Nuclear | Functions as an architectural scaffold sequestering PTBP1 in the perinucleolar compartment, thus modulating the splicing function of the PTBP1 protein and promoting cancer cell survival (LLPS). | |
| TNBL | Nuclear | Accumulates as a perinucleolar aggregate at NBL2 loci close to the SAM68 body, thus contributing to nuclear functions and RNA metabolism (LLPS). | Dumbovic et al., 2018 |
| DilncRNA | Nuclear | Is synthesized at DSB foci and coordinates DDR proteins to promote the formation of DDR foci in response to DSBs (LLPS). | |
| — | Nuclear | Functions as a scaffold promoting HP1α and SAFB to form PCH foci in pericentromeric heterochromatin (LLPS). | |
| TERRA | Nuclear | Considering the enrichment of lncRNA TERRA in APBs and the interaction between TERRA and epigenetic modification factors and RBPs, TERRA may also play a functional role at telomere foci by providing a platform for multiple protein interactions (LLPS). | Min et al., 2019 |

The composition of amino acids of TFs' activation domains in mammalian OCT4 and yeast GCN4 is vital for forming phase separation. Phase separation also coordinates multiple signaling pathways (such as the estrogen receptor (ER) and Yes-associated protein (YAP) signaling axes) to respond to stress (Cai et al., 2019). Changes in the components of phase separation often have an impact on its function. Phase separation formed by the histidine-rich domain (HRD) of cyclin T1 and DYRK1A contributes substantially to phosphorylation of the C-terminal domain (CTD), and disruption of the HRD interaction downregulates gene expression. Phase separation accumulated at chromatin foci is significantly dependent on the conformation of nucleosomes. A loose conformation of nucleosomes means the activation of chromosomes, while tight condensation suggests the
formation of heterochromatin. Heterochromatin protein 1 (HP1) is known to finely tune heterochromatin phase separation by participating in weak multivalent interactions of nucleosomes (Larson et al., 2017;Sanulli et al., 2019). Histone H1 and the 10n + 5 inter-nucleosome spacing promote the phase separation of chromatin and decrease dynamics in droplets (Gibson et al., 2019). These models of heterochromatin formation provide a new perspective for understanding how phase separation regulates the conformation of chromatin. Regulation of gene expression by phase separation broadens our understanding of the mechanisms of aberrant expression at the transcriptional level in numerous diseases, facilitating the development of new strategies to identify key components involved in the formation and maintenance of phase transitions. A novel CRISPR-Cas9-based optogenetic technology was used to explore the formation of droplets as impacted by the chromatin microenvironment. This study suggested that phase separation preferentially forms at low-density genomic regions and promotes genomic rearrangements, thus contributing to the activation of gene expression. On the contrary, at high-density genomic regions, small droplets ultimately dissolve, contributing to the disappearance of phase separation (Shin et al., 2018). These pieces of evidence indicate that the structure of the genome and phase separation affect each other, and both have an enormous impact on gene expression. The existence of phase separation could well explain aberrant patterns of gene expression.

The transition of phase separation from a liquid to a gel or solid state leads to degenerative neurological diseases (Wang and Zhang, 2019). Heterogeneous nuclear ribonucleoproteins (hnRNPs) containing IDRs or PrLDs, such as FUS, hnRNPA1, or TAR DNA-binding protein 43 (TDP-43), are enriched in many aging-associated diseases (Kim et al., 2013;Patel et al., 2015;Gui et al., 2019;Mann et al., 2019). Tau droplets formed by phosphorylated or mutant Tau with IDRs undergoing LLPS contribute to Alzheimer's disease (Wegmann et al., 2018). Fused in sarcoma (FUS) is an RNA-binding protein involved in RNA transcription, splicing, transport, and translation. With a classical IDR and low-complexity domain (LCD), the FUS protein transitions from a liquid to an aggregated state, promoting LLPS formation at sites of DNA damage, which is associated with ALS (Patel et al., 2015). As membraneless organelles, phase-separated condensates can sequester specific components to accelerate or inhibit particular cellular functions, and thus advance disease development. Mislocalization and aberrant aggregation of misfolded TDP-43 sequester importin-α and Nup62 in the cytoplasm. Depletion of importin-α and Nup62 in the nucleus induces RanGap1, Ran, and Nup107 mislocalization, thus promoting cell death and causing advanced ALS and FTD (Gasset-Rosa et al., 2019). Degenerative-neurological-disease-related mutations can also affect the formation of phase separation. Recent studies reported that ALS/FTD-related mutations induce FUS phase transition from liquid droplets to irreversible hydrogels, which impairs RNP function and advances disease (Murakami et al., 2015;Patel et al., 2015).
Similarly, ALS-related mutations in the TDP-43 C-terminal domain (CTD) disrupt phase separation and impair interactions within the phase droplets, which promotes LLPS transition into solid aggregation, thus aggravating the ALS condition (Conicella et al., 2016). Mutations in the prion-like domains of hnRNPA2B1 and hnRNPA1 also contribute to ALS (Kim et al., 2013). Elucidation of the exact mechanisms underlying the molecular properties, formation, regulation, and function of membraneless organelles can help us explore novel therapeutic approaches to treating aging-related disorders. Optogenetic approaches used to control the phase separation of TDP-43 reveal that the LCD of TDP-43 is competitively bound by RNA, and oligonucleotides composed of the TDP-43 target sequence can moderate the neurotoxicity caused by aggregation of TDP-43 (Mann et al., 2019). Dysregulation of the phase separation of aging-related proteins accelerates the malignant transition, but this is not a one-way process. Extensive exploration of these processes helps us better understand the development of aging-related diseases.
LncRNAs IN CELL BEHAVIOR
Nearly 98% of the human genome is transcribed as non-coding RNAs (ENCODE Project Consortium et al., 2007;Schmitt and Chang, 2016). Given such a large amount of non-coding RNAs (ncRNAs), their cellular functions have intrigued many researchers. According to size, ncRNAs are divided into small ncRNAs and long ncRNAs (Brosnan and Voinnet, 2009;Liu et al., 2019). LncRNAs are poorly conserved in terms of their nucleotide sequences, even though they can be found in many species (Johnsson et al., 2014;Beermann et al., 2016). The secondary structures of lncRNAs enable them to interact with DNAs, proteins, and RNAs, allowing them to participate in multiple cellular processes (Fernandes et al., 2019).
Emerging evidence has revealed that lncRNAs with different subcellular localizations engage in numerous biological processes, including regulation of gene transcription, chromatin remodeling, cancer-related signaling pathways, and organism development (Zheng et al., 2017; Sun et al., 2018; Sarropoulos et al., 2019). Nuclear-localized lncRNAs are involved in transcriptional and post-transcriptional modification and chromatin organization (Sun et al., 2018). The transcriptional regulation function of lncRNAs largely relies on multiple interactions between lncRNAs and other molecules (such as DNA, proteins, and RNAs). LncRNA HOTAIR assists the PRC2 complex in accumulating at chromatin and is responsible for silencing of the HOXD locus (Rinn et al., 2007). Researchers have identified dozens of lncRNAs that bind a series of chromatin-modifying proteins to maintain the pluripotent state of stem cells (Guttman et al., 2011). LncRNA NORAD assembles a topoisomerase complex at targeted chromatin foci to stabilize the genome (Munschauer et al., 2018). The classical lncRNA Xist mediates X-chromosome inactivation by recruiting protein complexes that deposit repressive epigenetic marks and coat the X-chromosome (Heard and Disteche, 2006). LncRNA TCF7 recruits SWI/SNF complexes to the TCF7 promoter to mobilize nucleosomes and remodel chromatin conformation, promoting the self-renewal of liver cancer stem cells. All of the studies mentioned above suggest that lncRNAs are vital for gene expression at the epigenetic level. LncRNAs also act as local regulators that influence the expression of nearby genes in cis (Guil and Esteller, 2012; Engreitz et al., 2016). Enhancer RNAs (eRNAs), transcribed bidirectionally from enhancer regions, can bind multiple TFs and coactivators to alter chromosomal architecture and regulate gene expression (Li et al., 2013; Liu et al., 2014; Pnueli et al., 2015). The emerging roles of eRNAs significantly extend our understanding of gene transcription regulation by lncRNAs.
Dysregulation of lncRNAs in cells and tissues is associated with malignant transformation and various pathological processes (Xing et al., 2014; Neumann et al., 2018), and is often coordinated by multiple classical signaling pathways. Our previous studies have suggested that lncRNA LINK-A is involved in breast cancer drug resistance, hypoxia, and the immunosuppressive microenvironment. LncRNAs CamK-A, BRCA4, and AGPG wire up NF-κB (Engreitz et al., 2016), Hippo and Hedgehog (Zheng et al., 2017), and the PFKFB3 glycolytic enzyme complex (Liu et al., 2020), respectively, to remodel glucose metabolism and the tumor microenvironment, promoting tumor development. LncRNAs are characterized by specific tissue distributions, which implies functional roles in development and differentiation. Through genome-wide analysis, Luo et al. (2016) found that divergent lncRNAs regulate about 168 genes encoding transcription factors and developmental regulators in embryonic stem cells (ESCs), implying that lncRNAs may be developmentally regulated (Schmitz et al., 2016). The developmental lncRNA atlas constructed by Sarropoulos et al. (2019) revealed that lncRNAs show species specificity and dynamic expression patterns from early organogenesis to adulthood, suggesting that time-, lineage-, and organ-specific lncRNAs are responsible for specific functions during organogenesis and organism development. The functions of lncRNAs extend well beyond those mentioned above. Recent advances in deep-sequencing technologies have identified some lncRNAs with the ability to encode functional peptides (Anderson et al., 2015; Nelson et al., 2016; Huang et al., 2017). More interestingly, coordination of phase separation by lncRNAs has been reported in many recent studies.
LncRNAs AND PHASE SEPARATION
The essential features of phase separation are mainly determined by its components. Phase separation related to the regulation of gene expression usually takes place in the nucleus. Nuclear bodies, clustered factors, super-enhancers, and chromatin foci are often related to phase separation and transcriptional regulation. These phase-separated membraneless organelles are storage compartments for many RNAs and RNA-binding proteins. RNAs involved in the formation of phase separation function as scaffolds and eRNAs, whereas phase separation, in turn, impacts RNA behavior, such as their synthesis (Pefanis et al., 2015; Nair et al., 2019). The nucleoplasm is a natural pool abundant with diverse membraneless nuclear bodies regulating gene expression (Chujo and Hirose, 2017; Ninomiya and Hirose, 2020), especially paraspeckles and chromosome loci. These nuclear bodies are usually composed of multiple RNAs and RBPs containing PrLDs and RGG sequences (Van Treeck and Parker, 2018; Alberti et al., 2019). In phase-separated nuclear bodies, RNAs critically regulate the phase behavior of RBPs with PrLDs. Different RNA/protein ratios exert different influences on phase transitions. To some extent, RNAs act as a buffer in the nucleus, where high RNA concentrations keep RBPs soluble (Van Treeck and Parker, 2018). Changes in RNA levels or in the RNA-binding abilities of RBPs cause aberrant phase transitions (Maharana et al., 2018). This suggests that RNAs in phase separation can competitively bind proteins containing IDRs, which attenuates protein self-aggregation, and implies that larger RNAs, especially lncRNAs, may be more efficient at buffering phase separation.
In addition to their buffering function in the nucleus, many lncRNAs serve as scaffolds for nuclear body formation. LncRNAs act as seeds that recruit specific component proteins through RNA–protein interactions. These RBPs often recruit additional proteins to induce LLPS and control gene expression under certain stimulations. Among these nuclear bodies, the paraspeckle is a sound model system with well-defined RNA and protein components for the study of phase separation. LncRNA NEAT1 has well-characterized architectural functions, providing a scaffold for multiple RNA-binding proteins (RBPs) in paraspeckle construction (Adriaens et al., 2016). Gene expression is affected by the size and number of paraspeckles, which can sequester specific RBPs and/or RNAs away from the nucleoplasm to achieve this regulation (Chen and Carmichael, 2009). Paraspeckle formation is similar to that of cytoplasmic stress granules, another type of membraneless organelle. Both paraspeckles and stress granules respond to cellular stress and function by sequestering specific components to regulate stress response-related gene expression. These membraneless organelles appear to be more flexible than membrane-bound organelles during the stress response because of their dynamic disassembly and assembly. Aberrant gene expression involving paraspeckles is often associated with cancer progression (Adriaens et al., 2016). The enhancer, another typical transcriptional element, is also responsible for gene transcription bursting. The phase separation model suggests that super-enhancers (SEs), consisting of clustered enhancers, are involved in the high transcriptional activity of related genes (Pefanis et al., 2015; Hnisz et al., 2017). In SE foci, phase separation regulates the degradation and accumulation of eRNAs, which ultimately affects the stability of the genome (Pefanis et al., 2015). The functions of eRNAs and SEs in phase separation provide new insights into the regulation of gene expression. In addition to the regulation of nuclear body formation by lncRNAs, delineating the phase behavior mediated by lncRNAs beyond the nucleus can shed light on the impact of cytoplasmic condensates on signal transduction and cellular metabolism. LncRNA NORAD sequesters PUM1/PUM2 proteins in cytoplasmic RNP granules in response to DNA damage, functioning as a platform that negatively regulates PUMILIO activity in the cytoplasm; this leads to elevated levels of key mitotic, DNA repair, and DNA replication factors, thereby protecting against chromosomal instability. In this RNP granule, lncRNA NORAD may also coordinate the interferon response pathway proteins IFIT1/2/3/5 to regulate the process (Lee et al., 2016; Tichon et al., 2016). Although little is known about the role of lncRNAs in cytoplasmic phase separation, one can speculate on multiple potential functions of lncRNAs in forming and maintaining phase separation and in many other biological processes.
LncRNAs Modulate Phase Separation in the Nucleus
In mammalian cells, there are various nuclear bodies, mainly involved in the regulation of gene expression through transcriptional and epigenetic modification. Homologous chromosome pairing and separation, chromatin remodeling, and RNA splicing are common events in the nucleus, often mediated by phase separation. Many nuclear bodies are well-defined by their enrichment of specific proteins and RNAs (Ninomiya and Hirose, 2020). Beyond functioning as architectural RNAs, lncRNAs also serve as eRNAs that exist in phase-separated droplets. How these nuclear bodies exert their functions remains poorly understood, but three modes of action have been proposed. First, nuclear bodies may function as reaction vessels that concentrate specific molecules, such as enzymes and their substrates. Second, they may act as sequestering compartments that condense specific molecules and protect them from degradation, or sequester them away from the nucleoplasm to impair their function. Third, they may form organizational hubs that anchor chromatin loci to remodel chromatin and regulate gene expression.
Paraspeckle
Paraspeckles were first reported in 2002, identified through the marker paraspeckle protein component 1 (PSPC1), and were subsequently found to be localized mainly in mammalian cell nuclei (Fox et al., 2002). In addition to PSPC1, paraspeckles consist of over 40 different proteins and the structural lncRNA NEAT1 (Mao et al., 2011; Fox et al., 2018). NEAT1 depletion completely abolishes paraspeckle formation (Sasaki et al., 2009; Shevtsov and Dundr, 2011). PSPC1 was the first protein reported to be enriched in paraspeckles, but it was later reported that, unlike NONO and SFPQ, PSPC1 is dispensable for paraspeckle formation (Sasaki et al., 2009; Naganuma et al., 2012). EM and super-resolution microscopy have revealed that the paraspeckle has a spherical shape with a shell and a core, with the 3′ and 5′ ends of lncRNA NEAT1 extending outward in the form of bundles (West et al., 2016). Once formed, paraspeckles sequester specific RNAs and proteins, altering the levels of those components and changing cellular processes (Prasanth et al., 2005; Chen and Carmichael, 2009). Paraspeckles in the nucleus participate in many cellular processes, usually related to the stress response and cancer. P53 regulates the transcription of NEAT1, which promotes paraspeckle formation and confers drug resistance on breast cancer cells (Adriaens et al., 2016). High-level PSPC1 expression in cancer cells activates the TGF-β pathway and promotes metastasis (Salvador and Gomis, 2018). Recent studies link paraspeckles to mitochondrial homeostasis in the stress response. Classical paraspeckle–mitochondria crosstalk provides a nice model for understanding the role of NEAT1 and paraspeckles in cancer and neurodegeneration (Nishimoto et al., 2013; Adriaens et al., 2016; Fox, 2018; Wang Y. et al., 2018).
As classical nuclear bodies, paraspeckles are involved in regulating gene expression and retaining mRNAs and proteins (Hirose et al., 2014; Wang Y. et al., 2018). Recent studies have linked paraspeckles to phase separation and found that paraspeckles readily form droplets. Previous studies revealed that many paraspeckle proteins (such as RBM14 and FUS) contain IDRs responsible for phase separation (Hennig et al., 2015; Patel et al., 2015; West et al., 2016; Shin et al., 2017). During preimplantation development of the mouse embryo, histone activation by coactivator-associated arginine methyltransferase 1 (CARM1) is necessary for the upregulated expression of a subset of pluripotency genes; the function of CARM1 is maintained by paraspeckle integrity and depends on lncRNA NEAT1 and NONO (Hupalowska et al., 2018). A specific sequence in the 3′-UTR of an RNA makes it prone to binding by paraspeckle components. A recent study reported that paraspeckle lncRNA NEAT1 and four major paraspeckle proteins are responsible for retaining circadian mRNAs to regulate gene expression at the post-transcriptional level (Torres et al., 2016). The size and number of paraspeckles significantly affect gene expression. In turn, paraspeckle assembly is mainly determined by the levels of NEAT1 and of component proteins such as SFPQ and FUS. An earlier study revealed that the bigger a paraspeckle becomes, the more SFPQ is needed; the decreasing level of free SFPQ in the nucleus alters target gene expression, a phenomenon that also occurs in other nuclear bodies (Chen and Carmichael, 2009; Imamura et al., 2014; Wu et al., 2016). As the paraspeckle component proteins SFPQ and NONO are involved in pri-miRNA processing, sequestering both of these proteins can affect miRNA processing (Jiang et al., 2017). Studies indicate that the expression and mutation of NEAT1_2, the core paraspeckle structure, are often related to multiple cancers (Fujimoto et al., 2016; Rheinbay et al., 2017). All of this evidence indicates that phase separation in paraspeckles can regulate gene expression and RNA-related processes, promoting disease and cancer development.
Paraspeckles are involved in various cellular processes. The core lncRNA NEAT1 provides the structural scaffold for building paraspeckles, and the protein components of paraspeckles usually contain IDRs, which promote phase separation. Therefore, investigating the structural NEAT1 RNA or the phase separation proteins in paraspeckles will help develop new strategies for targeted therapies.
Chromatin Foci
In addition to phase separation related to the RNAs and proteins of paraspeckles, phase separation formed at chromatin commonly regulates gene transcription and chromosome segregation. Phase separation at chromatin is usually affected by the surrounding microenvironment, such as the nucleosome state and histone modifications. Exposure of nucleosome histone tails tightens inter-nucleosome interactions and thus promotes phase separation by HP1 (Gibson et al., 2019; Sanulli et al., 2019). Phase separation preferentially forms at low-density chromatin rather than at high-density regions such as heterochromatin. This preference usually results in the reorganization of chromatin and thus alters gene expression (Gibson et al., 2019). Many studies suggest that lncRNAs at specific chromatin loci also function as scaffolds that recruit chromatin-modifying complexes, promoting the epigenetic regulation of gene expression. X-chromosome inactivation (XCI) is a critical epigenetic mechanism for balancing gene dosage between XY and XX individuals in eutherian mammals. Recent studies suggest that X-chromosome inactivation involves phase separation mediated by Xist (Cerase et al., 2019). Xist drives phase separation by enriching chromatin remodeling factors such as Spen, Ptbp1, HnrnpK, and the IDR-containing PRC1/2 proteins (Moindrot et al., 2015). This recruitment leads to histone deacetylation and chromatin condensation. After inactivation, the X-chromosome is sequestered by specific interactions between Xist and the Lamin-B receptor (Cerase et al., 2019). Pericentromeric heterochromatin (PCH) formation is also a phase separation process, mainly mediated by HP1α and lncRNA MajSAT. In these PCH foci, the R/G-rich domain of the RNP protein SAFB is responsible for recognizing lncRNA MajSAT, and the SAFB–MajSAT interaction functions as a scaffold for the 3D organization of heterochromatin (Huo et al., 2020). What is impressive in this study is that, although the SAF family proteins SAFA and SAFB have similar functional domains, only SAFB confers the formation of PCH foci; the factor contributing to this difference is an interesting topic for future studies. Telomeres, a special part of the chromosome, consist of DNA–protein complexes involved in chromosome end protection. It has been reported that many cancer cells can escape senescence by altering the length of their telomeres, a process termed alternative lengthening of telomeres (ALT). The lncRNA telomeric repeat-containing RNA (TERRA), transcribed at telomeres, is a main hallmark of ALT (Roake and Artandi, 2017; Bettin et al., 2019). Evidence indicates that lncRNA TERRA acts as a scaffold to promote the recruitment of epigenetic modification factors (such as PRC2 and HP1) and diverse RBPs (such as TLS/FUS and TRF2) (Deng et al., 2009; Takahama et al., 2013; Montero et al., 2018), which frequently appear in phase-separated condensates. LncRNA TERRA has also been reported to be enriched in the ALT-associated PML body (APB), one of the promyelocytic leukemia (PML) bodies, which are nuclear membraneless organelles formed by LLPS and are involved in mitosis through the recruitment of multivalent proteins with small ubiquitin-like modification (SUMO) sites and SUMO-interacting motifs (SIMs) (Arora et al., 2014; Banani et al., 2016). A recent study reported an artificial model system in which APBs could form telomere cluster condensates by LLPS in vivo.
During this process, BLM helicase and RAD52 are responsible for the formation of telomere foci (Min et al., 2019). Considering the enrichment of lncRNA TERRA in APBs and its interactions with epigenetic modification factors and RBPs, we speculate that lncRNA TERRA may also play a functional role in these telomere foci; however, the detailed mechanism needs further investigation. Of note, why repetitive RNAs such as lncRNA MajSAT and lncRNA TERRA are preferentially selected to participate in the formation of phase separation also needs further exploration.
Correct pairing and segregation of homologous chromosomes in meiosis are critical for producing haploids. The lncRNA sme2 RNA helps Smp (sme2 RNA-associated protein) proteins form three chromosomal loci and determines the specificity of chromosomal loci for fusion, indicating the importance of Smp proteins in the accumulation of lncRNA and the critical role of lncRNA-mediated homologous chromosome pairing in Schizosaccharomyces pombe (Ding et al., 2019). In fission yeast, meiRNA plays a crucial role in recognizing and pairing homologous chromosomes during meiotic prophase. LncRNA meiRNA recruits the Mmi1 protein to the sme2 dot to promote meiosis, which is pivotal for the selective elimination of meiosis-specific transcripts (Shichino et al., 2014). Enhancers and SEs help explain the bursting expression of genes. Recent studies reveal that enhancers, SEs, and eRNAs may be involved in phase separation. Transcribed bidirectionally from enhancer regions, eRNAs act as enhancers and alter chromosomal architecture during transcription (Li et al., 2013; Liu et al., 2014; Pnueli et al., 2015). Under acute stimulation with 17β-estradiol (E2), eRNAs and several TFs provide a conducive microenvironment for the assembly of enhancer RNA-dependent ribonucleoprotein (eRNP) complexes, regulating signal-inducible transcription (Nair et al., 2019). eRNAs wire DNA and TFs together and thus promote gene expression. The RNA exosome regulates the degradation and transcription termination of the eRNA lncRNA-CSR, which coordinates SEs to promote long-range chromatin stability (Pefanis et al., 2015). eRNAs highly expressed in many cancers may be responsible for drug resistance by promoting the expression of related genes, which indicates that certain eRNAs could serve as diagnostic markers and targets for cancer treatment.
The diverse functions of lncRNAs combined with phase separation at chromatin loci offer a panoramic view for understanding the regulation of gene expression at the transcriptional level. The inactivation and segregation of chromatin and the bursting expression of genes can be well interpreted by the phase separation model. Chromatin loci formed by phase separation through specific proteins and lncRNAs provide new strategies for exploring abnormal cellular processes and developing novel therapies.
Nuclear Stress Bodies
The nucleoplasm is a natural pool of diverse nuclear bodies that regulate gene expression (Chujo and Hirose, 2017; Ninomiya and Hirose, 2020). Nuclear bodies accumulated at specific nuclear sites affect the biogenesis, maturation, storage, and sequestration of specific proteins and RNAs, thus altering cellular events in response to stress stimuli. Under thermal stress, lncRNA HSATIII acts as the structural scaffold for the formation of HNRNPM and SAFB foci and retains numerous RBPs to regulate gene expression (Aly et al., 2019). In response to DNA double-strand breaks (DSBs), damage-induced long non-coding RNA (dilncRNA) is synthesized at DSB foci, also called DNA-damage-response (DDR) foci. DilncRNA, together with DDR proteins such as 53BP1, promotes the formation of DDR foci to regulate the transcriptional activity of genes mediating the DSB signaling pathway (Pessina et al., 2019). In a wide range of cancers, lncRNAs and related RBPs are often aberrantly transcribed. LncRNA PNCTR recruits the RBP PTBP1 to form a nuclear body called the perinucleolar compartment (PNC), where lncRNA PNCTR sequesters PTBP1, altering its cellular localization and inhibiting PTBP1-mediated splicing, including splicing events in the intrinsic branch of apoptosis; this inhibition promotes cell survival (Yap et al., 2018). In colon cancer, upregulated lncRNA TNBL accumulates at a subset of NBL2 loci and forms dense aggregates that sequester the SAM68 RBP and nucleic acids; this SAM68 nuclear body may disrupt nuclear organization (Dumbovic et al., 2018). The involvement of lncRNAs in these nuclear body events enriches our understanding of the function of lncRNAs in phase separation.
PERSPECTIVES
This review has summarized current findings on phase separation and the potential roles of phase separation-related lncRNAs. The formation of phase separation involves multiple molecules, such as RNAs, proteins, and associated chromatin. Maintenance of phase separation relies on the surrounding environment, such as pH, temperature, and salt concentration, and phase transitions are sometimes largely determined by the sequences of the RNAs and proteins involved and by protein PTMs. Phase separation has expanded our understanding of biochemical reactions and biological processes in membraneless organelles, and lncRNAs mainly function as architectural scaffolds for diverse RNA and protein interactions in this process. Phase separation coordinated by lncRNAs in multiple nuclear bodies is mainly involved in regulating gene expression, chromatin remodeling, RNA splicing, and homologous chromosome separation in the nucleus. LncRNAs involved in cytosolic phase separation, however, are less well reported. Several studies have revealed that phase separation-related lncRNAs in the cytoplasm participate in signal transduction (Lee et al., 2016; Tichon et al., 2016), which inspires further exploration of cytosolic lncRNA-mediated phase separation. Considering the functions of lncRNAs and phase separation together, current studies on both are only the tip of the iceberg, and major questions have yet to be answered in these emerging fields. The most pressing problem is identifying the factors that determine the specific components of phase droplets. Proteins containing IDRs or PrLDs, and RNAs with repetitive sequences, are more likely to undergo phase separation; perhaps the distribution of net charge and the higher-order structures of RNAs and proteins are the major factors influencing multivalent interactions. Other environmental factors, such as pH, temperature, and salt concentration, are also important for phase separation. The second problem is how the subcellular localization of lncRNAs and phase separation-related proteins affects phase separation. More functional phase-separated droplets seem to form in the nucleus, mostly in relation to heterochromatin formation; the factors contributing to this preference need to be elucidated. Numerous nuclear bodies exert different roles in gene expression and epigenetic regulation, and why different lncRNAs are selected for different functional phase-separated droplets needs to be explored. Moreover, it is important to determine which factors and signaling pathways drive the dynamic assembly and disassembly of phase-separated droplets under different environmental stresses, and to precisely identify the roles of lncRNAs in sensing stress stimuli, transducing signals, and maintaining phase separation. Such discoveries will help us better understand, and develop better therapeutic treatments for, phase separation-related diseases.
AUTHOR CONTRIBUTIONS
AL contributed to the study design and data analysis. JLuo wrote the manuscript. LQ and FG contributed to the figure and table design. AL, JLiu, and JLin edited the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported in part by the National Natural Science Foundation of China (81872300 and 81672791) and the Zhejiang Provincial Natural Science Fund for Distinguished Young Scholars of China (LR18C060002) to AL.
Adenocarcinoma Originating From a Completely Isolated Duplication Cyst of the Mesentery in an Adult
Alimentary tract duplications are uncommon congenital abnormalities that usually have an anatomical connection with some part of the gastrointestinal tract and have a common blood supply with the adjacent segment of intestine. A completely isolated duplication cyst (CIDC) is a very rare type of gastrointestinal duplication that does not communicate with the normal bowel segment and possesses its own exclusive blood supply. Only 5 CIDC cases in adults have been reported in the English medical literature. Additionally, only 1 case of a mucinous cystadenoma arising from an infected CIDC of the ileum has been reported. This report describes a 52-year-old male patient with a peritoneal CIDC, which upon curative excision was found to have given rise to an adenocarcinoma. On microscopic examination, the cyst was lined internally with malignant glandular cells and contained a smooth muscular outer layer. We believe that this is the first reported case of an adenocarcinoma originating from a CIDC in an adult.
INTRODUCTION
Duplication cysts are rare congenital abnormalities that are usually diagnosed in childhood; their occurrence in adulthood is rare. 1,2 The cysts can occur anywhere from the mouth to the anus. Duplication cysts have been described as congenital malformations that involve the mesenteric side of the gastrointestinal tract and share a common wall or blood supply with the involved bowel. 3 Most cases are diagnosed during early childhood because duplication cysts tend to be accompanied by symptoms such as abdominal pain or palpable masses in this age group. In contrast, these cysts are diagnosed incidentally as asymptomatic disease during adulthood. There are several associated complications. Here, we describe the case of a 52-year-old male patient with a peritoneal CIDC who underwent a curative mass excision, along with a literature review.
CASE REPORT
A 52-year-old male patient with abdominal pain was admitted to the Wonju Severance Christian Hospital. His medical history was unremarkable, except for pulmonary tuberculosis, which had been cured with medication 6 months earlier. An intraperitoneal mass was noted on abdominopelvic CT conducted by an external facility (Fig. 1A).
The patient had mild abdominal pain in the middle of his abdomen that was not associated with food intake or positional change. Upon review of systems, there were no specific findings such as weight loss. Findings of the physical examination were unremarkable and did not reveal a palpable mass or abdominal tenderness. Laboratory analysis revealed a CEA level of 29.3 ng/mL (standard value: <5 ng/mL) and a markedly increased CA 19-9 level of up to 4881 U/mL (standard value: <5 U/mL). Other laboratory results were within normal ranges.
Chest CT revealed improved pulmonary tuberculosis and an approximately 4.3-cm-sized cystic mass in the sub-gastric area; the internal contents of the mass showed an enhanced density greater than that of the surrounding fluid. The mass contained a partially calcified region and was not attached to an adjacent organ such as the stomach or small intestine.
The endoscopy finding from an external facility indicated only chronic superficial gastritis. There was no visible mass or mass-effect lesion. Endoscopic ultrasound (EUS) was available for further evaluation of the mass because it was located near the lower border of the stomach.

Figure 2. The entire cyst was excised without disturbing the normal bowel or mesenteric anatomy (A). On gross sectioning, it was determined to be a 4×3×3-cm unilocular cyst filled with dark brown necrotic material that appeared to comprise hemorrhagic contents; the cyst wall was evenly thin with a focal, ill-defined, yellowish-brown mural nodule (red arrow) (B).

On EUS, the mass was found to be in the extra-gastric area and presented as a cystic feature with an internally mixed echoic pattern, and the cystic mass wall had both an inner smooth hyperechoic mucosal layer (Fig. 1B, arrow a) and an outer hypoechoic muscular layer (Fig. 1B, arrow b).
On the basis of these findings, we diagnosed this case as a malignant gastrointestinal tumor or malignant lymph node and performed surgery. The entire cyst was excised without disturbing the normal bowel or mesenteric anatomy. The mass featured a well-encapsulated oval shape (Fig. 2A). During surgery, we found that the mass was located in the peritoneum, was not attached to the small bowel or stomach, and had a separate feeding vessel. Upon gross sectional inspection, the mass was a 4×3×3-cm unilocular cyst filled with dark brown necrotic material that appeared to comprise hemorrhagic contents. The cyst wall was evenly thin with focally located, ill-defined yellowish-brown mural nodules (Fig. 2B). Upon microscopic examination, the cyst wall was found to be composed of an inner columnar epithelial lining, 2 smooth muscle layers, and serosa, thus mimicking the intestine. However, the epithelial lining of the entire cyst consisted of gland-forming neoplastic columnar epithelium, which was characterized by a loss of nuclear polarity, a high nuclear/cytoplasmic ratio, and hyperchromasia, without spared non-neoplastic epithelium (Fig. 3A). Focal areas of invasion into the smooth muscle were observed, and the cyst wall contained multifocal cholesterol granulomas that matched the yellowish nodules observed during the gross examination (Fig. 3B). The immunohistochemical stains were positive for cytokeratin 20 and negative for cytokeratin 7 in the neoplastic epithelial lining of the cyst (Fig. 3C, D). These findings were consistent with an adenocarcinoma that had arisen from the intestinal duplication cyst.
There were no metastases or local cancer invasion according to 18F-fluorodeoxyglucose PET-CT. No postoperative complications occurred, the patient remained well, and tumor marker levels had decreased to within normal ranges at follow-up.
DISCUSSION
Duplication cysts in the alimentary tract are rare developmental anomalies that involve the mesenteric side of the associated alimentary tract and share a common blood supply with the native bowel. 2 These cysts can occur anywhere along the alimentary tract from the mouth to the anus, although the ileum is the most frequently involved region (35%) followed by the esophagus (19%), jejunum (10%), stomach (9%), and colon (7%). 15 CIDC is an extremely rare variant form of gastrointestinal duplication that has its own blood supply and does not communicate with the normal bowel segment.
To our knowledge, only 5 CIDC cases have been found in adults among 8 reported cases to date. [10][11][12][13][14] Only 1 case of a malignant transformation from CIDC (mucinous cystadenoma) in an adult has been reported. 10 In the present case, during abdominal surgery, we diagnosed an adenocarcinoma that originated from a CIDC in a 52-year-old male with nonspecific abdominal pain. To our knowledge, this is the first report of such a case.
Several theories have been proposed to account for the development of enteric duplications, including recanalization after the solid epithelial stage of embryonic bowel development, persistent embryologic diverticula, and the intrauterine vascular accident theory. [15][16][17] Bremer's "aberrant vacuolization" theory suggests that epithelial proliferation occurs during embryonic intestine development and occludes the bowel lumen in the 6-week embryo ("solid stage"); thereafter, vacuolization of the entire alimentary tract occurs, transforming the digestive system into a tube with a single lumen. Throughout the process of vacuole coalescence, an error might occur ("aberrant vacuolization"), resulting in the formation of 2 (or more) parallel channels that may or may not communicate with each other. 8 However, no single theory adequately explains all known duplications. 8 Malignant changes in communicating-type duplication cysts have been observed most frequently in the small bowel, followed by the colon, rectum, and stomach. 17,18 Such changes can originate from the underlying mucosa and develop into adenocarcinoma in almost all cases. In our case, however, there was no remaining normal mucosa, only heterotopic mucosa. Once a malignant change occurs, the prognosis is generally poor because of the high metastasis rate and the rare presentation of symptoms in adults. Therefore, early diagnosis and management are essential for duplication cysts.
Diagnoses of duplication cysts are generally made by abdominal CT, MRI, and ultrasonography. 19,20 Duplication cysts manifest as smooth, rounded, fluid-filled cysts or tubular structures with thin, slightly enhancing walls on CT scans. Ultrasonography and MRI might indicate a duplication via the identification of a 3-layered image that represents the wall layers of the duplication cyst. Additionally, if a CIDC is located in an accessible area, EUS might be helpful because it allows detailed identification of the internal contents of cysts and discrimination of the cyst wall layers. The pathologic characteristics of duplication cysts include spherical or tubular structures that are firmly attached to at least 1 point in the alimentary tract, possess well-developed smooth muscle layers, and have an epithelial lining that resembles some part of the alimentary tract.
In conclusion, we report herein a rare case of adenocarcinoma that originated from a CIDC in an adult. This case suggests that the various duplication cyst forms and the associated possibilities of malignant transformation should be considered for the early detection and treatment of uncommon duplication cysts.
Economic evaluations of health technologies in Dutch healthcare decision-making: a qualitative study of the current and potential use, barriers, and facilitators
Background: The use of economic evaluations in healthcare decision-making can potentially help decision-makers in allocating scarce resources as efficiently as possible. Over a decade ago, the use of such studies was found to be limited in Dutch healthcare decision-making, but their current use is unknown. Therefore, this study aimed to provide insight into the current and potential use of economic evaluations in Dutch healthcare decision-making and to identify barriers and facilitators to the use of such studies.

Methods: Interviews containing semi-structured and structured questions were conducted among Dutch healthcare decision-makers. Participants were purposefully selected and special efforts were made to include decision-makers working at the macro- (national), meso- (local/regional), and micro-level (patient setting). During the interviews, a topic list was used that was based on the research questions and a literature search, and was developed in consultation with the Dutch National Healthcare Institute. Responses to the semi-structured questions were analyzed using a constant comparative approach. As for the structured questions, participants' definitions of various economic evaluation concepts were scored as either "correct" or "incorrect" by two researchers, and summary statistics were prepared.

Results: Sixteen healthcare decision-makers and two health economists were interviewed. Decision-makers' knowledge of economic evaluations was only modest, and their current use appeared to be limited. Nonetheless, decision-makers recognized the importance of economic evaluations and saw several opportunities for extending their use at the macro- and meso-level, but not at the micro-level. The disparity between the limited use and the recognized importance of economic evaluations is likely due to the many barriers decision-makers experience preventing their use (e.g. lack of resources, lack of a formal willingness-to-pay threshold). Possible facilitators for extending the use of economic evaluations include, amongst others, educating decision-makers and the general population about economic evaluations and presenting economic evaluation results in a clearer and more understandable way.

Conclusions: This study demonstrated that the current use and impact of economic evaluations in Dutch healthcare decision-making is limited at best. Therefore, strategies are needed to overcome the barriers that currently prevent economic evaluations from being used extensively.

Electronic supplementary material: The online version of this article (doi:10.1186/s12913-017-1986-9) contains supplementary material, which is available to authorized users.
Background
As in other developed countries, a large proportion of the Dutch Gross Domestic Product is spent on healthcare, and this proportion has grown during the last decade, from 9% in 2004 to 11% in 2014 [1]. It has been suggested that such increases can be explained by factors outside the healthcare sector (e.g. ageing population, increased prevalence of chronic diseases), the absence of a competitive market within healthcare systems, the absence of strong cost-containment measures, and technological innovation [2,3]. Of them, technological innovation is often cited as the main driver of the long-term growth in healthcare costs [3][4][5]. It must be noted that high and rising healthcare costs are not necessarily associated with negative connotations. Cost increases may be synonymous with improved health outcomes, increases in job opportunities in the sector, and improved quality of services delivered [2,6]. However, since the rate of increase in healthcare costs currently exceeds economic growth, its continued growth at current rates is not sustainable and public spending on healthcare is crowding out public spending on other services. As a consequence, there is a strong (political) call for healthcare cost-containment and healthcare decision-makers are increasingly being confronted with choices about which treatments to reimburse and which to not reimburse [7].
Economic evaluations provide an indication of the relative efficiency of treatments by comparing the costs and consequences of alternative programs or interventions [8]. Such studies can help healthcare decision-makers to determine how best to allocate scarce resources at the macro-, meso-, and micro-level. At the macro-level, the Dutch Ministry of Health, Welfare, and Sports has to decide upon the content of the basic health insurance package (i.e. a compulsory insurance for all Dutch citizens). In making such decisions they are advised by the Dutch National Healthcare Institute, an independent governing body that, amongst others, provides evidence-based guidance and advice on the in- or exclusion of healthcare services in the basic health insurance package as well as the conduct of economic evaluations [9,10]. The majority of the content of the basic health insurance package, however, is somewhat openly formulated [10]. This means that 'insured care' is defined in terms of functions of care rather than in specific healthcare services [10]. As a consequence, the responsibility for 'appropriate use' of insured care, and thus the allocation of the healthcare budget, is partly transferred to institutions and healthcare providers working at the regional or local level (i.e. meso-level) and in the individual patient setting (i.e. micro-level) [11].
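For readers less familiar with these methods, the standard summary measure reported in cost-effectiveness and cost-utility analyses is the incremental cost-effectiveness ratio (ICER); this textbook formula is offered here as general background only and is not part of the study itself:

\[
\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
\]

where \(C\) denotes costs and \(E\) denotes effects, measured in natural health units in a cost-effectiveness analysis and in quality-adjusted life years (QALYs) in a cost-utility analysis. In a cost-benefit analysis, by contrast, the effects themselves are valued in monetary terms, so the result can be expressed directly as a net benefit (benefits minus costs).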
Various studies indicated that healthcare decision-makers in many Western countries have a positive attitude towards the use of economic evaluations for resource allocation decision-making, but that their use and knowledge of economic evaluations is limited [12][13][14][15][16][17]. This discrepancy is likely due to the various barriers that decision-makers experience preventing their use in day-to-day decision-making, such as a lack of resources, political opposition, and a lack of relevant studies [12][13][14][15][16][17]. The only Dutch study to explore the use of economic evaluations in healthcare decision-making was performed more than a decade ago (i.e. 1998-1999) [17]. However, in an effort to improve the quality and efficiency of care, the Dutch government introduced a new Health Insurance Act in 2006, changing the healthcare system from a partly public and partly private, predominantly government-run system into a universal insurance market that aims to be competitive [10]. Amongst others, the new act mandates all Dutch citizens to purchase the basic health insurance package, all insurance companies have to offer the basic health insurance package, and competing health insurance companies are obliged to accept all applicants during an annual enrollment period [10]. As one of the main aims of the Dutch healthcare reform was to improve the efficiency of healthcare [10,18], it is conceivable that decision-makers' knowledge and use of economic evaluations have increased since then. Whether this is indeed the case, however, is currently unknown. Therefore, this study aimed to gain insight into the current and potential use of economic evaluations in Dutch healthcare decision-making and to identify barriers and facilitators to the use of such studies.
Methods
Interviews containing both semi-structured and structured questions were conducted among Dutch healthcare decision-makers to explore their economic evaluation knowledge and skill set, the current and potential use of economic evaluations in the healthcare decision-making context, as well as barriers and facilitators to the use of such studies. For the purpose of this study, "healthcare decision-maker" was defined as a professional who has influence on the allocation of the Dutch healthcare budget at the macro-, meso-, and/or micro-level. A predominantly qualitative approach was used in order to explore the questions under study in greater detail and to obtain in-depth information on the views and opinions of the participants [19].
Study population
Participants were purposefully selected using a combination of critical case and maximum variation sampling, in which a small, but heterogeneous, sample of information-rich cases was selected [20,21]. Special efforts were made to include decision-makers working at the macro-, meso-, and micro-level, and in different regions of the Netherlands. Additionally, two health economists with a deep understanding of the Dutch healthcare system were included.
The Dutch National Healthcare Institute assisted in identifying participants for this study. Participants were also selected by means of snowballing, i.e. they were referred by other participants on the basis that they were expected to be able to provide relevant information [22]. Potential participants were contacted via email.
At the start of the interviews, all participants were informed about the study purpose, were reassured of confidentiality, and provided verbal informed consent. The present study was conducted in accordance with the Declaration of Helsinki. Under Dutch law, ethical approval was not necessary for this study, since it is not required for studies that do not infringe the participants' physical and/or psychological integrity (according to the Dutch Medical Research Involving Human Subjects Act).
Interviews
Interviews were conducted in Dutch and were carried out by one researcher (KR) between April and June 2014 at a time and location convenient to the participants. Participants were informed that they did not have to prepare for the interviews and were assured that there were no right or wrong answers. All interviews started with several short questions about the participants' demographic and employment characteristics. Subsequently, semi-structured questions were asked about their current and potential use of economic evaluations for decision-making, as well as barriers and facilitators to their use. The topic list for the interviews was developed based on the research questions and a literature search in PubMed, the Cochrane Library, and Google Scholar. The literature search was aimed at getting a general overview of previous studies evaluating the use of economic evaluations in healthcare decision-making. The search was conducted by KR, with the help of an information specialist of the VU University Medical Centre. Search terms included: "economic evaluation", "cost-effectiveness", "cost-benefit analysis", "decision-making", "healthcare rationing", "treatment decision", "attitude", "health personnel attitude", "knowledge", and "economic evaluation skills". For completeness, reference lists of included studies were screened. Out of 44 publications, 15 articles were selected and studied in-depth by KR to formulate (sub-)topics. The topic list was further refined in consultation with two employees of the Dutch National Healthcare Institute. The final topic list can be found in Additional file 1.
During the interviews, the topic list was used as a guide, but participants were allowed to discuss other topics that they considered to be important as well. Throughout the interviews, participants were asked to clarify their answers by providing daily decision-making examples. To investigate the decision-makers' knowledge of economic evaluations (excluding health economists), interviews ended with a number of structured questions. First, participants were asked what they associated with the term 'economic evaluation' and whether they had previously received training in economic evaluation-related topics. Subsequently, they were asked whether they were familiar with and could define various economic evaluation designs, including cost-effectiveness analysis (CEA), cost-benefit analysis (CBA), and cost-utility analysis (CUA). During the interviews, field notes were taken and interviews were audio-taped and transcribed verbatim directly after the interview. As no additional information emerged from the data after 18 interviews (i.e. data saturation was reached), the data collection process was terminated.
Data analysis
Using NVivo 10, data derived from the semi-structured questions were analyzed using a constant comparative approach. That is, analytic categories were inductively established by constantly comparing and checking items against the rest of the data [23]. Starting with open coding, descriptive themes and subthemes were generated by KR: transcripts were read line by line, and relevant passages were selected and coded. Throughout this process, efforts were made to detect further examples of previously identified (sub-)themes and, if applicable, to identify new ones. The final codes were developed through discussion between two researchers (KR and JvD). During these discussions, similar codes were grouped into analytical categories, and both the properties of these categories (i.e. their characteristics) and the relationships between them were explored [24,25]. For the purpose of this article, quotes were translated from Dutch to English by the research team and were carefully edited to make them more readable without losing their meaning.
Data derived from the structured questions were analyzed using descriptive statistics. For this purpose, KR and JvD independently scored the participants' definitions of CEA, CUA, and CBA as 'correct' or 'incorrect'. Definitions were scored as 'correct' if they included some combination of the following information: CEA, a comparison of costs and effects, in which effects are expressed in terms of health effects (other than those expressed in terms of a quality of life measure or a monetary value); CUA, a comparison of costs and effects, where effects are expressed in QALYs (quality-adjusted life years) or an appropriate variant taking quality of life into account; CBA, a comparison of costs and benefits, where the benefits are given a monetary value [25]. In all other cases, they were scored as 'incorrect'. After both researchers independently scored the definitions, scores were compared and disagreements (n = 6) were resolved through discussion between KR and JvD [25].
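To make the scoring procedure concrete, the sketch below shows how such a two-rater tally with disagreement flagging and percent-correct summaries could be computed. The verdicts, counts, and names used here are illustrative placeholders only, not the study's actual data or analysis code.

```python
# Minimal sketch (illustrative data only) of the two-rater scoring and
# descriptive summary described above: two raters independently mark each
# definition "correct" or "incorrect", disagreements are flagged for
# discussion, and a percent-correct figure is reported per concept.

ratings = {
    # (rater 1 verdict, rater 2 verdict) per participant -- placeholder values
    "CEA": [("correct", "correct"), ("incorrect", "correct"), ("incorrect", "incorrect")],
    "CUA": [("correct", "correct"), ("incorrect", "incorrect")],
    "CBA": [("correct", "incorrect"), ("correct", "correct")],
}

def consensus(pair):
    """Return the agreed verdict, or None if the raters need to discuss."""
    a, b = pair
    return a if a == b else None

for concept, pairs in ratings.items():
    verdicts = [consensus(p) for p in pairs]
    disagreements = verdicts.count(None)  # resolved through discussion in the study
    agreed = [v for v in verdicts if v is not None]
    pct_correct = 100.0 * agreed.count("correct") / len(agreed) if agreed else float("nan")
    print(f"{concept}: {pct_correct:.0f}% correct among agreed scores, "
          f"{disagreements} disagreement(s) to resolve")
```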
Participants
In total, 20 potential participants were approached, of whom two declined to participate: one due to time constraints and one who considered him/herself not suitable for this study. Eventually, seventeen interviews were conducted face-to-face and one participant answered the questions by email. Interviews (excluding email contact) lasted on average 49 min [range: 31-72 min]. Twelve participants were male (66.7%). Participants had a mean age of 49.7 years (SD = 8.4). Sixteen participants were healthcare decision-makers working at the macro- (n = 5; labelled MA1-5), meso- (n = 4; labelled ME1-4), or micro-level (n = 7; labelled MI1-7), and two were health economists (labelled HE1-2). Macro-level decision-makers worked at the Dutch National Healthcare Institute (n = 4) or the Dutch Ministry of Health, Welfare and Sports (n = 1). Meso-level decision-makers worked for a health insurance company (n = 2) or as a guideline development consultant (n = 2). All micro-level decision-makers were physicians (n = 7). Participants had various educational backgrounds, but mostly medical (55.6%) and economic (27.8%) (Table 1).
Results of the structured interview questions
Knowledge of economic evaluations
While 63% of the participants working at the macro- and meso-level (excluding health economists) indicated that they had received some training in economic evaluation-related topics, none of those working at the micro-level had received such training.
When asked what they associated with the term 'economic evaluation', participants frequently thought it to be solely a calculation of costs, and many were not aware of the existence of various kinds of economic evaluations.

"I think that this would provide a proper overview of all costs. (…) Possibly, also compare those costs with one another, but in my opinion an economic evaluation does not amount to much more than that. I wouldn't actually correlate it with effectiveness." (ME4)

Of the participants, 36% were able to give a correct definition of the concept of CEA. Other participants thought it to solely include costs or thought its outcome depended on a predetermined goal, as illustrated by the following quote: "Cost-effectiveness of course really depends on the stated goal: when is a treatment effective? What is the objective you are aiming for?" (MI5)

Although 36% of participants were able to give a correct definition of the concept of CUA, many had never heard of the term or thought it to be related to 'utilization'. One participant, for example, responded: "It seems to me more a technical term, frankly. (…) Utilization no, I don't have an immediate explanation." (MI5)

Most participants indicated familiarity with the concept of CBA, but only half were able to give a correct definition. Some participants thought it to be a broader concept than CEA, i.e. one in which more costs and effects are taken into account. As one participant stated: "And with CBA you weigh all the effects of the intervention. So also the effects on production loss. Well, all social effects, I think?" (MI4)
Results of the semi-structured interview questions
The (sub-)themes that emerged during the analysis of the current and potential use of economic evaluations for healthcare decision-making, as well as the barriers and facilitators experienced, are discussed below and illustrated with quotes.
Current use of economic evaluations
Participants generally agreed that there is a need for improving the efficiency of healthcare. The use of economic evaluations was therefore thought to be inevitable. One participant, for example, stated: "I believe that a 30% cost increase took place there [in Mental Healthcare] in just a few short years (…) Well, in that case it's definitely worthwhile to pay attention to economic evaluations. If we allow costs to keep on rising, we will soon have no schools left and no asphalt on our roads." (MA1)

Nonetheless, the current use and impact of economic evaluations in healthcare decision-making seemed to be limited. Participants generally indicated that economic evaluations were not a dominant factor in the decision-making process and that economic evaluations hardly ever impacted the inclusion or exclusion of a specific treatment in the basic health insurance package. One participant, for example, stated: "Up until now, I have had to conclude that economic evaluations do not form the deciding factor, or hardly ever, in reaching a negative package advice. We always examine cost-effectiveness, but when matters come to a head, you realise that neither the government nor society is ready to make negative reimbursement decisions on cost-effectiveness results." (MA2)

Participants indicated that reimbursement and/or treatment decisions are typically based on the effectiveness of a treatment rather than on its cost-effectiveness, as well as on physicians' desire to provide a certain treatment to their patients. Some participants attributed the limited use of economic evaluations to the fact that their necessity is not sufficiently recognized, both by healthcare decision-makers and by the general population. As one health economist metaphorically stated: "One way or another, it's as if the water has to rise even higher before we decide that we need to build dikes." (HE1)
Even though the role of economic evaluations in healthcare decision-making was considered limited, some participants were able to provide examples of decision-making processes in which economic evaluations have been consulted. At the macro-level, for example, economic evaluations were used in the process of determining the content of the basic health insurance package, during price negotiations between the Dutch Ministry of Health, Welfare and Sport and pharmaceutical companies, and during the implementation of a population-wide screening tool. At the meso-level, economic evaluations have been used during the development of clinical guidelines and the implementation of innovations within healthcare organizations. Within healthcare organizations, however, only CBAs were used. Participants were not able to provide examples of the use of economic evaluations at the micro-level.
Potential use of economic evaluations
Participants provided various examples of decision-making processes during which the use of economic evaluations could prove beneficial. At the macro-level, the government could use economic evaluations to determine which expenditures, within the healthcare sector or even in other sectors, are likely to provide the best value for money. Furthermore, economic evaluations could be used during price negotiations between health insurers and healthcare providers (meso-level) as a means to achieve the lowest possible healthcare prices. Other options for using economic evaluations at the macro-level include the narrowing of medical indications (e.g. defining patient groups for whom specific treatments are cost-effective and for whom they are not) and improvements in the organization of healthcare processes (e.g. decisions to shift certain treatments from secondary to primary care, and vice versa). Even though participants saw several opportunities for extending the use of economic evaluations at the macro- and meso-level, almost all agreed that there was no room for using economic evaluations in the individual patient setting (micro-level). One of their main arguments was that talking about costs would potentially disrupt the doctor-patient relationship. This is illustrated by the following quote; "Patients become very suspicious when you start talking about costs. (…) In fact, it can stand in the way of a doctor-patient relationship." (MI4). Some participants also emphasized that improving the efficiency of healthcare ought not to be the responsibility of the individual healthcare provider. As one health economist noted; "In my opinion, the preconditions under which physicians work, that is, the financial framework that we succeed in creating with one another, are not the responsibility of individual doctors." (HE1).
Barriers to the use of economic evaluations
Participants identified various factors that currently prevent economic evaluations from being extensively used in healthcare decision-making.
Lack of resources: All participants indicated that a lack of resources (i.e. time, money, skills) often prevents economic evaluations from being used in healthcare decision-making. As indicated by one participant: "But the only thing the profession says is, bring on the money. (…) Money really is the only thing." (ME4).
Methodological factors: Participants identified various kinds of barriers related to the methodology of economic evaluations. An important barrier was the way in which costs are typically valued in economic evaluations. Some participants pointed out that, while costs are often based on standard prices in economic evaluations, they may vary extensively in reality due to factors such as the jurisdiction in which a decision is made, as well as the size and degree of specialisation of a healthcare facility. As one participant noted; "But costs differ enormously across countries, due to differences in the cost of equipment, personnel etc. So it has to be determined per country separately. (…) Even within the Netherlands, the price of a day spent in a hospital differs across hospitals." (MI5) Many prices in the Dutch healthcare sector are set during negotiations between health insurers and healthcare providers, and can therefore differ enormously from standard prices. Likewise, economic evaluations are often conducted from a limited number of perspectives, typically the societal perspective, which considers all costs and consequences, including those outside the healthcare sector. Consequently, many economic evaluations are not directly applicable to a specific decision-making context. As one participant noted; "An example of a concrete matter that we are struggling with is whether you should only include direct costs or also indirect costs. (…) We calculated that, in a given scenario, there will be a certain number of prescriptions and that this will lead to a certain reduction in complications and mortalities. If we were also to include the prevention of mortalities, however, the cost-saving of €18 million would change into a cost-increase of about €240 million…" (ME3). Another barrier is that healthcare decision-makers often poorly understand the outcome measures used in economic evaluations. Participants had difficulties with interpreting QALYs, viewed them as being of limited value for long-term care, and were uncertain about their transferability across countries. As for their limited value for long-term care, one health economist stated; "The trade-off between curative and long-term care is always a theme, as you won't get far using QALYs. Then you need something else and can you still weigh those results then?" (HE2) Some participants were also concerned about the existence of interventions whose results are difficult to measure. As indicated by one participant: "In paediatrics, it is the professions whose effectiveness is not so obvious that we need most, such as psychologists, physiotherapists; the paramedics in fact. But I think it will be no easy task to demonstrate their cost-effectiveness." (ME3).
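The perspective problem raised in the quote about direct and indirect costs can be made concrete with a small numerical sketch. All figures below are invented for illustration and do not reproduce the participant's actual calculation; the sketch merely shows how counting only the direct effects of a preventive prescription programme can yield net savings, while adding the downstream care costs incurred during the life-years of patients whose deaths were prevented flips the sign of the result.

```python
# Hypothetical example of how the chosen cost perspective can flip the
# conclusion of an economic evaluation. All numbers are invented.

prescriptions = 500_000          # prescriptions issued per year
cost_per_rx = 40.0               # EUR per prescription
complications_avoided = 120_000  # complications prevented by the programme
cost_per_complication = 400.0    # EUR of treatment saved per avoided complication
deaths_prevented = 8_000         # deaths averted by the programme
extra_life_years = 10            # average life-years gained per averted death
annual_care_cost = 3_500.0       # EUR of healthcare consumed per gained life-year

# Narrow perspective: drug costs minus savings from avoided complications.
direct_net = prescriptions * cost_per_rx - complications_avoided * cost_per_complication

# Broader perspective: add the care consumed during the gained life-years.
survivor_costs = deaths_prevented * extra_life_years * annual_care_cost
total_net = direct_net + survivor_costs

print(f"direct costs only:       net {direct_net / 1e6:+7.1f} million EUR")
print(f"incl. gained life-years: net {total_net / 1e6:+7.1f} million EUR")
```

Neither result is 'wrong'; the two perspectives answer different questions, which is precisely why an evaluation conducted from one perspective is often not directly applicable to another decision-making context.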
Lack of confidence in economic evaluations:
In the case of model-based economic evaluations, participants doubted their reliability due to the complexity of the healthcare system, which they considered hard to account for in a model. As one participant noted; "A model is a simplification of reality. (…) I think that is where the problem in healthcare begins. There are so many influencing factors (…) making it very difficult to construct such a model." (MA1). In addition, some participants indicated a lack of confidence in economic evaluations in general. They attributed this to the fact that economic evaluations are sometimes funded or performed by the supplier of the treatment under study (i.e. there can be an inherent conflict of interest). This is illustrated by the following comment: "I think there is also research showing that if a supplier initiates it [the study], the results often turn out more positive." (MI3).
Lack of a formal willingness-to-pay threshold: Participants indicated that the lack of a formal willingness-to-pay threshold is an important barrier to the use of economic evaluations in healthcare decision-making. Most participants were therefore in favour of establishing such a threshold. As one participant noted; "I find it inconsistent that we aren't brave enough to talk about what we want to spend on the life of a human being in the Netherlands. What is a life-year gained allowed to cost?" (MI6) Aside from the fact that the establishment of a formal willingness-to-pay threshold may help decision-makers in choosing between alternatives, it may also relieve physicians of the responsibility of incorporating efficiency considerations into the individual patient setting. This reasoning is substantiated by the following comment; "I can quite understand that physicians say it is going too far to expect them to determine the limits (…) If you ask me, it is entirely reasonable that physicians need some help with this." (MA1) Although a formal willingness-to-pay threshold may improve the uptake of economic evaluations, participants generally agreed that it would be difficult to set such a threshold. They emphasized that when determining the maximum cost per additional unit of effect, many factors should be taken into account, including the prevalence and severity of a disease, the patients' age, preferences and prognosis, and the availability of alternative options. As one participant noted; "It matters a lot whether alternative treatment options are available. (…) If not, healthcare decisions concern questions about life and death" (MA2). According to some participants, it is therefore unethical to use a fixed willingness-to-pay threshold, particularly when it concerns life-threatening diseases. Some participants accordingly suggested that a willingness-to-pay threshold should be extendible or categorized. The few participants who disagreed with the need for a formal willingness-to-pay threshold did so because they thought that savings could still be made in other areas (e.g. by reducing the number of unnecessary diagnostic tests) or because they felt that they would lose some of their authority as physicians. Another participant feared that the introduction of a formal willingness-to-pay threshold would lead to future economic evaluations producing incremental cost-effectiveness ratios below the threshold, particularly in the case of modelling studies; "I think models are easily influenced. And if you're going to choose a fixed threshold then you will see that the results of all models go toward that threshold." (ME3).

Lack of relevant economic evaluations: Participants indicated a need for economic evaluations that are directly applicable to their decision-making context, whereas available economic evaluations are often based on restrictive patient populations and/or settings, and are sometimes already outdated at the time of publication. As one participant indicated; "An awful lot of economic evaluations are based on selective groups of patients (…) Generally, trial patients are not the same as those you encounter in daily practice" (MI1).
Public resistance: Participants indicated that for economic evaluations to be used in healthcare decision-making, it is essential that not only decision-makers, but also members of the general population understand, appreciate and support their use. This is due to the Dutch healthcare system being largely publicly funded: decisions made within this system therefore involve essentially the entire population, because everyone contributes to the financing of healthcare. Several participants, however, thought the general population is 'not yet ready' to accept efficiency considerations as a factor in healthcare decision-making. They attributed this to the fact that most people either misunderstand the purpose of economic evaluations or consider the idea of a maximum price for healthcare unacceptable, particularly when they themselves are the ones who are ill. One participant explained this as follows; "When you have a problem, you want the best possible care. At such a moment, you are not interested in the macro-economic aspect that it will lead to enormous costs if it happens a thousand times." (MI2) Some participants also believed the general population's knowledge of economic evaluations to be insufficient. As one participant noted; "Total costs and cost-effectiveness are all jumbled up together and we have discovered … [in a research project] that even the most well-educated people really do not know how it works." (MA2) Participants therefore believed that the general population needs some basic knowledge of (the purpose of) economic evaluations in order to be able to substantiate their acceptance or rejection of the use of such studies. As indicated by one participant: "The price of a drug is something patients can comprehend, whereas concepts such as QALYs are not. While, on the other hand, I think it is a good tool [economic evaluations], because people might think 'my life has no price, I should get the best possible care', but cost-effectiveness results are more nuanced in that the costs are weighed against the effectiveness of a drug" (MA3).
Ambiguity among decision-makers about the physicians' responsibility for improving the efficiency of healthcare: Participants indicated that in order for economic evaluations to be used in healthcare decision-making, decision-makers themselves need to feel more responsibility for improving the efficiency of healthcare. However, since the main role of physicians is to act in a patient's best interest, it is not self-evident that they can embrace such a responsibility. This is underscored by the following statement of a participating physician: "I definitely want to keep it out of surgery. Because I find it very difficult if at a certain stage I have to make choices based on financial criteria only." (MI3) While participants working at the macro- and meso-level generally agreed that physicians should contribute to improving the efficiency of healthcare, physicians themselves did not agree on what this should entail. Some of them noted that the use of economic evaluations in the individual patient setting may raise ethical concerns, as it can appear to be in contradiction with the Hippocratic Oath. As one participant noted; "Well, I think (…) naturally, we want to give patients what we think they need and this represents a big barrier if a cost-effectiveness analysis shows that it is actually too expensive, while we feel that the patient does have a right to it. (…) this is at odds with our oath." (MI4) The Hippocratic Oath reminds physicians of their social responsibility as well as their responsibility for the individual patient, i.e. two tasks that are often incompatible when costs are considered. This is illustrated by the fact that one participating physician stated; "Up until a few years ago, the crux of the matter was in fact that you always gave people the care they needed and no-one even considered the cost. I think that, as physicians, we find this inversion extremely difficult, really I do." (MI7), whereas a decision-maker working at the meso-level argued; "This is actually mentioned in the Hippocratic Oath (…) cost-effectiveness is a part of the oath that we all take. This is something that physicians sometimes forget (…)" (ME1).
Incentives to treat: Some participants regarded the abundance of 'incentives to treat' in the healthcare system as a barrier to the use of economic evaluations. As the Dutch healthcare system currently operates on a fee-for-service basis, physicians, and/or the organizations they work for, make money by treating patients, whereas abstention from treatment generates less income. Moreover, patients often prefer some treatment to no treatment. As a consequence, even if physicians feel responsible for improving the efficiency of healthcare, it remains difficult for them to act accordingly. As one participant noted; "The human body is an inexhaustible source of income. As long as these stimuli exist, the problem will never be solved. There are simply financial stimuli for taking action, for doing something. The system does not have stimuli for making economic use of your resources" (MI1).
Limited ability to shift resources across sectors: Another barrier to the use of economic evaluations concerns decision-makers' limited ability to shift resources between and within sectors. To illustrate, if an economic evaluation shows that a specific treatment is more likely to be cost-effective when provided in primary care than in secondary care, it would be necessary to shift resources within the healthcare sector, but this is not easily accomplished. Likewise, if an economic evaluation is performed from the societal perspective, savings flowing into non-healthcare sectors can lead to positive cost-effectiveness results, whereas the cost-effectiveness results for the healthcare sector itself may be unfavourable. The latter is illustrated by the following comment; "How do you process, for instance, benefits that are accrued outside healthcare? (…) Because then the outcome of an analysis would be that everything [within the healthcare sector] would become much more expensive and that could hamper implementation, while it does actually lead to results, just not in healthcare." (ME3).
Facilitators for the use of economic evaluations
The most frequently mentioned facilitator for extending the use of economic evaluations was educating decision-makers about how to understand and interpret economic evaluations of health technologies and how to use them in resource allocation decision-making. Participants emphasized that some basic training in health economics, and economic evaluations in particular, should be included in the medical curriculum. One participant, for example, stated; "This is an aspect of training that is completely neglected, even in medical follow-up training." (MI1) According to the participants, health economists could also contribute by presenting their results in a clearer and more understandable way. This is exemplified by the following comment; "Well, I think that perhaps the language used by health economists should be more neutral, with more layman's terms, so that it is at least clearer; they should use plainer language, especially for those who are less well educated." (MA3) Participants also recognized the necessity of educating the general population about (the purpose of) economic evaluations in order to build the necessary public acceptance. As one participant noted; "It is often very difficult to get such abstract ideas about cost-effectiveness across and it actually demands a lot of insight." (MA2) Moreover, financial and intellectual support was emphasized as a requirement for making the use of economic evaluations feasible and for creating incentives for decision-makers to start using them. As one participant noted; "You need support for cost-effectiveness analysis, because it is not so easy" (MI4). Some participants were also of the opinion that the reliability, consistency and transparency of economic evaluations themselves ought to be improved and that industry-funded studies should be assessed more critically. As one participant stated; "I am actually in favour of spending more money on carrying out, or improving, economic evaluations, because this would result in less bias. Because currently many of them are carried out by private manufacturers, so from that point of view, you would like a more trustworthy assessment" (MA3).
Main findings
This study illustrates a need to advance Dutch healthcare decision-makers' economic evaluation skill set, as their current knowledge of economic evaluations is quite modest. Moreover, even though participants were able to provide some examples of Dutch healthcare decisions in which economic evaluations were consulted, the current use and impact of such studies appeared to be limited at best. Nonetheless, decision-makers recognized the importance of economic evaluations and saw several opportunities for extending their use at the macro- and meso-level, but not at the micro-level. This disparity between the limited use of economic evaluations and the decision-makers' recognition of their importance might be explained by the many barriers decision-makers experienced with their use (e.g. a lack of required resources, the lack of a formal willingness-to-pay threshold).
Comparison with the literature
The present findings are in line with those of previous studies demonstrating that the use and impact of economic evaluations of health technologies in healthcare decision-making is limited in most Western countries [12-17]. Most of these previous studies also identified comparable barriers and facilitators to the use of economic evaluations [12]. The Dutch study that explored the use of economic evaluations in healthcare decision-making prior to the Dutch healthcare reform likewise found decision-makers to have a positive attitude towards economic evaluations, whereas their actual use and knowledge of such studies was limited [17]. The decision-makers participating in that study also indicated that difficulties with moving resources across sectors, a lack of relevant studies, a lack of confidence in model-based and industry-sponsored economic evaluations, and a lack of economic evaluation knowledge obstructed the use of such studies in daily decision-making practice, whereas other barriers identified in the present study (e.g. lack of a formal willingness-to-pay threshold, public resistance) were not mentioned. Both studies included macro-, meso- and micro-level decision-makers and used semi-structured interviews to answer the research questions. All in all, this indicates that the use of economic evaluations has not increased extensively over the last decade. Some views do, however, appear to have shifted since then: most notably, the majority of participants in the present study argued in favour of the establishment of a formal willingness-to-pay threshold, whereas participants in the previous study were generally against the introduction of such a threshold [17].
Strengths and limitations
A strength of the present study is its qualitative design. An important advantage of a qualitative design is that it allows the questions being studied to be explored in greater detail and enables the collection of in-depth information on participants' views and opinions. The latter is particularly important because many of the previous studies in this area, and those conducted outside the Netherlands in particular, relied heavily on survey methods, which limited the participants' freedom of response [12]. The present findings are likely to be relevant to other jurisdictions as well, as many other countries are also trying to deal with high and rising healthcare costs by searching for means to maximize health effects within a fixed healthcare budget. Another strength is the fact that this was the first study since the Dutch healthcare reform of 2006 to explore the current and potential use of economic evaluations in healthcare decision-making. Given that the reform was inter alia aimed at improving the efficiency of healthcare, our study serves a critical role in assessing whether, following the reform, economic evaluations are used to a greater extent than in the past to address the issue of efficiency.
A first limitation of the present study concerns the fact that, even though conscious efforts were made to include healthcare decision-makers working at the macro-, meso- and micro-level, it is uncertain whether representatives of all stakeholder groups were included in the study. Another limitation is the risk of selection bias, as some participants were selected with the help of the Dutch National Healthcare Institute. This may have resulted in participants having a greater interest in the topic of this study than the average healthcare decision-maker, leading to an overestimation of the actual use and knowledge of economic evaluations in the healthcare sector.
Recommendations for research and practice
The use of economic evaluations in healthcare decision-making has the potential to improve the efficient use of resources, though this study suggests that the current use and impact of such studies is generally limited in the Dutch healthcare system. Therefore, strategies are needed to overcome the barriers that currently prevent economic evaluations from being used more extensively. Some preliminary recommendations as to how to overcome these barriers are discussed below [26]. These recommendations are based on the present findings as well as the relevant literature, but further research is needed to establish which recommendations will eventually be most effective in improving the uptake of economic evaluations in daily decision-making practice.
In order for economic evaluations to reach relevant decision-makers, it is essential to publish such studies in (non-scientific) journals and/or on websites that are easily accessible (e.g. open access journals). To provide healthcare decision-makers with relevant and 'ready-to-use' information, there may be value in developing a national database in which all relevant economic evaluations are collected and complemented by critical, easy-to-read summaries (e.g. a database comparable to the UK NHS-EED database) [27]. The addition of critical summaries is essential, as decision-makers often lack the time and skill set required to critically appraise economic evaluations [28]. Another possible means to improve decision-makers' economic evaluation skill set might be educating them about economic evaluation methods. Healthcare decision-makers may be educated through a variety of avenues, including the development of (online) handbooks and workshops, the integration of economic evaluation methods into the medical curriculum, and their involvement in the commissioning and/or execution of studies [15,25,29]. To improve the perceived credibility of economic evaluations, guidelines are needed on how to conduct, report and critically appraise such studies. Though several guidelines for the conduct and reporting of economic evaluations are already available (e.g. [9,30,31]), these could be supplemented with more user-friendly guidelines designed specifically for practitioners. Another possible means to improve the use of economic evaluations is to establish a formal willingness-to-pay threshold for QALYs. Participants in this study were generally in favour of the introduction of such a threshold, but they rejected the idea of an explicit cut-off point. Instead, they were in favour of a bandwidth or categorization. Few of them, however, were willing to make a statement about the possible level of the threshold.
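Formally, a willingness-to-pay threshold $\lambda$ turns a cost-effectiveness result into a decision rule: an intervention is acceptable when

$$\frac{\Delta C}{\Delta E} < \lambda \quad\Longleftrightarrow\quad \mathrm{NMB} = \lambda\,\Delta E - \Delta C > 0,$$

where NMB denotes the net monetary benefit. The bandwidth or categorization favoured by the participants amounts to replacing the single constant $\lambda$ by an interval $[\lambda_{\min}, \lambda_{\max}]$ or by a function $\lambda(s)$ of contextual factors $s$ (e.g. disease severity), so that, for instance, more severe conditions are allowed a higher maximum cost per QALY.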
Participants also indicated a need for economic evaluations that are directly applicable to their decision-making context. Therefore, additional economic evaluations that are timely and relevant for healthcare decision-makers need to be performed. The fact that economic evaluations performed from the societal perspective are not directly applicable to a specific decision-making context may be overcome by using the two-perspective approach advocated by Brouwer et al. [32]. When using such an approach, economic evaluations are conducted from both the healthcare system and the societal perspective. In this way, economic evaluations are directly applicable to the healthcare sector, while simultaneously providing an indication of whether the "local perspective" of the healthcare sector is consistent with social optimality (i.e. societal welfare maximization) [32].
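In notation, such a two-perspective analysis sets the same incremental effect against two cost aggregates,

$$\mathrm{ICER}_{\mathrm{HC}} = \frac{\Delta C_{\mathrm{HC}}}{\Delta E}, \qquad \mathrm{ICER}_{\mathrm{soc}} = \frac{\Delta C_{\mathrm{HC}} + \Delta C_{\mathrm{non\text{-}HC}}}{\Delta E},$$

where $\Delta C_{\mathrm{HC}}$ covers costs and savings falling within the healthcare sector and $\Delta C_{\mathrm{non\text{-}HC}}$ those falling outside it (e.g. productivity changes). A large divergence between the two ratios flags exactly the situation described above, in which the 'local perspective' of the healthcare sector departs from societal optimality.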
Incentives are needed for healthcare providers to provide care that is most likely to be cost-effective. One way to deal with the incentives to "over-treat" may be to move away from a predominant "fee-for-service" system to a "pay-for-performance" system, in which healthcare providers receive a bonus if they meet or exceed certain agreed-upon quality or performance measures [33,34].
Ethical considerations, such as the physicians' responsibility to act in their patients' best interest, may partially be dealt with by raising physicians' awareness of the fact that the provision of all possible treatment options, irrespective of their (cost-)effectiveness, might reduce the accessibility and quality of care for other clients and for Dutch society in general [35]. However, it is unrealistic to expect healthcare providers to make such decisions on their own [36,37]. Therefore, the identification of cost-effective treatment options is best undertaken at the macro- and meso-levels. This could take the form of narrowing medical indications, and concurrently the definition of the basic healthcare package (at the macro-level), as well as developing clinical best-practice guidelines (at the meso-level).
Finally, strategies should be developed to overcome the public resistance to the use of economic evaluations in healthcare decision-making. Such strategies may include national campaigns (e.g. educating people about the importance of improving the efficiency of healthcare through the use of economic evaluations) as well as an increased transparency about the actual cost of treatment.
"year": 2017,
"sha1": "04abe3ec5f22371c0a54776d133473c9203a5667",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-017-1986-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "04abe3ec5f22371c0a54776d133473c9203a5667",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Building of AMPA‐type glutamate receptors in the endoplasmic reticulum and its implication for excitatory neurotransmission
AMPA‐type glutamate receptors (AMPARs), the key elements of fast excitatory neurotransmission in the brain, are receptor ion channels whose core is assembled from pore‐forming and three distinct types of auxiliary subunits. While it is well established that this assembly occurs in the endoplasmic reticulum (ER), it has remained largely enigmatic how this receptor‐building happens. Here we review recent findings on the biogenesis of AMPARs in native neurons as a multistep production line that is defined and operated by distinct ER‐resident helper proteins, and we discuss how impairment of these operators by mutations or targeted gene‐inactivation leads to severe phenotypes in both humans and rodents. We suggest that the recent data on AMPAR biogenesis provide new insights into a process that is key to the formation and operation of excitatory synapses and their activity‐dependent dynamics, as well as for the operation of the mammalian brain under normal and pathological conditions.
Introduction
AMPA-type glutamate receptors (AMPARs) are key to virtually any aspect of excitatory neurotransmission in the mammalian brain: they drive synapse formation, mediate the excitatory current required for the synaptic point-to-point response and endow synapses with activity-dependent dynamics that are fundamental for memory formation and learning (Silver et al. 1992;Cull-Candy et al. 2006;Isaac et al. 2007;Collingridge et al. 2010). Most famous in this respect is the phenomenon of 'long-term potentiation' (LTP), an enhancement of synaptic signal transduction that results from activity-triggered recruitment of additional receptor channels into the postsynaptic membrane (Bliss & Collingridge, 1993;Shi et al. 1999;Kennedy et al. 2010;Granger et al. 2013;Huganir & Nicoll, 2013;Penn et al. 2017;Choquet, 2018).
The decisive, but easily forgotten, prerequisite for all AMPAR-mediated functions located at (distinct sites of) the plasma membrane is the building of the receptor channels in the intracellular membrane compartment(s) of the endoplasmic reticulum (ER; Fig. 1). There, functional AMPARs must be properly assembled from two types of subunits: the pore-forming GluA1-4 proteins and the members of three families of auxiliary proteins, the transmembrane AMPA-receptor regulating proteins (TARPs), the cornichon homologues (CNIHs) and the germ cell-specific gene 1-like (GSG1l) protein (Fig. 2 and Table 1; Chen et al. 2000; Tomita et al. 2005; Schwenk et al. 2012; Shanks et al. 2012).
Extensive research performed within this framework over roughly one and a half decades has provided detailed insights into the molecular architecture, structure and operation of AMPARs that may be summed up as the following hallmarks. First, the AMPA-receptor channels are tetramers of either identical or different GluA proteins, thus reconstituting homo- or hetero-tetrameric channels (Boulter et al. 1990; Keinanen et al. 1990; Greger et al. 2017; Herguedas et al. 2019; Zhao et al. 2019). In fact, hetero-tetramers such as GluA2-A1 or GluA2-A3 are far more abundant in the rodent brain than homomers (Wenthold et al. 1996; Lu et al. 2009; Schwenk et al. 2012, 2014; Zhao et al. 2019), and the preferred formation of heteromers is (predominantly) determined by the higher affinity of interactions between the extracellular N-terminal domains (termed NTD or ATD) of distinct GluA subunits compared to those of identical GluAs (Rossmann et al. 2011; Greger et al. 2017). Second, basic properties of the receptor channels such as agonist-triggered opening and closing (gating), ion permeation and pore-block by polyamines are determined by the GluA tetramers, with the distinct GluA subunits endowing the channels with distinct basic properties (Hollmann et al. 1991; Mosbacher et al. 1994; Bowie & Mayer, 1995; Koh et al. 1995; Sobolevsky et al. 2009; Schwenk et al. 2012; Greger et al. 2017; Twomey & Sobolevsky, 2018). Third, the vast majority of AMPARs in the rodent brain are not bare GluA tetramers, but rather are hetero-oligomeric assemblies of GluA tetramers and the aforementioned auxiliary proteins: TARPs, CNIHs and GSG1l (Kim et al. 2010; Schwenk et al. 2012, 2014; Zhao et al. 2019). These auxiliary proteins, which all exhibit a four-transmembrane (TM) domain architecture, bind to the GluA tetramers at two distinct pairs of binding sites (with full occupancy of these sites being the preferred appearance of AMPARs in the rodent brain) and, together with the tetrameric pores, build the core of the receptor channels (Schwenk et al. 2012; Twomey et al. 2017a; Herguedas et al. 2019; Nakagawa, 2019; Zhao et al. 2019). Functionally, the auxiliary subunits profoundly impact the aforementioned channel properties of the AMPAR assemblies, in particular ion permeation and gating kinetics, and they distinctly control the various processes behind the trafficking of the receptor complexes, as well as their stability/dwell-time at the plasma membrane and their subcellular localization (Table 1; Tomita et al. 2005; Milstein et al. 2007; Coombs et al. 2012; Shanks et al. 2012; Boudkkazi et al. 2014; Bowie, 2018; Choquet, 2018).

Hetero-oligomeric AMPAR cores assembled from four GluAs and up to four auxiliary subunits, as observed in the rodent brain, have been successfully reconstituted in virtually any type of heterologous expression system without requirement of additional partner proteins. In this respect, it came as a surprise that comprehensive proteomic analyses of native AMPARs recently identified a set of proteins that assemble with the core subunits in the ER (Brechet et al. 2017), but are not part of the AMPARs at the cell surface, where an additional set of proteins, mostly transmembrane or secreted proteins, interact with the core subunits (detailed in Table 1, 'GluA-interactome').
Interestingly, these selective ER interactors, which all (previously) lacked annotations of primary functions or links to AMPARs in public databases, were found to be fundamental for the assembly of AMPAR complexes in native neurons, and their deletion profoundly hampered excitatory synaptic transmission and its dynamics (Brechet et al. 2017). In addition, several recent genetic studies using whole-exome sequencing in patients suffering from severe intellectual disability identified FRRS1l, a key player among the ER-selective interactors of the receptor core, as the disease-causing gene, thus highlighting the importance of AMPAR biogenesis and the underlying molecular machinery in the ER (Madeo et al. 2016; Shaheen et al. 2016; Brechet et al. 2017).
In light of these recent findings, we review our current understanding of the building of AMPAR assemblies in the ER (Fig. 1) and discuss its significance for excitatory synaptic transmission in the mammalian brain, as well as its implications for the formation of ion channel proteins and protein complexes in general.
Biogenesis of native AMPARs
The first definitive view of the appearance of GluA proteins in the ER of native tissue was provided by native gel-separations of ER-derived membrane fractions from the mammalian brain (with only minor 'contaminations' from other (intra)cellular membrane compartments): they demonstrated that (i) GluA1-4 proteins are part of several complexes with distinct molecular mass(es) and that (ii) these complexes represent co-assemblies of the GluAs with distinct interaction partners. Importantly, these interaction partners, predominantly the proteins FRRS1l, carnitine palmitoyltransferase 1c (CPT1c) and α/β-hydrolase domain-containing protein 6 (ABHD6) (and in more rare cases porcupine (PORCN)), were exclusively found in ER-located GluA complexes, but were not detected in AMPARs at the plasma membrane (Brechet et al. 2017). And, while FRRS1l and CPT1c are predominantly found in the CNS, ABHD6 and PORCN exhibit a more ubiquitous expression pattern across cell types and tissues. Subsequent co-expression of GluAs with the identified ER-interactors in heterologous expression systems, combined with native gel-electrophoresis and recordings of AMPAR-mediated currents, finally identified the biogenesis of functional (core) AMPARs in the ER as a stepwise process reminiscent of an industrial 'assembly line' (Fig. 1, lower part). In this line the GluA1-4 proteins pass through discrete 'production stages' determined by the distinct ER-interactors and thereby assemble from monomers to tetramers. Biochemically, the assembly line should be envisaged as an equilibrium reaction with multiple intermediate states that are closely interlinked and thus influence each other's occupancy.

Figure 1. Biogenesis of AMPAR cores in the ER. Schematic representation of the assembly line of AMPA core receptors as it occurs in native ER membranes. The individual stages of the assembly line are numbered (1) to (5) and together represent an equilibrium reaction as detailed in the text. The depicted stages are: (1) ABHD6-associated GluA monomers; (2) formation of GluA dimers driven by co-assembly of bimolecular FRRS1l-CPT1c complexes; (3) dimer-of-dimer formation and dissociation of ABHD6; (4) binding of CNIHs and TARPs and dissociation of FRRS1l-CPT1c complexes; and (5) initiation of ER export via induction of transport vesicles. Hetero-octameric AMPA core receptors are finally transported to synaptic and extrasynaptic sites of the plasma membrane.
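How closely the state occupancies are interlinked can be illustrated with a toy mass-action model of the line. The sketch below is purely illustrative: all rate constants are invented rather than measured, the stoichiometry of dimerization and any reverse transitions are omitted for brevity, and the only mechanistic assumption encoded is that excess free ABHD6 competitively slows the exit from state 1 (the FRRS1l-CPT1c-driven step that releases ABHD6). Deepening this single early state swells its occupancy and throttles the amount of receptor exported within a fixed time window, qualitatively mirroring the perturbation experiments discussed further below.

```python
# Toy mass-action model of the AMPAR assembly line (states as in Fig. 1).
# All rates and concentrations are hypothetical illustrative values.

def simulate(free_abhd6=1.0, dt=1e-3, t_end=200.0):
    """Forward-Euler integration of a linear chain S1 -> S2 -> S3 -> S4 -> export.

    free_abhd6 scales the assumed competitive slowing of the S1 -> S2 step
    (FRRS1l-CPT1c binding and ABHD6 release) by excess free ABHD6.
    """
    synthesis = 1.0                        # GluA synthesis (a.u. per unit time)
    k12 = 0.5 / (1.0 + free_abhd6)         # state 1 -> 2: slowed by free ABHD6
    k23, k34, k4x = 0.3, 0.2, 0.4          # tetramerization, CNIH/TARP binding, ER exit
    s1 = s2 = s3 = s4 = exported = 0.0
    for _ in range(int(t_end / dt)):
        f12, f23, f34, f4x = k12 * s1, k23 * s2, k34 * s3, k4x * s4
        s1 += (synthesis - f12) * dt       # ABHD6-bound monomers (state 1)
        s2 += (f12 - f23) * dt             # FRRS1l-CPT1c-bound dimers (state 2)
        s3 += (f23 - f34) * dt             # dimers-of-dimers/tetramers (state 3)
        s4 += (f34 - f4x) * dt             # CNIH/TARP-bound cores (state 4)
        exported += f4x * dt               # cumulative ER export (state 5)
    return s1, exported

for x in (1.0, 5.0, 20.0):                 # increasing excess of free ABHD6
    pool, out = simulate(free_abhd6=x)
    print(f"free ABHD6 x{x:>4.0f}: state-1 pool {pool:5.1f}, exported {out:6.1f}")
```

In this caricature, a 20-fold excess of free ABHD6 inflates the steady-state state-1 pool roughly ten-fold while visibly reducing export over the observation window, the same qualitative picture obtained by native gel-electrophoresis and surface recordings after ABHD6 overexpression (see below).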
At the first stage in this production line, the GluA monomer, after release from the translocon, is grasped by protein ABHD6, a single-span transmembrane protein with a short extracellular (or ER-luminal) and a large cytoplasmic domain. The resulting bi-molecular GluA-ABHD6 complexes (Fig. 1, state 1) are rather stable and have two important consequences for the GluA protein: first, it is effectively protected from ER-associated degradation (ERAD; Wu & Rapoport, 2018) and, second, it is locked in a monomeric state that is unable to form dimers with other GluAs. This is in contrast to heterologous (over)expression (also in neurons and neuronal cultures), where effective dimerization between GluA proteins (and subsequent receptor assembly) occurs readily, most likely driven by high-affinity interaction(s) between both their TM domains (Kim et al. 2010; Gan et al. 2015) and their N-terminal domains (NTDs or ATDs; Ayalon & Stern-Bach, 2001; Rossmann et al. 2011; see also section below). It also differs from the action of the two classical J-domain-containing chaperones (DNAJBs 12 and 14) that were recently reported to bind to the nascent chain(s) of some voltage-gated K+ channel subunits and, in cooperation with heat shock protein 70 (HSP70), promote their assembly into tetramers in the ER (Li et al. 2017). How ABHD6 stabilizes GluA monomers and/or inhibits their dimerization is currently unclear, but, owing to the topology of ABHD6 (with most parts of the protein located cytoplasmically, opposite to the NTDs), inhibition may originate from the interaction site(s) with the GluA protein in the membrane plane. In addition, this intra-membraneous interaction may also be the decisive feature behind the ABHD6-mediated protection from ERAD, envisaged as a result of shielding the 'hydrophilic pore-lining' face(s) of the GluA protein from the lipid phase of the membrane environment (Deutsch, 2003; Schwappach, 2008). Interestingly, both shielding from ERAD and prevention of dimerization are also achieved by PORCN, a multi-span ER-membrane protein, which seems to replace or cooperate with ABHD6 in the rodent brain in some cases. And, similar to ABHD6, the enzymatic activity of the protein is not required for its role in the assembly line, as has been verified by respective loss-of-function mutants.

Figure 2. Structural determinants of channel gating and ER export differ between auxiliary subunits of the receptor core. A, structure of the GluA tetramer (3KG2) indicating the distinct domains of the AMPAR pore subunit(s) in the membrane plane (channel pore) and on the extracellular (or ER-luminal) side of the membrane (NTD/ATD, N-terminal domain; LBD, ligand-binding domain). B, structures of CNIH-3 (PDB database entry: 6PEQ, in ribbon representation), TARP-8 (6QKC) and GSG1l (5WEK) illustrate the distinct domains responsible for the distinct actions of the three types of auxiliary subunits on channel gating and initiation of ER export. CNIH-3 may impact channel gating via the cytoplasmic extension formed by TMs 1 and 2 (shown in orange; the N-terminal part of helical TM2 (in red) is unique to CNIHs 2 and 3), and binding of Sec proteins should occur at the TM3-4 linker (in green) that is conserved among all four mammalian CNIH proteins. The claudin homologues TARP and GSG1l impact channel gating through their extensions on the extracellular side (orange), and initiation of ER export may involve cytoplasmic domain(s) (in green).
In contrast, enzymatic activities are fundamental for other processes that have been reported for either protein in the mammalian brain. Accordingly, ABHD6 serves as a regulator in endocannabinoid signalling via its hydrolase activity (Blankman et al. 2007; Marrs et al. 2010; Lord et al. 2013), while PORCN controls Wnt signalling through palmitoylation of distinct Wnt isoforms in the ER lumen (Galli et al. 2007). For successful receptor assembly, the stabilized GluA monomers must proceed to the second stage of the assembly line, which is promoted by binding of the dimeric FRRS1l-CPT1c complex to ABHD6-GluAs (Fig. 1, state 2). Both FRRS1l and CPT1c are integral membrane proteins with one and two TMs, respectively, that extend to distinct sides of the membrane (FRRS1l into the ER lumen, CPT1c into the cytoplasm) and combine for stable binding to the ABHD6-GluA monomers (via interactions with both N- and C-terminal domains of the GluA protein; Brechet et al. 2017). As a consequence of their binding, the inhibitory effect of ABHD6 is released, most likely via its partial (or complete) dissociation from the GluAs, and two GluA proteins are assembled into GluA-FRRS1l-CPT1c dimers. Subsequently, two of these dimers are co-assembled into 'dimers-of-dimers', thus forming GluA tetramers associated with up to four FRRS1l-CPT1c complexes, as could be derived from their apparent molecular mass (Fig. 1, state 3). While dimer-of-dimer formation is a common theme in symmetric ion channels (Deutsch, 2003; Greger & Esteban, 2007; Schwappach, 2008; Isacoff et al. 2013), FRRS1l-CPT1c complexes may impact receptor assembly beyond the sheer catalysis of GluA tetramerization. In this sense, it is particularly tempting to speculate that FRRS1l-CPT1c complexes may induce the preferred positioning of distinct GluA subtypes in heteromeric channels, as has been reported for GluA2 and its almost exclusive appearance in the B/D position (versus the A/C position) in recent structural work (Herguedas et al. 2019; Zhao et al. 2019). It is important to note that ABHD6 proteins are presumably no longer part of these GluA tetramers, as they must dissociate from the pore-lining surface(s) of the GluAs in order to enable formation of the channel pore.
Similar to the monomeric GluA-ABHD6 complexes, FRRS1l-CPT1c-associated GluA tetramers appear rather stable, and this effectively prevents further processing of these intermediates into functional receptor channels that could be released from the ER at particular exit sites (ERES). This final step only occurs at stage 4 of the production line, when CNIHs and members of the TARP family take their places at the two distinct pairs of binding sites on the GluA tetramers and thereby squeeze off the FRRS1l-CPT1c complexes (Fig. 1, state 4). Dissociation of FRRS1l-CPT1c and binding of CNIHs and TARPs finally leads to the hetero-oligomeric/hetero-octameric core AMPARs that are fully functional and ready for ER exit. The latter was shown in experiments that quantitatively monitored ER exit of functional AMPARs by their appearance at the plasma membrane. And, interestingly, while both CNIH-2 and TARP-2 were able to initiate ER export, the efficiency of CNIH-2 markedly exceeded that of TARP-2 (Fig. 1, state 5). How AMPARs exit the ER at ERES, however, has not yet been sorted out. Nonetheless, it appears reasonable to assume that CNIHs 2/3 promote ER exit via formation of COPII vesicles initiated through interaction with the Sec23/24 protein(s), similar to what has been established for the 'classical' CNIH ('cornichon') protein(s) in flies and yeast (CNIHs 1 and 4 in mammals) (Roth et al. 1995; Bokel et al. 2006; Herzig et al. 2012; Adolf et al. 2019). The latter act as cargo receptors that recognize their targets, membrane proteins and/or proteins to be secreted, and connect them to the Sec machinery. And in fact, the recently resolved structure of CNIH-3 co-assembled with tetrameric GluA2-receptor pores provided a first view of both cargo recognition and connectivity to COPII vesicles (Nakagawa, 2019; Kamalova & Nakagawa, 2021). Interaction with the GluA cargo is defined by the extended helical TM1 domain, while the connectivity to the Sec proteins should be represented by the cytoplasmic TM3-4 linker (Fig. 2, highlighted in green). This linker is well conserved among the CNIH family and represents the only domain of the protein that is neither buried in the membrane plane nor covered by the GluA protein(s) (Fig. 2; Nakagawa, 2019). Whether TARPs, predominantly subtypes 2 and 3, can act as cargo receptors similar to the CNIHs, or rather trigger ER export through a distinct mechanism (or mechanisms), must currently remain unresolved. The structural data currently in hand leave both possibilities open, but suggest the long, subtype-dependent C-termini (CTDs) as the only protein domains suited for interactions with cytoplasmic transport factors (Fig. 2, highlighted in green).
Fundamentally different from classical cargo receptors, however, both CNIHs and TARPs do not dissociate from their target proteins upon ER export, but remain associated with the GluA tetramers and travel to the plasma membrane as subunits of the AMPAR core (Fig. 1, upper part; Schwenk et al. 2009; Kato et al. 2010; Schwenk et al. 2012; Herring et al. 2013; Boudkkazi et al. 2014). As such, they largely determine and/or tune the channel properties of the AMPAR complexes in a subtype-specific manner, and also impact their dynamics and subcellular localization. Structurally, the specificity in gating is determined by distinct domains identified in the various high-resolution structures (highlighted in Fig. 2): TARPs and GSG1l influence channel gating via interaction of their extracellular loops with the ligand-binding domain of the GluA subunits (Dawe et al. 2016; Twomey et al. 2016, 2017b, 2019), while CNIHs 2 and 3 most likely act via their helical TM1 and 2 domains on the selectivity filter region located at the cytoplasmic opening of the channel pore (Hawken et al. 2017; Nakagawa, 2019; Fig. 2, gating determinants in orange).
In contrast to TARPs and CNIHs, FRRS1l-CPT1c complexes do not leave the ER after dissociation from the GluA assemblies: CPT1c because it is a 'bona fide ER protein' (different from the well-known mitochondrial lipid-transferases CPT1a and CPT1b), and FRRS1l because it is captured in the ER through binding to CPT1c (Casals et al. 2016; Brechet et al. 2017). A portion of the FRRS1l pool that is not bound to CPT1c, however, is subjected to processing by the ER-resident GPI-transamidase machinery (Brechet et al. 2017). This enzyme replaces the transmembrane domain of FRRS1l by a lipid anchor and sends the protein to the plasma membrane via the secretory pathway. Whether GPI anchoring is linked to AMPAR biogenesis or occurs independently, and whether the GPI-anchored FRRS1l also targets AMPARs or has AMPAR-independent function(s) at the plasma membrane, is currently unknown.
Finally, the AMPAR-containing transport vesicles fuse with the plasma membrane, most likely at both synaptic and extrasynaptic sites, and thus get the core AMPARs ready for signal transduction and assembly with additional partner proteins. Interestingly, these partners apparently interact with the AMPAR cores only at the plasma membrane, but are generated and transported to the surface independently of the receptors (summarized GluA interactome in Table 1). For some of these surface interactors, first data related to their cell physiological significance are already in hand (and have been reviewed/reported elsewhere; Haering et al. 2014;Greger et al. 2017;Matt et al. 2018;Bissen et al. 2019;von Engelhardt, 2019; see also Table 1); for others, further investigations are necessary.
Impairment and disturbance of receptor assembly
The assembly of functional AMPARs in the ER represents an equilibrium reaction in which several states (1-5, Fig. 1) are closely interlinked and thereby influence each other's occupancy and the transitions between one another. Consequently, alterations of the participating proteins are expected to impact the receptor building and, as a result, the molecular appearance of all AMPAR assemblies, as reflected by the GluA1-4 interactome (Schwenk et al. 2012; Table 1).
As yet, a limited number of such 'directed alterations in expression' of assembly-line determinants have been performed and experimentally monitored. These are, in particular, (i) overexpression of ABHD6, (ii) loss-of-function mutations or knock-out, as well as overexpression, of FRRS1l and, finally, (iii) knock-out of the two CNIH proteins 2 and 3. In the following, we will briefly review the observed consequences of these alterations for receptor physiology and brain function and provide mechanistic insights derived from AMPAR biogenesis (or receptor assembly).
Overexpression of ABHD6 performed in heterologous expression systems (culture cells, Xenopus oocytes) and in neurons of the rodent brain essentially led to the consistent observation of a pronounced decrease or a complete loss of AMPARs from the surface membrane (Wei et al. 2017); in hippocampal neurons, excitatory synaptic currents (EPSCs) were largely reduced as a result of the AMPAR decrease in synapses (Wei et al. 2016, 2017). In light of the assembly line, these observations may be considered an immediate consequence of state 1 stabilization induced by the excess of free ABHD6 proteins, which impairs the binding of FRRS1l-CPT1c and thus reduces progression of GluAs along the assembly line. In fact, such increased occupancy of state 1 could be directly visualized by native gel-electrophoresis and showed similar efficiency for all four GluA proteins. For FRRS1l, a series of loss-of-function mutations have been identified in humans, most of them inducing frameshifts that prevent synthesis of a stable protein, thus leading to a loss or knock-out of the protein (Madeo et al. 2016; Shaheen et al. 2016; Brechet et al. 2017); bona fide knock-out and/or knock-down of FRRS1l has recently become available in rodents and enabled detailed investigation of the respective phenotype(s) (Stewart et al. 2019). In humans, FRRS1l mutations cause a fatal disease phenotype known as 'severe form of intellectual disability', with marked cognitive impairment, strongly restricted speech development, seizures, muscular hypotonia and neuro-regression (finally leading to death) (Madeo et al. 2016; Shaheen et al. 2016; Brechet et al. 2017). Knock-out mice recapitulate several aspects of this disease phenotype, including severe deficits in learning and goal-oriented behaviour (Stewart et al. 2019). Moreover, these animals showed several knock-out-induced alterations at the molecular and cellular level (summarized in Fig. 3): (i) the total amount of GluA1-4 protein(s) in all membranes of the whole brain was reduced to ∼50% (of wild-type), without any obvious changes in transcription, as indicated by the unaltered amounts of the respective mRNAs; (ii) binding of CPT1c to GluA proteins was largely abolished; (iii) the amounts of ABHDs 6, 12 and PORCN bound to GluAs were increased several-fold; and (iv) the majority of interactors found in AMPAR complexes at the surface (Table 1) were decreased by 30-80% (with the exception of TARP-2), thus leading to major alterations in the subunit composition of the surface AMPARs. All these observations can be directly derived from the assembly line. First, in the absence of FRRS1l, binding of CPT1c to GluA is lost. Second, impaired dimerization (and tetramer formation) will shift the equilibrium towards state 1, thus leading to the observed increase in GluAs associated with ABHDs 6, 12 and PORCN. Third, after exhausting the pool of available ABHDs and PORCN, newly synthesized, now unprotected, GluA proteins will be readily degraded via ERAD (and maybe additional elements of the cellular quality control system), thus resulting in strongly reduced overall amounts of GluAs (Fig. 3A). And fourth, assembly of hetero-octameric AMPARs is strongly impaired, reducing the amount of receptors delivered to the plasma membrane and able to form complexes with the surface constituents (Fig. 3B).
Further investigation of FRRS1l knock-out mice revealed additional changes that may be considered indirect consequences of the altered AMPAR assembly in the ER and the concomitant changes in subunit composition of the receptor complexes. Thus, the remaining surface AMPARs displayed profound asymmetry in their distribution, with a strong preference for localization to synapses versus extrasynaptic sites (Fig. 3B), EPSCs were decreased by ∼50%, activity-dependent recruitment of AMPARs to synapses, the process underlying long-term potentiation (LTP), was entirely abolished (independent of its initiation by pulse-pairing or tetanic stimulation; Fig. 3C), and formation and maturation of synapses, as well as dendritic arborization, were severely disturbed (number of immature synapses increased by almost 10-fold; Fig. 3B). But, as profound as these alterations may be, they were all successfully reversed by virus-driven re-expression of the FRRS1l protein. Interestingly, virally induced re-expression not only restored normal AMPAR function and dynamics, but also showed 'over-compensation' in all aspects including LTP, thus indicating that AMPAR physiology can be effectively controlled by the assembly line in the ER.
Finally, genetic deletion of the two CNIHs, the most effective drivers of AMPAR export from the ER, revealed a phenotype that partially overlapped with that of the FRRS1l knock-out. Thus, investigations in CA1 pyramidal cells indicated strongly decreased EPSCs recorded upon electrical stimulation, as a consequence of the reduced number of AMPARs in the Schaffer collateral-CA1 pyramidal cell synapses (Herring et al. 2013). Of course, the reduced number of AMPARs at the synapses may well result from the reduced ER export expected upon deletion of the CNIH proteins. Whether the reported changes in EPSC kinetics reflect a preference of the CNIHs for selected GluA subtypes, as suggested (Herring et al. 2013), needs more detailed investigation by quantitative proteomic analyses, although published proteomic data have not provided any evidence for such subtype preferences of CNIHs (Schwenk et al. 2012; Boudkkazi et al. 2014).
Assembly of exogenously expressed AMPARs
Effective assembly of functional AMPARs in the rodent brain requires the concerted action of several biogenesis factors that operate in a consecutive manner and promote protection of GluA monomers, formation of GluA dimers/tetramers and their export from the ER. Disturbance of these actions, as induced by removal of FRRS1l, leads to prominent degradation of the GluA proteins together with decreased receptor building and surface delivery, despite unaltered amounts of mRNAs coding for GluAs 1-4 and other interactome constituents (Fig. 3). In contrast to this 'native assembly', building of functional AMPARs following exogenous expression in neurons and heterologous expression systems (cultured cells, Xenopus oocytes) does not require any additional factor(s), prompting the questions of how AMPAR assembly can occur under these conditions and of why evolution established a complex 'production line' for AMPARs given that their assembly can also occur spontaneously.
The key to the first question is the highly effective (over)expression of GluA proteins driven by the exogenous cDNAs/cRNAs, which, in light of the assembly line, exerts several synergistic effects. First, high amounts of GluA protein(s) saturate the protein degradation system(s) and thus render protection by ABHD6 dispensable. Second, the binding capacity of endogenous ABHD6 (present in cultured cells and neurons, but not in Xenopus oocytes) is overwhelmed, which leaves a significant portion of newly synthesized GluA protein in an ABHD-free state ready to form dimers and subsequently dimers-of-dimers, driven by several structural determinants including the NTD/ATDs, the LBDs and the transmembrane domains (Ayalon & Stern-Bach, 2001; Kim et al. 2010; Rossmann et al. 2011; Gan et al. 2015). Such 'spontaneous dimer formation' could also be induced with native GluA-ABHD6 complexes after detergent-induced dissociation of ABHD6. Third, accumulation of GluA tetramers will 'enforce' export from the ER using pathway(s) independent of CNIHs and/or TARPs ('early vs late secretory trafficking'; Tomita et al. 2003, 2005; Schwenk et al. 2009; Harmel et al. 2012) and thus finally traffic to the plasma membrane. In summary, exogenous (over)expression is able to generate functional receptors in large amounts by effectively bypassing the cellular system(s) of 'quality control'.
It must be emphasized, though, that addition of the biogenesis factors ABHD6, FRRS1l/CPT1c and CNIHs/TARPs to the exogenously expressed GluA proteins successfully reinstalled native quality control in heterologous expression systems and closely recapitulated receptor assembly as detailed above for the mammalian brain. Interestingly, under these conditions some perfectly reproducible observations obtained with sole GluA (over)expression were less prominent. Thus, receptor assembly and ER export appeared only modestly affected by the editing events at the Q/R site (in the pore loop; Greger et al. 2002, 2003) or the R/G site (LBD; Greger & Esteban, 2007; Penn & Greger, 2009), and, similarly, alternative splicing (flip/flop versions; Penn et al. 2008) failed to prominently impact the biogenesis processes (unpublished results). The mechanistic details behind these differences are currently unknown, and their molecular resolution will certainly require a series of combined structural and functional analyses.
The answer to the question of the evolutionary rationale for a multi-state assembly line is less obvious. But, given its effectiveness in controlling production and surface delivery of core AMPARs, we may speculate about a 'post-transcriptional regulation mechanism' that secures operation of excitatory neurotransmission at a safe level. Accordingly, the assembly line may counterbalance both increased and decreased levels of GluA synthesis (based on transcriptional and/or translational activities). Thus, at high GluA levels, the line dampens receptor generation by 'trapping' GluAs via binding to free pools of ABHD6, PORCN and FRRS1l/CPT1c, while at low availability of GluAs, the line offers compensation by protecting newly synthesized GluAs from ERAD (which appears rather effective in neurons compared to cell lines). Furthermore, the assembly line may regulate the subunit composition and arrangement of individual GluA subunits within heteromeric GluA tetramers, and may thereby control the number and subtype of auxiliary proteins that co-assemble with the GluAs into hetero-oligomeric AMPARs (Lu et al. 2009; Kim et al. 2010; Schwenk et al. 2012; Hastie et al. 2013; Schwenk et al. 2014; Zhao et al. 2019). In this sense, the assembly line can impact the ER export of the core AMPARs and thus control the number of core receptors to be inserted into the plasma membrane under homeostatic and activity-dependent conditions. Moreover, the assembly line exerts control over the assembly of the AMPAR cores with the large set of surface subunits/interactors that form the receptor periphery (Table 1) and that are known to differ between brain regions and/or types of neurons (von Engelhardt et al. 2010; Schwenk et al. 2014; Matt et al. 2018). And finally, the biogenesis factors may be expected to trigger post-translational modification(s) of the GluA subunits and GluA assemblies in the ER that could impact their subsequent processing and/or their ability to interact with the peripheral subunits at distinct subcellular sites of the plasma membrane.
Outlook and future avenues
The assembly of AMPARs in a production line with distinct stages represents the first example that shows how a functional (receptor) ion channel is put together in the ER membrane(s) of a 'native tissue' (in contrast to heterologous expression systems). It highlights how profoundly biogenesis in the ER can impact the cell physiology of the receptors at the plasma membrane. The individual stages of the AMPAR production line are determined by proteins that were originally identified by unbiased GluA-targeting high-resolution proteomics and that surprisingly showed complete specificity for GluA proteins (or AMPARs), implying that they do not interact with other ion channels or other ion-handling membrane proteins in the mammalian brain.
Given these distinct aspects, the AMPAR assembly line may be expected to impact future research on (receptor) ion channel biology and cellular/systemic function(s) in one or more of the following directions.
General principle for building membrane protein complexes. Despite the specificity of its key players for GluAs and AMPARs, the assembly line finally provided solutions to long-standing issues in ion channel assembly, including protection of newly synthesized pore-forming subunits from ERAD and formation of dimers (and dimers-of-dimers). In this sense, the AMPAR assembly line may represent a blueprint or general procedure for the building of ion channels in native ER membranes. Of course, verification of this idea requires many more investigations of assembly processes in the ER membranes of cells in native tissues. Ideally, such studies should start from unbiased analyses using quantitative proteomic technologies in combination with native gel separations as a first approach for identifying the distinct assembly complexes of given target protein(s) and their respective subunit composition and interaction partners.
New opportunities for research on AMPARs and AMPAR physiology. With respect to AMPARs, the assembly line described above opens a new window both for the analysis of the biogenesis of the receptors and for studying their role in neuronal signal transduction, its dynamics and its impact on higher brain function(s). For biogenesis, the assembly line and its determinants provide a molecular framework whose detailed operation should now be unravelled by structural analyses, detailed biochemical investigations and monitoring of protein dynamics applying cryo-EM, proteomics and high-resolution fluorescence and electron microscopy. For investigating AMPAR physiology, the assembly line offers targets for defined manipulation of receptor building and thus for switching on/off synapse maturation, local receptor synthesis and activity-dependent synaptic plasticity. The latter may even be of therapeutic relevance, offering means to interfere with the impairments/restrictions imposed by neurological and neurodegenerative diseases.
Novel concept for regulation of excitatory synaptic transmission. As briefly introduced above, the ER assembly line(s) may establish a novel (posttranslational) level or mechanism for the control of synaptic signal transduction beyond the presumed control by transcription of the participating genes. Based on currently available data, the biogenesis line effectively shapes the number and subunit composition of core AMPARs present in the postsynaptic membrane and at extrasynaptic sites under homeostatic conditions, as well as in the reserve pool(s) providing the additional AMPARs that are inserted into the post-synapse upon (increased) synaptic activity. Consequently, biogenesis represents an effective means to determine both the time course of synaptic transmission and its (activity-dependent) dynamics with high accuracy. Furthermore, it appears possible that the assembly line itself is not static, but may rather change during postnatal development or be altered in response to cellular activity or potential feedback loops. In any case, the phenotype observed in humans and rodents upon disturbance of the AMPAR assembly highlights the fundamental importance of the ER-based biogenesis machinery for synaptic signalling and higher brain function(s).
To what extent these outlooks will be corroborated experimentally, and in fact impact future research, is hard to say; but as of now, the assembly line opens a new field and poses many questions whose answers will require a substantial amount of experimental work. | 2020-08-05T13:06:30.778Z | 2020-08-04T00:00:00.000 | {
"year": 2020,
"sha1": "e2a223fe5e17f8c5732aa8460b3ac253af7d1e4f",
"oa_license": "CCBY",
"oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.1113/JP279025",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "657d51788ae0c2e7e24afe549411027aaa005b13",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
219931760 | pes2o/s2orc | v3-fos-license | Dataset for atmospheric transport of nutrients during a harmful algal bloom
The data presented in this article are related to the research article entitled "Atmospheric transport of nutrient matter during a harmful algal bloom" [1]. These data provide the concentrations of nutrients (nitrate, ammonium and Fe(II)) in the atmosphere and their deposition flux in the East China Sea prior to the harmful algal bloom of May 3-8, 2006. They can be helpful for analyzing the sources of nutrients causing harmful algal blooms.
Description of data collection
• The large-scale harmful algal bloom events were collected from the Marine Environmental Quality Bulletin of China (http://www.coi.gov.cn/hygb)
• The Global Nested Air Quality Prediction Modeling System [2-4] was used to simulate the atmospheric transport and deposition flux of nutrients.

Parameters for data collection

• Harmful algal bloom events with an area larger than 1000 km²
Value of the Data
• The data could be used to assess the contribution of the atmospheric nutrients to algal blooms.
• The data can be used to identify areas of elevated nutrient concentrations and deposition flux in the East China Sea.
• The data can be used as a reference for predicting harmful algal blooms.
Data description
The harmful algal bloom that developed between May 3 and 8, 2006 was the first bloom with a spatial coverage of 1000 km² that year. Considerable amounts of nutrient matter (nitrate, ammonium and ferrous iron) were transported to the algal bloom area via the atmosphere before the harmful algal bloom occurred. Figs. 1-3 show the atmospheric transport of nitrate, ammonium and Fe(II), respectively. Fig. 4 shows the deposition flux of the nutrients.
Experimental design, materials and methods
There were six harmful algal bloom events larger than 1000 km² in the East China Sea during 2006. The first harmful algal bloom with a spatial coverage of 1000 km² developed on May 3-8, 2006. April and May are the transition period between winter and summer in the Northern Hemisphere; the atmospheric circulation patterns and oceanic currents in the East China Sea were shifting from winter patterns to summer patterns during this time. In consideration of this seasonal change and the time when the first harmful algal bloom with a spatial coverage of 1000 km² occurred, we used the Global Nested Air Quality Prediction Modeling System (GNAQPMS) to calculate the atmospherically transported nutrients (nitrate, ammonium and Fe(II)) and their deposition flux, setting the simulation time from April 15 to May 6, 2006.
The GNAQPMS was independently developed by the Institute of Atmospheric Physics, Chinese Academy of Sciences (IAP/CAS) [5]. The GNAQPMS model and its running schedule were provided by IAP/CAS [2]. A summary is presented here: the GNAQPMS utilized in this study is a fully modularized three-dimensional regional Eulerian chemical transport model, driven by the Weather Research and Forecasting (WRF) meteorological model. GNAQPMS reproduces the physical and chemical evolution of reactive pollutants by solving the mass balance equation in terrain-following coordinates [5, 7-9]. It includes advection, diffusion and convection processes, gas/aqueous/aerosol chemistry, and parameterization of dry/wet deposition. GNAQPMS is composed of input, physico-chemical process and output components. Input. The input items are the meteorological fields and the emission inputs. The output of the WRF model was used for the meteorological fields; the meteorological data for April 15, 2006, used as the initial meteorological fields input to the WRF model, come from http://rda.ucar.edu. The emission inputs consist of anthropogenic inputs (aerosol and trace gas) and natural emissions (vegetation, soil, volcano and lightning). In this study, we used the Fifth Assessment Report of the United Nations Intergovernmental Panel on Climate Change as the anthropogenic input (1850-2000 decade, 0.5° × 0.5°), the Global Emissions Inventory Activity and the Model of Emissions of Gases and Aerosols from Nature as the biological input (2000 decade, 0.5° × 0.5°), the Regional Emission inventory in Asia as the soil NOx input (year 2001, 1° × 1°) and the Global Emissions Inventory Activity as the lightning NOx input (average over 1983-1990, 1° × 1°) [6].
Physico-chemical process. The physico-chemical process includes gas/aqueous/heterogeneous/aerosol chemistry, advection, diffusion and convection processes, modules for dry and wet deposition, and dust and sea salt dynamic emission reactions and processes [4, 6, 9].
Output. The output items include the wet and dry deposition and the spatial distribution of chemical species [6].
The GNAQPMS has been modified on the basis of the topography and pollution patterns of East Asia. It has been widely applied to simulate the transport of air pollutants and to provide operational air quality forecasts in East Asia [3, 4, 7-15].
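Although the full GNAQPMS parameterizations are beyond the scope of this data article, the bookkeeping behind a dry-deposition flux map can be illustrated with a minimal sketch: the flux is commonly parameterized as the product of a species-specific deposition velocity and the near-surface concentration. This is a generic sketch, not GNAQPMS code, and all numerical values are illustrative placeholders.

```python
import numpy as np

# Generic dry-deposition bookkeeping (not GNAQPMS code): F_dry = v_d * C,
# accumulated over a time interval. All values are placeholder assumptions.

v_d = 0.002                                  # deposition velocity [m/s] (assumed)
c_surface = np.array([[1.2e-9, 0.8e-9],
                      [2.1e-9, 1.5e-9]])     # near-surface concentration [kg/m^3]

flux_dry = v_d * c_surface                   # instantaneous flux [kg m^-2 s^-1]
dt = 3600.0                                  # accumulation step [s]
deposited = flux_dry * dt                    # mass deposited per m^2 in one hour
print(deposited)
```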
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2020-06-11T09:02:19.743Z | 2020-06-07T00:00:00.000 | {
"year": 2020,
"sha1": "2fc6d3f4776a2c84380e62d3221a716d11b302cc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.dib.2020.105839",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5fb94aba35da9c231c62855b4086fb0b07639ff5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Environmental Science"
]
} |
239334762 | pes2o/s2orc | v3-fos-license | Investigation of the effect of raster angle, build orientation, and infill density on the elastic response of 3D printed parts using finite element microstructural modeling and homogenization techniques
Although the literature abounds with experimental methods to characterize the mechanical behavior of parts made by fused filament fabrication 3D printing, less attention has been paid to using computational models to predict the mechanical properties of these parts. In the present paper, a numerical homogenization technique is developed to predict the effect of printing process parameters on the elastic response of 3D printed parts with cellular lattice structures. The development of finite element computational models of printed parts is based on a multi-scale approach. Initially, at the micro-scale level, the analysis of micro-mechanical models of a representative volume element is used to calculate the effective orthotropic properties. The finite element models include different infill densities and building/raster orientations, maintaining the bonded region between adjacent fibers and layers. The elastic constants obtained by this method are then used as input for the creation of macro-scale finite element models, enabling simulation of the mechanical response of printed samples subjected to bending, shear, and tensile loads. Finally, the results obtained by the homogenization technique are validated against more realistic finite element explicit microstructural models and experimental measurements. The results show that, provided an accurate characterization of the properties fed into the macro-scale model, the homogenization technique is a reliable tool to predict the elastic response of 3D printed parts. The outlined approach provides faster iterative design of 3D printed parts, contributing to reducing the number of experimental replicates and fabrication costs.
Introduction
Due to recent advances in additive manufacturing (AM) technology, lightweight parts with complex geometries have seen increased popularity in sectors such as the automotive, aerospace, military, marine, biomedical, and electronics industries. Among these lightweight parts, 3D printed structures with cellular lattices are of special importance, as desirable mechanical properties can be obtained by designing appropriate microstructures. In fused filament fabrication (FFF), a very popular AM technique for rapid prototyping, lightweight components with cellular lattice structures can be printed through the extrusion of filament material in a layer-by-layer deposition process. Although these methods can fabricate lattice structures with high repeatability, it is important to characterize their mechanical properties in order to ensure that in-service performance requirements are met. The layer-by-layer deposition in FFF creates material microstructures different from those of traditional manufacturing methods. FFF process parameters such as build and raster orientations, layer height, filament width, and infill patterns and densities can significantly affect the mechanical properties of printed parts. Although experimental analyses using design of experiments have been developed in the literature to investigate the effect of processing parameters on the mechanical behavior of 3D printed parts with internal lattice structures [1,2], the materials' uncertainty and variability, as well as the time and cost of the experimental procedure, are challenging issues.
Recently, new approaches have been introduced in the literature to achieve maximum functional performance in additive manufacturing [3], where "replicative" structures of different sizes and orders of magnitude are used to manufacture parts with minimum weight while achieving the required mechanical properties. The effect of feed rate using an algorithm for homogeneous material deposition has also been investigated and, as a result, the importance of process control in the direct manufacturing of components was highlighted [4]. Although the above studies have been successful in producing parts with complex geometries whose total weight is minimized while their mechanical strength is optimized, the use of topology and lattice optimization employing numerical methods (e.g., finite element modeling) is currently less well-developed and is a high priority for research. This is mainly due to the use of additive manufacturing (3D printing) in a wider range of industries to achieve cost reductions through material savings. Therefore, analytical and numerical methods with the ability to predict the mechanical properties of 3D printed structures are required, resulting in a significant reduction in the number of experimental procedures, associated costs, and time to market.
In terms of modeling with either analytical or semi-analytical methods, approaches based on classical laminate theory (CLT) have been developed to characterize the behavior of printed structures. The use of CLT combined with experimental characterization to study the mechanical properties of FFF-based 3D printed parts has resulted in successful prediction of the elastic constants [5-8]; however, the approach is limited to parts fabricated with 100% infill density, meaning that no separation between the deposited filaments is assumed (i.e., the mechanical response of structures with partial infill cannot be estimated based on CLT approaches). In addition, in the aforementioned works, the effect of build orientation on the mechanical behavior of printed parts, as well as the effect of internal features of meso-structures, are not taken into consideration. To address this, micromechanics-based approaches focusing on the analysis of a repeating unit cell have been developed [9-12], resulting in the derivation of analytical expressions for the structure-property relationships.
Due to the complex microstructures and inherent anisotropic mechanical behavior of parts obtained by FFF, and also because of the many process parameters involved, computational simulation using finite element analysis (FEA) has been found useful for estimating structural performance. For FFF-based 3D printed parts with cellular lattice structures, FE models have been used to study the effect of different infill patterns [13] and infill densities [14] on the mechanical behavior of parts, to interpret anisotropic damage occurring during severe compression loading [15], to predict the anisotropy induced by 3D printing [16], and to evaluate the effect of microstructural defects by analyzing the stress/strain fields for different build orientations [17]. Among the different FE-based approaches, the use of space frame and shell models to predict the linear elastic behavior of printed parts has received particular attention in the literature. For example, it has been shown that a beam-based FE model can predict the elastic modulus with good accuracy [18]. Using the same approach [19], it was found that the FE-computed mechanical properties of cellular lattice structures (with the layers of filaments laid up alternately at ±45°) are in good agreement with tensile, compressive, and shear tests of 3D printed specimens. In another work [20], a frame FE model was used to analyze the effect of the infill design of printed parts, and it was found that for the optimized part the FE-calculated structural response was in good agreement with experiment. In addition, it has been shown that frame-based FE modeling is not limited to FFF-based 3D printed structures but is also applicable in other AM processes to estimate mechanical properties [21,22]. The main issue with space frame and shell FE models is efficiency: as the number of elements increases, the analysis can become computationally very expensive. To address this, a homogenization-based approach has been developed. Analytical and numerical methods of homogenization to predict the mechanical response of 3D printed and composite structures have been investigated thoroughly in the literature [23-29], and the results show that a representative volume-based FE model is a good option for modeling such parts with regular repeating cellular lattice structures; however, attention must be paid to the effect of boundary conditions and border effects. In fact, in the FEA of 3D printed cellular structures using virtual experiments, in order to exploit the advantage of homogenization procedures and thereby avoid computationally expensive explicit microstructural modeling, the size of the representative volume element (RVE) should be large enough that the effective properties do not depend on the boundary conditions and border effects. The chosen RVE, however, should not be so large as to make the computational modeling too expensive [30]. A reference for the size of RVEs in virtual experiments and homogenization procedures of 3D printed parts is the micromechanical analysis of stochastically distributed short-fiber-reinforced polymer composites, where a cube with a side length 50 times the size of an individual fiber is considered an adequately large RVE [31].
In the FE homogenization technique, the prediction of the effective macro-scale material properties is based on the constituents' properties (i.e., the virgin materials used for printing) and the geometrical features of the microstructure. In this technique, the printed part is considered a continuum, and a small volume element (unit cell) which periodically fills the 3D printed part is considered for numerical homogenization. This periodic unit cell is known as the RVE. Usually, a two-step homogenization approach is used for the analysis of 3D printed structures [28]: the estimated effective engineering constants are used for the subsequent mechanical simulation of the global elastic response. Experimental characterization via tensile testing has been carried out to obtain the orthotropic elastic constants of FFF printed samples [32]; the properties obtained from experiments were used as input to FE models to estimate the structural response of elements, and good agreement between FEA predictions and experimental data was obtained. In other works [33,34], the authors developed FE models of the RVE subjected to tensile and shear loading, and the homogenized engineering constants obtained from the analysis of the RVE were then used for the FE analysis of structures with more complex designs.
Although the homogenization procedure can make accurate predictions of the macro-scale properties of 3D printed parts from the micro-scale properties of their constituents, the technique is limited in accounting for important features of micromechanics, such as stress localization, which is important for predicting local failure mechanisms. To address this, FE explicit microstructural models formed by extruded filaments have been used in some studies [12,35,36]. The CAD models built by this type of geometry modeling are closer to the real microstructure of 3D printed parts; however, the method is computationally expensive due to the increased number of elements required for meshing.
In the present study, the FE homogenization approach is applied to generate homogenized mechanical properties for the internal cellular lattice structures of FFF-based 3D printed parts. The RVE of the lattice structure was analyzed by the FE method to determine the bulk properties of 3D printed parts. The obtained properties were then used for the subsequent mechanical simulation of printed bending, shear, and tensile testing samples, where the effect of different processing parameters was also investigated. Although previous studies have used this technique to predict the elastic response of 3D printed parts, the present study focuses on the experimental validation of the FE results (both homogenization and explicit microstructural modeling methods). In addition, capturing the effect of raster angle, build orientation, and infill density using the FE methodologies of the present work is a previously unexplored research area. In the present study, the use of a micromechanics plugin in the FE software ANSYS (Material Designer) integrated with the ANSYS Composite Pre-Processor (ACP) allowed the definition of different layer thicknesses as well as build/raster orientations; therefore, the effect of these parameters was considered in the simulation. Compared to the lattice FE model, the homogenized continuum FE model uses a much lower number of elements. While reducing the FEA time, the homogenization-based approach can effectively estimate the elastic behavior of 3D printed parts. This would enable engineers and manufacturers in many sectors (e.g., the automotive, aerospace, and biomedical (implant) industries) to use a mathematical methodology (such as topology and lattice optimization tools) to optimize material layout within a given design space, for a given set of masses and loads, materials, boundary conditions, and constraints, with the objective of maximizing the performance (quasi-static and dynamic mechanical behavior) of the system. This will help designers to conduct iterative analysis and select process parameter settings to optimize the shape and the density of infill for FFF-based 3D printed parts.
Sample preparation and mechanical testing
In this study, in order to validate the FE simulation results of 3D printed tensile, shear, and three-point bending (3PB) testing, specimens were produced using a fused filament fabrication (FFF) 3D printer (Ultimaker 3), and mechanical testing in conjunction with digital image correlation (DIC), detailed in [37], was carried out to obtain the full-field strain maps and the stress-strain curves. A polylactic acid (PLA) filament provided by Ultimaker (standard silver metallic PLA, 2.85 mm/750 g) was used to produce the 3D printed specimens. Ultimaker Cura 4.8 was used to generate the machine code for the FFF 3D printer from the 3D model files. Simple 3D printed test sample designs based on ASTM standards were used in all cases. The geometry and dimensions of the tensile, Iosipescu, 3PB, and inter-laminar shear (ILS) test specimens are in accordance with the ASTM D638, ASTM D5379, ASTM D7264, and ASTM D2344 standards, respectively [38-41]. The 3D printing process parameters used to produce the test specimens are provided in Table 1. In order to examine the effect of raster and build orientation, 3PB, tensile, and Iosipescu shear test specimens were printed with four different build orientations (on edge 0°, on edge 45°, on edge 90°, and flat) and three different raster angles (0°, 45°, and 90°), all using parallel deposited filaments (Figure 1). Conducting tensile and Iosipescu shear testing on the printed specimens enables the calculation of all engineering constants of the RVE (detailed in Sections 3.1 and 3.5) defined for 3D printed parts when parallel filaments are used, except for the inter-laminar shear modulus (G23), which still needs to be determined; therefore, a short-beam shear test specimen with a 90° raster angle was also printed and tested. In order to examine the effect of infill density, 3PB and tensile test specimens with two infill densities of 50% and 100%, using the partial infill pattern of rectilinear design where the filaments are oriented at (0°/90°), were also printed. A summary of the printing patterns and orientations for the different types of test specimen is given in Table 2. For the on-edge samples at 45° and 90°, a support structure using polyvinyl alcohol (PVA) provided by Ultimaker (PVA Natural, Standard PVA, 2.85 mm/750 g) was used to ensure that the geometry was maintained. To remove the PVA support structures from the vertically 3D printed samples, cold water immersion was used. The 3D printed samples were then dried using hot air at 60 °C for a few seconds and allowed to cool to ambient temperature before mechanical testing. Following the recommendations of the ASTM standards mentioned above, five specimens were tested for each case in Table 2. In terms of failure location, and depending on the failure modes (i.e., inter-layer and intra-layer fracture), most specimens in each case failed within the gauge length; occasionally, however, some samples failed outside the gauge length. In these cases, the test specimens were 3D printed anew, and the mechanical tests were repeated until a successful result was produced.
FE microstructural model of bending, shear, and tensile test samples
In the present study, FE explicit microstructural simulation is carried out for FFF-based 3D printed specimens using the FE package ANSYS. The isotropic properties of PLA, i.e., E = 3500 MPa and ν = 0.35, determined using Bollard-style tensile grips [42,43], were used as input for the FE models. Given the internal microstructure and infill patterns, models of the 3PB, Iosipescu, short-beam shear, and tensile specimens were created in the Design Modeler tool of ANSYS. The specimens were modeled with the different build orientations and raster angles described earlier in Section 2.1, all using parallel fibers. Details of the build orientation and raster angle arrangements are shown schematically in Figure 2. Two infill densities of 50% and 100% for the partial infill pattern of rectilinear design, where the filaments are oriented at (0°/90°), were also analyzed by FE (Figure 3). To replicate the bonding between filaments and layers due to compression by the nozzle ("squish"), instead of using the circular cross-section of the filaments, they are approximated by a rounded rectangular cross-section with a small amount of overlap between adjacent fibers. This overlap arises from the diffusion of the two raster layers at the interface during solidification. The shape of individual filaments and the overlap region observed under the microscope can be clearly seen in Figure 4. Using a calibrated light microscope, the height and width of the filament (h and w) were measured as 0.2 mm and 0.4 mm. These measurements were used to generate a more realistic geometry model for the FE analysis of the microstructure in the FFF test specimens. To construct the full model of all mechanical test specimens with the infill structures, first the cross-section of the filament is created using the dimensions obtained from the microscopic analysis (Figure 4), then the patterns shown schematically in Figures 2 and 3 are generated to prepare a rectangular model. Finally, the model is trimmed to the exact dimensions of the bending, shear, and tensile test specimens.
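To make the geometry-generation step concrete, the following minimal sketch parameterizes the filament centerlines of an alternating (0°/90°) rectilinear lay-up from the measured filament width and height. The overlap value and the helper names are illustrative assumptions rather than the actual CAD script used in this work.

```python
from dataclasses import dataclass

@dataclass
class Filament:
    x0: float          # centerline start x [mm]
    y0: float          # centerline start y [mm]
    z: float           # layer position [mm]
    angle_deg: float   # raster angle of the layer

def rectilinear_layout(part_x, part_y, n_layers, w=0.4, h=0.2,
                       infill=1.0, overlap=0.02):
    """Centerlines for alternating 0°/90° layers of a rectilinear infill."""
    pitch = (w - overlap) / infill        # line spacing grows as infill drops
    filaments = []
    for k in range(n_layers):
        z = k * (h - overlap)             # layers fused with a small 'squish'
        if k % 2 == 0:                    # 0° layer: lines along x, spaced in y
            n = int(part_y / pitch) + 1
            filaments += [Filament(0.0, i * pitch, z, 0.0) for i in range(n)]
        else:                             # 90° layer: lines along y, spaced in x
            n = int(part_x / pitch) + 1
            filaments += [Filament(i * pitch, 0.0, z, 90.0) for i in range(n)]
    return filaments

# 50% infill doubles the line spacing relative to a fully dense part
print(len(rectilinear_layout(10.0, 10.0, n_layers=4, infill=0.5)))
```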
Macro scale FE modeling of 3D printed test samples based on homogenization approach
The macro-scale FE model characterizes the design of the bending, shear, and tensile test specimens using the orthotropic properties of the RVE for the different printing process parameters. The FE model incorporates the boundary conditions together with the internal lay-up of the RVE. In the first stage of FE modeling of the bending, tensile, and shear tests, the Design Modeler tool is used to create a shell model of the test specimen. The model integrates the geometry of the test specimens according to the standard methods described earlier. The Surface function is used to generate a thin surface, which is then transferred into the ANSYS Composite Pre-Processor (ACP), where the effective engineering constants of the RVE, the stacking sequences (i.e., infill patterns, build orientation, and raster angle), and the thicknesses are all defined. Figure 5 shows the FE mesh and the boundary conditions imposed on the FE models of the tensile, shear, and bending test coupons. In the FE model of the 3PB test, the contact between the support/loading rollers and the sample was considered frictional (friction coefficient of 0.2). A mesh sensitivity study was also conducted, and the convergence criterion (i.e., stabilization of stress) is met at the mesh density used. To provide input data (i.e., the orthotropic engineering constants of the RVE) for the FE models of the 3D printed samples in bending, tension, and shear, FE analysis of the RVE using the homogenization method was first conducted. The RVEs of 3D printed specimens with the infill pattern of parallel filaments, as well as rectilinear filaments (0°/90°) with two infill densities of 50% and 100%, are shown in Figure 6. These are taken from the microstructure of the 3D printed parts as seen in Figures 2 and 3. In this work, four-node tetrahedral elements in ANSYS were used to mesh the micro-models of the RVE, and homogenization was then performed using the micromechanics plugin in ANSYS (Material Designer). To avoid mesh dependency in the RVE, smaller elements were used. The micro-models of the RVE shown in Figure 6 are subjected to six different strains (Figure 7), applied individually using periodic boundary conditions (detailed in the following sections). In this way, the effective orthotropic engineering constants of the RVE, which are subsequently used as input data for the FE simulation of the bending, tensile, and shear tests, are obtained. It must be noted that in this study, in both the FE homogenization and explicit microstructural modeling techniques, only the elastic response of the 3D printed mechanical test specimens is simulated; the viscoelastic and plastic behavior of the PLA material was not taken into account in the constitutive material model.
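As a complement to the layup definition in ACP, the short sketch below illustrates the standard classical-laminate-theory rotation that maps the in-plane reduced stiffness of the RVE material from the filament axes to a given raster angle. The elastic constants are placeholder values standing in for the homogenized results, not the actual Table 3 data.

```python
import numpy as np

# Standard CLT ply rotation: transform the in-plane reduced stiffness Q from
# the filament (material) axes to specimen axes rotated by the raster angle.
# The constants below are placeholders, not the homogenized Table 3 values.

E1, E2, nu12, G12 = 3300.0, 2700.0, 0.35, 1000.0    # MPa (assumed)
nu21 = nu12 * E2 / E1
d = 1.0 - nu12 * nu21
Q = np.array([[E1 / d,        nu12 * E2 / d, 0.0],
              [nu12 * E2 / d, E2 / d,        0.0],
              [0.0,           0.0,           G12]])

def rotated_Q(Q, theta_deg):
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    T = np.array([[c * c,  s * s,  2 * c * s],
                  [s * s,  c * c, -2 * c * s],
                  [-c * s, c * s,  c * c - s * s]])  # stress transformation
    R = np.diag([1.0, 1.0, 2.0])                     # Reuter matrix
    return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

for ang in (0.0, 45.0, 90.0):
    print(f"raster {ang:4.0f} deg: Qxx = {rotated_Q(Q, ang)[0, 0]:8.1f} MPa")
```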
Constitutive material behavior of 3D printed specimens
To account for the material behavior in the FE stress analysis, the constitutive behavior of horizontally and vertically printed bending, tensile, and shear samples is evaluated in this study. The nine elastic constants in the orthotropic constitutive equations are: three Young's moduli (E_x, E_y, and E_z), three Poisson's ratios (ν_xy, ν_yz, and ν_xz), and three shear moduli (G_xy, G_yz, and G_xz). The stress-strain relation for an orthotropic material is defined as

\[ \boldsymbol{\varepsilon} = \mathbf{S}\,\boldsymbol{\sigma} \tag{1} \]

where \(\mathbf{S}\) is the compliance matrix

\[
\mathbf{S} = \begin{bmatrix}
1/E_x & -\nu_{yx}/E_y & -\nu_{zx}/E_z & 0 & 0 & 0\\
-\nu_{xy}/E_x & 1/E_y & -\nu_{zy}/E_z & 0 & 0 & 0\\
-\nu_{xz}/E_x & -\nu_{yz}/E_y & 1/E_z & 0 & 0 & 0\\
0 & 0 & 0 & 1/G_{yz} & 0 & 0\\
0 & 0 & 0 & 0 & 1/G_{xz} & 0\\
0 & 0 & 0 & 0 & 0 & 1/G_{xy}
\end{bmatrix} \tag{2}
\]

Therefore, to consider the material behavior in the FE stress analysis of the 3D printed specimens, the coefficients of the constitutive matrix (stiffness values) need to be determined. This is done in this study using the numerical homogenization technique.
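A minimal numerical sketch of Eqs. (1)-(2) is given below: the compliance matrix is assembled from the nine engineering constants and inverted to obtain the stiffness matrix used by the solver. The constants shown are placeholder values for illustration, not the homogenized properties reported later.

```python
import numpy as np

# Assemble the orthotropic compliance matrix S of Eq. (2) and invert it to
# the stiffness matrix C (sigma = C * eps). Placeholder constants in MPa;
# symmetry is imposed via the reciprocity relations nu_ji/E_j = nu_ij/E_i.

Ex, Ey, Ez = 3300.0, 2800.0, 2800.0
nuxy, nuyz, nuxz = 0.35, 0.37, 0.35
Gxy, Gyz, Gxz = 1000.0, 900.0, 950.0

S = np.array([
    [1 / Ex,     -nuxy / Ex, -nuxz / Ex, 0,       0,       0],
    [-nuxy / Ex,  1 / Ey,    -nuyz / Ey, 0,       0,       0],
    [-nuxz / Ex, -nuyz / Ey,  1 / Ez,    0,       0,       0],
    [0,           0,          0,         1 / Gyz, 0,       0],
    [0,           0,          0,         0,       1 / Gxz, 0],
    [0,           0,          0,         0,       0,       1 / Gxy],
])
C = np.linalg.inv(S)
print(np.allclose(C, C.T))   # the stiffness matrix must be symmetric
```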
Homogenization
The prediction of the constitutive matrix (effective orthotropic properties) of a material from the properties of its constituents and the geometrical aspects of its microstructure is called homogenization. For 3D printed specimens, these properties are calculated from the properties of the raw PLA used for printing, for which two constants are required: Young's modulus and Poisson's ratio. A small volume of material with repeating unit cells, called an RVE (Figure 6), is considered for the numerical homogenization analysis. In the homogenization technique, under the assumption that the strain energy stored in the heterogeneous volume of the RVE (V_RVE) is the same as that of a homogeneous RVE, the effective properties of heterogeneous materials can be obtained. The strain energy E stored in the heterogeneous RVE of volume V_RVE is

\[ E = \frac{1}{2}\int_{V_{RVE}} \sigma_{ij}\,\varepsilon_{ij}\, dV \tag{3} \]

Also, the strain energy of an equivalent homogeneous RVE is defined as

\[ E = \frac{1}{2}\,\bar{\sigma}_{ij}\,\bar{\varepsilon}_{ij}\, V_{RVE} \tag{4} \]

where \(\bar{\sigma}_{ij}\) and \(\bar{\varepsilon}_{ij}\) are calculated by averaging the local stress and strain fields over the RVE's volume:

\[ \bar{\sigma}_{ij} = \frac{1}{V_{RVE}}\int_{V_{RVE}} \sigma_{ij}\, dV, \qquad \bar{\varepsilon}_{ij} = \frac{1}{V_{RVE}}\int_{V_{RVE}} \varepsilon_{ij}\, dV \tag{5} \]

By defining periodic boundary conditions on the RVE and substituting Eq. (1) into Eq. (4), the components of the compliance matrix (orthotropic elastic constants) in Eq. (2) can be calculated. This is done by loading the micro-model of the RVE in accordance with boundary conditions representing the uniaxial strain (a, b, and c) and shear strain (d, e, and f) states of the RVE positioned at the origin of the coordinate system (Figure 7). For each state of strain in Figure 7, the volume-averaged stress, strain, and total strain energy are obtained from the FE results to construct the numerical prediction of the orthotropic elastic properties. More details of the expressions used to calculate the elements of the compliance matrix are available in the literature [33,44].
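The averaging of Eq. (5) amounts to a volume-weighted sum over element results; the sketch below illustrates this bookkeeping with mock element data standing in for quantities exported from the RVE solution.

```python
import numpy as np

# Volume-averaging step of Eq. (5): weight each element's stress/strain by
# its volume. The arrays are mock data, not actual RVE results.

rng = np.random.default_rng(0)
n_elem = 1000
vol = rng.uniform(0.8, 1.2, n_elem)                 # element volumes
sigma = rng.normal(size=(n_elem, 6))                # element stress (Voigt)
eps = rng.normal(size=(n_elem, 6))                  # element strain (Voigt)

V = vol.sum()
sigma_bar = (vol[:, None] * sigma).sum(axis=0) / V  # volume-averaged stress
eps_bar = (vol[:, None] * eps).sum(axis=0) / V      # volume-averaged strain

# Energy bookkeeping of Eq. (3) vs Eq. (4); the two agree only for the true
# homogeneous equivalent, not for arbitrary mock fields as used here.
E_micro = 0.5 * (vol[:, None] * sigma * eps).sum()
E_macro = 0.5 * float(sigma_bar @ eps_bar) * V
print(E_micro, E_macro)
```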
Generating periodic boundary conditions
In the numerical homogenization method, uniform strains are applied to the RVE model to compute the effective elastic properties. By applying these strains in independent sets, specific elastic properties are calculated for each set. The RVE is part of a periodic material; therefore, before and after imposing the strains, the periodicity of the RVE with the surrounding material needs to be represented in the FE software. This is achieved by imposing node-to-node periodic boundary conditions on the deformed boundary surfaces of the RVE. In FE software, this is done either by coupling the degrees of freedom (DoF) of corresponding nodes in the corresponding directions or by using constraint equations to define the specific relationship between corresponding nodes on the boundary. Given the definition of the top/bottom, left/right, and back/front surfaces, corners, and edges of the RVE, sets of equations such as Eq. (6) and Eq. (7) between pairs of opposite nodes of the RVE are defined. The common nodes on the edges and corners of the RVE were defined only once.
For the measurement of the elastic modulus E_x:

X at front nodes − X at back nodes = Δ    (6)

X at top, left nodes − X at bottom, right nodes = 0
Y at top, front, left nodes − Y at back, bottom, right nodes = 0    (7)
Z at front, top, left nodes − Z at back, bottom, right nodes = 0

where X, Y, and Z are the components of displacement along the X, Y, and Z axes, and Δ is the applied displacement.
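The sketch below shows one way such node-to-node constraints can be generated: nodes on opposite RVE faces are paired by their in-plane coordinates, and each pair contributes one constraint equation per displacement component. The coordinate array and helper names are mock placeholders, not the actual mesh or the ANSYS implementation.

```python
import numpy as np

# Build node pairs for constraints like Eqs. (6)-(7): match each node on the
# face at coordinate 0 with its counterpart on the face at `length`.

def pair_opposite_faces(coords, axis, length, tol=1e-6):
    """Pair node ids on the face at 0 with node ids on the face at length."""
    lo = np.where(np.abs(coords[:, axis]) < tol)[0]
    hi = np.where(np.abs(coords[:, axis] - length) < tol)[0]
    inplane = [a for a in range(3) if a != axis]
    pairs = []
    for i in lo:
        # nearest in-plane match; exact for a periodic (mirrored) mesh
        d = np.linalg.norm(coords[hi][:, inplane] - coords[i, inplane], axis=1)
        pairs.append((int(i), int(hi[np.argmin(d)])))
    return pairs

rng = np.random.default_rng(1)
coords = rng.random((50, 3))
coords[:5, 0] = 0.0          # pretend these nodes lie on the back face
coords[5:10, 0] = 1.0        # ...and these on the front face

# Eq. (6) for the Ex load case: u_x(front) - u_x(back) = DELTA
for back, front in pair_opposite_faces(coords, axis=0, length=1.0):
    print(f"u_x[{front}] - u_x[{back}] = DELTA")
```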
Calculating Young's modulus, Poisson's ratio, and shear modulus
By applying a displacement on a surface of the RVE, boundary nodal forces are created at the affected boundary surfaces. Dividing the sum of the boundary nodal forces at the affected boundary nodes (the reaction force, denoted F in Eq. (8) and Eq. (9)) by the area of the affected surface yields the stress value corresponding to the applied strain (the applied displacement divided by the length of the RVE); Young's modulus and the shear modulus are therefore calculated as shown in Eq. (8), Eq. (9), and Fig. 8. Correspondingly, by calculating the transverse strain and dividing it by the applied strain, Poisson's ratio is also estimated:

\[ E_x = \frac{F^{*}/A}{\Delta/L} \tag{8} \]

where \(F^{*}\) is the sum of the front-surface nodal forces along the x axis, A is the area of the loaded surface, and L is the RVE edge length, and

\[ G_{xy} = \frac{F^{**}/A}{\Delta/L} \tag{9} \]

where \(F^{**}\) is the sum of the top-surface nodal forces along the x axis.
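The post-processing of Eqs. (8)-(9) reduces to a few arithmetic steps on the summed reaction forces; the sketch below illustrates this, with all quantities as mock placeholders for values read from the FE solution.

```python
# Post-processing sketch for Eqs. (8)-(9): moduli from summed reactions on
# the loaded face. All numbers are mock placeholders, not actual results.

L = 1.0                  # RVE edge length [mm]
A = L * L                # loaded face area [mm^2]
delta = 1e-3             # applied displacement [mm]
eps_applied = delta / L  # applied normal (or engineering shear) strain

F_front_x = 3.1          # sum of front-face nodal reactions along x [N] (mock)
F_top_x = 0.95           # sum of top-face nodal reactions along x [N] (mock)

Ex = (F_front_x / A) / eps_applied      # Eq. (8) [MPa]
Gxy = (F_top_x / A) / eps_applied       # Eq. (9) [MPa]

eps_transverse = -2.8e-4                # volume-averaged lateral strain (mock)
nu_xy = -eps_transverse / eps_applied   # Poisson's ratio from contraction

print(f"Ex = {Ex:.0f} MPa, Gxy = {Gxy:.0f} MPa, nu_xy = {nu_xy:.3f}")
```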
Results and discussion
In this study, horizontally and vertically 3D printed structures are considered for material modeling, and their orthotropic engineering constants are computed using the homogenization technique. The effect of the build and raster orientations of the respective vertical and horizontal structures on the bending, tensile, and Iosipescu shear properties is discussed. In addition, the effect of infill density with the rectilinear infill pattern on the tensile and bending properties is discussed. This is done by calculating the effective orthotropic properties of the RVEs and then using these properties as input for the FE homogenization simulation of the bending, tensile, and shear tests. The results of the FE homogenization are finally compared with the FE explicit microstructural simulations and experiment.
RVE with parallel filaments
The FE model of an RVE (with parallel filaments) using periodic meshing is shown in Figure 9; the FE simulation for homogenization of the material is then conducted, and the resulting orthotropic elastic constants are listed in Table 3.
RVE with rectilinear pattern
This section presents the results of numerical homogenization of the 3D printed structures with rectilinear infill patterns (the stacking sequence of the layers, with the raster angle defined in the horizontal part, is (0°/90°)). Two different infill densities of 50% and 100% are investigated. The FE models of the RVEs using periodic meshing are shown in Figure 10; the FE simulation for homogenization of the material is then conducted, and the elements of the elastic moduli for the orthotropic material are calculated (Table 4).
Numerical versus experimental 3PB and shear testing
Using the orthotropic engineering constants of the RVE (Table 3 and Table 4) as input data for the FE models of the test samples, as described in Section 2.3, the 3D printed bending, tensile, and shear tests were simulated. Representative DIC- and FE-calculated bending and shear strain fields are depicted in Figure 11, indicating good agreement between the experimentally and numerically calculated strain distributions. However, the effect of raster angle, build orientation, and infill density on stress localization cannot be studied with FE homogenization: although changing the raster angle, build orientation, and infill density results in different strain values, the strain distribution maps in the FE homogenization technique remain unchanged. Figures 12 and 13 show the effect of raster angle, build orientation, and infill density on the experimentally and numerically generated bending and shear properties. As can be seen from these figures, when the build orientation and raster angle increase from 0° to 90°, the experimentally calculated flexural modulus decreases by 31.2% and 32.3%, while the FE-calculated flexural modulus decreases by 25.4% and 25.7%, respectively. In addition, when the build orientation and raster angle increase from 0° to 45°, the experimentally calculated in-plane shear modulus increases by 25.4% and 23.6%, while the FE-calculated in-plane shear modulus increases by 19.8% and 18.7%. This shows that the FE model can predict reasonably well the effect of build/raster angle on the bending and shear moduli of 3D printed parts when parallel filaments are used.
The differences between the FE and experimental flexural and shear moduli when changing the build/raster orientation and infill density are shown in Figure 14a and b. As can be seen from this figure, the difference between the experimental and numerical flexural modulus is less than 10% when the build and raster orientation are 0°; however, when the orientation increases to 45° and 90°, the difference increases to about 11% and 14%, respectively. Conversely, the difference between the experimental and numerical shear modulus decreases from 11% to 7% when the raster angle/build orientation increases from 0° to 45°. The reason for the variation between the numerically and experimentally calculated elastic moduli is that the FE analysis assumes perfect bonding between adjacent filaments, which is not the case for real 3D printed parts. In Section 3.3.1, it is shown by FE explicit microstructural modeling that, in off-axis mechanical testing (bending and shear), the bonding between filaments is a controlling factor in the mechanical properties. As the bonds in the FE model are assumed to be perfect, the difference between FE and experiment becomes more pronounced when the build/raster orientation changes.

Table 4. Elastic moduli of the RVEs for two infill densities of 50% and 100% (rectilinear infill pattern with a stacking sequence of (0°/90°)).

While the difference between the FE- and experimentally determined bending and shear moduli for 3D printed samples with 100% density is around 9% (Figure 14c), the difference becomes significant when the infill density is 50%. Figure 14c shows that the FE model even underestimates the bending and shear moduli when the infill density is 50%. In addition, when the infill density increases from 50% to 100%, the experimentally determined flexural modulus and shear modulus increase by factors of 2.65 and 61.6, while the FE-calculated flexural and shear moduli increase by factors of 4.4 and 96.6, indicating that infill density has a significant impact on the mechanical properties of 3D printed parts. The main reason for the difference between the FE and experimental results when using partial infill patterns is the gap between filaments in the developed FE model of the RVE in Figure 10a, a gap which does not exist in the 3D printed parts. Nevertheless, apart from the partial infill patterns (i.e., the infill density of 50%), the present FE analysis is an alternative to experiments and can provide accurate results compared with the experimental work.
FE microstructural analysis of 3PB and shear testing
As discussed earlier, the effect of raster angle, build orientation, and infill density on the stress localization in the FE homogenization-calculated strain fields cannot be studied. In order to demonstrate the effect of these parameters, to predict the local failure mechanisms, and to investigate important features of micromechanics such as the effect of raster angle and build orientation on stress localization, FE microstructural simulation of the bending and shear tests was conducted in this work. The stress contours of the bending and shear test specimens and the effect of build orientation and raster angle are shown in Figures 15-18. In addition, FE microstructural simulation of the bending test for the 3D printed samples with two infill densities of 50% and 100% was carried out (Figure 19). As can be seen in all these figures, the maximum stress in all cases occurs at the interface of the PLA filaments. Therefore, the weakest section in the microstructure of 3D printed parts is the interface between the deposited filaments or layers, and this is more susceptible to crack initiation during deformation. As a result, de-bonding between PLA filaments can occur under bending and shear loads, finally leading to the failure of the FFF-based 3D printed parts. In addition, Figures 15-19 show the corresponding load-deflection and stress-strain behavior (and therefore the bending and shear moduli in the elastic regimes).

Figure 12. Effect of (a, b) raster angle; (c, d) build orientation; and (e, f) infill density on numerically calculated 3PB load-deflection (elastic regime) and experimentally generated 3PB load-deflection plots (representative 3PB load-deflection plots).
Figure 13. Effect of (a, b) raster angle; (c, d) build orientation; and (e, f) infill density on numerically calculated shear stress-strain (elastic regime) and experimentally generated shear stress-strain plots (representative shear stress-strain plots).
Figure 14. Difference between FE- and experimentally determined flexural modulus and shear modulus. (a) Effect of build/raster orientation on flexural modulus, (b) effect of build/raster orientation on shear modulus, and (c) effect of infill density on flexural and shear modulus.
Figure 15. FE-calculated stress fields (longitudinal stress) during 3PB test simulation on horizontally 3D printed samples where the raster angle is (a) 0°, (b) 45°, and (c) 90°. (d) Effect of raster angle on FE-calculated flexural stress-strain plots.
Figure 16. FE-calculated stress fields (longitudinal stress) during 3PB test simulation on vertically 3D printed samples where the build orientation is (a) 0°, (b) 45°, and (c) 90°. (d) Effect of build orientation on FE-calculated flexural stress-strain plots.
Figure 17. Effect of build orientation on FE-calculated shear stress fields: (a) 0°, (b) 45°, (c) 90°. (d) Effect of build orientation on FE-calculated shear stress-strain plots.
Figure 18. Effect of raster angle on FE-calculated shear stress fields: (a) 0°, (b) 45°, (c) 90°. (d) Effect of raster angle on FE-calculated shear stress-strain plots.
Figure 19. FE-calculated stress fields (longitudinal stress) during 3PB test simulation on 3D printed samples with the rectilinear infill pattern and infill densities of (a) 100% and (b) 50%. (c) Effect of infill density on FE-calculated flexural stress-strain plots.
Comparing the bending and shear moduli obtained by FE microstructural simulation with those obtained by FE homogenization (detailed in Figures 12 and 13) shows that the two FE methods agree well with each other.
Numerical versus experimental tensile testing
Unlike the bending or shear tests, the effect of raster and build orientation on the experimentally generated localized tensile strain fields can be observed and analyzed. In previous work [37], the effect of build orientation on DIC-generated tensile strain fields was investigated. In the present study, the DIC-computed tensile strain fields obtained as the raster angle changes from 0° to 90° are shown in Figure 20. The highest localized strains in this figure indicate the effect of defects produced during the printing process. When the raster angle is 45° or 90°, the highest localized strain occurs at the interface between filaments, which are oriented in the 45° and 90° planes. This is verified by the results of the FE explicit microstructural analysis in Section 3.4.1 (Figure 24), where the maximum stress/strain occurs at the interface of the PLA filaments, indicating that the interface is more susceptible to crack initiation during deformation; de-bonding between PLA filaments under tensile loads can therefore be predicted.
Comparison of the fracture surfaces shows that the failure mode changes as a function of raster angle: failure changes from ductile to brittle as the orientation goes from 0° to 90°. This transition in behavior from ductile to brittle fracture is mainly due to the layer deposition direction. At a 0° raster angle, the layer deposition direction is parallel to the specimen axis and the load is applied parallel to the layers; therefore, ductile fracture is observed, with significant plastic deformation. As the raster angle increases, the specimens display an intermediate brittle-ductile fracture behavior. Noticeably, when the raster angle increases (≥45°), the specimens show a transition to brittle failure with little plastic deformation. The 90° raster angle specimens fail by brittle fracture due to inter-layer fusion bond failure, as the load is applied perpendicular to the layers; the stress-strain curve exhibits a linear trend followed by sudden failure.
Using the orthotropic engineering constants of the RVE (Table 3 and Table 4) as input data for the FE model of the samples, the tensile test is simulated. Representative FE (homogenization)-calculated tensile strain fields are depicted in Figure 21; however, the effect of raster angle, build orientation, and infill density on the localized stress cannot be studied with FE homogenization. This means that although changing the raster angle, build orientation, and infill density results in different strain values, the FE-calculated strain distribution maps remain unchanged. Figure 22 shows the effect of raster angle, build orientation, and infill density on the experimentally and numerically generated tensile properties in terms of their respective stress-strain plots. Comparing the FE- and experimentally determined tensile moduli when the build/raster orientation and infill density change, a discussion similar to that of the effect of processing parameters on the bending properties (Section 3.3) can be made when analyzing the tensile data. This means that, apart from the partial infill patterns (infill density of 50%), the FE model can predict well the effect of build/raster angle on the tensile modulus of 3D printed parts. The variation between the numerically and experimentally calculated elastic moduli is due to the assumption of a perfect bond between adjacent deposited filaments in the FE model, which does not hold for 3D printed parts. In Section 3.4.1, it is shown by FE explicit microstructural modeling that, in off-axis tensile testing, the bond between filaments is a determining factor in the tensile properties. As the bonds in the FE model are assumed to be perfect, the difference between FE and experiment becomes more pronounced when the build/raster orientation changes.

Figure 21. FE (homogenization)-calculated tensile stress distribution for horizontally (raster angle of 0°) 3D printed parts.
Figure 22. Effect of (a, b) raster angle; (c, d) build orientation; and (e) infill density on numerically calculated tensile stress-strain plots (elastic regime) and experimentally generated tensile stress-strain curves (representative tensile stress-strain plots).
FE microstructural analysis of tensile testing
To investigate the effect of raster angle/build orientation and infill density on stress localization, FE microstructural simulation of the tensile tests was conducted in this work. Figures 23 and 24 show that the maximum stress occurs at the interface of the PLA filaments, indicating that the interface is more susceptible to crack initiation during deformation and that de-bonding between PLA filaments can therefore occur under tensile loads. The localized stress contours of the tensile test specimens and the effect of infill density are shown in Figure 25. As can be seen in this figure, when samples with 50% infill density are subjected to tensile loads, most of the load is sustained by the longitudinal PLA filaments, and stress transfer at the interfaces between filaments can also be observed. In the case of 100% infill density, although most of the tensile load is again carried by the longitudinal filaments, the localized stress between filaments is greater in magnitude; therefore, failure (de-bonding between filaments) is predicted at the interface.
Verification of RVE properties
One of the main objectives of the present study is to validate the effective orthotropic engineering constants of the RVE (Figure 9) obtained by FE homogenization against experimentally determined values. Table 5 shows the tensile and shear test methods used in this work to experimentally determine the elements of the elastic moduli and validate the RVE properties in Table 3. In Table 6, the numerically and experimentally determined elements of the elastic moduli are compared. While for most of the elements a difference of less than 10% between the FE and experimental results is observed, a bigger difference (around 13-15%) is seen for the transverse and through-thickness moduli and the inter-laminar shear modulus (i.e., E2, E3, and G23), mainly due to the effect of bonding at the interface between deposited filaments. As explained in Sections 3.3.1 and 3.4.1, the bond between filaments at the interfaces is assumed to be perfect in the FE model, which is not true of the 3D printed parts subjected to mechanical testing. In addition, the elements of the elastic moduli obtained from the FE microstructural analysis agree well (less than 2% difference) with those obtained from FE homogenization, indicating that the results of the FE homogenization are validated against the microstructural simulation as well.
Industrial applications
The validation of FEA results against experimental data for additive manufacturing (3D printing) obtained in this study is fundamental in the automotive, aerospace, and biomedical industries, where the optimal material distribution in a given volume subject to mechanical constraints can be determined, resulting in a significant reduction in the cost and time of manufacturing load-bearing components and structures. This can be achieved through FEM techniques such as the ANSYS SpaceClaim design tools, where boundary conditions, material types, and 3D printing process parameters such as internal microstructures, infill densities, and layer height can be integrated and optimized. In particular, the automotive industry seeks to reduce cost and time to manufacture while saving material in mass production. In fact, a small weight reduction (a few grams) per automobile, over an assembly run of several thousand units, results in considerable material savings. The aerospace industry is another area very keen on FEA of 3D printing by means of topology optimization to reduce weight and costs; a lighter aeroplane uses less energy, which in turn yields substantial savings for an airline. Finally, the medical industry is very interested in design methodologies, especially to produce bespoke implants, where FEA tools for 3D printing such as lattice optimization allow designers to replicate the density of bone while decreasing the component weight. Many implants incorporate lattice structures and are as robust as those conventionally designed and manufactured.
Conclusion
The constitutive material behavior of FFF-based 3D printed parts depends on processing parameters such as build orientation, raster angles, infill patterns, and densities. Although an isotropic material such as PLA is used for 3D printing, the structure and the mechanical behavior of the part are orthotropic. In the present study, the computation of the effective orthotropic properties of printed parts using the numerical homogenization method based on a multi-scale approach was presented. The technique was used to predict the influence of printing process parameters on the elastic response of 3D printed mechanical test samples. The analysis of micromechanical models of an RVE was used to calculate the effective elastic constants, which were subsequently used as input for the creation of macro-scale FE models of 3PB, tensile, and shear samples. Finally, the results obtained by the homogenization technique were validated against experimental as well as FE explicit microstructural models. Some key conclusions are as follows:
- Although FE explicit microstructural simulation is computationally much more expensive than the multi-scale numerical homogenization technique, it is useful for identifying the localized stress at the interfaces between adjacent filaments and layers and therefore for predicting the types of failure modes in FFF-based 3D printed parts.
- While the FE models can predict well the elastic properties of 3D printed parts with 100% infill density, for the partial infill patterns the FE models need to be improved further.
- The numerical methods developed in this study showed the ability to predict the elastic properties of 3D printed structures. This can result in a significant reduction in the number of mechanical tests usually needed for evaluating the behavior of 3D printed parts; as a result, significant time and cost can be saved using the FE approach in this study.
- The approach used in this work also enables the designer to conduct faster iterative analysis and choose optimized printing process parameters based on FE in order to produce high-quality FFF-based 3D printed parts. | 2021-10-20T15:36:49.071Z | 2021-09-17T00:00:00.000 | {
"year": 2021,
"sha1": "5caa2838e1ff2f7d92e5df2d0f9deee88669c6ff",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00170-021-07940-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "3ee8b8b84c78246a23a436aa9c54997c10d9048a",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
54504169 | pes2o/s2orc | v3-fos-license | Trends in coastal upwelling intensity during the late 20th century
This study presents linear trends of coastal upwelling intensity in the later part of the 20th century (1960-2001), employing various indices of upwelling derived from meridional wind stress and sea surface temperature. The analysis was conducted in the four major coastal upwelling regions in the world, which are off North-West Africa, Lüderitz, California and Peru. The trends in meridional wind stress showed a steady increase of intensity from 1960-2001, which was also reflected in the SST index calculated for the same time period. The steady cooling observed in the instrumental records of SST off California substantiated this observation further. It was also noted that the trends in meridional wind stress obtained from different datasets differ substantially from each other. Correlation analysis showed that basin-scale oscillations like the Atlantic Multidecadal Oscillation (AMO) and the Pacific Decadal Oscillation (PDO) could not be directly linked to the observed increase of upwelling intensity off NW Africa and California, respectively. The relationship of the North Atlantic Oscillation (NAO) with coastal upwelling off NW Africa turned out to be ambiguous due to a negative correlation between the NAO index and the meridional wind stress and a lack of correlation with the SST index. Our results give additional support to the hypothesis that coastal upwelling intensity is increasing globally because of rising greenhouse gas concentrations in the atmosphere and an associated increase of the land-sea pressure gradient and meridional wind stress.
Introduction
Coastal upwelling systems are characterized by seasonally low sea-surface temperature (SST). Coastal upwelling results from the response of the coastal ocean to alongshore winds, leading to the production of a relatively intense current with a small offshore and a large alongshore component (e.g. Pedlosky, 1978). This causes the pumping of cooler and nutrient-rich water from the subsurface (from approximately 50-150 m) to the ocean surface.
Due to the enhanced primary production, these regions are economically important, accounting for nearly 20% of the global fish catch, even though the area constituted by the upwelling regions is less than 1% of the global ocean (Pauly and Christensen, 1994). They also play an important role in the air-sea exchange of CO2. Moreover, coastal upwelling also has a profound effect on local climate.
Based on pre-1985 data, Bakun (1990) observes an increase in coastal upwelling at a global scale. He hypothesizes that this increase is due to global warming. The underlying mechanisms involve an intensification of the land-sea pressure gradient due to differential heating, which in turn causes a strengthening of upwelling-favorable winds.
In support of the "Bakun hypothesis", a significant cooling of surface waters in the coastal upwelling area off Cape Ghir (North West Africa, near 30.5° N) during the later part of the 20th century has been reconstructed by McGregor et al. (2007). However, Lemos and Pires (2004) find a decrease in coastal upwelling intensity off the coast of Portugal in the later part of the 20th century. Furthermore, Dunbar (1983) suggests a decrease of upwelling between 1850 and the present. Taking into account a longer timescale of 3000 years, Juillet-Leclerc and Schrader (1987) also argue that coastal upwelling in the Gulf of California is weaker today than 1500 to 2000 years before present. These contrasting results prompted us to study the change of coastal upwelling intensity during the 20th century in further detail.
In this study we test the Bakun hypothesis at a global scale by exploiting available datasets covering a longer time period and extending to the present day. To this end, we compared the linear trends of coastal upwelling intensity, which we derived from meridional wind stress and SST, in the four major upwelling regions of the world. We also tested whether basin-scale climate oscillations exert a primary control on the intensity of coastal upwelling. The analysis revealed contrasting trends, which suggested large discrepancies between the wind-stress datasets. The datasets that we regard as more reliable support an increase of coastal upwelling intensity over the later part of the 20th century, which is consistent with the observation by Bakun (1990).
Data and methods
Our analysis focuses on the coastal-upwelling areas off North West Africa (near 30.5° N), California (near 39° N), Lüderitz (near 27.5° S) and Peru (near 12.5° S). Due to the lack of long-term and regional-scale measurements of vertical velocities, we used wind speed and SST as indirect measures for assessing upwelling strength. We employed the meridional wind speed data of the Comprehensive Ocean Atmosphere Dataset (COADS; Slutz et al., 1985), the National Center for Environmental Prediction NCEP/NCAR reanalysis (Kalnay et al., 1996) and the ERA-40 reanalysis (Uppala et al., 2005) from the European Centre for Medium Range Weather Forecasts. The COADS dataset has a spatial resolution of 1° × 1°, while the NCEP/NCAR reanalysis and the ERA-40 reanalysis both have a spatial resolution of 2.5° × 2.5°. For obtaining the time series, a small region (3° in the cross-shore direction and 5° in the alongshore direction) was defined in each of the coastal upwelling areas and the meridional wind stress was area-averaged. The data over land areas were masked out. All data were obtained at a monthly resolution and averaged over time to produce annual data. The time period covered by the wind data is from 1960 to 2001. An increase in equatorward meridional wind stress was taken to indicate an increase in coastal upwelling. Wind stress was calculated from wind speed using a constant drag coefficient of 1.2 × 10⁻³. The COADS wind stress at a monthly resolution was used for calculating Pearson's correlation coefficient and the cross-correlation coefficients with climatic indices indicative of the Atlantic Multidecadal Oscillation (AMO), the North Atlantic Oscillation (NAO), and the Pacific Decadal Oscillation (PDO).
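As a sketch of this step, the bulk formula below converts the meridional wind component to a wind stress and forms annual box averages; the air density is an assumed typical value, and the array layout is hypothetical.

```python
import numpy as np

rho_air = 1.22   # air density in kg/m^3 (assumed typical value)
c_d = 1.2e-3     # constant drag coefficient, as in the text

def meridional_wind_stress(u, v):
    """tau_y = rho_air * C_d * |U| * v, in N/m^2 (u, v: wind components, m/s)."""
    return rho_air * c_d * np.sqrt(u**2 + v**2) * v

def annual_box_average(tau_monthly):
    """tau_monthly: (n_months, n_lat, n_lon); land cells set to NaN (masked).
    Assumes whole calendar years (n_months is a multiple of 12)."""
    spatial_mean = np.nanmean(tau_monthly, axis=(1, 2))   # area average
    return spatial_mean.reshape(-1, 12).mean(axis=1)      # annual means
```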
We also used the SST data from the Hadley Centre (HadISST; Rayner et al., 2003), which is a monthly dataset with a spatial resolution of 1° × 1° that covers the time period 1870-2006. The monthly data were averaged over time to produce annual data and were used to calculate an index of coastal upwelling, which is defined as the difference of SST between an offshore location and a near-shore location at the same latitude (Nykjaer and Van Camp, 1994). For this purpose, a series of locations was determined on the coast, separated
by 1° in the meridional direction. For each coastal point, a location 5° offshore was taken at the same latitude (Fig. 1). The SST index was calculated by subtracting the SST at the coastal point from the SST at the offshore location. Through this method, five different time series were obtained for each upwelling region. The average of these time series was then taken as the upwelling index. An increase of this index is taken to indicate an increase of the upwelling intensity. An SST index with monthly temporal resolution was also calculated by the above method for the correlation and cross-correlation analysis with various climatic indices.
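A minimal sketch of the index computation, assuming a gridded SST array and hypothetical index bookkeeping (five coastal grid points per region, offshore point 5° to the west, as for an eastern-boundary system):

```python
import numpy as np

def upwelling_sst_index(sst, lat_rows, coast_col, offset=5):
    """sst: (n_years, n_lat, n_lon) on a 1-degree grid.
    lat_rows: five latitude indices, 1 degree apart, along the coast.
    coast_col: longitude index of the coastal point at each latitude."""
    series = []
    for j in lat_rows:
        coastal = sst[:, j, coast_col[j]]
        offshore = sst[:, j, coast_col[j] - offset]   # 5 degrees offshore
        series.append(offshore - coastal)             # offshore minus coastal
    return np.mean(series, axis=0)                    # average of the five series
```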
In addition, the instrumental SST dataset provided by the California Cooperative Fisheries Investigation (CALCOFI; Bograd et al., 2003) was used in the California upwelling region to calculate an upwelling index as well. The SST data east of CALCOFI station number 52 were taken as coastal data and the SST data west of CALCOFI station number 80 were considered offshore data (Fig. 2). The data points in the Sea of Cortez were excluded. An upwelling index time series was produced by subtracting the coastal SST from the offshore SST. A time series of temperature of the top 100 m of the water column in the coastal area (east of CALCOFI station number 52) was also taken. Though the CALCOFI data extend from 1949-2006, there are gaps in the time series when the CALCOFI cruises were not frequent,
especially between 1970 and 1980. However, overall trends in the data could be used, as shown in the study by Roemmich (1992). The resulting time series had a temporal resolution of three months (starting in January) and was area-averaged.
The following climatic indices at monthly resolution were used in the study: 1. The Atlantic Multi-decadal Oscillation Index (AMOI; Enfield et al., 2001), calculated from the SST data of Kaplan et al. (1998) as the de-trended area-weighted average over the North Atlantic (0°-70° N).
2. The North Atlantic Oscillation Index (NAOI; Barnston and Livezey, 1987), which is the normalised pressure difference between the Azores and Iceland averaged over the months of December, January and February. 3. The Pacific Decadal Oscillation Index (PDOI; Mantua et al., 1997), derived as the leading principal component of monthly SST anomalies in the North Pacific Ocean, poleward of 20° N, with monthly means removed. 4. The Multivariate El Niño Southern Oscillation Index (MEI; Wolter and Timlin, 1993), based on the sea-level pressure, zonal and meridional components of the surface wind, sea surface temperature, surface air temperature and the total cloudiness fraction of the sky.
The time series of meridional wind stress and SST index were low-pass filtered using a Butterworth filter with a cutoff period of 8 years and order 12. This was done to reduce the effect of interannual variability on the long-term trend. Due to the presence of gaps in the CALCOFI dataset, the low-pass filtering could not be performed on it and the raw data
was used for the analysis. Linear trends in time series were estimated using the method of least squares. The statistical significance of the trends was estimated using a Student's t-test with the null hypothesis of a zero slope of the trend line at a significance level of 0.05. In order to account for the autocorrelation in the time series, an effective sample size was used (Dawdy and Matalas, 1964). The correlation between time series, along with the bootstrap confidence interval, was estimated taking into account the serial dependence in the time series (Mudelsee, 2003). The cross-correlation function was calculated using the algorithm described by Orfanidis (1996). Linear trends were removed from the datasets before estimating the cross-correlation function.
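These steps translate directly into standard scientific-Python calls. The sketch below assumes an annual time series `y` and approximates the effective-sample-size correction with the common lag-1 autocorrelation formula, which may differ in detail from Dawdy and Matalas (1964).

```python
import numpy as np
from scipy import signal, stats

def lowpass_8yr(y, order=12):
    # Butterworth low-pass, cutoff period 8 years (annual sampling, fs = 1/yr);
    # second-order sections keep the order-12 filter numerically stable.
    sos = signal.butter(order, 1.0 / 8.0, btype="low", fs=1.0, output="sos")
    return signal.sosfiltfilt(sos, y)

def trend_significance(y):
    t = np.arange(len(y))
    slope, intercept, r, p, se = stats.linregress(t, y)
    resid = y - (intercept + slope * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    n_eff = len(y) * (1.0 - r1) / (1.0 + r1)           # effective sample size
    t_stat = (slope / se) * np.sqrt(n_eff / len(y))    # deflate the t statistic
    p_eff = 2.0 * stats.t.sf(abs(t_stat), df=max(n_eff - 2.0, 1.0))
    return slope, p_eff
```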
Results
The COADS wind stress reveals significant increasing trends in all coastal upwelling regions (Fig. 3, Table 1). In contrast, the NCEP/NCAR wind stress (Fig. 4, Table 1) indicated a significant decrease in upwelling off NW Africa, whereas an increasing trend was observed for Lüderitz. The trends for the California and Peruvian upwelling regions were statistically insignificant.
The ERA-40 dataset (Fig. 5, Table 1) showed an increasing trend in the NW African and Peruvian upwelling regions and a decreasing trend in the California upwelling region. In the Lüderitz upwelling region, the observed trend is insignificant.
As an additional proxy for upwelling intensity, the SST index was calculated for the period between 1870 and 2006 and analysed for trends (Fig. 6, Table 1). It revealed significantly decreasing trends off NW Africa, Lüderitz and Peru. In contrast, trends in the more recent part of the time series (from 1960 onwards) suggested an increase in upwelling in all regions except off Peru (Fig. 7, Table 1).
For the California system, this result is also supported by the SST index derived from the CALCOFI dataset (Fig. 8a). The coastal SST indicated a significant cooling trend throughout the sampling period (Fig. 8b). The time series produced by averaging the temperature of the top 100 m of the water column in the California coastal region also showed a significant cooling (Fig. 8c). Both findings confirmed the results obtained by analysing the wind and SST data from the global datasets.
Discussion
On one hand, the results from analysing trends in the COADS wind stress are consistent with the hypothesis by Bakun (1990), later taken up by McGregor et al. (2007) for NW Africa, which proposes a general increase in coastal upwelling in the later part of the 20th century due to global warming. On the other hand, the trends obtained from the NCEP/NCAR and ERA-40 wind stress for the areas off NW Africa, as well as the study by Lemos and Pires (2004), which argues that the upwelling intensity has decreased over the last century at the coast of Portugal, suggest that coastal upwelling intensity is increasing in some upwelling regions and decreasing in others.
At first sight, the lack of significant trends off Lüderitz (ERA-40 dataset) and California (NCEP/NCAR dataset) and the significant decreasing trends revealed by the NCEP/NCAR dataset for the NW African and Peruvian upwelling regions indeed seem to contradict the global nature of increasing coastal upwelling intensity as proposed by Bakun (1990). However, Smith et al. (2001) argue that the NCEP/NCAR reanalysis dataset underestimates the strength of wind globally. They also suggest that the surface pressure is significantly weaker in the tropics, which leads to an underestimation of the strength of subtropical highs and of the wind strength, specifically in the subtropics. Moreover, the comparison of NCEP/NCAR winds with COADS winds by Wu and Xie (2003) revealed that the COADS inter-decadal wind changes are more consistent with independent observations. Based on these findings, we assume that the trends observed in the COADS dataset are likely to be more reliable. Ramage (1987) and Cardone (1990) give many reasons for the likelihood of an artificial long-term trend contaminating the wind stress time series (especially the COADS dataset), for example the one related to the monotonically increasing proportion of anemometer measurements to Beaufort estimates in the available distribution of maritime wind reports. Bakun (1992) analysed the wind stress trends obtained off the Iberian peninsula and detected two overlapping trends, one related to the artifact and one thought to be associated with the gradual strengthening of continental thermal low-pressure cells. Since separating their respective roles was difficult, Bakun (1992) analysed the spatial patterns of the wind stress trend in the periphery of the North Atlantic gyre and determined that the long-term trends adjacent to the seasonally heated land masses showed
increasing trends, whereas the locations away from these regions showed decreasing trends. Additionally, Mendelssohn and Schwing (2002) show that the increasing wind stress is confined to the main upwelling zone as well as to the seasonal period in which the thermal low-pressure zone develops. Fortunately, the problem arising from the increasing proportion of anemometer measurements to Beaufort estimates is more prevalent in the time period 1900-1950 (Cardone, 1990). Since our analysis is mainly based on the data after the 1960s, the effect of the artificially generated trend will be minimal.
The high degree of scatter in the time series, which is independent of the increasing trends, could stem from the above-mentioned reasons. SST has been used as an indicator of coastal upwelling in previous studies (Nykjaer and Van Camp, 1994; McGregor et al., 2007). But the SST along the upwelling-affected near-coastal segment is a mixed signal, which could be altered by various factors. For example, a decrease of surface mixing in the ocean could affect the offshore SST gradient. Similarly, intense storm activity in the offshore regions could deepen the mixed layer offshore while entraining cooler waters into the surface, affecting the SST. Long-term changes, such as the climate change related relaxation of the equatorial Walker circulation (Vecchi et al., 2006), could also change the SST gradient. Therefore, an increase/decrease of SST along the coastal upwelling zone cannot be used as a primary indicator of coastal upwelling intensity, but it can be used as a secondary indicator when there is an associated increase in the upwelling-favorable wind.
The trend observed in the SST index derived from the HadISST in the later part of the 20th century also showed a significant increase of upwelling in all regions except Peru and is thus consistent with the wind stress derived from the COADS data. It should be noted that the trend obtained from the HadISST data after 1960 off Peru demonstrates a significant decrease of upwelling even though upwelling-favourable winds derived from the COADS dataset and the ERA-40 dataset show a significant increase. A comparison of the filtered and unfiltered SST index for the Peruvian upwelling region with the MEI (Fig. 9) reveals that the time interval 1962-1975 was predominantly in the cooler than normal (La Niña) phase, whereas the MEI indicates predominantly warmer than normal conditions after 1975. The presence of a relatively cool phase in the earlier part of the time series and a relatively warm phase in the later part effectively led to an apparent decrease of coastal upwelling. This is reflected even in the filtered time series where the peaks associated with El Niño/La Niña are removed.
The trends obtained from the CALCOFI SST index and coastal temperatures indicate a significant cooling trend. This further substantiates the result obtained from the COADS
wind stress and the HadISST index. It is also consistent with the increase in net primary production inferred from satellite observations from 1997 to 2007 (Kahru et al., 2009).
The coastal upwelling areas, especially off NW Africa and California, are subject to basin-scale climate oscillations like the Atlantic Multidecadal Oscillation (AMO), the North Atlantic Oscillation (NAO) and the Pacific Decadal Oscillation (PDO). The trends observed in the upwelling intensity could thus be affected by these basin-scale oscillations. In the following we want to exclude the possibility that these oscillations exert a primary control over the intensity of coastal upwelling.
With regard to the possible control of the upwelling intensity by basin-scale climate oscillations, Pearson's correlation coefficient (see Table 2) showed that the correlation between the upwelling indices off NW Africa and the AMOI is insignificant. Furthermore, the NAOI shows a significant negative correlation with the meridional wind stress off NW Africa, but the correlation with the SST index is insignificant. Finally, the correlation coefficient between the PDOI and the SST index of coastal upwelling off California showed a weak but significant correlation, whereas the correlation with the alongshore wind stress was found to be insignificant.
Cross-correlation analyses (not shown) between the upwelling indices off NW Africa and the AMOI revealed a lack of correlation at all lags. The cross-correlation between the NAOI and the upwelling indices off NW Africa also showed no significant correlation at any lag. In the North Pacific, the PDOI and the upwelling indices off California also failed to show any substantial cross-correlation at any lag. Francis et al. (1998) observed that during the positive phase of the PDO, salmon catches were significantly reduced in the California Current System and the associated upwelling region. Since the PDO reversed its direction in 1977 to its positive phase and remained in it until the late 1990s, the majority of the data used in our study originate from a positive phase of the PDO. Correlation and cross-correlation analyses were done to check the influence of the PDO on the coastal upwelling off California. A weak but significant negative correlation of −0.304 (−0.383, −0.221) was observed with the SST index, that is, weaker upwelling during a positive phase of the PDO. However, the correlation between the PDO and wind stress was insignificant. According to Roemmich and McGowan (1995), the warming associated with the shift towards the positive phase of the PDO increases the stratification, which in turn would result in a reduced displacement of the thermocline and increase the temperature of the upwelled water. Therefore, the PDO may exert a certain amount of control on SST in the California region, but not necessarily on the wind stress. In line with the Bakun hypothesis, the increasing trend in wind stress could be due to global warming and, hence, exert an independent control on SST. The North Atlantic Oscillation could influence the coastal upwelling intensity off the NW African region because of its influence on the Azores high (Knippertz et al., 2003). The NAO also has a very important role in the long-term variability of the wind in the North Atlantic (Santos et al., 2005). The NAO was in its negative phase at the start of the data used in our analysis, changing to its positive phase during the early 1980s. Since two different phases of the NAO were present in the period of our study, we may expect an influence of the NAO on the trend of coastal upwelling. Hence, a correlation analysis was conducted between the NAO index and the upwelling indices off NW Africa to disentangle any plausible relation between the two (Table 2). The correlation analysis revealed a significant negative correlation with the alongshore wind stress but an insignificant correlation with the SST index. The cross-correlation analysis also did not reveal any relation between the upwelling index and the NAO. The lack of correlation between the SST index and the NAO is quite ambiguous considering the significant negative correlation with the alongshore wind stress. Hence, the influence of the NAO on the increasing trend of coastal upwelling could not be substantiated.
Similarly, the AMO is also a main factor in the long-term evolution of wind and SST in the North Atlantic. Knight et al. (2006) argue that during the warm phase of the AMO there are consistent changes of the trade winds over the Sahel region and also a northward displacement of the mean Inter-Tropical Convergence Zone. The North Atlantic experienced a change from a warm phase to a cold phase in the mid-1960s, and the AMO again shifted to a warm phase during the mid-1990s. Accordingly, changes in the trade-wind patterns associated with the changing phase of the AMO could be a considerable factor in determining the long-term trend of coastal upwelling intensity. However, the correlations between the AMO index and the coastal upwelling indices were statistically insignificant, which allows us to disregard any primary control of the AMO over the intensity of coastal upwelling off NW Africa.
The major physical factor that controls coastal upwelling intensity along the eastern boundaries of the oceans is the equatorward alongshore wind stress component. The hypothesis proposed by Bakun (1990) puts forth a mechanism by which the wind stress that favours upwelling increases due to greenhouse gas-induced warming and subsequent changes in the land-sea pressure gradient. This mechanism may also serve as an explanation for the trends in the COADS wind stress data and the SST indices derived from the HadISST and the CALCOFI SST datasets.
The effect of atmospheric aerosols on the strength of upwelling-favorable winds is not very well understood. However, atmospheric aerosols that absorb and scatter solar radiation tend to decrease near-surface wind speeds by up to 8% locally (Jacobson and Kaufman, 2006). Therefore, the presence of aerosols, soot and dust from both anthropogenic and natural (e.g. volcanic) sources might be an important factor that could influence the intensity of coastal upwelling locally. Similarly, variability in solar insolation could also affect upwelling-favorable winds, thereby altering the intensity of coastal upwelling. Stratification is another important factor in determining the depth from which the water upwells, which in turn affects the coastal SST and nutrient concentration. Upwelling due to divergence in the alongshore current and topographic steering is another possible process by which the rate of upwelling can be altered over time. But the effect of these processes on long-term variability in a coastal upwelling system is not well documented.
From the analysis of trends in wind stress obtained from the COADS, NCEP/NCAR and ERA-40 datasets, we found that there were large discrepancies between the datasets. Based on the comparisons done in previous studies, we consider the trends obtained from the COADS dataset to be the most reliable. These trends indicate an increase of coastal upwelling in all major upwelling regions.
The SST index obtained from the HadISST data suggests a decrease of coastal upwelling after 1870. However, after 1960 the same SST index also shows a significant increase of coastal upwelling in all regions except for Peru. Additionally, the CALCOFI dataset presents strong evidence for the intensification of upwelling in the California upwelling region.
Our study revealed that the AMO does not directly interact with upwelling off NW Africa. The influence of the NAO on upwelling off NW Africa seems to be quite ambiguous, as a negative correlation between the NAOI and the meridional wind stress is observed, but a complete lack of correlation with the SST index was found. In the Pacific, the PDOI also shows only a weak correlation with upwelling off California, indicating a lack of any direct interaction.
In summary, the hypothesis proposed by Bakun (1990) and later taken up by McGregor et al. (2007), which states that there is an intensification of coastal upwelling in relation to global climate change, gains some additional support from our analysis of the COADS wind stress data, the SST index derived from the HadISST data (after 1960) and the SST index derived from the CALCOFI dataset. The lack of correlation with basin-scale oscillations like the AMO, the NAO and the PDO also rules out an alteration of upwelling intensity other than through enhanced upwelling-favourable winds by the mechanism proposed by Bakun (1990), although other physical factors like changes in stratification, atmospheric aerosols and solar variability could not be excluded.
Fig. 2. CALCOFI (Bograd et al., 2003) data used to calculate the SST index. It is calculated by subtracting the area-averaged SST over coastal locations (black) from the area-averaged SST of the offshore locations (red).
Fig. 3. Linear trends (red line) of meridional wind stress from COADS (Slutz et al., 1985) calculated by the method of least squares. All regions show a significant increase of upwelling. In the Northern Hemisphere a negative slope indicates an increase of upwelling. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻³ N m⁻² yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level. Also shown are the unsmoothed time series (thin lines).
Fig. 4. Linear trends (red line) of meridional wind stress from the NCEP/NCAR reanalysis dataset (Kalnay et al., 1996) calculated by the method of least squares. NW Africa and Peru show a decrease of upwelling. There is an increase of upwelling at Lüderitz and an insignificant trend off California. In the Northern Hemisphere a negative slope indicates an increase of upwelling. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻³ N m⁻² yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level. Also shown are the unsmoothed time series (thin lines).
Fig. 5. Linear trends (red line) of meridional wind stress from the ERA-40 dataset (Uppala et al., 2005) estimated by the method of least squares. NW Africa and Peru show an increase of upwelling. There is a decrease of upwelling off California and an insignificant trend at Lüderitz. In the Northern Hemisphere a negative slope indicates an increase of upwelling. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻³ N m⁻² yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level. Also shown are the unsmoothed time series (thin lines).
Fig. 6. Trends of coastal upwelling derived from the HadISST dataset (Rayner et al., 2003) for the period 1870-2006. Significant decreasing trends are observed at NW Africa, Lüderitz and Peru. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻³ °C yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level. Also shown are the unsmoothed time series (thin lines).
Fig. 7. Trends of coastal upwelling derived from the HadISST dataset (Rayner et al., 2003) for the period 1960-2006. The upwelling shows an increase after 1960 in all regions except Peru. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻³ °C yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level. Also shown are the unsmoothed time series (thin lines).
Fig. 8. (a) Linear trend of the upwelling index derived from the CALCOFI (Bograd et al., 2003) SST dataset, estimated using the method of least squares and the data points shown in Fig. 2. (b) Linear trend of SST in coastal California estimated from the CALCOFI dataset. The trend indicates a significant cooling over the last 45 years. (c) Linear trend of the coastal temperature averaged over the top 100 m of the water column off California estimated from the CALCOFI dataset. The value of the slope and its 95% confidence interval are given in each panel (in units of 10⁻² °C yr⁻¹). A * indicates that the slope is statistically significant at the 0.05 level.
Fig. 9. SST index (mean removed) off the Peru upwelling region with filtered (red) and unfiltered (red dash) Multivariate ENSO Index (MEI; Wolter and Timlin, 1993).
Table 1. Summary of the inferred changes in 20th-century upwelling intensity. A + sign represents an increasing trend, a − sign a decreasing trend and 0 a statistically insignificant trend. Deviations from the analysis done on unsmoothed time series are shown by values in parentheses.
Table 2. Pearson's correlation coefficients for the basin-scale oscillations AMO and NAO with the upwelling indices off NW Africa, and for the PDO with the upwelling index off California. Values in parentheses denote the 95% bootstrap confidence intervals for the correlation coefficient. A * indicates that the correlation is statistically significant at the 0.05 level. | 2018-12-03T00:53:39.538Z | 2010-09-22T00:00:00.000 | {
"year": 2010,
"sha1": "c08ef87424200380ba8c02d917e1756e8ee09e20",
"oa_license": "CCBY",
"oa_url": "https://os.copernicus.org/articles/6/815/2010/os-6-815-2010.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "17f434173703d19c01583673e6a35d0c25026990",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
238582094 | pes2o/s2orc | v3-fos-license | Early and short-term intensive management after discharge for patients hospitalized with acute heart failure: a randomized study (ECAD-HF)
Aims Hospitalization for acute heart failure (HF) is followed by a vulnerable time with increased risk of readmission or death, thus requiring particular attention after discharge. In this study we examined the impact of intensive, early follow-up among patients at high readmission risk at discharge after treatment for acute HF. Methods and Results Hospitalized acute HF patients were included with at least one of the following: previous acute HF < 6 months, systolic blood pressure ≤ 110 mmHg, creatininemia ≥ 180 µmol/L, or BNP ≥ 350 pg/mL or NTproBNP ≥ 2200 pg/mL. Patients were randomized to either optimized care and education with serial consultations with an HF specialist and dietician during the first 2-3 weeks, or to standard post-discharge care according to guidelines. The primary end-point was all-cause death or first unplanned hospitalization during 6-month follow-up. Among 482 randomized patients (median age 77 years and median left ventricular ejection fraction 35%), 224 were hospitalized or died. In the intensive group, loop diuretics (46%), betablockers (49%), ACE-inhibitors or angiotensin receptor blockers (39%) and mineralocorticoid receptor antagonists (47%) were titrated. No difference was observed between the two groups for the primary end-point (HR 0.97; 95% CI 0.74-1.26), nor for mortality at 6 or 12 months or unplanned HF rehospitalization. Additionally, no difference between the two groups according to age, previous HF and left ventricular ejection fraction was found.
Introduction
Following hospitalization for acute heart failure (HF), the rate of rehospitalization or death is greatly elevated during the first few months, reaching 30-50% after 6 months [1][2][3]. Little progress has been made in managing acute HF in recent decades, although significant advances have been made in the management of chronic, stable HF. Critical time points are discharge from hospital and the transition between discharge and ambulatory care. As observed in other chronic diseases, HF management is subject to medical inertia 4. In recent years, published guidelines have emphasized the need to follow up these patients within a multidisciplinary program 5, and hospital staff have been encouraged to apply these guidelines. Indeed, some countries have even introduced financial penalties if excessive numbers of patients are readmitted early 6. Disease management programs include various measures, such as a discharge checklist and planning, telephone follow-up, therapeutic education, telemonitoring, home nurse visits, early follow-up and early commencement of rehabilitation before discharge. Though meta-analyses generally suggest an overall benefit 7,8, the results of several of these studies, as well as of recent randomized studies, are more equivocal 9,10, despite strict inclusion criteria. In terms of relevance to clinical practice, many multidisciplinary programs are severely constrained by human and/or financial resources.
At hospital discharge, many patients are clinically unstable, as they are discharged with continuing hypervolemia and symptoms 11. In-hospital treatment optimization is difficult, as the inpatient period is insufficient to evaluate whether therapy additions or changes improve or worsen a patient's clinical status or co-morbid conditions. Also, coordination between inpatient and outpatient care is often poor 12. Thus, an early review post-discharge is necessary and is an opportune time to check clinical and biochemical parameters, rectify any prescription errors, titrate specific HF treatment, reinforce therapeutic education and instigate multidisciplinary tools that had not been established prior to discharge, such as rehabilitation to effort. The ESC/HFA recommendations advise that patients be reviewed at 1 week by their general practitioner and, if possible, at 2 weeks by the hospital cardiology team. The efficacy of immediate follow-up has been suggested by observational studies but has not been shown by randomized clinical trials [13][14][15][16].
In this study, we aimed to specifically analyze the clinical impact of immediate follow-up after hospital discharge for patients treated for acute HF and at high readmission risk. A sufficiently simple format, compatible with routine practice, was chosen, in the form of two consultations with an HF specialist cardiologist and a dietetic education consultation in the first two weeks after hospital discharge.
Methods
The Early Care After Discharge of Heart Failure patients (ECAD-HF, NCT01820780) was a randomized, multicentre, open-label study to investigate the effectiveness of serial consultations or standard care to prevent hospitalization or death in patients discharged after an acute heart failure event.The protocol and amendments were approved by the French institutional review board ('Comité de Protection des Personnes').The study was conducted according to French laws, Good Clinical Practice guidelines and the Declaration of Helsinki.All patients provided written informed consent.
Study participants
The study population consisted of adults ≥ 18 years of age who were eligible for treatment through the French social security system and who were hospitalized for acute HF.
Patients were eligible when they met at least one of the four following criteria at discharge or one day prior to discharge: previous HF hospitalization during the 6 months before inclusion; blood levels of B-type natriuretic peptides (BNP) ≥ 350 pg/mL or NTproBNP ≥ 2200 pg/mL; serum creatinine ≥ 180 µmol/L; and/or systolic blood pressure ≤ 110 mmHg.
Patients were excluded if they had acute coronary syndrome, acute myocarditis, isolated right HF related to pulmonary disease, a reversible cause of HF such as tachycardiomyopathy, or planned cardiac surgery within a few weeks following discharge; were enrolled in another clinical trial; were pregnant or breastfeeding; or were under guardianship or wardship.
Patients were also excluded if they had a planned disease management program during the first month after discharge, except if enrolled in the home health assistance program, PRADO, a French national health program that involves weekly nurse visits from discharge to 2 months, in addition to usual care.
Study procedures
The enrolment visit was performed either on or one day before discharge. Patients were randomly assigned (1:1), in a centralized manner, to either intensive or standard care.
In standard care, all patients were discharged with a medical report and were prescribed blood tests including plasma electrolytes, natriuretic peptides and a renal function panel. All patients were encouraged to have their blood test performed and to obtain their first follow-up appointment with their general practitioner within the first week after discharge and to visit their referring cardiologist within the first month. Investigators were also encouraged to have these appointments scheduled by their staff.
Intensive care comprised planned in-person consultations with an HF specialist (investigators in each centre) and a dietitian at day 7 and day 14 after discharge, in addition to conventional follow-up with the general practitioner and referring cardiologist. A further consultation was encouraged at day 21. Before each consultation, at least plasma electrolytes, natriuretic peptide levels and a renal function panel were obtained. During each consultation, the patient was reviewed to optimize care, including titration of their HF drugs.
Study outcomes
The primary outcome was the composite of all-cause death or unplanned hospitalization at 6 months. The secondary outcomes were: all-cause deaths, unplanned hospitalizations or unplanned hospitalizations for HF at 6 and 12 months; evidence-based HF treatment at 6 months; changes in natriuretic peptide (BNP or NTproBNP) levels between discharge and the second consultation in the intensive group; natriuretic peptide levels between discharge and 6 months in both groups; and the cumulative number of days alive and hospitalization-free at 6 and 12 months. The following sub-group analyses were planned: left ventricular ejection fraction (LVEF) < or ≥ 40%; age < or ≥ 75 years; previous history of HF or not. Adverse event data were collected during follow-up consultations; investigators or technicians contacted patients, family or referring doctors by telephone.
Study management
Patient follow-up, monitoring of study centers and statistical analyses were performed by the clinical research unit at the University Hospital Lariboisiere (Paris, France).
For each adverse event during follow-up, the medical report was collected and blinded. A clinical events committee adjudicated each death and hospital stay, assessed whether or not it was planned, and assigned its relationship to a cardiovascular or HF event.
Statistical analysis
Continuous variables were reported as means with standard deviation (SD) or as medians with interquartile range (Q1-Q3), as appropriate, and were compared using standardized mean differences. Categorical variables were reported as numbers with percentages and were compared using standardized proportion differences.
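As a minimal sketch of these two comparisons (the function names are ours, not from the paper):

```python
import numpy as np

def smd_continuous(x1, x2):
    """Standardized mean difference for a continuous baseline variable."""
    pooled_sd = np.sqrt((np.var(x1, ddof=1) + np.var(x2, ddof=1)) / 2.0)
    return (np.mean(x1) - np.mean(x2)) / pooled_sd

def smd_proportion(p1, p2):
    """Standardized difference for a binary baseline variable (proportions)."""
    pooled_var = (p1 * (1 - p1) + p2 * (1 - p2)) / 2.0
    return (p1 - p2) / np.sqrt(pooled_var)
```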
The primary analysis was based on the intention-to-treat population, and the primary efficacy endpoint was the proportion of patients with all-cause death or unplanned hospitalization within 6 months after hospital discharge. The primary efficacy endpoint, relating to time to an event, was compared between the intensive group and the usual-care group using a survival analysis based on a Cox model with study center as a random effect. An adjusted Cox model was performed including center as a random effect and prior known risk factors as covariates (age, number of hospitalizations in the last 12 months, chronic obstructive pulmonary disease/chronic respiratory failure, stroke, depression, LVEF, systolic blood pressure, hemoglobin, estimated glomerular filtration rate and natriuretic peptide blood levels (BNP > 350 or NTproBNP > 1500 pg/mL)). Survival status was described using Kaplan-Meier curves, the hazard ratio (HR), and the 95% confidence interval of the adjusted HR.
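A rough Python analogue of this model is sketched below. Note that the lifelines library has no frailty (random-effect) term, so the study center is handled with a cluster-robust variance, which only approximates the random-effect Cox model described above; all column names and the file name are hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ecad_hf.csv")  # hypothetical analysis file
# "group" is assumed coded 0 = control, 1 = intensive.
covariates = ["group", "age", "lvef", "sbp", "hemoglobin", "egfr"]

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "event"] + covariates + ["center"]],
    duration_col="time_to_event",
    event_col="event",
    cluster_col="center",   # robust SEs clustered by study center
)
cph.print_summary()         # adjusted HR and 95% CI for "group"
```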
The time-to-event secondary endpoints were analyzed using the same methods as for the primary endpoint. The secondary binary endpoints were analyzed using the Cochran-Mantel-Haenszel (CMH) test stratified by center, and the Breslow-Day test was performed for homogeneity of the odds ratios. For secondary continuous endpoints that were normally distributed, ANCOVA was used, including the baseline value of the endpoint and the covariates mentioned above, with the management group and center as fixed effects. For secondary continuous endpoints that were non-normally distributed, the non-parametric Van Elteren test stratified by center was used, with the same covariates as for normally distributed data.
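The stratified binary analysis maps onto statsmodels' `StratifiedTable`; the counts below are purely illustrative placeholders, not trial data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table per study center: rows = group (intensive/control),
# columns = endpoint (event/no event). Hypothetical counts only.
tables = [
    np.array([[12, 38], [15, 35]]),   # center 1 (hypothetical)
    np.array([[8, 22], [9, 21]]),     # center 2 (hypothetical)
]

st = StratifiedTable(tables)
print(st.test_null_odds())    # Cochran-Mantel-Haenszel test, stratified by center
print(st.test_equal_odds())   # Breslow-Day test for homogeneity of odds ratios
```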
Pre-specified subgroup analyses to evaluate variations in treatment effect were done using Cox regression models, with terms for treatment, subgroup, and the interaction of treatment with subgroup. All reported subgroup analyses were pre-specified.
Assuming an estimated frequency of the composite primary criterion of 35% in the control group, and a reduction in the relative risk of 32% with intensive care, it was estimated that a sample of 554 patients, allowing for a loss-to-follow-up rate of 10%, would have 80% power to detect a difference in the primary outcome at a significance level of 0.05 for a two-sided test.
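For orientation, a simple two-proportion power calculation under these assumptions can be written as below. The paper does not state the exact method used, which likely differed (e.g. time-to-event assumptions), so this sketch need not reproduce the 554 figure exactly.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_control = 0.35
p_intervention = p_control * (1 - 0.32)   # 32% relative risk reduction

es = proportion_effectsize(p_control, p_intervention)
n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05, power=0.80)

n_total = 2 * n_per_arm / 0.90   # inflate for an assumed 10% loss to follow-up
print(round(n_per_arm), round(n_total))
```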
The two-sided significance level was fixed at 5%. All tests were performed using SAS version 9.4 statistical software (SAS Institute Inc., Cary, NC, USA).
Results
Between July 2014 and May 2018, 495 patients were included in 22 study centers; 13 patients were excluded from the statistical analysis because of 2 randomization errors, 10 consent withdrawals and 1 death before discharge, leaving 482 patients in the intention-to-treat analysis: 237 in the intensive arm and 245 in the control arm. The median number of patients enrolled per hospital was 17 (IQR 7-26; min-max 3-99).
Patient characteristics
Table 1 shows patient clinical characteristics at baseline. No difference was observed between the two groups. Most patients were male (73%), with a median age of 76 years (IQR 65-83). LVEF was ≤ 0.40 in 63% of patients and ≤ 0.50 in 76% of patients. Among the four inclusion criteria, patients were admitted with: a previous unplanned HF admission (< 6 months) in 42%, natriuretic peptide blood levels above the cut-offs in 65%, serum creatinine ≥ 180 µmol/L in 22% and systolic blood pressure ≤ 110 mmHg in 44% of patients.
Changes in medication
In the intensive group, most patients were reviewed by the HF team during the first 2 weeks after discharge: 88.2% were reviewed by the HF cardiologist at day 7, 85.2% at day 14, 82.3% at both times, and 76.8% by a dietician at day 7 or 14.
Table 2 shows prescribed drugs in patients before admission, at discharge and at 6 months.
Rates of HF drug prescription increased significantly between admission and discharge for diuretics, angiotensin converting enzyme inhibitors (ACEi), betablockers and mineralocorticoid receptor antagonists (MRA) in the two groups. At discharge, nearly 85% of patients with LVEF ≤ 40% received ACEi or angiotensin receptor blockers (ARB), more than 90% received betablockers, nearly 45% received MRA and more than 95% received loop diuretics, with a median daily dose of 80 mg [IQR 40-125]. At 6 months, there was little difference in the rates and dosing of the main drugs between the two groups of surviving patients. The median dose of ACEi/ARB was two times higher in the intensive group than in the control group. Figure 1 shows changes in HF drugs early post-discharge in patients with altered LVEF in the intensive group. Starting or up-titration was more frequent than stopping or down-titration for both ACEi/ARB (32% versus 16%) and betablockers (29% versus 10%).
Prescribed doses also increased slightly: ≥ 50% of the maximum dose of ACEi/ARB was reached in 45% of patients at discharge and 49% of patients at day 14, and in 44% and 49%, respectively, for betablockers. Figure 2 displays the cumulative occurrence of events over the 6-month follow-up; there was no difference between the two groups. The number of events for the primary and main secondary outcomes is given in Table 3.
Discussion
This study found little clinical impact of immediate follow-up after hospital discharge for patients treated for acute HF at high readmission risk. Specifically, our results suggest that intensive follow-up during the first two weeks after hospital discharge is not enough to improve early outcomes for patients at high risk. Importantly, European guidelines 5 recommend that acute HF patients see their general practitioner within a week and their cardiology team within 2 weeks of hospital discharge. Yet real-world data suggest that this level of follow-up is not achieved in more than 50% of patients 17. As observed in other chronic diseases, HF management is also subject to medical inertia 4,18. Therefore, it would be of value to gather evidence on which to base a prospective, active clinical follow-up of HF patients post-hospitalization. Several factors may explain, at least in part, the lack of difference observed in outcomes between intensive follow-up and standard care in this study. Firstly, the patient cohort included very high-risk, elderly patients with many comorbidities. Indeed, death or rehospitalization occurred in nearly 50% of patients at 6 months, and the median patient age was 77 years. In such a population, it is possible that hypotension, renal failure, and frailty limited the capacity for improvement in medical therapy and the opportunity to improve outcomes in the intervention group. This is supported by the lack of a between-group difference in the provision of medical therapy during follow-up.
Secondly, a key element in improving prognosis is the ability to introduce and quickly adjust appropriate doses of evidence-based HF medication 19,20. In our study, most patients with HF with reduced ejection fraction (HFrEF) were discharged with a prescription for evidence-based HF treatments (> 75% for betablockers, ACE-inhibitors or ARB, and 36% for MRA), and thus the margin for improvement was limited. A similar limitation might explain the lack of benefit of BNP-guided management reported in the GUIDE-IT trial 21. Our Kaplan-Meier curves show a more linear increase in events over time than previous studies or surveys. This could be explained by the fact that patients had severe disease but were discharged with a high level of optimization of both decongestion and HF treatment.
Thirdly, immediate medical consultation post-discharge, even with a specialist HF cardiologist and a dietician, was insufficient and lacked other essential elements such as more prolonged intensive follow-up or in-home follow-up by specialist HF nurses. Regarding the latter, it is noted that only a small network of HF nurses was operating in France at the time, which prevented their integration into the present study. Also, it is worth noting that non-adherence to HF medications is common and significantly impacts outcomes 22.
While we cannot conclude from our study what measures would constitute an effective transitional care service for HF patients, it is still probable that a successful model would comprise direct patient follow-up by HF specialist nurses, ongoing disease management by other relevant healthcare professionals, and therapeutic education, and would require repeated consultations over a prolonged period. Indeed, a Cochrane review 8 found evidence for a positive effect of disease management clinics and nurse case management on post-hospital HF care. Such a sustained disease management program would require formal collaboration between doctors and the relevant health and educational professionals. However, the clinical benefit of proposed non-medical interventions in the transition period, such as education, telehealth or the involvement of pharmacists, is still unproven, even though many deem such elements to be essential in a multidisciplinary program. For example, telemonitoring only appears beneficial if it is intensive and incorporates a high level of responsiveness to alerts, requiring significant human and logistical resources 23. In the future, telemonitoring may become more feasible as more parameters are monitored and integrated into algorithms known to increase clinical benefit. Early rehabilitation, started during hospitalization, has also recently been shown to be at least of physical benefit, though it does not reduce the rate of readmissions 24.
Lastly, it is probable that transitional care services will need to be adapted to specific patient clinical profiles, patient wishes and available resources. While the present study did not reveal specific improvements for post-hospitalization HF care, sound evidence for the development of transition program elements is still needed for the design of an effective, albeit costly, program. The fact that so many patients deteriorate after hospital release indicates that such further investigations are critically required.
Our study does have some limitations. Although unlikely 17, it is possible that patients in both groups received a similar level of immediate post-hospital care. This possibility was not controlled for because we lack sufficiently precise information regarding the exact nature of the medical follow-up in the control group. However, we can assume that early follow-up in the control group was likely far from optimal, according to a recent survey 17 and national health insurance system data showing that no more than 50% of patients see their GP within the first month after discharge and no more than 50% see their referring cardiologist within the first three months. Because of the lack of evidence-based care for HF patients with preserved LVEF, it is possible that this group of patients was unable to gain as much benefit from the immediate follow-up. However, controlling for LVEF in the data analysis showed no difference between the groups. Angiotensin receptor neprilysin inhibitors were available only at the end of the inclusion period; consequently, their clinical impact during early therapeutic optimization could not be assessed in this study.
Conclusion
In high-risk HF patients, we found no improvement in outcomes with more intensive follow-up in the early period after hospital discharge compared to standard care. This vulnerable post-discharge period, with its high risk of readmission, should be the subject of future studies in order to specify optimal transitional care services.
LEGENDS
Table 1: Baseline characteristics and comparison between intensive and control groups.
Quantitative variables are given with median and interquartile range (IQR).
* 'Socially isolated' was defined as the lack of family or a relationship/friend during hospitalization.
** Worsening renal function was defined as an increase in serum creatinine >26.5 µmol/L or a decrease in estimated glomerular filtration rate (eGFR) >25% at any time during hospitalization.
CRT denotes cardiac resynchronization therapy, with (CRT-D) or without (CRT-P) a defibrillator.
Figure 2: Time to first unplanned hospitalization for any cause or death from any cause during the 6-month study period. The figure displays the cumulative occurrence of events over the 6-month follow-up.
Table 2: Rates of prescribed HF drugs in patients with LVEF ≤40% at admission, at discharge and at 6 months, with comparison between intensive and control groups.
Table 3: Primary and main secondary outcomes and comparison between the two groups.
Traffic Law Knowledge Disparity Between Hispanics and Non-Hispanic Whites in California
Abstract—Background: The Hispanic population is one group that is involved in a disproportionately high percentage of fatal motor vehicle collisions in the United States. Study Objectives: This study investigated demographic factors contributing to a lack of knowledge and awareness of traffic laws among Hispanic drivers involved in motor vehicle collisions (MVCs) in southern California. Methods: The cross-sectional study enrolled adults (n = 190) involved in MVCs presenting to a Level I trauma center in southern California over a 7-month period. Subjects completed a survey about California traffic law knowledge (TLK) consisting of eight multiple-choice questions. The mean number of questions answered correctly was compared between groups defined by demographic data. Results: The mean numbers of TLK questions answered correctly by the Hispanic and non-Hispanic white groups were significantly different, at 4.13 and 4.62, respectively (p = 0.005; 95% confidence interval −0.83 to −0.15). Scores were significantly lower in subjects who were not fluent in English, had less than a high school education, did not possess a current driver's license, and received their TLK from sources other than a driver's education class or Department of Motor Vehicles materials. Analysis of variance showed that the source of knowledge was the strongest predictor of accurate TLK. Conclusion: Source of TLK is a major contributing factor to poor TLK in Hispanics. An emphasis on culturally specific traffic law education is needed.
INTRODUCTION
Motor vehicle collisions (MVCs) are a leading cause of preventable morbidity and mortality in the United States. In 2002, MVCs were the eighth leading cause of death overall and the leading cause of death among the population aged 3-33 years. In this age group, MVCs were responsible for 24.7% of all deaths in the United States (1). However, the mortality rate has been decreasing over the past few decades. This decline has been attributed to safer roads, safer vehicles, and improved traffic safety laws (2). Further reductions in the death, injury, and disability caused by MVCs can be made by identifying groups with high rates of fatal MVCs and then determining effective methods to reduce the behaviors that place these groups at higher risk.
The Hispanic population (including, but not limited to, the Latino, Central and South American, Cuban, Puerto Rican, and Spanish subpopulations) has been identified as one group that is involved in a disproportionately high percentage of fatal MVCs in the United States. Braver reported that Hispanic men had elevated occupant death rates per unit of travel when compared to Whites (3). Baker et al. found that Hispanic children aged 5-12 years had an occupant death rate per unit of travel that was 72% higher than non-Hispanic Whites (NHWs), and Hispanic teenagers had the highest occupant death rate per unit of travel among all ethnic groups studied (4). The Hispanic population is also more likely to demonstrate risky behaviors when they are involved in MVCs. Several studies have reported that Hispanic motorists are over-represented in alcohol-related traffic collisions (5-7). One of these studies, which looked at injured motorists admitted to trauma centers in Illinois, reported that Hispanic crash victims had lower rates of safety belt use and higher rates of alcohol involvement than NHW motorists (5). Several other studies have shown that Hispanic motorists, in general, are more likely to disobey traffic-safety laws. One of these studies consisted of interviews and observational data demonstrating that Hispanic farm workers in California have low rates of safety belt and car seat use (8). Another study, done in non-crash-involved motorists, demonstrated that alcohol use is higher among Hispanic drivers than others (9). Several other surveys have found that safety belt use is lower among Hispanic Americans compared with the general population (10-12).
Although several studies have shown that African-American and Native American populations have an even higher incidence of fatal MVCs in selected regions of the country, the present study focuses on the Hispanic population because it is the most rapidly growing minority population in the United States (US) (13). Orange County, California has a population of nearly 3,000,000 people, with 32% Hispanic, 49% non-Hispanic White, 16% Asian, and 30% foreign born (14). In the 2000 census, 92% of Hispanics who gave a specific origin were of Mexican origin (15). Due to sheer numbers, Hispanic driving practices will have a much greater impact on the safety of our society than those of other minority groups, especially in regions such as southern California where Hispanic populations are densest.
There are a number of cultural and social factors that have an impact on non-compliance with traffic laws among the Hispanic population. The 1995 National Highway Traffic Safety Administration (NHTSA) report illustrated that many recent Mexican immigrants are unaware of US traffic laws. The laws in Mexico are different and less rigorously enforced or not enforced at all (11).
The primary objective of this study was to investigate demographic factors that contribute to a decreased awareness of traffic laws among individuals who are involved in MVCs. This study looked at the level of traffic law knowledge (TLK) among patients who were hospitalized due to MVCs. Because Hispanics suffer more morbidity and mortality from MVCs than NHWs, our principal goal was to document whether or not hospitalized Hispanics with MVC-related injuries have a lower level of knowledge of traffic laws than NHWs with similar injuries. We also looked at a number of other demographic factors that might contribute to low TLK among Hispanics and other ethnic groups.
MATERIALS AND METHODS
The study was approved by the University of California, Irvine School of Medicine Institutional Review Board. Data collection was conducted via face-to-face interviews using a questionnaire in English, Spanish, and Vietnamese, at a Level I trauma center in Orange County, California. All adult drivers and passengers who were admitted to the hospital due to injuries sustained during an MVC were considered for enrollment in the study. Potential subjects, once medically stable and no longer under the influence of alcohol or drugs, were approached by one of two research staff members regarding consent to be enrolled. No attempt was made to classify or collect data regarding the few patients who declined enrollment. The study was restricted to patients who were able to participate in a verbal interview during their hospital stay. The questionnaire was developed in English and translated into Spanish and Vietnamese. A physician assistant or nurse practitioner employed by the hospital conducted all English interviews. All Spanish and Vietnamese interviews were conducted by approved native speakers who are also employed by the hospital.
The questionnaire consisted of several demographic questions and eight questions regarding the subject's understanding of California traffic laws. The answers given by the subjects for the eight TLK questions were compared to the predetermined correct answers in accordance with California law as outlined in the California Driver Handbook. The mean number of TLK questions answered correctly on the survey was considered in light of the subject's demographic data, which included race, gender, English fluency, education level, and average household income. Cutoff points for analyzing continuous or ordinal data, such as income level, were based on an abbreviated version of the US Census data (15). Subjects were also asked where they obtained most of their TLK. The mean number of TLK questions answered correctly on the survey was also considered in relation to whether the subject was the driver or passenger of the vehicle and whether or not the subject claimed to be restrained at the time of the collision. Blood alcohol content (BAC) and illicit drug use, as determined by a blood alcohol level and urine toxicology screen, were collected concurrently on patient presentation to the Emergency Department. The mean number of TLK questions answered correctly by subjects with a BAC > 0.08% or a positive urine toxicology screening test was compared to that of subjects with a BAC < 0.08% or a negative urine toxicology screening test. A description of each subject's injuries was also documented and converted to an Injury Severity Score (ISS) by a hospital employee with over 20 years of experience in accurately calculating ISS. The ISS were also examined in relation to the mean number of TLK questions answered correctly to verify whether or not TLK was related to morbidity.
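To make the group comparisons above concrete, the sketch below shows one way such an analysis could be run: a two-sample Welch t-test on mean TLK scores with a 95% confidence interval for the difference in means. This is an illustrative reconstruction rather than the study's actual code, and the scores are hypothetical.

```python
# Hedged sketch of the group comparison described in the Methods:
# compare mean TLK scores between two groups and report a 95% CI
# for the difference in means. All scores below are hypothetical.
import numpy as np
from scipy import stats

group_a = np.array([4, 5, 3, 4, 4, 5, 4, 3, 5, 4])  # hypothetical TLK scores
group_b = np.array([5, 4, 5, 5, 4, 5, 4, 5, 5, 4])  # hypothetical TLK scores

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# 95% CI for the difference in means via the Welch-Satterthwaite approximation
diff = group_a.mean() - group_b.mean()
v_a = group_a.var(ddof=1) / len(group_a)
v_b = group_b.var(ddof=1) / len(group_b)
se = np.sqrt(v_a + v_b)
df = (v_a + v_b) ** 2 / (v_a ** 2 / (len(group_a) - 1) + v_b ** 2 / (len(group_b) - 1))
lo, hi = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, diff = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```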
Demographic Data
There were 190 subjects enrolled in the study. The self-reported demographic data as they relate to ethnic background, English fluency, highest level of education, and household income are listed in Table 1.
Driving Data
The responses to questions regarding the subjects' preparations for a safe driving experience before the collision are listed in Table 2.
When subjects were asked what state their driver's licenses were from, there were 141 (74.2%) from California, one (0.5%) from Arizona, one (0.5%) from Washington, one (0.5%) from Utah, one (0.5%) from Maryland, and two (1.1%) from Mexico. The other 43 (22.6%) subjects did not have a current driver's license. One hundred seventy-four (91.6%) subjects were the drivers of the vehicles at the time of the collision, and 16 (8.4%) were passengers.
Among the Hispanic subjects who completed the questionnaire, 86.9% were wearing a seat belt at the time of the collision, 61.0% were driving with a valid license, and 37.0% did not possess a current license.
TLK Data
The eight TLK questions on the questionnaire and the responses given to those questions are shown in Figure 1. When comparing the demographic and driving data in relation to the mean number of TLK questions answered correctly, there were six variables found to be associated with significantly fewer questions being answered correctly (p < 0.05; 95% confidence interval does not contain zero) (Table 3); five of these variables had a p-value ≤ 0.005. These five variables include the Hispanic group, subjects who were not fluent in either written or spoken English, subjects with less than a high school education, subjects who did not possess a current driver's license, and subjects who did not use the Department of Motor Vehicles (DMV) as a resource for their driver's education. The sixth variable, subjects with a household income under $35,000, had a p-value of 0.023.
The prevalence of subjects in the Hispanic group among these six significant variables was calculated. In the Hispanic group, there were 22 of 23 subjects (95.7%) who did not read and write English fluently, and 22 of 28 subjects (78.6%) with less than a high school education. Furthermore, 34 of 43 subjects (79.1%) without a valid license were from the Hispanic group. Only two subjects …

The results of an analysis of variance (ANOVA) using the five most significant variables (p ≤ 0.005) are shown in Table 4. The second and third columns show the results of a simple dichotomy based on the characteristics in the first column. The fourth and fifth columns show the results of the ANOVA with all five variables, using data from 154 participants who were Hispanic or NHW and had complete data on the other four variables shown. (The coefficients are for dichotomous variables and can also be interpreted as differences.) The only variable that proved to be a significant predictor of the TLK score was the source of subjects' traffic law knowledge (driver's education or DMV materials vs. other sources). Furthermore, after controlling for ethnicity, language, education, and a current driver's license, learning about traffic laws from the DMV or driver education was associated with a score 0.43 points higher on the TLK questions.
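The ANOVA described above, with five dichotomous predictors whose coefficients read as adjusted differences in mean TLK score, could be set up along the lines of the sketch below. The data and variable names are hypothetical and illustrative only; they are not drawn from the study dataset.

```python
# Hedged sketch of a linear model with five dichotomous (0/1) predictors of
# the TLK score; each coefficient estimates an adjusted difference in mean
# score, mirroring the interpretation given in the text. Data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "tlk_score":  [4, 5, 3, 4, 6, 5, 3, 4, 5, 2, 4, 5],  # hypothetical scores
    "hispanic":   [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0],
    "english":    [0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1],
    "hs_grad":    [0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1],
    "licensed":   [0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1],
    "dmv_source": [0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1],
})

model = smf.ols(
    "tlk_score ~ hispanic + english + hs_grad + licensed + dmv_source",
    data=df,
).fit()
# The dmv_source coefficient plays the role of the 0.43-point adjusted
# difference reported in the text (here computed from hypothetical data).
print(model.params)
```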
Differences in TLK were not significant by gender, number of years driving, restraint use or presence of functioning restraints in the vehicle, driver vs. passenger status, or ethnic groups other than Hispanics and NHWs, nor according to differences in blood alcohol content or illicit drug use.
Although there was a trend toward fewer TLK questions being answered correctly as ISS increased, the number of subjects with a higher ISS (> 19) was so small that the difference was not significant when using ANOVA. The mean number of questions answered correctly by subjects with an ISS of 1-19 (n = 174) was 4.34; the mean number of questions answered correctly by subjects with an ISS of 20-29 (n = 4) was 4.00; the mean number of questions answered correctly by subjects with an ISS of 30 or more (n = 3) was 3.00 (p = 0.098).
DISCUSSION
The estimated Hispanic population, as of July 1, 2004, comprised 14.1% of the United States population. This is the nation's largest ethnic minority, and this group is projected to comprise 24% of the nation's population by the year 2050 (13). In the state of California, the demographic is changing even more rapidly. The percentage of Hispanics is rapidly increasing in California, whereas the percentage of NHWs is declining. The California Department of Finance has projected that the Hispanic population will comprise the majority of the California population by the year 2040. In the population aged 25-34 years, the Hispanic population in California is already the majority (16). One of the cities within the catchment area of this study was 76.1% Hispanic in the year 2000 (17).
With this large shift in demographics comes a shift in language, culture, and socioeconomic balance. The State of California has shown a great deal of interest in how this shift is affecting employment and education, but much less effort has been directed toward learning how this shift will affect the health and safety of either the Hispanic population or the general population as a whole. As the Hispanic population increases, there is a potential for increased numbers of risky drivers and those involved in fatal MVCs. The first step in decreasing risk-taking among Hispanic drivers is determining why some Hispanic drivers take more risks than the majority of the population. The 1995 NHTSA report suggested that many Mexican immigrants are unaware of the risks involved in disobeying US traffic laws because the laws in their country are different and are either enforced less rigorously or are completely ignored (11). Our study reinforces this idea with data demonstrating that the Hispanic group scored lower on the TLK questions than the NHW group. This is an important finding because all the subjects enrolled in this study had been involved in an MVC that caused injuries serious enough to require admission to the hospital. A report from the Federal Highway Administration demonstrated that at least 97% of all motor vehicle collisions are the result of driver error (18). This, along with the results of the current study, suggests that many of the MVCs that involved the subjects in this study were caused by traffic laws being broken, either through ignorance or indifference. The Hispanic group was the one group in this study that scored lower on the TLK questions than any other ethnic group, and significantly lower than the NHW group. This finding suggests that the Hispanic group, especially those involved in MVCs, has a lower level of knowledge and understanding of the California traffic laws than the majority of the driving population. The results of this study suggest that social, as well as ethnic, background has an influence on TLK, as a lower education level, lack of English fluency, and a lower household income also resulted in significantly lower TLK scores. Although these social influences were present in both the Hispanic and NHW groups, the Hispanic group still had a significantly lower TLK score than the NHW group. A larger study would be required to investigate the degree to which social differences affect the behavior of different ethnic groups when driving.
Undoubtedly, there are other factors that contribute to risky driving behaviors among the Hispanic population, but awareness of the laws themselves seems to be a major contributing factor.
The reasons that the Hispanic group has a poor understanding of traffic laws are many, but we were able to elucidate some significant contributing factors in this study. There are very few opportunities for Hispanics to gain TLK in California if they are living in the state illegally, because California requires "lawful presence" to obtain a driver's license. This policy precludes all undocumented immigrants from obtaining a driver's license, and the individuals who drive without licenses often do not actively seek out sources of correct TLK (19). In our study, a disproportionately high percentage of the subjects driving without a valid license were from the Hispanic group. Whereas 48.4% of all subjects were Hispanic, 79.1% of the subjects who were driving without a current valid driver's license were from the Hispanic group. It is not known whether the subjects' licenses were revoked or if they had never received them. However, based on demographics, we speculate that the majority of those driving without a license would belong to the latter group because they are living in the United States illegally and are unable to obtain licenses. Undocumented Hispanic immigrants are more likely to live in poverty than NHWs; therefore, these are individuals who may not have the financial means, or any incentive, to pay for or attend driver's education classes (20). Thus, they are left to either use the knowledge and practices they learned in their countries of origin or use information they have picked up from other sources, such as word of mouth, which may be unreliable. This speculation is supported by the observation that the greatest predictor of how well subjects performed on the TLK questions was the source of their TLK. Subjects who had used driver's education classes or official DMV materials as their source scored significantly higher than those who used other sources.
Other factors that were found to significantly affect the performance of subjects on the TLK questions were fluency in English and level of education. The majority (78.6%) of subjects who had not completed a high school education were from the Hispanic group. It is theoretically possible that subjects with lower levels of education had a more difficult time understanding the TLK questions rather than not knowing the correct answers to the questions. For example, lack of English fluency has previously been found to be associated with a lack of traffic law knowledge and understanding of the proper use of child safety restraints (21). Of the six demographic factors found to contribute to a decreased awareness of traffic laws among MVC victims in this study, three were in some way related to education: fluency in English, formal education level, and source of TLK. The results of the ANOVA are consistent with the source of information having an important effect on TLK, and with the effects of ethnicity, language, formal education, and driver's licensure being largely mediated through their effect on the major source of information.
The combination of these factors highlights the importance of educating the Hispanic population in a way that is culturally appropriate. This study also demonstrates that education and outreach are especially important for those drivers without a valid license. The 1995 NHTSA report cited several methods of traffic law education that have been successful among the different Hispanic communities (11). Among these methods are a variety of direct community outreach programs, church and school programs, and programs that appeal to the family unit.
We recommend the continued and expanded use of programs that have been successful in converting Hispanics to adherence to US traffic laws. Based on the results of this study, we propose that increased dissemination of DMV-approved information to Hispanic communities as part of an outreach program may yield some success in increasing TLK among the Hispanic population. It is our hope that, as the Hispanic population grows throughout the United States and becomes the majority in California, these efforts will increase the general TLK among the Hispanic population, decrease the number of both fatal and non-fatal MVCs, decrease the number of repeated or multiple MVCs, and increase the safety of our society as a whole.
Limitations
There are some important limitations to this study. First, the answers to the survey questions were self-reported by the subjects. However, there was an effort to limit self-report bias. The TLK questions were designed to objectively assess the level of TLK, and did not ask questions regarding the subjects' involvement in risky behavior, which would have made the survey more prone to self-report bias.
Additionally, the Hispanic group was not divided into ethnic subgroups of the Hispanic population and may not be representative of the Hispanic population in other regions of the country. The Hispanic group in the catchment area of this study is primarily composed of immigrants from Mexico and descendants of Mexican origin. The results of this study may have differed if it had been conducted in an area of the United States where the Hispanic group is primarily composed of a different cultural or ethnic group. In addition, if the Hispanic group in this study had been subdivided so that each of the Latino and other ethnic groups were analyzed separately, the results may have differed among the distinct Hispanic subgroups.
The amount of time spent in the United States by immigrants was not determined. Subjects who had spent more time in the United States may have answered differently than those who had spent less time. If the Hispanic group had been subdivided into groups based on years spent in the United States, there may have been a difference in TLK between recent immigrants and those who have spent several years or their entire lives in this country.
This was a small study comprising fewer than 200 subjects. The results suggest that the Hispanic population overall may have a lower level of knowledge and understanding of the traffic laws, which may lead to MVCs involving a higher percentage of Hispanics with injuries requiring hospitalization; however, a larger study is required to provide the power to make stronger assertions. There are undoubtedly other factors that contribute to risky driving behaviors among the Hispanic population, but this study suggests that awareness of the laws themselves is a major contributing factor.
CONCLUSION
Hispanics who were admitted to the hospital due to MVCs were not able to answer as many TLK questions correctly as NHWs. A lower score on the survey was best predicted by the source of the subjects' TLK. Other demographic factors that contributed to the lower scores seen in the Hispanic group were an inability to speak and read English fluently, lower levels of education, and driving without a valid license. Due to the increasing density of the Hispanic population in California and other regions across the country, a heightened emphasis on culturally specific traffic-law education is needed, using programs that have previously been successful as models. This is especially true for those drivers who have never had any formal traffic-law education in this country and are driving without a valid license.
Figure 1. The eight TLK questions on the questionnaire and the responses given to those questions. The absolute number of subjects giving each answer is shown above the bar corresponding to that answer on the x-axis. An asterisk (*) marks the bar corresponding to the correct answer.
Table 3. Differences in the Number of TLK Questions Answered Correctly between Demographic and Traffic-related Variables
* Significant difference with p < 0.005. CI = confidence interval; NHW = non-Hispanic White; TLK = traffic law knowledge; DMV = Department of Motor Vehicles; BAC = blood alcohol content.
Genetic Mapping of a Major Resistance Gene to Pea Aphid (Acyrthosiphon pisum) in the Model Legume Medicago truncatula
Resistance to the Australian pea aphid (PA; Acyrthosiphon pisum) biotype in cultivar Jester of the model legume Medicago truncatula is conferred by a single dominant gene and is phloem-mediated. The genetic map position for this resistance gene, APR (Acyrthosiphon pisum resistance), is provided and shows that APR maps 39 centiMorgans (cM) distal of the A. kondoi resistance (AKR) locus, which mediates resistance to the closely related bluegreen aphid (A. kondoi). The APR region on chromosome 3 is dense in classical nucleotide-binding site leucine-rich repeats (NLRs) and overlaps with the region harbouring the RAP1 gene, which confers resistance to a European PA biotype in the accession Jemalong A17. Further screening of a core collection of M. truncatula accessions identified seven lines with strong resistance to PA. Allelism experiments showed that the single dominant resistance to PA in M. truncatula accessions SA10481 and SA1516 is allelic to that in SA10733, the donor of the APR locus in cultivar Jester. While it remains unclear whether there are multiple PA resistance genes in an R-gene cluster or whether the resistance loci identified in the other M. truncatula accessions are allelic to APR, the introgression of APR into current M. truncatula cultivars will provide more durable resistance to PA.
Introduction
Sap-sucking insects such as aphids, psyllids, scales and whiteflies cause significant damage in agricultural crops throughout the world. Damage is caused by direct feeding from the phloem sap as well as vectoring viruses, with aphids transmitting over 50% of all plant viruses [1]. Sap-sucking insects have a close association with their host and feed from a single cell type, the phloem sieve element. Sap-sucking insects have developed the ability to disguise their presence and/or suppress plant defences, ultimately leading to the establishment of a successful feeding site [2,3]. In recent years an increased research focus on studying plant-sap-sucking insect interactions has occurred, resulting in the identification of several sap-sucking insect resistance loci [4,5], and an improved understanding of the molecular mechanisms of basal defense as well as gene-mediated resistance to sap-sucking insects is emerging [5].
The evolutionary origins of recognition of attackers of plants stem mainly from studies involving plant pathogens rather than insects, in what is better known as the plant's innate immune system [6]. Recognition of an attacker often occurs through resistance (R) gene products, which recognize specific attacker-derived product(s) and, upon recognition, mount a defence response. While these R-genes mediate resistance to a variety of different pathogens and pests, their architecture is highly similar and includes one of the following conserved motifs: nucleotide-binding site leucine-rich repeat (NLR) or serine/threonine protein kinase domains. This would imply that basic modes of recognition and the subsequent signalling pathways that trigger the defence response have been retained through plant evolution and diversification [7,8].
An important advance in understanding R-gene mediated resistance to sap-sucking insects came from the discovery of the major dominant resistance gene Mi1.2, which confers resistance to three sap-sucking insects, the potato aphid (Macrosiphum euphorbiae), whiteflies (Bemisia tabaci) biotypes B and Q, and psyllids (Bactericerca cockerelli), as well as three species of root-knot nematodes (Meloidogyne spp.) [9-11]. The second major R-gene identified and cloned was the Vat gene, conferring resistance to the cotton-melon aphid (Aphis gossypii) [12]. Mi1.2 and Vat belong to the largest class of R-genes, encoding proteins with NLR motifs of the subclass with coiled-coil (CC) motifs. The silencing of the Resistance Gene Candidate 2 (RGC2) cluster of NLR-encoding genes in lettuce (Lactuca sativa) led to the loss of resistance to the lettuce root aphid (Phemphigus bursarius) [13]. In the model legume Medicago truncatula, single dominant resistance genes to other aphid species, including bluegreen aphid (BGA; Acyrthosiphon kondoi), spotted alfalfa aphid (Therioaphis trifolii) and pea aphid (PA; Acyrthosiphon pisum), map to regions dense in these NLR-encoding genes [14-17]. For both Mi1.2 and Vat, as well as the single dominant resistance genes identified in M. truncatula, resistance to aphids is exerted in the phloem, which shows that plants are able to utilize their innate immune systems to defend against parasitism of the phloem.
Over the last decade M. truncatula has emerged as an excellent model plant to study plant-insect interactions [5,18], with major dominant resistance genes identified to bluegreen aphid [14], spotted alfalfa aphid [15] and pea aphid [17,19]. Furthermore, quantitative trait loci (QTLs) controlling different aspects of aphid resistance, including antibiosis, antixenosis and tolerance to BGA, PA, spotted alfalfa aphid and cowpea aphid, have been identified [20-22]. Resistance to BGA, PA and spotted alfalfa aphid has been introgressed into the M. truncatula variety Jemalong (A17) through recurrent backcrosses to create a new aphid-resistant cultivar, Jester [19,23]. Resistance to these three aphid species in Jester has been dissected over the last decade, and it was shown that in all cases it involves antibiosis and antixenosis, with resistance exerted at the phloem [14,15,24].
Resistance in M. truncatula to PA was of particular interest as PA has been chosen by the international aphid genome consortium (IAGC) as the model aphid, and there is a reference genome sequence [25] and other genomic resources available [26], as well as a number of distinct PA biotypes [27]. In the case of the Medicago-PA interaction in Jester, it was unclear whether resistance to BGA and PA was conferred by the same single dominant resistance gene, AKR (Acyrthosiphon kondoi resistance). In 2009, Guo et al. demonstrated that resistance to the Australian PA biotype was introgressed into the Jester background from a different donor than the resistance to BGA; thus, there are two distinct resistance genes for the Australian PA biotype and BGA, and the resistance locus to the Australian PA biotype was termed APR, for Acyrthosiphon pisum resistance [19]. In M. truncatula, resistance to a European pea aphid biotype (PS01) is distinct from resistance to the Australian biotype. Resistance to the European biotype was identified in M. truncatula accession A17, which is moderately resistant to the Australian biotype [17]. Like APR-mediated resistance, RAP1 resistance is also exerted through the phloem. The genetic map position of RAP1 is on linkage group 3 in a region harbouring both serine/threonine kinase and NLR proteins. RAP1-mediated resistance causes 100% mortality of the European clone PS01 and is therefore different from APR-mediated resistance, since the antibiotic effect of APR on the Australian PA biotype causes no mortality but rather a reduced reproductive rate [17,24].
Here we present a genetic map position for the APR locus and demonstrate that APR and RAP1 map to the same region on chromosome 3. We also report on a screen of additional M. truncatula germplasm for PA resistance and elaborate on the hypotheses that APR and RAP1 are two distinct genes tightly linked to one another in an R-gene cluster, or are alternative alleles of the same locus.
Resistance to Pea Aphid in the Cultivar Jester Is Controlled by a Single Dominant Gene

Previous mapping data suggested that PA resistance in Jester was linked to that of bluegreen aphid resistance mediated by the AKR locus on chromosome 3 [24]. To identify the genetic location of the APR locus, two genetic mapping populations were developed: Jester × A20 (a wide cross) and Jester × A17 (a narrow cross). Molecular markers developed by the M. truncatula community [28-30] were screened for polymorphisms between the parents of each population (Table S1). A total of 129 F2 individuals were genotyped with 15 molecular markers polymorphic between Jester and A20. This resulted in the construction of a genetic linkage map for chromosome 3 spanning 100.9 centiMorgans (cM) with an average interval size of 7.2 cM. Seed was collected for these 129 individuals, and their F3 offspring (n = 12 per F3 family) were infested with PA to determine their PA resistance response and thus the F2 alleles at the APR locus. This determined that the PA resistance locus APR is located between markers h2_39a22a and h2_180m21a, spanning a 12.1 cM interval (Figure 1).

Jester and A17 are 89% identical in their genome organisation [19], with Jester mainly having a large insertion from different donors on chromosome 3. Therefore, the chance of identifying recombinants in the APR region of interest from a cross between Jester and A17 is higher than from a cross between Jester and A20; thus, 384 F2 individuals of the narrow cross between Jester and A17 were genotyped with eight polymorphic markers near the APR region to identify individuals with recombination events around the APR locus. This identified a total of 26 individuals with recombination events in the APR region of interest, and their F3 progeny (n = 12 per F3 family) were infested with PA to determine their resistance status. As shown in Figure 1, the region of interest for the APR locus in the Jester × A17 cross spans 13.4 cM between markers MTIC51 and h2_151m16a. This region spans a physical distance of 3972.4 kb in the M. truncatula v4.0 genome assembly of accession A17 and harbours a cluster of classical nucleotide-binding site leucine-rich repeat (NLR) resistance genes, including the RAP1 resistance gene to the European PA clone PS01 [17], but not the region where the bluegreen aphid resistance gene AKR has been mapped [14].
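As background to the map distances quoted here: cM values are derived from observed recombination fractions via a mapping function. The snippet below is an illustrative aside, not taken from the paper, implementing the standard Haldane and Kosambi functions.

```python
# Illustrative mapping functions converting a recombination fraction r
# (0 <= r < 0.5) into a genetic map distance in centiMorgans (cM).
import math

def haldane_cm(r: float) -> float:
    """Haldane mapping function (assumes no crossover interference)."""
    return -50.0 * math.log(1.0 - 2.0 * r)

def kosambi_cm(r: float) -> float:
    """Kosambi mapping function (allows for crossover interference)."""
    return 25.0 * math.log((1.0 + 2.0 * r) / (1.0 - 2.0 * r))

# For example, r = 0.11 gives ~12.4 cM (Haldane) and ~11.2 cM (Kosambi),
# on the order of the 12.1 cM APR interval reported above.
print(haldane_cm(0.11), kosambi_cm(0.11))
```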
Screening of M. truncatula Accessions for Additional Sources of PA Resistance
With both APR and RAP1 located in an NLR cluster on chromosome 3, we wanted to determine whether additional major PA resistance genes to the Australian PA biotype exist besides APR, perhaps with a more striking lethal resistance such as that conferred by RAP1 against the European PA biotype PS01. Therefore, additional lines of M. truncatula were screened for aphid performance and plant damage. Thirty-five accessions of the South Australian Research and Development Institute (SARDI) M. truncatula core collection, representing the major clades in the phylogenetic tree of the SARDI core accessions [31], were selected to evaluate PA resistance performance. These included accessions A20, Cyprus and Borung, previously identified as highly susceptible to PA; A17, which is moderately resistant; and Jester and Caliph, which are highly resistant to PA [32]. Plant damage and aphid populations were monitored over a 28-day period. One of the typical aphid infestation phenotypes in M. truncatula following infestation with PA is necrotic flecks on local leaves [17,24]; however, this was only observed in accessions Jester and A17. No lethal resistance to PA was observed, and all accessions showed varying degrees of stunting and wilting, with damage symptoms appearing as yellowing patches or leaf chlorosis surrounding the aphid infestation sites within 9 days after infestation. Nine accessions, including the two resistant controls (Jester and Caliph), were resistant and survived PA infestation at 28 days post infestation (dpi) and went on to flower and set seed, with the exception of one individual of accession SA27063 (Table 1). The remaining 26 accessions succumbed to the PA infestation, with 15 accessions, including the susceptible controls (Borung and A20), showing higher plant damage scores than the moderately resistant accession A17 (Table 1). Each value represents the mean and standard error (SE) of three biological replicates; for the aphid population build-up, the rating scale was as described by Gao et al. [32]. In a subsequent experiment, the nine resistant accessions and five highly susceptible accessions from the initial screen were infested to confirm their resistance response to PA, with A17 included as a moderately resistant control. Starting from the initial two adult apterous aphids, PA colony density on all susceptible accessions peaked around 12 dpi, with the plants succumbing to PA infestation by 15 dpi. PA population density on A17 plants, the moderately resistant accession, peaked around 15 dpi (Table S2), whereas aphid populations were largest at 21 dpi on the resistant accessions and declined thereafter at 24 dpi (Table S2). Plant damage on resistant accessions SA1516, SA28645, SA10481, SA10733, Jester and SA11753 remained stable from 21 dpi onwards with an average score of 3.4 (Table S3).
There were some notable differences in the population sizes of PA on the different resistant accessions, with a notably lower population density on SA1516 and SA10481 compared to Jester. In a follow-up short-term infestation experiment, the performance of PA nymphs over a four-day period was observed, and this reflected the plant damage and aphid densities seen in the long-term experiments (Figure 2). The PA nymph population had a significantly lower mean relative growth rate (MRGR) on Jester, SA10733, SA1516 and SA10481 compared to the moderately resistant A17, which, in turn, had a significantly lower MRGR compared to the highly susceptible accessions A20 and Cyprus (Figure 2a) (Tukey-Kramer HSD test; p < 0.05). No significant differences between the accessions were found for the survivorship of PA nymphs over this four-day period (Figure 2b) (Tukey-Kramer HSD test; p < 0.05).
Resistance in M. truncatula Accessions SA10733 and SA10481 Is Controlled by a Single Dominant Gene

SA1516 and SA10481 had the lowest average plant damage scores, albeit with a resistance phenotype similar to Jester and SA10733, the donor of APR in cultivar Jester. Moreover, notably lower PA population densities were observed on accessions SA1516 and SA10481 in the long-term experiments. Therefore, F2 populations were generated between the resistant accessions SA10733 and SA10481 and the highly susceptible accession A20 to determine the genetic control underlying the PA resistance in these accessions. Phenotyping of 264 and 355 F2 individuals of the SA10733 × A20 and SA10481 × A20 populations, respectively, showed a Mendelian segregation ratio of 3:1 for PA resistance in both populations (Table 2).
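A 3:1 segregation ratio of this kind is typically assessed with a chi-square goodness-of-fit test; a minimal sketch for the SA10733 × A20 population is given below. The observed resistant/susceptible counts are hypothetical, since the text reports only the total population sizes.

```python
# Hedged sketch of a Mendelian 3:1 goodness-of-fit test. Only the total
# (n = 264) comes from the text; the observed split below is hypothetical.
from scipy.stats import chisquare

n = 264                          # F2 individuals in the SA10733 x A20 cross
observed = [204, 60]             # hypothetical resistant : susceptible counts
expected = [3 * n / 4, n / 4]    # 3:1 ratio -> 198 : 66

chi2, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p > 0.05 is consistent with 3:1
```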
To determine whether the single dominant resistance in SA10481 was allelic to that of SA10733 and/or SA1516, crosses were generated and F2 individuals from three crosses were evaluated for their resistance to PA. As shown in Table 3, no susceptible individuals were identified among the 535 individuals assayed, whereas the susceptible and moderately resistant controls behaved as in previous experiments. Thus, the single dominant resistance genes in SA1516, SA10481 and SA10733 are either alleles of the same gene (e.g., APR) or genes in a tightly linked resistance gene cluster.
Discussion
Previously, we have characterised PA resistance in the M. truncatula cultivar Jester, which also harbours resistance to bluegreen aphid [24]. The biology of the resistance to both aphid species in this cultivar shares similarities: resistance occurs at the phloem level, requires an intact plant, and involves a combination of antibiosis, antixenosis and plant tolerance [14,24]. However, the donor of bluegreen aphid resistance (accession SA1499) was different from that of PA resistance (accession SA10733); thus, resistance to the two aphids is controlled by distinct single dominant resistance genes, with the PA resistance locus tentatively named APR [19]. Here we demonstrated that resistance to PA mapped 39 cM distal of the flanking markers for the bluegreen aphid resistance locus AKR (h2_6g9b and 004H01) on chromosome 3, in a region rich in classical NLR-type resistance genes (Figure 1). Moreover, the region that contains APR in the genetic background of Jester spans the same region as that harbouring RAP1, the resistance gene to the European PA biotype PS01 in the genetic background of A17 [17]. This could mean that APR and RAP1 are either two different alleles of the same orthologous gene or, alternatively, two different genes in an NLR cluster of resistance genes. Further fine-mapping will be achieved in future work by generating re-sequencing data for cultivar Jester to identify single nucleotide polymorphisms (SNPs) or insertions/deletions (indels) in the APR region for use with the 26 recombinant F3 families. This would narrow down the region of interest further and allow a map-based cloning approach for the APR locus. Similarly, the use of the Medicago HapMap resources [33], which contain re-sequencing data for DZA315, would allow the identification of SNPs and indels to generate novel markers for further fine-mapping of the RAP1 locus.
Screening of diverse M. truncatula accessions with eight different European biotypes has previously been conducted by Kanvil and colleagues [27] and showed a range of differences in performance of the different biotypes across 23 M. truncatula accessions. They demonstrated that aphid virulence and host resistance were strongly dependent on the genotype of both the aphid and the host, with diverse host-specific PA performance and biotype-specific resistance in M. truncatula observed. In Australia, there is currently only one biotype present and, in contrast to the study by Kanvil et al. [27], no lethal resistance to the Australian biotype was identified in M. truncatula germplasm. Despite this result, seven new accessions were identified as being resistant to PA at a level similar to SA10733 and Jester, both harbouring the APR gene, with notably lower PA population densities on accessions SA1516 and SA10481 compared to the current cultivar Jester (Figure 2, Table 1). To determine the genetic control of PA resistance in the resistant accessions, crosses were generated to the susceptible A20, and phenotyping of the F2 populations showed that resistance segregated in a Mendelian fashion for a single dominant gene (Table 2), raising the question of whether the resistance identified in these accessions is allelic to APR, a gene somewhat linked to APR, or an unlinked gene. Of the 494 F2 individuals phenotyped in the allelism tests, none showed susceptibility, which suggests that the single dominant resistance genes in SA1516, SA10481 and SA10733 are either alleles of the same gene (e.g., APR) or genes in a tightly linked resistance gene cluster. The latter is a valid hypothesis as the RAP1 gene is also located in the same region on chromosome 3, and this region contains a suite of NLR resistance genes. The RAP1 gene in M. truncatula provides race-specific resistance to pea aphid biotype PS01 but not to biotype LL01 [17]. Furthermore, it has been shown that different PA biotypes (both sexual and asexual clones) differ in their performance on a range of M. truncatula accessions, including Jester and A17 plants [27]. Another PA biotype, N116, was virulent on RAP1 genotypes, like biotype LL01, as well as on a wide range of other cultivars and wild M. truncatula genotypes [27]. On the contrary, PS01 was avirulent on most of the M. truncatula accessions. The divergent performance of these PA biotypes allowed the determination of the inheritance of aphid virulence, and it was demonstrated through a series of F1 progenies of clones N116 and PS01 that RAP1-mediated resistance can be overcome by progeny from either selfing or reciprocal crosses [34]. This suggests that the annual sexual cycle in aphids can lead to the generation of novel genotypes, which might have increased or decreased virulence. In turn, M. truncatula has to adapt and develop new forms of resistance to PA. In other plant species, this adaptation to newly virulent pathogens/pests occurs according to the birth-and-death model of R-genes, in which R-genes duplicate and diversify in gene clusters [35]. Further fine-mapping of the identified PA resistance loci would shed more light on whether this has occurred in M. truncatula in response to different PA biotypes.
APR is the fourth major aphid resistance gene identified in M. truncatula cv. Jester (Figure 3), a genetic background that also harbours resistance to bluegreen aphid conferred by the genes AKR [14] and AIN [16], and to spotted alfalfa aphid conferred by TTR [15]. Breeders introgressed resistance to bluegreen aphid and spotted alfalfa aphid into the genetic background of Jemalong A17 from various resistance sources [19,23]. Since the APR locus is located 10.5 cM distal of the flanking marker for TTR in the Jester × A20 population, and is thus somewhat linked to TTR, they coincidentally introduced resistance to PA as well (Figure 3). The wealth of M. truncatula genomic resources, including a reference genome sequence for Jemalong A17 [36,37], and a genome sequence for the model aphid PA [25] make the M. truncatula-PA system an excellent one for studying plant-insect interactions and R-gene specificity and evolution. Similarly, PA genomic datasets such as numerous expressed sequence tag (EST) and transcriptome resources [38] and RNA interference methods to silence aphid genes [39,40] would complement the plant-based studies and allow the identification of aphid effectors recognised by the resistance genes. The use of these resources, together with advances in sequencing technologies and clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9, should allow the development of new ways to identify PA genes essential for establishing a feeding site and/or effectors recognized by the resistance locus, and might lead to effective, durable resistance to aphids.
Plants and Aphids
Three genotypes of M. truncatula were mainly used being: Jester, A17 and A20.Genetic F2;3 mapping populations derived from crosses derived between both Jester and A20, and Jester and A17, were generated using a crossing procedure described by Thoquet et al [41] and used in this study for the genetic mapping and phenotyping for PA resistance.The M. truncatula core collection accessions were acquired from the South Australian Research and Development Institute (SARDI, Urrbrae, Australia).Accessions DZA315 and DZA045 were obtained from the Institut National de la Recherche Agronomique (INRA), Montpellier, France.Seeds were germinated and plants grown as described by Klingler et al. [16].The aphid species used was PA collected in Western Australia and were reared on faba bean (Vicia faba), as described by Gao et al. [32].
Plant Damage and PA Performance Tests
To assess the performance of PA and plant feeding damage, two-week-old seedlings of M. truncatula lines A17, A20 and Jester as well as 129 F3 families (n = 12 per F3 family) of the Jester × A20 population and 26 F3 families (n = 12 per F3 family) of the Jester × A17 population were grown in separate 0.9 L pots and were infested with two apterous adult aphids.Similarly, the 35 accessions (Table 1) were screened for PA resistance in a glasshouse when two-week-old and infested with two apterous adult aphids.The screening of the 35 accessions was arranged in a randomized complete block design with three replicates per accession infested for 28 days.
In all phenotyping experiments the aphids were allowed to develop, reproduce, and move freely among plants.Aphid population build-up and feeding damage on plants were assessed at a three-day interval from the third day up to 28 days post infestation using a scale from 1-5 and 0-5, respectively as described previously [20].
Aphid Performance on Caged Leaves
The survival and growth rate of PA were measured after four days on individual plants of each M. truncatula accession with ten replicates for each accession and the mean relative growth rate
Plants and Aphids
Three genotypes of M. truncatula were mainly used being: Jester, A17 and A20.Genetic F 2;3 mapping populations derived from crosses derived between both Jester and A20, and Jester and A17, were generated using a crossing procedure described by Thoquet et al [41] and used in this study for the genetic mapping and phenotyping for PA resistance.The M. truncatula core collection accessions were acquired from the South Australian Research and Development Institute (SARDI, Urrbrae, Australia).Accessions DZA315 and DZA045 were obtained from the Institut National de la Recherche Agronomique (INRA), Montpellier, France.Seeds were germinated and plants grown as described by Klingler et al. [16].The aphid species used was PA collected in Western Australia and were reared on faba bean (Vicia faba), as described by Gao et al. [32].
Plant Damage and PA Performance Tests
To assess the performance of PA and plant feeding damage, two-week-old seedlings of M. truncatula lines A17, A20 and Jester as well as 129 F 3 families (n = 12 per F 3 family) of the Jester ˆA20 population and 26 F 3 families (n = 12 per F 3 family) of the Jester ˆA17 population were grown in separate 0.9 L pots and were infested with two apterous adult aphids.Similarly, the 35 accessions (Table 1) were screened for PA resistance in a glasshouse when two-week-old and infested with two apterous adult aphids.The screening of the 35 accessions was arranged in a randomized complete block design with three replicates per accession infested for 28 days.
In all phenotyping experiments the aphids were allowed to develop, reproduce, and move freely among plants. Aphid population build-up and feeding damage on plants were assessed at three-day intervals from the third day up to 28 days post infestation using scales of 1-5 and 0-5, respectively, as described previously [20].
Aphid Performance on Caged Leaves
The survival and growth rate of PA were measured after four days on individual plants of each M. truncatula accession, with ten replicates for each accession, and the mean relative growth rate (MRGR) calculated as described by Gao et al. [32]. The proportion of aphids that survived and MRGR were compared using the Tukey-Kramer Honestly Significant Difference test with the JMP-IN 5.1 software (SAS Institute, Cary, NC, USA).
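The study performed the MRGR comparison in JMP-IN 5.1; as an illustration of the same analysis in open-source form, the short Python sketch below computes MRGR as (ln(final weight) − ln(initial weight))/time, following the definition attributed to Gao et al. [32], and runs a Tukey-Kramer HSD test with statsmodels. All weights and growth factors are invented placeholders, not data from the paper.

```python
# Sketch of the MRGR and Tukey-Kramer HSD analysis described above,
# re-expressed in Python (the study itself used JMP-IN 5.1).
# All weights below are invented placeholder values, not data from the paper.
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def mrgr(initial_mg, final_mg, days=4.0):
    """Mean relative growth rate: (ln(final) - ln(initial)) / time."""
    return (np.log(final_mg) - np.log(initial_mg)) / days

rng = np.random.default_rng(0)
records = []
for accession, mean_gain in [("A17", 1.4), ("A20", 2.1), ("Jester", 1.2)]:
    for _ in range(10):  # ten replicate plants per accession
        m0 = rng.uniform(0.08, 0.12)                  # initial nymph weight (mg)
        mf = m0 * mean_gain * rng.uniform(0.8, 1.2)   # final weight (mg)
        records.append({"accession": accession, "mrgr": mrgr(m0, mf)})

df = pd.DataFrame(records)
# Tukey-Kramer HSD: all pairwise accession comparisons at alpha = 0.05
print(pairwise_tukeyhsd(df["mrgr"], df["accession"], alpha=0.05))
```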
Genetic Mapping of PA Resistance in the Various Mapping Populations
Genetic maps for the Jester × A20 and Jester × A17 mapping populations were generated using both microsatellite and gene-based markers generated by the Medicago research community. Previously we established linkage association with markers on linkage group 3 [24]; therefore, markers were initially selected to be evenly distributed over linkage group 3 and were obtained from several published sources [28][29][30]. A total of 26 markers were characterised for the Jester × A20 (n = 129) and for the Jester × A17 (n = 384) populations, with the polymorphic markers for the respective populations listed in Table S1.
Linkage group 3 was constructed for both mapping populations using a set of 15 and 8 markers for the Jester × A20 and the Jester × A17 populations, respectively, using Multipoint v1.2 (Institute of Evolution, Haifa University, Haifa, Israel) as described by Kamphuis et al. [42].
Allelism Tests
Pairwise crosses were made among SA10733, SA1516 and SA10481 to test the allelic status of the PA resistance in SA1516 and SA10481, as in Table 3. F2 seedlings from each cross, together with at least eight replicates of their respective parental genotypes and of A20, were tested for PA resistance. Each three-to-four-week-old seedling was infested with two apterous adult PAs for 28 days, during which the aphids were allowed to develop, reproduce and move freely. Plants were scored as either resistant or susceptible at 28 dpi. Susceptible plants were overwhelmed by aphids around 12 dpi and died before 20 dpi, after which the aphids migrated to other plants; resistant plants were still alive and reasonably healthy at 28 dpi. The parental lines and A20 served as controls.
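The segregation and allelism conclusions summarized in Tables 2 and 3 rest on chi-square goodness-of-fit tests against Mendelian ratios: 3:1 (resistant:susceptible) for a single dominant gene in an F2, and 15:1 for two unlinked dominant genes. The sketch below shows how such tests could be run in Python; the counts are illustrative assumptions, not the actual Table 2/3 data.

```python
# Goodness-of-fit tests for the Mendelian ratios invoked in Tables 2 and 3:
# 3:1 (resistant:susceptible) for a single dominant gene in an F2, and
# 15:1 for two unlinked dominant genes. Counts here are illustrative only.
from scipy.stats import chisquare

def segregation_test(n_resistant, n_susceptible, ratio=(3, 1)):
    total = n_resistant + n_susceptible
    expected = [total * r / sum(ratio) for r in ratio]
    return chisquare([n_resistant, n_susceptible], f_exp=expected)

# Hypothetical F2 from a resistant accession x A20 (single dominant gene -> 3:1)
print(segregation_test(92, 28, ratio=(3, 1)))

# Hypothetical F2 from a pairwise cross between two resistant accessions:
# two unlinked dominant genes would give 15:1; a poor fit to 15:1 together
# with very few susceptible segregants supports allelic or tightly linked
# resistance genes, as concluded in Table 3.
print(segregation_test(118, 2, ratio=(15, 1)))
```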
Figure 1. Genetic map position of the APR (Acyrthosiphon pisum resistance) locus conferring resistance to the Australian pea aphid biotype; it covers the same region of interest as RAP1, which confers resistance to a European PA biotype in the genetic background A17.
Figure 2. (a) Mean relative growth rate (MRGR) of pea aphid nymphs on nine Medicago truncatula accessions over four days. Values are mean and standard error of ten replicates. Accessions that do not share the same letters show significant differences in pea aphid MRGR by Tukey-Kramer HSD test (p < 0.05); (b) survivorship of pea aphid nymphs on nine M. truncatula accessions over four days. No significant differences in survivorship were observed by Tukey-Kramer HSD test (p < 0.05).
Figure 3. Overview of the major resistance genes identified in M. truncatula cv. Jester to three different aphid species.
Table 1. Evaluation of 35 Medicago truncatula accessions from the South Australian Research and Development Institute (SARDI) core collection for resistance to an Australian biotype of pea aphid.
Table 2. Segregation of resistance to pea aphid in resistant M. truncatula accessions crossed with accession A20. Chi-square analysis of the two F2 populations is consistent with a single dominant, Mendelian gene conferring resistance to PA in both populations.
Table 3. Pairwise allelism test between resistant M. truncatula accessions. Chi-square analysis for two unlinked Mendelian dominant genes indicates that the resistance genes are either allelic or tightly linked. | 2016-08-24T23:09:51.855Z | 2016-07-29T00:00:00.000 | {
"year": 2016,
"sha1": "b0f738ede8920a63e506cdd177899e8bed87abfd",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/17/8/1224/pdf?version=1469791223",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "6e9fec1fef2be14af00e2de896c6ccb5e0cc7513",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
35321264 | pes2o/s2orc | v3-fos-license | Ovarian cancer variant rs2072590 is associated with HOXD1 and HOXD3 gene expression
Ovarian cancer (OC) is a common cancer in women and the leading cause of deaths from gynaecological malignancies in the world. In addition to the candidate gene approach to identify OC susceptibility genes, genome-wide association study (GWAS) methods have reported new variants that are associated with OC risk. The minor allele of rs2072590 at 2q31 was associated with an increased OC risk, and was primarily significant for the serous subtype. The OC risk-associated SNP rs2072590 lies in non-coding DNA downstream of HOXD3 and upstream of HOXD1, and it tags SNPs in the HOXD3 3′ UTR. We think that the non-coding rs2072590 variant may contribute to OC susceptibility by regulating the gene expression of HOXD1 and HOXD3. In order to investigate this association, we performed a bioinformatics analysis by functionally annotating the rs2072590 variant using RegulomeDB (version 1.1), HaploReg (version 4.1), and PhenoScanner (version 1.1). Using HaploReg, we identified 19 genetic variants tagged by the rs2072590 variant with r2 >= 0.8. Using RegulomeDB, we identified that three genetic variants are likely to affect TF binding + any motif + DNase footprint + DNase peak; the other genetic variants are likely to affect TF binding + DNase peak. Using PhenoScanner (version 1.1), we identified that these 19 genetic variants could significantly regulate the expression of nearby genes, especially HOXD1 and HOXD3 in human ovary tissue.
INTRODUCTION
Ovarian cancer (OC) is a common cancer in women and the leading cause of deaths from gynaecological malignancies in the world [1]. Like other human complex diseases, OC is caused by the combination of genetic variants and environmental factors, including the familial BRCA1 and BRCA2 mutations and common genetic variants of lower penetrance [1]. In addition to the candidate gene approach to identify OC susceptibility genes, genome-wide association study (GWAS) methods have also reported new variants that are associated with OC risk [1].
However, the exact genetic mechanisms for these OC susceptibility variants are still unclear [2]. It is reported that the potential associations between gene expression and OC risk alleles may connect risk variants to their putative target genes/transcripts and biological pathways [2]. The minor allele of rs2072590 at 2q31 was associated with an increased OC risk (OR = 1.16, 95% CI 1.12-1.21, p = 4.5 × 10^-14), and was primarily significant for the serous subtype (OR = 1.20, 95% CI 1.14-1.25, p = 3.8 × 10^-14) [3]. The 2q31 locus contains a family of homeobox (HOX) genes involved in regulating embryogenesis and organogenesis [3]. Altered expression of HOX genes has been reported in many cancers [3]. The OC risk-associated SNP rs2072590 lies in non-coding DNA downstream of HOXD3 and upstream of HOXD1, and it tags SNPs in the HOXD3 3′ UTR [3].
We think that the non-coding rs2072590 variant may contribute to OC susceptibility by regulating the gene expression of HOXD1 and HOXD3. In order to investigate this association, we conducted a functional annotation of the rs2072590 variant using RegulomeDB (version 1.1) [4], HaploReg (version 4.1) [5], and PhenoScanner (version 1.1) [6].
LD analysis using HaploReg
Using the LD information from the 1000 Genomes Project (EUR), we obtained 19 genetic variants tagged by the rs2072590 variant with r2 >= 0.8. These 19 genetic variants are located around HOXD4, HOXD3, AC009336.24 and HOXD-AS1. Detailed information about these variants, including the LD information, is given in Table 1.
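For readers unfamiliar with the r2 >= 0.8 tagging threshold, the snippet below evaluates the standard pairwise LD statistic r² = D²/(p_A p_a p_B p_b) on invented haplotype frequencies; the numbers are not 1000 Genomes EUR values, and the function is only a didactic sketch.

```python
# The r^2 >= 0.8 tagging threshold used with HaploReg is the standard
# pairwise LD statistic r^2 = D^2 / (pA*pa*pB*pb). The haplotype
# frequencies below are invented, not 1000 Genomes EUR values.
def ld_r2(p_ab, p_a, p_b):
    """r^2 between two biallelic SNPs.

    p_ab: frequency of the haplotype carrying allele A at SNP1 and B at SNP2
    p_a, p_b: marginal frequencies of alleles A and B
    """
    d = p_ab - p_a * p_b  # linkage disequilibrium coefficient D
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Example: a variant tightly tagged by rs2072590
# r^2 ~ 0.87, above the 0.8 tagging threshold
print(round(ld_r2(p_ab=0.295, p_a=0.30, p_b=0.32), 3))
```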
Functional annotation using RegulomeDB
RegulomeDB was used to annotate these 19 genetic variants with known and predicted regulatory elements. The results showed that three genetic variants, rs1562315, rs2551802 and rs6433571, are likely to affect TF binding + any motif + DNase footprint + DNase peak, as described in Table 2. The other genetic variants are likely to affect TF binding + DNase peak. More detailed results are described in Table 2.
DISCUSSION
Overall, GWAS methods have reported new variants that are associated with OC risk [1]. However, the exact genetic mechanisms for these OC susceptibility variants are still unclear [2]. Evidence shows that the potential associations between gene expression and OC risk alleles may connect risk variants to their putative target genes/transcripts and biological pathways [2]. Zhao et al. selected seven OC risk variants, including rs3814113 on 9p22, rs2072590 on 2q31, rs2665390 on 3q25, rs10088218, rs1516982 and rs10098821 on 8q24, and rs2363956 on 19p13 [2]. They evaluated the associations between gene expression and OC risk alleles using whole-genome mRNA expression data in 121 lymphoblastoid cell lines from 74 non-related familial ovarian cancer patients and 47 non-cancer unrelated family controls [2]. They identified two cis-associations: between rs10098821 and c-Myc, and between rs2072590 and HS.565379.
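A minimal version of the cis-association (eQTL) test referenced above can be written as a linear regression of expression on allele dosage. The sketch below uses simulated data; the sample size echoes the 121 cell lines of Zhao et al., but the effect size and noise level are arbitrary assumptions.

```python
# A minimal cis-eQTL association test of the kind referenced above:
# expression regressed on allele dosage (0/1/2). Data are simulated;
# the effect size and noise level are arbitrary illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 121  # e.g., the number of lymphoblastoid cell lines in Zhao et al.
dosage = rng.binomial(2, 0.35, size=n)           # minor-allele dosage per sample
expr = 5.0 + 0.4 * dosage + rng.normal(0, 1, n)  # simulated HOXD3-like expression

slope, intercept, r, p_value, se = stats.linregress(dosage, expr)
print(f"beta = {slope:.3f} per allele, p = {p_value:.3g}")
```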
The OC risk-associated SNP rs2072590 lies in non-coding DNA downstream of HOXD3 and upstream of HOXD1, and it tags SNPs in the HOXD3 3′ UTR [3]. However, Zhao et al. did not report any significant association between rs2072590 and HOXD1 or HOXD3. We think that the non-coding rs2072590 variant may contribute to OC susceptibility by regulating the gene expression of HOXD1 and HOXD3. Here, we conducted a functional annotation of the rs2072590 variant using RegulomeDB (version 1.1) [4], HaploReg (version 4.1) [5], and PhenoScanner (version 1.1) [6].
Using HaploReg, we identified 19 genetic variants tagged by the rs2072590 variant with r2 >= 0.8. Using RegulomeDB, we identified that three genetic variants are likely to affect TF binding + any motif + DNase footprint + DNase peak; the other genetic variants are likely to affect TF binding + DNase peak. Using PhenoScanner (version 1.1), we identified that these 19 genetic variants could significantly regulate the expression of nearby genes, especially HOXD1 and HOXD3 in human ovary tissue.
LD analysis using HaploReg
HaploReg is a tool for exploring annotations of the noncoding genome at variants on haplotype blocks [5]. HaploReg includes LD information from the 1000 Genomes Project, chromatin state and protein binding annotation from the Roadmap Epigenomics and Encyclopedia of DNA Elements (ENCODE) projects, sequence conservation across mammals, the effect of SNPs on regulatory motifs, and the effect of SNPs on gene expression from eQTL studies [5]. We used HaploReg (version 4.1) to identify the rs2072590-tagged variants using the LD information from the 1000 Genomes Project (EUR) with r2 >= 0.8 [5].
Functional annotation using RegulomeDB
RegulomeDB (version 1.1) is a database that annotates SNPs with known and predicted regulatory elements in the intergenic regions of the human genome [4]. Known and predicted regulatory DNA elements include regions of DNase hypersensitivity, binding sites of transcription factors, and promoter regions that have been biochemically characterized to regulate transcription [4]. RegulomeDB (version 1.1) includes public datasets from the Gene Expression Omnibus (GEO), the ENCODE project, and published literature [4].
Functional annotation using PhenoScanner
PhenoScanner (version 1.1) is a curated database holding publicly available results from large-scale GWAS [6]. The motivation for creating this tool is to facilitate "phenome scans", the cross-referencing of genetic variants with a broad range of phenotypes, to help aid the understanding of disease pathways and biology [6]. The catalogue currently contains nearly 3 billion associations and over 10 million unique SNPs [6]. The results are aligned across traits to the same effect and non-effect alleles for each SNP [6]. | 2018-04-03T04:47:09.515Z | 2017-10-13T00:00:00.000 | {
"year": 2017,
"sha1": "e9dfd7679d46340ace70f71b7f0ac9656231e4e9",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/21902/pdf/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e9dfd7679d46340ace70f71b7f0ac9656231e4e9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4467500 | pes2o/s2orc | v3-fos-license | Mesh-reinforced pancreaticojejunostomy versus conventional pancreaticojejunostomy after pancreaticoduodenectomy: a retrospective study of 126 patients
Background Pancreatic fistula is a major cause of morbidity and mortality after pancreaticoduodenectomy. The aim of this study is to compare the safety and efficacy of a newly developed technique, mesh-reinforced pancreaticojejunostomy, with conventional pancreaticojejunostomy after pancreaticoduodenectomy. Methods Data were collected retrospectively on 126 consecutive patients who underwent mesh-reinforced or conventional pancreaticojejunostomy after standard pancreaticoduodenectomy performed by one group of surgeons between 2005 and 2016. Surgical parameters and perioperative outcomes were compared between the two groups. Results A total of 65 patients received mesh-reinforced pancreaticojejunostomy and 61 underwent conventional pancreaticojejunostomy. There were no substantial differences in surgical parameters, mortality, biliary leakage, delayed gastric emptying, gastrojejunostomy leakage, intra-abdominal fluid collection, postpancreatectomy hemorrhage, reoperation, or total hospital costs between the two groups. Pancreatic fistula rate (15 versus 34%; p = 0.013), overall surgical morbidity (25 versus 43%; p = 0.032), and length of hospital stay (18 ± 9 versus 23 ± 12 days; p = 0.016) were significantly reduced after mesh-reinforced pancreaticojejunostomy. Multivariate analysis revealed that the independent factors highly associated with postoperative pancreatic fistula were a soft pancreatic texture and the conventional type of pancreaticojejunostomy. Conclusions This retrospective single-center study showed that mesh-reinforced pancreaticojejunostomy appears to be a safe technique for pancreaticojejunostomy. It may reduce the pancreatic fistula rate and surgical complications after pancreaticoduodenectomy. Trial registration This research is waived from trial registration because it is a retrospective analysis of medical records.
Background
Pancreaticoduodenectomy (PD) has long been the standard surgical procedure for the treatment of patients with malignant or benign diseases of the pancreatic head or the periampullary region. Mortality in patients undergoing PD is below 5% owing to general advances in surgical technique; however, postoperative morbidity remains high at 30-50% [1][2][3]. The main factor is postoperative pancreatic fistula (POPF), which can lead to severe secondary complications such as postoperative hemorrhage and intra-abdominal abscesses [4,5]. Therefore, prevention and adequate treatment of POPF have always been of high priority [6]. Numerous techniques, including pancreaticojejunostomy with duct-to-mucosa anastomosis or intussusception, main duct stenting, and pancreaticogastrostomy, have been described for safe surgical management of the pancreatic remnant; however, no single method has been proven superior [7][8][9][10][11][12][13]. Since August 2005, our institute has attempted to reduce the frequency of pancreatic fistula (PF) by using a new method termed mesh-reinforced anastomosis [14]. We have reported in previous studies that this technique appears to be safe, simple, and quick [14,15]. The purpose of this retrospective study is to compare the perioperative outcomes of mesh-reinforced pancreaticojejunostomy (PJ) with those of conventional PJ, both performed by the same pancreatic team at the same institute.
Database
From August 2005 to November 2016, 126 patients who underwent mesh-reinforced PJ or conventional PJ after PD in our institution were included in this study. Patients' data, including demographics, operative procedures, postoperative complications, and mortality, were retrospectively compared between 65 consecutive patients with mesh-reinforced PJ and 61 consecutive patients with conventional PJ. The perioperative management, including antibiotics, perioperative Octreotide administration, and enteral and parenteral nutrition, was the same in both groups. Definitions of pancreatic fistula (PF), postpancreatectomy hemorrhage (PPH), and delayed gastric emptying (DGE) followed the International Study Group of Pancreatic Surgery (ISGPS) [16][17][18]. Medical morbidity was defined as conditions not related to surgical complications, including cardiac, pulmonary, and renal complications. Follow-up results were obtained from patients' medical records and telephone calls. The diameter of the main pancreatic duct (MPD) 1 year after PD was measured on computed tomography scans and recorded. The last follow-up day was July 10, 2017. The study was approved by the Committee of Ethics of Sir Run Run Shaw Hospital of Zhejiang University. All patients signed a written informed consent acknowledging potential surgical risks.
Operation techniques
In both groups, pancreatoduodenectomy (PD) was performed using the standard procedure [19]. Hemostatic management of the cut surface was done by electric coagulation or suture ligatures with 4-0 prolene stitches (Ethicon, Somerville, NJ). Our technique of mesh-reinforced PJ has been reported previously [14,15]. In brief, the pancreatic remnant was isolated 3 cm in length from the pancreatic transection. A mesh strip (polypropylene mesh, Ethicon, New Jersey, USA) of 1.5 cm in width was tightly wrapped over the pancreatic stump in one circle roughly 1.0 cm from the transection edge. The main pancreatic duct was identified, and subsequently a stent tube was inserted (Fig. 1). The posterior part of the jejunal stump and the inner part of the mesh on the pancreas were anastomosed using continuous 4-0 prolene stitches (Fig. 2). The pancreatic stump invagination was performed by tightening the posterior sutures after all of the posterior stitches were completed. The continuous sutures were subsequently extended to fix the anterior part of the jejunal loop and the inner part of the mesh on the pancreatic stump using the same principle (Fig. 3). Conventional pancreaticojejunostomy (CPJ) with end-to-end anastomosis was performed between the pancreatic stump and the jejunal loop in two layers. The outer layer consisted of the remnant of the pancreatic capsule and the seromuscular layer of the jejunum. The inner layer consisted of the pancreas and the full thickness of the jejunum. In both groups, a stent tube was placed in the main pancreatic duct.
Statistical analyses
Statistical analysis was carried out using SPSS version 20.0 (IBM, Armonk, New York, USA). Categorical variables were presented as numbers and percentages; continuous variables were expressed as mean ± standard deviation (SD) or median (range). Categorical variables were compared using the χ² test, or Fisher's exact test when necessary, and continuous variables with Student's t test. The risk factors of POPF were investigated using logistic regression analysis. Parameters that were significant on univariable analysis (p < 0.100) and/or expected to be important clinically were included in the multivariable logistic regression model. Results were expressed as odds ratios (ORs) with 95% confidence intervals (CI). A p value < 0.05 was considered statistically significant.
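To make the workflow concrete, the sketch below reproduces the same kinds of comparisons in Python instead of SPSS. The 2×2 counts are reconstructed approximately from the reported fistula rates (15% of 65 and 34% of 61); the length-of-stay samples and the logistic-regression covariates are simulated placeholders, so the printed numbers are illustrative only.

```python
# Sketch of the comparisons described above (SPSS was used in the study):
# chi-square / Fisher's exact test for categorical outcomes, Student's t test
# for continuous ones, and logistic regression for POPF risk factors.
# All numbers are invented or approximate placeholders, not the study data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Pancreatic fistula: ~10/65 (mesh-reinforced PJ) vs ~21/61 (conventional PJ)
table = np.array([[10, 55], [21, 40]])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi-square p = {p:.3f}")
print(f"Fisher exact p = {stats.fisher_exact(table)[1]:.3f}")

# Length of stay (days), simulated around the reported mean +/- SD
rng = np.random.default_rng(2)
los_mpj, los_cpj = rng.normal(18, 9, 65), rng.normal(23, 12, 61)
print(f"t-test p = {stats.ttest_ind(los_mpj, los_cpj).pvalue:.3f}")

# Multivariable logistic regression for POPF (hypothetical covariates)
X = sm.add_constant(np.column_stack([
    rng.integers(0, 2, 126),   # soft pancreatic texture (1 = soft)
    rng.integers(0, 2, 126),   # conventional PJ (1 = yes)
]))
y = rng.integers(0, 2, 126)    # POPF outcome (simulated)
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios for texture and PJ type
```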
Patient characteristics and intraoperative data
Patient baseline parameters are shown in Table 1. There were no significant differences between mesh-reinforced PJ and conventional PJ with regard to age (p = 0.312), gender (p = 0.726), body mass index (p = 0.186), and the American Society of Anesthesiologists (ASA) Classification (p = 0.899). Intraoperative parameters are shown in Table 2. No significant differences between the mesh-reinforced PJ and conventional PJ groups with regard to pathologic findings (p = 0.839), pancreatic texture (p = 0.688), and blood loss (p = 0.818) were observed. There was also no statistical difference between the two groups concerning operative time (359 ± 69 vs. 351 ± 63 min, p = 0.487). Table 3 describes the postoperative complications and economic outcomes. The overall in-hospital mortality was recorded as 4.0% (5 of 126). One fatality was recorded in the mesh-reinforced PJ group, and four fatalities occurred in the conventional PJ group. The cause of death was intraperitoneal hemorrhage in the mesh-reinforced PJ group (n = 1); in the conventional PJ group, the causes of death were multiorgan failure (n = 2), intraperitoneal abscess (n = 1), and intraperitoneal hemorrhage (n = 1). There were no substantial differences in mortality between the two groups (p = 0.197). Among patients who survived the operation, seven required reoperation due to pancreatic anastomosis leakage (n = 3), gastrojejunostomy leakage (n = 2), or intraperitoneal hemorrhage (n = 2). The reoperation rate was 3% (2 of 65) in the mesh-reinforced PJ group and 8% (5 of 61) in the conventional PJ group, with no significant difference (p = 0.262). Postpancreatectomy hemorrhage (PPH) was observed in nine patients. All cases occurred more than 24 h after PD and originated from the peripancreatic region. The treatment for PPH included blood transfusion (9/9), arterial embolization (4/9), and relaparotomy for severe PPH (2/9). Postpancreatectomy hemorrhage revealed no substantial difference between the two groups (p = 0.454). Delayed gastric emptying (DGE), biliary leakage, and gastrojejunostomy leakage, as well as intra-abdominal fluid collection, also showed no significant differences between the two groups (Table 3). The incidence of POPF was significantly different between the mesh-reinforced PJ and conventional PJ groups (15 vs. 34%, respectively; p = 0.013) (Table 3). Eleven cases of grade B pancreatic fistula were managed by antibiotics and total parenteral nutrition, and all recovered well after conservative treatment. Treatment for grade C pancreatic fistula included percutaneous drainage (4/4) and re-laparotomy (3/4). One grade C case in the mesh-reinforced PJ group was treated by re-laparotomy and ultimately recovered well. Two grade C cases in the conventional PJ group received re-laparotomy and died from POPF-induced sepsis and bleeding. Total surgical morbidity (25 versus 43%; p = 0.032) and postoperative length of hospital stay (18 ± 9 days vs. 23 ± 12 days; p = 0.016) were significantly different between the mesh-reinforced PJ and conventional PJ groups (Table 3). There was no substantial difference in total hospital costs (106,265 ± 1231 RMB vs. 114,265 ± 1349 RMB; p = 0.142) between the mesh-reinforced PJ and conventional PJ groups (Table 3). Forty-one cases in the mesh-reinforced PJ group were evaluated at the 1-year follow-up (Table 4). No statistical differences between the two groups concerning dilated pancreatic duct were observed (p = 0.578).
Risk factors for development of pancreatic fistula
Predictors of PF for all patients are shown in Table 5. Factors significantly increasing the risk of PF by univariate logistic regression analysis included soft pancreatic texture, ampullary or duodenal disease, and type of conventional pancreaticojejunostomy (p < 0.05). A multivariate logistic regression analysis revealed that the independent factors that were highly associated with PF were soft pancreatic texture and type of conventional pancreaticojejunostomy (p < 0.05).
Discussion
Despite the remarkable progress in surgical technique and perioperative care during the last decades, the pancreatic-enteric anastomosis remains the "Achilles heel" of PD. Although more than 80 different methods have been described for safe surgical management of the pancreatic-enteric anastomosis, none has been proven superior to the others and consequently become widely accepted [20]. Because most of these surgical methods include stitches that penetrate the pancreatic parenchyma and the soft pancreatic tissue, they leave the pancreas vulnerable to the formation of PF [21,22]. Peng and coworkers [23] performed a comprehensive three-layer invagination anastomosis, called binding anastomosis, to protect the pancreatic anastomosis from PF. Remarkably, a 0% rate of PF has been reported using this technique. However, this procedure involves complex and troublesome maneuvers. To effectively prevent PF, we have designed a new technique, namely "mesh-reinforced pancreaticojejunostomy." This technique, using single-layer continuous suturing, is far less complex than the binding pancreaticojejunostomy. In the present study, there was no significant difference between mesh-reinforced PJ and conventional PJ with regard to operative time (p > 0.05). This new technique of mesh-reinforced pancreaticojejunostomy is favored for patients with a soft pancreatic remnant. Mesh around the pancreatic remnant provides a safe anchor site for the suture, which is particularly suitable for soft pancreatic parenchyma, helping to avoid anastomotic dehiscence. In the present study, the rate of overall PF was 15% in the mesh-reinforced PJ group and 34% in the conventional PJ group, a significant difference (Table 3). Several previous studies have evaluated risk factors for pancreatic fistula after pancreatic-enteric anastomosis. These risk factors include age, prolonged jaundice, and intraoperative blood loss, all of which have been associated with an increased risk of PF [24,25]. In the current study (Table 5), demographic factors such as age, gender, and prolonged jaundice were not statistically associated with pancreatic fistula. Intraoperative parameters such as soft pancreatic texture and pathology diagnoses were found to positively correlate with the risk of PF on univariate analysis (p < 0.05). However, pathological diagnosis failed to maintain its statistical significance in the multivariate model. In the stepwise multivariate logistic regression analysis (Table 5), the independent factors influencing PF rates were the texture of the organ and the conventional type of pancreaticojejunostomy (p < 0.05). These results are similar to those of a number of studies [26][27][28], which reported that a soft pancreatic remnant is more likely to develop PF.
Identifying new surgical techniques that can substantially decrease mortality rates is a challenging task, because operative mortality in patients undergoing PD is already low. Therefore, the length of postoperative hospital stay has become an important representation of the patients' condition and surgical outcome. A shorter postoperative hospital stay is considered a marker of less-invasive surgical procedures and lower medical expense. Our data demonstrate that the length of hospital stay was significantly reduced in the mesh-reinforced PJ group (18 ± 9 vs. 23 ± 12 days; p = 0.016). This suggests that mesh-reinforced pancreaticojejunostomy promotes a fast-track postoperative course. The likely explanation is that surgical complications and the POPF rate were significantly less common in the mesh-reinforced PJ group (p < 0.05; Table 3), and the occurrence of POPF and surgical complications contributes to an increased length of postoperative hospital stay. The cost of the mesh increased hospital costs, but there was no significant difference in total hospital cost between the two groups (p > 0.05; Table 3). We found that the shorter postoperative hospital stay and the lower rates of surgical complications and POPF in the mesh-reinforced PJ group (p < 0.05; Table 3) reduced the total hospital cost. The advantages of mesh-reinforced PJ [14,15] are as follows: firstly, the mesh provides a safe anchor site for the suture, avoiding laceration of the anastomosis and postoperative bleeding; secondly, mesh compression of the pancreatic tissue minimizes the chance of pancreatic leakage and bleeding; thirdly, the mesh is thought to stimulate fibroblast growth and enhance the anastomotic healing process. We consider that these advantages of the mesh decreased the pancreatic fistula rate, the hospital stay, and the complication rate.
Theoretically, the use of mesh has potential disadvantages. As an implanted foreign body, mesh may increase the risk of intra-abdominal infection. However, our data showed no statistical difference between the two groups concerning the occurrence of abdominal infections (p > 0.05). We think this may be because the mesh was completely wrapped by the jejunal loop during the procedure. On the other hand, the mesh has contractility, which may result in pancreatic atrophy or pancreatic duct dilation. Prolene mesh has a contractility of around 20% [29]. We used polypropylene-mesh reinforcement in 10 pigs in an animal experiment, which showed that the pancreatic duct was dilated and the mesh was rejected 3 months after mesh-reinforced pancreatojejunostomy in all of the pigs. However, the pancreatic duct of the pig was too fine for a stent tube to be placed inside. Our data showed no statistical difference between the two groups concerning dilated pancreatic duct at the 1-year follow-up, demonstrating that the stent tube withstood compression by the mesh strip. Unfortunately, anastomotic patency and postoperative pancreatic function were not examined in our study.
Conclusions
In conclusion, this retrospective single-center study showed that mesh-reinforced pancreaticojejunostomy appears to be a simple and safe technique for pancreaticojejunostomy. It provided a safe anchor site for the suture, which was especially suitable for soft and fragile pancreatic tissue, avoiding laceration of the anastomotic sutures and preventing pancreatic leakage. It should be applicable to all types of pancreatic remnant. Our study was a retrospective analysis of medical records and has all the disadvantages associated with any retrospective series. A prospective randomized controlled study is necessary to confirm the efficacy of this procedure in the future. | 2018-03-28T11:27:45.875Z | 2018-03-27T00:00:00.000 | {
"year": 2018,
"sha1": "ac96a1d4e0a8b753f564ff5e5b10cf436ac6cfc6",
"oa_license": "CCBY",
"oa_url": "https://wjso.biomedcentral.com/track/pdf/10.1186/s12957-018-1365-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac96a1d4e0a8b753f564ff5e5b10cf436ac6cfc6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216825421 | pes2o/s2orc | v3-fos-license | APPROACHES FOR MONITORING THE LEVEL OF PROVIDING MUNICIPAL ADMINISTR ATIVE SERVICES ELECTRONICALLY (UKR AINIAN CASE)
In this article, several ways of creating a methodology for monitoring e-government progress and perspectives at the local level are analysed. The article seeks to review and contextualize the wealth of e-administration research in post-communist countries. On the basis of evolving monitoring practices, some recommendations are proposed on how to improve the quality of municipal e-services assessment.
Introduction
The increasingly active role of municipalities in building the information society has drawn further attention to e-governance initiatives at regional and sub-regional levels. The crucial question for both academic and policy-making communities is: are local e-government monitoring tools for post-communist countries adequate for measuring progress in building truly effective, responsive, and accountable local self-government?
Therefore, as a result of the different approaches adopted, most scholars explore front-office as well as back-office aspects of e-government from only one perspective: that of the civic sector, businesses, or public officials. These studies mainly focus on the provision of information services. However, the values of local administrations and their changes seem essential, as certain services in post-communist countries still depend on the human factor, and the administrative culture does not yet fully treat citizens as partners. E-administration could also be assessed from a transformation point of view: does it help introduce a new philosophy of public administration based on enhanced civic engagement in post-Soviet countries?
Objective and methodology
The objective of this article is to give an overview of the concept of assessing municipal e-government and to reveal the evolution of monitoring tools applied in Ukraine. We start with a review of current experience, discuss the digital market in Ukraine, and finally outline the latest trends in the e-government assessment process along with the key challenges in Ukraine.
A method of theoretical, logical, and systemic analysis of the literature (scientific papers, policy documents, and statistical sources) was used to study various views on monitoring the level of providing municipal e-services and to outline recent trends in Ukraine. Methods of comparative analysis (to compare various approaches to building indicators) were also applied in the article.
Results and discussion
So far, there have been very few studies evaluating the performance of e-government at regional and local levels; it is a new field of research in Ukraine. The main Ukrainian studies belong to the "first generation" of e-government evaluation: they focus on the problems of implementing the e-Ukraine concept or review the problem of assessing e-government effectiveness (Chmelyova, Zolotar, 2014; Kondratenko, 2011; Novosad, Seliverstov, Yurynets, 2011).
The first complex attempt to assess the level of introduction of e-administration instruments in Ukrainian regions was performed in 2015. In previous years, assessments of the efficiency of the sites of oblast councils, local councils of oblast centers, and councils of the second-largest cities in Ukraine had already been conducted by the Civil Society Institute NGO, in particular:
- 2008 - sites of oblast councils (NUTS 2),
- 2009 - sites of local councils of oblast centers (NUTS 3),
- 2010 - sites of the second-largest cities (LAU) in Ukraine.
It is also important to note the complexity of the OSI measurement method, as the assessment involves qualitative rather than quantitative values. It concerns four stages of developing and providing online services (United Nations…, 2014):
a) stage 1: emerging information services: government websites providing information on public policies, governance, laws, regulations, relevant documents, and the types of government services provided;
b) stage 2: enhanced information services: government websites delivering enhanced one-way or simple two-way e-communication between government and citizens, such as downloadable forms of government services and applications;
c) stage 3: transactional services: government websites engaged in two-way communication between the government and citizens, which can include requesting and receiving inputs on government policies, programmes, and regulations; citizens can get specialized data and download various forms after electronic authentication of their identity;
d) stage 4: connected services: government websites use Web 2.0 and other interactive tools to communicate with citizens. E-services and e-solutions cut across departments and ministries in a seamless way; information, data, and knowledge are transferred from government agencies through integrated applications. The government creates an environment that empowers citizens to be more involved in government activities and to have a voice in decision-making.
In early 2013, the Coalition of NGOs monitored the efficiency of the introduced electronic governance system in 100 municipalities of Ukraine; the monitoring included the analysis of the development and efficiency of use of official websites and electronic document circulation systems in the local self-government bodies of selected cities (Kuspliak, Serenok, 2014). On the basis of the research, a rating of local self-government bodies was made according to how actively they use e-administration instruments; the results were analyzed and summarized, and recommendations were developed for local self-government bodies regarding transparency, openness, and work optimization through the use of information and communications technologies.
The basic ground for e-administration monitoring is Ukrainian legislation containing regulations on the use of information and communications technologies (ICT). Thus, a number of laws oblige local self-government bodies to use ICT: 1. Law of Ukraine of 1 June 2010 No. 2289-VI On Public Procurement. On the basis of these laws, and partly on the OSI measurement, certain criteria for the assessment of the front and back offices of local councils were formed. At the same time, some points important for territorial communities were included in the list of criteria. During this attempt, a system of indicators divided into five categories was used (Kuspliak, Serenok, 2014); the fifth was the timely content updates category (the number of points for this category is 22). It should be added that the last category assessed the promptness of content updates, divided into eight types of information, according to the legislation:
- news - not later than 1 day (news for the current or previous day is published),
- draft regulatory acts - not later than 5 working days after publishing a notification about the promulgation of the regulatory act,
- other draft decisions - 20 days before their adoption,
- decisions of the city council - within 5 days after their signing,
- decisions of the city mayor - within 5 days after their signing,
- decisions of executive bodies' chairmen - within 5 days after their signing,
- reports of the city mayor - within 5 days after their signing,
- income declaration of the city mayor for the last year - not later than 30 days after its submission.
Thus, the official website of every selected local self-government body was analyzed according to 98 indicators, and the maximum number of points was 165. On the basis of the assessment of the official websites of LSGBs, a rating of local self-government bodies was created according to the total number of points, overall and in every category.
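The content-update timeliness rules above lend themselves to automated checking. The following Python sketch is one possible encoding; the item-type names and the advance-publication handling are our own assumptions, not part of the published methodology.

```python
# One way the content-update timeliness criteria listed above could be
# encoded for automated monitoring. The deadlines follow the list in the
# text; field names and the encoding scheme are illustrative assumptions.
from datetime import date

# Maximum allowed delay (in days) between the triggering event and publication;
# a negative value means the item must appear that many days BEFORE the event.
DEADLINES = {
    "news": 1,
    "draft_regulatory_act": 5,
    "other_draft_decision": -20,  # 20 days before adoption
    "council_decision": 5,
    "mayor_decision": 5,
    "executive_decision": 5,
    "mayor_report": 5,
    "mayor_income_declaration": 30,
}

def is_timely(item_type: str, event: date, published: date) -> bool:
    limit = DEADLINES[item_type]
    if limit < 0:  # must be published in advance of the event
        return (event - published).days >= -limit
    return (published - event).days <= limit

# Example: a council decision signed 2018-03-01 and published 2018-03-05
print(is_timely("council_decision", date(2018, 3, 1), date(2018, 3, 5)))  # True
```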
In the case of the Centers of Rendering Administrative Services (CRAS) research, the main attention was paid not to assessing the quality of services provided by CRASs, but to analyzing the quality of their basic functioning elements, such as the electronic queue, information terminals and stands, accompanying services, and the conditions created for people with disabilities. This category contains 11 indicators, with a maximum of 13 points (Kuspliak, Serenok, 2014).
Therefore, as a result of the methodology adopted, this monitoring concentrates on only one perspective, that of citizens.
In a later approach, taking into account the developments that have taken place in recent years in the field of local e-government assessment, six key measurement dimensions were identified, namely (Donetsk and Lugansk…, 2018):
- measuring organizational capacity and development of technical infrastructure (back-office),
- measuring the information content of the official websites of the target authorities and ensuring the principles of the availability of web content in their work (front-office),
- measuring the use of e-participation tools in the target authorities (front-office),
- measuring access to public information in the target authorities in the form of open data (front-office),
- measuring access to administrative services electronically in the target authorities (front-office),
- measuring the scale of the practice of implementing electronic document management systems in the target authorities (back-office).
Further changes in finding proper indicators were made:
1. Green means that there is convincing evidence of a high level of implementation (use) of e-government tools or activities.
2. Yellow means that evidence of a high level of implementation (use) of e-government tools or activities is not so obvious.
3. Red means there is strong evidence of problems with the implementation (use) of e-government tools or activities.
4. Gray means that there is not enough information to evaluate the implementation (use) of e-government tools or activities.
This approach is hard to compare with the previous model, where every selected local self-government body was analyzed according to a system of indicators correlated with points.
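The four-level evidence scale and the six dimensions combine naturally into a traffic-light report card per municipality. The Python sketch below shows one possible encoding; the shortened dimension names and the example assessment are invented, not the published scoring instrument.

```python
# One way to encode the four-level evidence scale above for the six
# measurement dimensions. Dimension names are shortened and the example
# assessment is invented; this is not the published scoring instrument.
EVIDENCE_COLORS = {
    "strong_positive": "green",   # convincing evidence of a high level of use
    "weak_positive": "yellow",    # evidence of a high level is not so obvious
    "strong_negative": "red",     # strong evidence of implementation problems
    "insufficient": "gray",       # not enough information to evaluate
}

DIMENSIONS = [
    "organizational capacity / infrastructure (back-office)",
    "website information content (front-office)",
    "e-participation tools (front-office)",
    "open data (front-office)",
    "e-services access (front-office)",
    "e-document management (back-office)",
]

# Hypothetical assessment of one municipality
assessment = dict(zip(DIMENSIONS, [
    "strong_positive", "weak_positive", "strong_negative",
    "insufficient", "weak_positive", "strong_positive",
]))

for dim, level in assessment.items():
    print(f"{EVIDENCE_COLORS[level]:>6}  {dim}")
```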
The assessment of the electronic document circulation system constitutes a very specific domain and is quite complicated. In addition, there are gaps in Ukrainian legislation concerning the implementation and use of the electronic document circulation system in the activities of government bodies. In 2003, two relevant laws were adopted in Ukraine - On electronic documents and electronic document circulation, and On electronic signatures. They determine basic notions and paperwork requirements and the general principles of electronic document circulation; give the definition of concepts, features, legal status, and constituent elements of a digital signature; characterize concepts and requirements for digital signature key certificates, conditions, and safety measures; and set out the general principles of operation of key certification centers. However, according to experts, these laws pay little attention to the mechanism of implementing the electronic document circulation system in governmental bodies.
Some Ukrainian cities have been actively introducing the electronic document circulation system for a long time and have achieved certain results. However, many factors negatively influence the introduction of fully functioning electronic document circulation systems by local self-government bodies. These include the absence of a unified strategy for the step-by-step introduction of electronic document circulation with defined financing; the absence of standardized certified programs and security rules; the unwillingness of senior officials to give up manual administration and the paperwork culture of local government employees; the problem of data storage reliability and smooth operation of the system; and budget limitations.
The exploratory findings need to be considered carefully. Yet they are still interesting, because little empirical research has explored this complex matter. The following research limitations should be taken into account:
a) it is hard to estimate the reliability of information published on the official sites of local self-government bodies;
b) some of the assessed information was located on a website connected to the official website of a city council;
c) there were difficulties in receiving answers to information requests from city councils: the answers were delayed and usually incomplete or contradictory;
d) the validity of monitoring the quality of the basic functioning elements of CRASs is questionable, as such monitoring would require more user opinions to form objective results.
Conclusions
In transforming into a networked society, Ukraine is trying to build horizontal connections between the state, municipalities, and citizens. In the course of adopting the values of the EU administrative space, building effective and accountable e-government would help facilitate the capacity to manage resources, implement sound policies, and better satisfy the needs of citizens. Therefore, it is timely to shed some light on public value and how to use it for monitoring e-government service performance, given its comprehensiveness.
"Second generation" of evaluating e-government in Ukraine focuses not only on provision of information services, but trying to find effective ways of involving citizens in public affairs on regional and local levels. Public value-based e-Government services monitoring should be understood at the regional and local levels.
The issues analysed by different approaches lead to different outcomes and give only part of the answer to the question of what the level of e-government is in a given local community. The correct evaluation of e-government at regional and local levels should concentrate more on the effectiveness of municipalities, the quality of public service delivery, and the building of transparency and accountability. The relationships among those dimensions, and how they relate to each other in the field of e-government performance, can be clarified in future research. | 2020-04-30T09:11:56.673Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "54af5ad77cf0a90862591b605f33e1f2aa0a9fff",
"oa_license": "CCBYSA",
"oa_url": "https://wnus.edu.pl/ejsm/file/article/view/18474.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "06d1590617dbe506aa3af3f105aadc87b5dac358",
"s2fieldsofstudy": [
"Political Science",
"Computer Science"
],
"extfieldsofstudy": [
"Business"
]
} |
16115193 | pes2o/s2orc | v3-fos-license | The role of nanoparticle shapes and deterministic aperiodicity for the design of nanoplasmonic arrays
In this paper, we study the role of nanoparticle shape and aperiodic arrangement in the scattering and spatial localization properties of plasmonic modes in deterministic-aperiodic (DA) arrays of metal nanoparticles. By using an efficient coupled-dipole model for the study of the electromagnetic response of large arrays excited by an external field, we demonstrate that DA structures provide enhanced spatial localization of plasmonic modes and a higher density of enhanced field states with respect to their periodic counterparts. Finally, we introduce and discuss specific design rules for the engineering and optimization of field enhancement and localization in DA arrays. Our results, which we fully validated by rigorous Generalized Mie Theory (GMT) and transition matrix (T-matrix) theory, demonstrate that DA arrays provide a robust platform for the design of a variety of novel optical devices with enhanced and controllable plasmonic fields. © 2009 Optical Society of America
OCIS codes: (240.6680) Surface plasmons; (240.6695) Surface-enhanced Raman scattering; (050.6624) Subwavelength structures; (290.4020) Mie theory.
References and links
1. M. Queffelec, Substitution Dynamical Systems-Spectral Analysis, Lecture Notes in Mathematics (Springer, Berlin, 1987), Vol. 1294.
2. E. Macia, "The role of aperiodic order in science and technology," Rep. Prog. Phys. 69, 397-441 (2006).
3. S. G. Williams, Symbolic Dynamics and its Applications (American Mathematical Society, Providence, RI, 2004); ISBN 0821831577.
4. M. R. Schroeder, Number Theory in Science and Communication (Springer-Verlag, 1985).
5. P. Prusinkiewicz and A. Lindenmayer, The Algorithmic Beauty of Plants (Springer, New York, 1990).
6. C. Janot, Quasicrystals: a primer, 2nd ed. (Oxford University Press, New York, 1997).
7. M. Dulea, M. Johansson, and R. Riklund, "Localization of electrons and electromagnetic waves in a deterministic aperiodic system," Phys. Rev. B 45, 105-114 (1992).
8. L. Kroon, E. Lennholm, and R. Riklund, "Localization-delocalization in aperiodic systems," Phys. Rev. B 66, 094204 (2002).
9. A. Gopinath, S. V. Boriskina, N. N. Feng, B. M. Reinhard, and L. Dal Negro, "Photonic-plasmonic scattering resonances in deterministic aperiodic structures," Nano Lett. 8, 2423-2431 (2008).
10. A. Rudinger and F. Piechon, "On the multifractal spectrum of the Fibonacci chain," J. Phys. A: Math. Gen. 31, 155-164 (1998).
11. L. Dal Negro, N. N. Feng, and A. Gopinath, "Electromagnetic coupling and plasmon localization in deterministic aperiodic arrays," J. Opt. A, Pure Appl. Opt. 10, 064013 (2008).
12. J. M. Luck, "Cantor spectra and scaling of gap widths in deterministic aperiodic systems," Phys. Rev. B 39, 5834-5849 (1989).
13. L. Dal Negro and N. Feng, "Spectral gaps and mode localization in Fibonacci chains of metal nanoparticles," Opt. Express 22, 14396-14403 (2007).
14. L. Dal Negro, C. Forestiere, G. Miano, and G. Rubinacci, "Role of aperiodic order in the spectral, localization, and scaling properties of plasmon modes for the design of nanoparticle arrays," Phys. Rev. B 79, 85404 (2009).
15. A. Gopinath, S. Boriskina, B. Reinhard, and L. Dal Negro, "Deterministic aperiodic arrays of metal nanoparticles for surface-enhanced Raman scattering," Opt. Express 17, 3741-3753 (2009).
16. D. W. Brandl, N. A. Mirin, and P. Nordlander, "Plasmon modes of nanosphere trimers and quadrumers," J. Phys. Chem. B 110, 12302-12310 (2006).
17. E. M. Purcell and C. R. Pennypacker, "Scattering and absorption of light by nonspherical dielectric grains," Astrophys. J. 186, 705-714 (1973).
18. B. T. Draine, "The discrete dipole approximation and its application to interstellar graphite dust," Astrophys. J. 333, 848-872 (1988).
19. K. L. Kelly, E. Coronado, L. Zhao, and G. C. Schatz, "The optical properties of metal nanoparticles: the influence of size, shape, and dielectric environment," J. Phys. Chem. B 107, 668-677 (2003).
20. L. Zhao, K. L. Kelly, and G. C. Schatz, "The extinction spectra of silver nanoparticle arrays: influence of array structure on plasmon resonance wavelength and width," J. Phys. Chem. B 107, 7343-7350 (2003).
21. C. L. Haynes, A. D. McFarland, L. Zhao, R. P. Van Duyne, and G. C. Schatz, "Nanoparticle optics: the importance of radiative dipole coupling in two-dimensional nanoparticle arrays," J. Phys. Chem. B 107, 7337-7342 (2003).
22. S. Zou and G. C. Schatz, "Theoretical studies of plasmon resonances in one dimensional nanoparticle chains: narrow lineshapes with tunable widths," Nanotechnology 17, 2813-2820 (2006).
23. Y.-L. Xu, "Electromagnetic scattering by an aggregate of spheres," Appl. Opt. 34, 4573-4588 (1995).
24. T. Wriedt and A. Doicu, Light Scattering by Systems of Particles (Springer, Berlin, 2006).
25. T. Wriedt, "A review of elastic light scattering theories," Part. Part. Syst. Charact. 15, 67-74 (1998).
26. L. Tsang and J. Au Kong, Scattering of Electromagnetic Waves (John Wiley, NY, 2001).
27. S. A. Maier, P. G. Kik, and H. A. Atwater, "Optical pulse propagation in metal nanoparticle chain waveguides: estimation of waveguide loss," Phys. Rev. B 67, 205402 (2005).
28. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley, 2004).
29. P. Nordlander, C. Oubre, E. Prodan, K. Li, and M. I. Stockman, "Plasmon hybridization in nanoparticle dimers," Nano Lett. 4, 899-903 (2004).
30. W. Rechberger, A. Hohenau, A. Leitner, J. R. Krenn, B. Lamprecht, and F. R. Aussenegg, "Optical properties of two interacting gold nanoparticles," Opt. Commun. 220, 137-141 (2003).
31. V. Shalaev, Optical Properties of Nanostructured Random Media (Springer-Verlag, 2002).
32. K. Li, M. I. Stockman, and D. J. Bergman, "Self-similar chain of metal nanospheres as an efficient nanolens," Phys. Rev. Lett. 91, 227402 (2003).
Introduction
Understanding plasmonic excitations in deterministic structures without translational invariance offers a vastly unexplored potential for the creation and manipulation of subwavelength localized electromagnetic fields. Deterministic-aperiodic (DA) structures, which are generated by the mathematical rules of symbolic dynamics and number theory [1][2][3][4][5], manifest unique light localization and transport properties associated with a great structural complexity [6][7][8] and can be conveniently fabricated using conventional nano-lithographic techniques [9]. These structures, which are intermediate between disordered systems and periodic ones, enable a unique control and manipulation of spatially localized plasmonic states over broadband frequency and angular spectra. When fabricated using metal nanoparticles on dielectric substrates, DA structures give rise to large plasmonic gaps, as for periodic media (i.e. photonic-plasmonic crystals), and to nanoscale field localization with strong electric field enhancement (with respect to the incident wave), like disordered random media with fractal geometries. However, differently from roughened metal surfaces and random systems, which are irreproducible and lack simple design rules, DA structures are amenable to engineering and deterministic optimization. The spatial complexity of DA structures is described by the corresponding spatial Fourier spectra, which, in contrast to periodic structures, densely fill the reciprocal space with multi-fractal features [6,[9][10][11]. In this paper, we explore DA plasmonic arrays of metal nanoparticles for the design of sub-wavelength optical fields. In particular, the discussion will be focused on the scattering and field localization properties of DA arrays generated according to the Fibonacci, Thue-Morse (TM) and Rudin-Shapiro (RS) sequences, which we have recently generalized into two spatial dimensions using simple symbolic inflations [11]. These structures are the representative members of the three known classes of deterministic non-periodic systems characterized by quasi-periodic, singular-continuous and absolutely-continuous (flat) Fourier spectra, respectively [12]. Unlike random structures, the positions of nanoparticles in these DA arrays are uniquely specified by choosing the deterministic generation rule and the minimum inter-particle separation, enabling the systematic study and optimization of their plasmonic and scattering properties. We have recently explored and investigated the spectral, localization, and dispersion properties of dipolar eigenmodes in linear and two-dimensional DA arrays of spherical nanoparticles arranged according to Fibonacci, TM, and RS sequences [11,13]. In Fibonacci and TM systems we demonstrated the presence of large spectral gaps in the pseudo-dispersion diagrams of plasmonic modes, and we clearly established the connection between the quasiperiodic geometry of Fibonacci arrays and the resulting spectral properties of their plasmonic modes [14]. Moreover, by combining electron-beam lithography (EBL), experimental dark-field scattering and Raman spectroscopy with accurate electrodynamics calculations based on the Generalized Mie Theory (GMT), we recently demonstrated broadband, distinctive scattering resonances in DA arrays of Au nanoparticles [9] as well as spatially-averaged Raman enhancement factors of the order of ~10^7 in DA arrays of Au nano-triangles [15]. However, in the field of metal plasmonic nanostructures, the study and the device applications of DA structures are still in their infancy. In particular, the accurate understanding of the optical scattering and localization properties of large arrays of metal nanoparticles without translational invariance poses significant challenges to the numerical solution methods of classical electrodynamics. In this paper, we validate and utilize an efficient numerical method based on coupled dipoles in order to study, at reduced computational costs, large DA arrays (more than a thousand interacting nanoparticles) excited by an external field. In addition, we discuss specific design rules for the engineering and optimization of field enhancement and localization with respect to the array geometry and the shapes of the nanoparticles. Our results demonstrate that DA structures provide a robust platform for the engineering of novel nanoplasmonic devices, with plasmonic localization and enhancement distributed over larger areas compared to periodic arrays of metal nanoparticles.
Computational method
The near- and far-field properties of arbitrary scattering objects can be efficiently calculated using the coupled-dipole approximation (CDA), originally developed by Purcell and Pennypacker [17] and improved by Draine [18]. This method, also known as the discrete dipole approximation (DDA), has been widely utilized in the context of periodic arrays of metal particles [19][20][21][22] due to its fast convergence and convenient implementation. In the following, we will briefly review the main features of this method when applied to arrays of metal nanoparticles with ellipsoidal shapes. To this purpose, we introduce a rectangular coordinate system whose fundamental directions $\hat{x},\hat{y},\hat{z}$ are chosen to be coincident with the three principal axes of the ellipsoids. The position vectors of the centers of the ellipsoids are indicated with $\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N$, where $N$ denotes the number of nanoparticles; $\mathbf{p}_h$ denotes the electric dipole moment of the h-th particle and $\mathbf{E}_h$ the electric field generated at position $\mathbf{r}_h$ by all the particles and external sources in the frequency domain. The dipole moment induced on the h-th particle by the value of the electric field at its center, minus the field generated by the h-th particle itself, is given by:

$$\mathbf{p}_h = \overline{\overline{\alpha}}_0\,\mathbf{E}_h$$

where $\overline{\overline{\alpha}}_0$ is the quasi-static polarizability dyad of the ellipsoid of volume $V = 4\pi abc/3$ ($a$, $b$, $c$ are the semi-axes of the ellipsoid along the $x$, $y$, and $z$ directions, respectively), and $\overline{\overline{A}} = \mathrm{diag}(A_a, A_b, A_c)$ is the diagonal dyad of the depolarizing coefficients $A_a$, $A_b$, $A_c$, whose expressions are given by the integrals:

$$A_a = \frac{abc}{2}\int_0^{\infty}\frac{dq}{(q+a^2)^{3/2}(q+b^2)^{1/2}(q+c^2)^{1/2}}$$

and cyclic permutations for $A_b$ and $A_c$. In order to correctly take into account the contributions of radiative damping and of the depolarization of the radiation across the particle surface (due to the finite ratio of particle size to wavelength), we correct the quasi-static polarizability according to the Modified Long Wavelength Approximation (MLWA) [19], which extends the validity of CDA models to particles of finite radius. In this approximation, the electronic polarizability along each principal axis $i$ is expressed as:

$$\alpha_i = \frac{\alpha_{0,i}}{1 - \frac{2}{3}\,i k^3 \alpha_{0,i} - \frac{k^2}{\ell_i}\,\alpha_{0,i}}$$

where $k$ is the wavenumber and $\ell_i$ the semi-axis along the $i$ direction. The value of the electric field generated by the k-th particle at the center of the h-th particle can be expressed as:

$$\mathbf{E}_{hk} = \frac{e^{ikr_{hk}}}{r_{hk}}\left[k^2(\hat{\mathbf{r}}_{hk}\times\mathbf{p}_k)\times\hat{\mathbf{r}}_{hk} + \frac{1-ikr_{hk}}{r_{hk}^2}\left(3\,\hat{\mathbf{r}}_{hk}(\hat{\mathbf{r}}_{hk}\cdot\mathbf{p}_k) - \mathbf{p}_k\right)\right]$$

where $\mathbf{r}_{hk} = \mathbf{r}_h - \mathbf{r}_k$ and $\hat{\mathbf{r}}_{hk} = \mathbf{r}_{hk}/r_{hk}$. The field $\mathbf{E}_h$ is equal to the sum of the external electric field and the electric fields generated by all the other particles:

$$\mathbf{E}_h = \mathbf{E}_0(\mathbf{r}_h) + \sum_{k\neq h}\mathbf{E}_{hk}$$

By defining the transfer dyad $\overline{\overline{G}}_{hk}$ of the system through $\mathbf{E}_{hk} = \overline{\overline{G}}_{hk}\,\mathbf{p}_k$, we can now formulate the full scattering problem as a set of $N$ inhomogeneous linear equations which can be solved numerically once the excitation source is known:

$$\mathbf{p}_h - \overline{\overline{\alpha}}\sum_{k\neq h}\overline{\overline{G}}_{hk}\,\mathbf{p}_k = \overline{\overline{\alpha}}\,\mathbf{E}_0(\mathbf{r}_h), \qquad h = 1,\ldots,N$$
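As a minimal illustration of how the linear system above can be assembled and solved, the sketch below implements a scalar-polarizability version of the CDA with Draine's dipole-dipole interaction term and the MLWA correction. The scalar (rather than dyadic) polarizability, the Gaussian-like units, and the uniform normal-incidence illumination are simplifying assumptions.

```python
# Sketch of a CDA solver (scalar polarizability, Gaussian-like units).
import numpy as np

def mlwa(alpha0, k, a_eff):
    """MLWA correction: radiative damping + dynamic depolarization;
    a_eff is an effective particle radius (an assumption here)."""
    return alpha0 / (1 - (2/3) * 1j * k**3 * alpha0 - (k**2 / a_eff) * alpha0)

def interaction(rj, rk, k):
    """Draine's off-diagonal 3x3 interaction block A_jk."""
    d = rj - rk
    r = np.linalg.norm(d)
    n = d / r
    nn = np.outer(n, n)
    I3 = np.eye(3)
    return (np.exp(1j * k * r) / r) * (
        k**2 * (nn - I3) + ((1j * k * r - 1) / r**2) * (3 * nn - I3))

def solve_cda(positions, alpha, k, E0):
    """Solve sum_k A_jk p_k = E_inc,j (with A_jj = 1/alpha) for N dipoles."""
    N = len(positions)
    A = np.zeros((3 * N, 3 * N), dtype=complex)
    for j in range(N):
        A[3*j:3*j+3, 3*j:3*j+3] = np.eye(3) / alpha
        for m in range(N):
            if m != j:
                A[3*j:3*j+3, 3*m:3*m+3] = interaction(positions[j],
                                                      positions[m], k)
    Einc = np.tile(E0, N)          # uniform phase at normal incidence
    return np.linalg.solve(A, Einc).reshape(N, 3)
```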
Model validation
We have validated the CDA model summarized above against the Generalized Mie Theory (GMT) [23] and the transition matrix (T-matrix) theory [24,25], which are to date the most accurate and efficient semi-analytical techniques of computational electromagnetics. Both theories provide an analytical solution to the full Maxwell's equations, including retardation effects and all the necessary multipolar scattering orders, enabling the most accurate treatment of both the near- and the far-field response of large nanoparticle arrays of arbitrary geometries. However, the applicability of the GMT method is limited to spherical particles, while the T-matrix can in principle be applied to arbitrarily shaped particles, although its convergence is severely degraded for large arrays of non-spherical particles.
In the T-matrix method, the incident and scattered fields are expanded into series of vector spherical functions; the scattered field is expanded in terms of the outgoing spherical vector functions $\mathbf{M}^3_n$ and $\mathbf{N}^3_n$ [24]:

$$\mathbf{E}_s = \sum_n\left[f_n\,\mathbf{M}^3_n + g_n\,\mathbf{N}^3_n\right]$$

The expansion coefficients of the scattered field are related to the coefficients $(a_n, b_n)$ of the incident field by the T-matrix (transition matrix) of the system:

$$\begin{pmatrix} f \\ g \end{pmatrix} = T \begin{pmatrix} a \\ b \end{pmatrix}$$

The standard scheme for computing the elements of the T-matrix relies on the null-field method [26]. For a homogeneous particle, the transition matrix is given by [24]:

$$T = -Q^{11}\,(Q^{31})^{-1}$$

where the matrices $Q^{31}$ and $Q^{11}$ can be evaluated from the extinction theorem and the Huygens principle, respectively. These matrices can be expressed as surface integrals over the scattering objects [24]. The T-matrix method can be applied to non-spherical particles, but the scattered fields can only be calculated outside the smallest sphere subtending the whole scattering object, which is not necessarily connected. Because of this restriction, the applicability of the T-matrix approach is effectively limited to the calculation of the scattered fields in the far-field region, while the information on the plasmonic near-fields remains inaccessible. Given these limitations, we have validated our CDA model for the near- and far-field regions separately. We utilized the GMT approach for the calculation of the wavelength-dependent near-field spectra normalized to the incident field (field enhancement spectra), considering arrays of nanospheres, and the T-matrix for the calculation of the far-field scattering efficiencies of arrays with particles of non-spherical shapes (oblate and prolate ellipsoids). The scattering efficiency of an array is defined as the ratio of the scattering cross section to the sum over the array of the geometrical cross sections of the individual particles. For the validation, we considered a periodic (square lattice) and a quasi-periodic Fibonacci array of 80 Au nanoparticles. The minimum interparticle separation (edge-to-edge) between the nanoparticles was fixed at 25 nm, irrespective of the particle shape factor defined by the ratio c/a. The radii of the spherical particles are equal to 25 nm; for the ellipsoidal ones we chose oblate particles with a=b=25 nm and c/a=0.2, 0.6 and prolate ones with a=b=25 nm and c/a=1.5, 3. We also notice that, based on accurate comparison with GMT numerical calculations, particle sizes up to 75 nm in radius can be adequately described within the proposed approach. For simplicity, the Au dielectric function was modeled using a Drude model with parameters given in Ref. [27].
We notice that this choice results in an underestimation of the absorptive losses of Au at short wavelengths. However, this does not affect the conclusions of our general analysis or the validity of our model, which can be flexibly adapted to specific experimental demands. Figure 1 demonstrates the validity limits of the CDA calculation approach, which slightly overestimates the plasmonic local fields and the scattering efficiencies in all cases except when considering particles with large shape factors (c/a=3). Since within the CDA approach each particle is approximated with only one dipole, discrepancies at large shape factors are expected due to the absence of multipolar scattering orders. However, we notice that the main features of the spectra calculated using the two semi-analytical approaches are also clearly present in the corresponding CDA results, which were obtained at significantly reduced computational cost and can be utilized to investigate much larger arrays. This motivates our choice to explore the optical properties of large (thousands of interacting particles) DA arrays of metal nanoparticles within the simple and cost-effective CDA method, as we will discuss in the next sections.
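For reference, a Drude dielectric function and the corresponding quasi-static sphere polarizability can be coded as below. The parameter values are typical Au numbers standing in for those of the paper's Ref. [27], so they are assumptions here.

```python
# Sketch: Drude dielectric function and quasi-static sphere polarizability.
import numpy as np

def drude_epsilon(wavelength_nm, eps_inf=9.5, wp_ev=9.0, gamma_ev=0.07):
    """eps(omega) = eps_inf - wp^2 / (omega^2 + i*gamma*omega), in eV units.
    Parameter values are typical for Au (assumed, not from Ref. 27)."""
    omega = 1239.84 / wavelength_nm          # photon energy in eV
    return eps_inf - wp_ev**2 / (omega**2 + 1j * gamma_ev * omega)

def alpha0_sphere(radius, eps, eps_m=1.0):
    """Quasi-static polarizability of a sphere (Gaussian-unit convention)."""
    return radius**3 * (eps - eps_m) / (eps + 2 * eps_m)

# Example: polarizability of a 25 nm Au sphere at 550 nm.
print(alpha0_sphere(25e-9, drude_epsilon(550.0)))
```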
Results and discussion
In this section, we apply our CDA model to the understanding of the far-field and near-field plasmonic properties of periodic and DA arrays of different morphologies and nanoparticle shapes. All the particles in the arrays were modeled as rotationally symmetric ellipsoids (a=b=25 nm) with the symmetry axis along the z direction, orthogonal to the plane of the arrays. The shape factor of the ellipsoidal particles was parameterized by the c/a ratio: oblate ellipsoids correspond to the case c/a < 1 (pancake shaped), spherical particles to c/a = 1, and prolate ellipsoids to c/a > 1 (cigar shaped). The minimum edge-to-edge interparticle separation between the nanoparticles in the different arrays was fixed at 25 nm for all the shapes, and the arrays were homogeneously excited by a plane wave at normal incidence with linear polarization lying in the plane of the arrays.
This choice of the interparticle separation describes the strong quasi-static coupling regime, which is important for many nanoplasmonics applications such as Surface Enhanced Raman Scattering (SERS) sensing, plasmonic-enhanced broadband light emitters, absorbers and nano-detectors. However, a reduction of the interparticle separation below 25 nm, which would further enhance the strength of the quasi-static coupling, would be extremely challenging to achieve using standard nano-fabrication techniques.
The results in Fig. 2 clearly demonstrate the dramatic influence that the nanoparticle shape factor has in determining the plasmonic resonant peaks of arrays that are strongly coupled in the (short-range) quasi-static regime. We notice that for periodic arrays (Fig. 2(a)) the plasmonic scattering peaks red-shift to longer wavelengths as the shape factor of the particles is reduced, correspondingly turning their shapes from elongated cigars to thin disks. This behavior is consistent with the well-known shape-dependent shift of the resonant condition of plasmonic modes in isolated metal nanoparticles [28]. Differently from the case of periodic arrays, in DA arrays of metal nanoparticles the electromagnetic coupling results in a separation of plasmonic modes associated with the presence of an increased structural disorder. In addition, the variations in the shapes of the particles strongly affect the spatial distribution of the plasmonic fields, which determines the in-plane electromagnetic array coupling, as we will discuss later in this paper. As shown in Figs. 2 and 4, in multi-scale DA environments the quasi-static electromagnetic coupling is enhanced by far-field diffractive contributions, resulting in a larger plasmonic mode separation with respect to periodic structures of comparable interparticle distances [11,15]. In particular, DA arrays show the presence of two major scattering peaks, which become more pronounced for prolate ellipsoids. As we will demonstrate below, these two scattering peaks, which feature markedly different electric field distributions, are a distinctive characteristic of DA arrays, associated with the simultaneous excitation of longitudinal and transverse resonant modes of different particle clusters inhomogeneously distributed across the array plane (Fig. 3). These features of DA arrays originate from the presence of multiple length scales (spatial frequencies) corresponding to a broad distribution of particle dimers (for Fibonacci and Thue-Morse) and dimers-tetramers (for Rudin-Shapiro) [9,16].
In Fig. 3(a) we compare the calculated scattering efficiency for a Fibonacci structure with oblate nanoparticles (c/a=0.2) with the scattering spectra of the two most recurrent cluster configurations (particle dimers) in the Fibonacci array (Fig. 3(a), inset). In Fig. 3(b) and Fig. 3(c) we show the scattering spectra of the recurrent dimers and of the Fibonacci arrays with spherical and prolate (c/a=3) particles, respectively. Figures 3(a)-3(c) demonstrate in very clear terms that the origin of the plasmonic mode separation observed in Fig. 2 is indeed related to the simultaneous excitation of longitudinal and transverse resonances in Fibonacci arrays [29,30]. This is also confirmed by Fig. 3(d) and Fig. 3(e), which show the electric field patterns calculated at the two Fibonacci scattering peaks shown in Fig. 3(b). The electric field distribution associated with the blue-shifted Fibonacci scattering peak is dominated by vertically coupled dimers (Fig. 3(d)). In contrast, longitudinally coupled dimers (Fig. 3(e)) correspond to the red-shifted scattering peak shown in Fig. 3(b). In addition, we observe in Fig. 3 that the largest plasmonic mode splitting is obtained for prolate particles due to a stronger field enhancement, which increases the dimer coupling strength. We notice that this same argument also applies to Thue-Morse and Rudin-Shapiro structures, with the exception that more complex particle clusters (tetramers) must be taken into account, resulting in more complex spectral features, as shown by the near-field spectra in Fig. 4(c) and Fig. 4(d). Finally, we notice in Figs. 2 and 4 that the spectral broadening and field enhancement of DA arrays are maximized for Rudin-Shapiro arrays, which are the most disordered structures, characterized by continuous spatial Fourier spectra [7,9,11,15].
The field enhancement spectra shown in Fig. 4 display trends very similar to those of the scattering spectra discussed in Fig. 2. Moreover, we notice from Figs. 2(a) and 4(a) that the wavelength position of the peak field enhancement of periodic arrays always corresponds to well-defined dipolar modes, which are red-shifted with respect to the peak position of the scattering efficiency. This trend is particularly pronounced for periodic arrays of prolate nanoparticles. On the other hand, by comparing Figs. 2(b)-(d) and Figs. 4(b)-(d) we can notice that all the considered DA arrays behave differently: the far-field scattering and near-field enhancement spectra are almost coincident for any value of shape factor. These features are important for engineering applications. In fact, using DA arrays it is possible to roughly identify the wavelength range of maximum near-field enhancement by simple far-field scattering measurements.
The role of the particle shape in determining the strength of the electromagnetic coupling in periodic and DA arrays can be best understood by calculating the electric field profiles in the plane of the arrays. Figure 5 shows the calculated electric field distribution in the plane of a Fibonacci array consisting of (a) oblate ellipsoids (rc = 25 nm, c/a = 0.2), (b) spheres (rc = 25 nm) and (c) prolate spheroids (rc = 25 nm, c/a = 3). It can be observed that the in-plane electric field distribution can peak either at the nanoparticle positions or at interstitial positions between the nanoparticles, strongly affecting the in-plane electromagnetic coupling of the array. We notice from Fig. 5 that the maximum intensity of the plasmonic near-fields increases as we move from oblate to prolate ellipsoids, enhancing the electromagnetic coupling of the nanoparticles. This is expected when exciting prolate ellipsoids with an incident field polarized orthogonally to their major axis, because of the higher excitation efficiency compared to the case in which the incident field is aligned parallel to the major axis. In particular, the stronger electric field intensity at interstitial nanoparticle positions is directly responsible for the observed longitudinal/transverse splitting of the plasmonic modes shown in Fig. 2 for DA arrays. The results shown in Fig. 6 fully justify this interpretation by demonstrating a direct correlation between the maximum intensity of the electric field and the wavelength separation of the two peaks observed in the scattering cross sections of the DA arrays. In order to discuss more quantitatively the distinctive features of the local plasmonic fields in DA arrays of nanoparticles, we introduce the Cumulative Distribution Field Enhancement (CDFE) function, which measures how often a field value is represented in the plane of the arrays. This function associates to each prescribed value of field enhancement the fraction of the total area of the array in which the local field enhancement is greater than the prescribed value (see the numerical sketch after this paragraph). Figure 7 reports semilogarithmic plots of the CDFE function for the different array morphologies and particle shapes, calculated at the wavelength of maximum field enhancement. As a representative example, let us first discuss the case of spherical nanoparticles (panel b). We observe that all the DA arrays are characterized by higher CDFE values compared to the periodic ones. In particular, the Fibonacci CDFE is higher than the periodic one for fields greater than 5.35; this means that the fraction of the array area characterized by a field higher than 5.35 is larger than in the periodic case. Moreover, for DA arrays this area becomes one order of magnitude larger than in the periodic case for field values larger than 10.7. This behavior becomes even more pronounced when we consider DA arrays of prolate ellipsoidal nanoparticles. These results quantify the ability of DA arrays to provide larger values of enhanced plasmonic fields over larger areas when compared with periodic structures, which is an important feature for the engineering of device applications for active nanoplasmonics. Another important attribute of DA arrays is their ability to confine plasmonic fields on the nanoscale. Giant field enhancement and nanoscale localization of electromagnetic fields have been thoroughly investigated in rough metal surfaces and fractal random media, both theoretically and experimentally [30,31]. These studies clearly demonstrated that the abundance of spatial frequencies and the lack of translational invariance, which characterize random and fractal systems, are key ingredients to induce resonant modes with higher spatial localization and stronger electric field enhancement than periodic systems.
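The CDFE lends itself to a direct numerical estimate from a computed field-enhancement map; a minimal sketch follows, where grid-based sampling of the array plane is an assumption.

```python
# Sketch: Cumulative Distribution Field Enhancement from a sampled map.
import numpy as np

def cdfe(enhancement_map, thresholds):
    """For each threshold, return the fraction of the array plane where
    the local field enhancement |E|/|E0| exceeds that value."""
    flat = np.abs(np.asarray(enhancement_map)).ravel()
    return np.array([(flat > t).mean() for t in thresholds])

# Usage sketch: semilogarithmic comparison of two morphologies.
# thr = np.linspace(1.0, 30.0, 200)
# plt.semilogy(thr, cdfe(E_fibonacci, thr), label="Fibonacci")
# plt.semilogy(thr, cdfe(E_periodic, thr), label="periodic")
```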
However, it remains to be investigated whether deterministic non-fractal plasmonic structures with multi-scale DA patterns can provide superior mode localization and field concentration with respect to their periodic counterparts. In order to answer this question, we quantitatively investigate the spatial mode localization of plasmonic DA arrays by defining a localization index as the two-dimensional generalization of the participation ratio defined in Ref. 13. We define the plasmonic mode participation ratio (PR) for two-dimensional DA arrays as the following function:

$$\mathrm{PR} = \frac{\left(\int_A |E(x,y)|^2\,dx\,dy\right)^2}{A\int_A |E(x,y)|^4\,dx\,dy}$$

where E(x,y) is the electric field enhancement and the integration is performed in the z=0 plane of the array of area A (a discretized version of this estimator is sketched after this paragraph). This definition ensures that the participation ratio of spatially localized field patterns in the array plane is much lower than the participation ratio of spatially extended (delocalized) field patterns. Therefore, the field states with the smallest PR are the most localized ones. In Fig. 8 we show the wavelength dependence of the calculated PR, normalized to the maximum value of the periodic array, for periodic and DA arrays of nanoparticles with different values of shape factor. As before, all the arrays share the same minimum interparticle separation of 25 nm. The data in Fig. 8 demonstrate the enhanced localization of plasmonic fields in DA arrays when compared with periodic structures. In fact, we notice that DA arrays (Figs. 8(b)-8(d)) show much lower PR values with respect to periodic arrays (Fig. 8(a)), irrespective of the shape factor values and across the entire wavelength spectrum. In addition, the PR index of periodic arrays markedly fluctuates in wavelength for prolate ellipsoids, while these fluctuations are strongly reduced in DA arrays. It is also interesting to notice that while in Fibonacci (Fig. 8(b)) and Thue-Morse (Fig. 8(d)) arrays the PR spectrum is strongly peaked at a well-defined wavelength, Rudin-Shapiro arrays (Fig. 8(c)) feature a broader PR spectrum appreciably lower than all the other structures, as a result of their increased structural disorder. These results demonstrate unambiguously that increasing the "deterministic disorder" from periodic to Rudin-Shapiro structures induces a large number of strongly localized plasmonic states with progressively increasing degree of spatial localization and broader spectra. We have shown so far that DA arrays provide stronger field localization, and larger values of enhancement occurring over larger areas, when compared with periodic structures. However, another important aspect for the engineering of DA arrays is the ability to identify robust optimization criteria for purpose-driven nanoplasmonics applications. This possibility is clearly missing in random structures due to their characteristic lack of reproducibility, which fundamentally limits their engineering applications. In order to investigate the specific engineering design rules of DA nanoplasmonic arrays, we have plotted in Fig. 9 the wavelength positions corresponding to the maximum light scattering cross section (blue triangles), maximum electric field enhancement (black squares) and minimum PR (red circles) for different values of shape factor.
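A discretized version of the PR defined above can be sketched as follows, with regular-grid sampling of the array plane assumed.

```python
# Sketch: discretized 2D participation ratio of a field-enhancement map.
import numpy as np

def participation_ratio(E_map, dx, dy):
    """PR = (integral |E|^2 dA)^2 / (A * integral |E|^4 dA) on a regular
    grid; PR -> 1 for a uniform field, PR -> 0 for tight hot spots."""
    E2 = np.abs(np.asarray(E_map))**2
    dA = dx * dy
    area = E2.size * dA          # total sampled area A
    I2 = E2.sum() * dA           # integral of |E|^2
    I4 = (E2**2).sum() * dA      # integral of |E|^4
    return I2**2 / (area * I4)
```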
We can notice that while in the case of periodic arrays (Fig. 9(a)) the maxima of these three quantities occur at different wavelengths, requiring a very careful optimization of each of them separately, this is not the case for DA arrays. In fact, as shown in Figs. 9(b)-9(d), the three targeted parameters are simultaneously optimized in DA arrays. This behavior is best displayed by Thue-Morse and Rudin-Shapiro structures (Figs. 9(c) and 9(d)) due to their increased spatial disorder and reduced symmetry, which favor the simultaneous optimization of scattering, field enhancement and localization at any given frequency. This is a direct consequence of the broadband character of the plasmonic properties of DA arrays compared to periodic ones. Interestingly, we notice in periodic structures (Fig. 9(a)) a non-monotonic trend of the maximum field enhancement, the scattering cross section and the minimum PR as the particle shape factor c/a decreases from its maximum value. This behavior is not shared by the DA arrays, which feature an almost monotonic shift to longer wavelengths (red-shift) as the particle shape factor is reduced from its maximum value. These results are explained by the tuning of the in-plane electromagnetic coupling induced by the variations in the particle shape factor, as discussed in relation to Fig. 5. The results shown in Fig. 9 give us a simple design criterion on how to vary the shape factor of nanoparticles in DA arrays in order to obtain the best plasmonic response at a given frequency, and clearly point towards a more robust engineering of DA structures with respect to periodic ones.
Summary
In this paper, we have investigated the role of the nanoparticle shape factor in the scattering and localization properties of deterministic aperiodic arrays of metal nanoparticles using a CDA model, which has been validated through a comparison with electromagnetic calculations based on the semi-analytical Generalized Mie Theory (GMT) and T-matrix theory. Our results have highlighted the advantages of the aperiodic structures by demonstrating stronger field localization and larger values of field enhancement occurring over larger areas with respect to reference periodic structures. In addition, we have investigated engineering design criteria for DA arrays with respect to the array morphology and the nanoparticle shape factor, and showed that, contrary to periodic structures, DA arrays can be engineered to simultaneously maximize, at a given wavelength, the far-field scattering efficiency, the electric field enhancement, and the spatial localization of the plasmonic fields. This analysis provides a rigorous framework for the design and engineering of a variety of novel nanoplasmonics devices based on DA arrays, such as reproducible single-molecule SERS substrates, highly sensitive bio-sensors and plasmonic-enhanced non-linear elements.
Fig. 1 .
Fig. 1. Comparison of the field enhancement spectra calculated with CDA and GMT code (a-b) for (a) an 81-nanosphere periodic array and (b) an 80-nanosphere Fibonacci array. Comparison of the scattering efficiency calculated with the CDA and the T-matrix method (c-f) for a Fibonacci array of 80 oblate nano-spheroids with (c) c/a = 0.2 and (d) c/a = 0.6, and for a Fibonacci array of 80 prolate nanoparticles with (e) c/a = 1.5 and (f) c/a = 3.
Figure 1 shows the CDA results compared to GMT and T-matrix calculations. The comparisons of the CDA- and GMT-calculated wavelength spectra of the local field enhancement for periodic and Fibonacci arrays of Au nanospheres are shown in Figs. 1(a) and 1(b), respectively. The scattering efficiencies of Fibonacci arrays of oblate and prolate ellipsoids calculated with CDA and T-matrix are shown in Figs. 1(c)-1(f), respectively. All the arrays have been homogeneously illuminated by an x-polarized plane wave at normal incidence.
Fig. 2 .
Fig. 2. Scattering efficiency versus wavelength for several values of the particle shape factor c/a and for (a) a 1936-nanoparticle periodic array, (b) a 1428-nanoparticle Fibonacci array, (c) a 2048-nanoparticle Rudin-Shapiro array and (d) a 2016-nanoparticle Thue-Morse array.
In Fig. 2 and Fig. 3 we show the calculated scattering efficiency spectra (SCS) and maximum field enhancement spectra (MFE), in the plane of the arrays, for arrays of different morphologies and particle shapes. These are calculated for (a) periodic, (b) Fibonacci, (c) Rudin-Shapiro and (d) Thue-Morse two-dimensional arrays of Au nanoparticles for five different values of shape factor c/a, as sketched in the insets. In the rest of the paper, periodic arrays are composed of 1936 nanoparticles, Fibonacci of 1428, Rudin-Shapiro of 2048 and Thue-Morse of 2016.
Fig. 3 .
Fig. 3. Scattering efficiency versus wavelength for a 1428-nanoparticle Fibonacci structure, a horizontal dimer and a vertical dimer with (a) oblate (c/a=0.2), (b) spherical (c/a=1) and (c) prolate (c/a=3) shape. Electric field distribution associated with the (d) blue-shifted (492 nm) and (e) red-shifted (540 nm) Fibonacci scattering peak of Fig. 3(b). In both cases the incoming field polarization is along the horizontal axis.
Fig. 4 .
Fig. 4. Maximum field enhancement versus wavelength for several values of the ratio c/a and for (a) a 1936-nanoparticle periodic array, (b) a 1428-nanoparticle Fibonacci array, (c) a 2048-nanoparticle Rudin-Shapiro array and (d) a 2016-nanoparticle Thue-Morse array.
Fig. 6 .
Fig. 6. Calculated wavelength splitting of the scattering cross section of DA arrays as a function of the maximum field enhancement for Thue-Morse (red triangles), Rudin-Shapiro (green squares), and Fibonacci (black squares) arrays with different nanoparticle eccentricities.
Fig. 7 .
Fig. 7. Semilogarithmic plots of the CDFE function for arrays of various nanoparticle shapes (radius 25 nm) and different morphologies calculated at the wavelength of maximum field enhancement. | 2015-12-19T08:33:32.797Z | 2009-06-08T00:00:00.000 | {
"year": 2009,
"sha1": "8286e02738faf986db1b7f4f990b19afe5eb353d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/oe.17.009648",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "2e4ff8a5fef7f67e74475fa396b03dac828cc55c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
255972513 | pes2o/s2orc | v3-fos-license | Butterfly mimicry rings run in circles
Organisms face a wide variety of selective pressures that shape both the tempo and direction of their evolution, from sexual selection by potential mates or competitors to natural selection imposed by competition and abiotic factors. While rapid evolution of morphological traits can often be traced through historical and contemporary records, it remains difficult to disentangle the effect of the myriad selective pressures on trait evolution without characterizing the ecological and phylogenetic context in which they evolved. A new paper by Dipendra Basu, Vaishali Bhaumik, and Krushnamegh Kunte (1) provides unique insight into the evolutionary forces acting on critical adaptive phenotypes by comprehensively characterizing a complex but defined community of mimetic butterflies in the Western Ghats of India. Within community assemblages, species interactions, including predator–prey interactions and intraspecific communication, are frequently mediated and facilitated via a plethora of olfactory and visual cues. Honest warning signals that reflect prey defenses or unpalatability, for example, bright colors to warn potential predators of toxicity, are referred to as aposematic (2). Such conspicuous warning signals can quickly train predators to avoid the unpalatable species (3, 4). Distantly related, palatable species frequently evolve to resemble unpalatable species
Organisms face a wide variety of selective pressures that shape both the tempo and direction of their evolution, from sexual selection by potential mates or competitors to natural selection imposed by competition and abiotic factors. While rapid evolution of morphological traits can often be traced through historical and contemporary records, it remains difficult to disentangle the effect of the myriad selective pressures on trait evolution without characterizing the ecological and phylogenetic context in which they evolved. A new paper by Dipendra Basu, Vaishali Bhaumik, and Krushnamegh Kunte (1) provides unique insight into the evolutionary forces acting on critical adaptive phenotypes by comprehensively characterizing a complex but defined community of mimetic butterflies in the Western Ghats of India.
Within community assemblages, species interactions, including predator-prey interactions and intraspecific communication, are frequently mediated and facilitated via a plethora of olfactory and visual cues. Honest warning signals that reflect prey defenses or unpalatability, for example, bright colors to warn potential predators of toxicity, are referred to as aposematic (2). Such conspicuous warning signals can quickly train predators to avoid the unpalatable species (3,4). Distantly related, palatable species frequently evolve to resemble unpalatable species
(Fig. 1, panels E and F: selection to escape the mimic is stronger than stabilizing selection to maintain the current aposematic phenotype, resulting in a continual chase between the mimic and model; the vertical axis denotes warning trait value.)
to exploit the benefit gained from predators learning to associate aposematic phenotypes with an unpleasant experience. This form of dishonest signaling, Batesian mimicry, evolves through the process of advergence, wherein one species (the mimic) evolves to resemble an unpalatable (model) species and therefore gains protection from predators (5,6). Aposematism and mimicry are widespread across the tree of life, with striking examples in insects, fish, birds, and mammals (7). How do models and mimics coevolve? Our understanding of the evolutionary dynamics of Batesian mimicry has been dominated by two hypotheses. The first hypothesis posits that model species are under stabilizing selection at their phenotypic optimum (i.e., the best aposematic phenotype), while mimics are under directional selection to evolve a strong resemblance to the model [Fig. 1 A-C; (8)]. The idea that mimics evolve toward their models faster than models can evolve away from their mimics is well supported by many observations of advergence in natural mimicry systems (9) and by the prediction that any deviation from an established aposematic color pattern will expose models to increased predation (8,10).
An alternative hypothesis posits that the burden imposed on models by their mimics can lead to an evolutionary arms race or "chase-away selection" [Fig. 1 D-F; (8,11,12)]. Increasingly accurate mimics and a higher frequency of mimetic individuals (termed mimetic load (13)) are expected to reduce the effectiveness of aposematic cues on predator learning. This results in stronger selection on models to differentiate themselves from their mimics, the evolution of novel aposematic phenotypes, and reciprocal selection on the mimics to "catch up" to their evolving models [Fig. 1 D-F; (14)]. Despite theoretical support for this hypothesis, the few empirical studies investigating how models respond to mimetic load provide scarce and equivocal evidence on the importance of chase-away selection (15,16).
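A toy simulation can make the contrast between the two hypotheses concrete. In the sketch below, all rates and the linear response rules are illustrative assumptions, not parameters estimated by Basu et al. or the cited theory.

```python
# Toy discrete-time model of mimic-model trait coevolution (all numbers
# illustrative). With chase=False the mimic simply adverges on a static
# model (the Fig. 1 A-C scenario); with chase=True the model is also
# selected to escape its mimic, producing a continual chase (Fig. 1 D-F).
import numpy as np

def simulate(generations=500, chase=True, s_mimic=0.05, s_model=0.03):
    model, mimic = 0.0, -2.0          # starting warning-trait values
    traj = []
    for _ in range(generations):
        mimic += s_mimic * (model - mimic)             # advergence
        if chase:
            model += s_model * np.sign(model - mimic)  # escape the mimic
        traj.append((model, mimic))
    return np.array(traj)

# Under the chase, the pair drifts together through trait space with a
# stable gap of roughly s_model / s_mimic between model and mimic.
```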
While considerable theoretical work has been done to understand the evolutionary consequences of co-occurring mimics and models, these patterns are challenging to observe in natural systems, leaving our general understanding of mimic-model evolutionary dynamics unresolved despite over 150 y of fascination with this phenomenon (17). Comparative evolution studies like that of Basu et al. (1) offer a compelling framework in which to investigate this problem. Basu et al. comprehensively characterized mimic-model evolutionary dynamics in a defined community of butterflies localized to the tropical forests of the Western Ghats, allowing them to disentangle the effects of phylogenetic constraint and natural selection in the repeated evolution of aposematic or mimetic color patterns. By comparing phenotypic similarity between mimics and models spread across a dense, time-calibrated phylogeny, the authors identified several exciting trends in aposematic and mimetic butterfly color pattern evolution that shed light on adaptive evolution in this community.
First, the authors compared morphological traits in mimetic and nonmimetic sister taxa pairs and found that wing color patterns diverged rapidly from the ancestral background in mimics. Importantly, they did not observe these elevated rates of divergence in flight-related morphologies, suggesting that flight is more likely to be phylogenetically and functionally constrained and that focal mimicry traits appear to be specific to visual cues from wing color patterns. Next, they compared rates of evolution between models, mimics, and nonmimetic sister taxa; contrary to theoretical predictions, the authors found that aposematic color patterns and flight morphology evolved faster in the models compared to mimics. These surprising results provide compelling empirical support for the chase-away selection hypothesis and contradict the widespread expectation that mimics should evolve faster than their models, thus advancing our understanding of mimic-model coevolutionary dynamics. In many mimetic species, mimicry is limited to females (see ref. 18 for a recent review). While the development and evolution of sexually monomorphic mimicry is expected to be subject to the same constraints as any other adaptive trait, recent genetic mapping studies in a variety of organisms, including the swallowtail butterfly Papilio polytes and its close relatives (19)(20)(21), Papilio dardanus (22), the nymphalid butterfly Hypolimnas misippus (23), Ischnura damselflies (24), and brown anole lizards (25), have shown that female-limited polymorphisms are frequently controlled by discrete alleles of a single switch locus. The genetic architecture of mimicry, and the level of genetic constraint that mimetic phenotypes are under, are therefore significantly different in sex-limited and sexually monomorphic mimics.
The Influence of Genetic Architecture on the Rate of Adaptation
Switch architectures may be predicted to allow rapid, independent evolution of female and male color patterns because they alleviate genetic constraints imposed by selection on male color patterns. That is, genetic switches allow for decoupling of the color patterning programs between males and females, enabling selection to independently optimize sex-specific phenotypes. Importantly, Basu et al. (1) found that female-limited mimics have evolved novel color patterns significantly faster than monomorphic mimics, providing much-needed general evidence that switch locus architectures frequently allow rapid evolution of sex-specific traits. Interestingly, the authors also showed that both genetic architectures allow mimics to evolve toward model color patterns, but that female-limited mimics evolve significantly faster. While the reasons why sex-limited polymorphisms evolve, and evolve so rapidly, remain poorly characterized despite over a century of genetic investigations, studies like that of Basu et al. are beginning to lay the framework for understanding these intriguing genetic systems.
Future Directions
The results and discussion presented by Basu et al. (1) raise several interesting questions. The fast rate of aposematic trait evolution in models highlights the need for empirical studies on mimic-model chases. For example, using well-characterized mimicry rings like those in the Western Ghats, distinct local populations of a model species can be compared to investigate whether mimetic load, and therefore chase dynamics, is correlated with the rate of evolution in the aposematic model [as done in Akcali et al. (16)]. Further, detailed pairwise comparisons of mimic-model dynamics that show differential rates of model evolution can help elucidate mimicry ring characteristics that are more likely to result in chase dynamics. Additionally, it is unclear how sensitive the observed evolutionary patterns are to the abundances and densities of model-mimic complexes within community assemblages. Previous work has drawn attention to our incomplete understanding of the ecological dynamics of mimic-model assemblages (17). Basu et al. characterized the mimetic community in a defined geographic region with a shared geological and ecological history, allowing them to control for many of those interactions. Future studies should continue to consider deeply how ecology interplays with and feeds back on the evolutionary dynamics uncovered in this study. | 2023-01-19T20:37:34.711Z | 2023-01-17T00:00:00.000 | {
"year": 2023,
"sha1": "0b7cfce68397425ea83b26788e80aa3c07be653d",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1073/pnas.2220680120",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1052d18274d8c27f99e22c9c9f4bec84e162986a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
52040227 | pes2o/s2orc | v3-fos-license | Epileptiform activity in the mouse visual cortex interferes with cortical processing in connected areas
Epileptiform activity is associated with impairment of brain function even in the absence of seizures, as demonstrated by failures in various testing paradigms in the presence of hypersynchronous interictal spikes (ISs). Clinical evidence suggests that cognitive deficits might be directly caused by the anomalous activity rather than by its underlying etiology. Indeed, we seek to understand whether ISs interfere with neuronal processing in connected areas not directly participating in the hypersynchronous activity in an acute model of epilepsy. Here we cause focal ISs in the visual cortex of anesthetized mice and we determine that, even if ISs do not invade the opposite hemisphere, the local field potential is subtly disrupted, with a modulation of firing probability imposed by the contralateral IS activity. Finally, we find that visual processing is altered depending on the temporal relationship between ISs and stimulus presentation. We conclude that focal ISs interact with normal cortical dynamics far from the epileptic focus, disrupting endogenous oscillatory rhythms and affecting information processing.
Visual evoked potentials were affected by ISs depending on the temporal relationship between stimulus presentation and the contralateral spike burst.
Together, our data prove that ISs interact with other cortical dynamics far from the epileptic focus, disrupting endogenous oscillatory rhythms and affecting brain information processing. This evidence supports the notion of an IS-induced cognitive impairment in local and distant areas, an idea that has major clinical implications in the ongoing discussion about the pharmacological treatment of subclinical EEG anomalies 15,33,34 , especially in children, where cognitive impairment induced by ISs is more severe 35,36 . Here we studied the effects of acute ISs induced in a naïve brain on cortical processing and we determined that brain computation is strongly affected even in the absence of the circuitry rearrangements typical of chronic epilepsy.
Results
Slow-wave oscillations are synchronized between hemispheres. First, we recorded baseline activity in both hemispheres in the primary visual cortex (V1) of C57BL/6J mice under urethane anesthesia. In accordance with previous reports we observed slow-wave oscillations typical of non-REM sleep and resting wakefulness, with alternating up and down states 37,38 . Up states were characterized by a negative deflection of about 0.8 mV in the recorded extracellular potential (Fig. 1a,b). The mean duration of up states ranged between 0.45 and 0.75 s for all animals, and the mean frequency between 0.55 and 0.95 Hz. We observed that the pattern of up and down states appeared to be synchronized between the two hemispheres, as previously reported 39,40 . We measured the temporal delay of the middle point of each up state in hemisphere 1 (Hem 1; Fig. 1a,b, black line) from the nearest up-state middle point in hemisphere 2 (Hem 2; Fig. 1a,b, gray line); we then computed the distribution of the nearest-neighbor (NN) time lags. To test whether this distribution supports the synchronization of up states between hemispheres, we generated a shuffled sequence of up and down states in one hemisphere and computed the NN distribution with this new sequence. The comparison between original and shuffled NN distributions demonstrates a significant difference (KS test, p < 0.001 for each animal), proving that the oscillations are coupled (Fig. 1c). Since the observed distributions are symmetrical, we concluded that there was no dominant hemisphere in the generation of slow-wave oscillations.
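The nearest-neighbor lag analysis can be sketched as follows. The exact surrogate used in the study may differ; the duration-permuting shuffle here is an assumption that preserves state-duration statistics while destroying inter-hemispheric coupling.

```python
# Sketch: nearest-neighbor lags between up-state midpoints of the two
# hemispheres, plus a shuffled surrogate for a KS comparison.
import numpy as np
from scipy.stats import ks_2samp

def nn_lags(mid1, mid2):
    """Signed lag from each midpoint in Hem 1 to the nearest in Hem 2."""
    mid1 = np.asarray(mid1)
    mid2 = np.sort(np.asarray(mid2))
    idx = np.clip(np.searchsorted(mid2, mid1), 1, len(mid2) - 1)
    left, right = mid2[idx - 1], mid2[idx]
    nearest = np.where(np.abs(mid1 - left) <= np.abs(right - mid1),
                       left, right)
    return nearest - mid1

def shuffled_midpoints(durations, gaps, rng):
    """Rebuild one hemisphere's up/down sequence with permuted up-state
    durations and down-state gaps (assumed shuffle scheme)."""
    d, g = rng.permutation(durations), rng.permutation(gaps)
    onsets = np.cumsum(g) + np.concatenate([[0.0], np.cumsum(d)[:-1]])
    return onsets + d / 2.0

# lags = nn_lags(mid_hem1, mid_hem2)
# surro = nn_lags(mid_hem1, shuffled_midpoints(dur2, gap2,
#                                              np.random.default_rng(0)))
# print(ks_2samp(lags, surro))
```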
ISs fragment the up states in the opposite hemisphere. Interictal activity was elicited focally in the visual cortex of Hem 2 by localized superfusion with bicuculline (BMI) 100 μM. As already described 9 , the focal treatment with bicuculline was followed within minutes by the appearance of ISs, characterized by stereotypical sharp deflections (2-6 mV) of the LFP recording that remained localized in the treated cortical area (Fig. 2c, black line). ISs implicated the transient and simultaneous recruitment of most neurons in the affected focus, as demonstrated by in vivo two-photon imaging (Fig. 2a-c). In vivo loose-patch recordings showed that neuronal firing occurred in a window of about 50 ms centered on the onset of the LFP transient. Each neuron produced only a small number of action potentials for each IS (median 2.6 spikes/IS (first quartile: 1.9, third quartile: 4.0), n = 9 neurons from 6 mice; Fig. 2d,e). The interval between two consecutive ISs had a wide non-normal distribution (Lilliefors normality test, p < 0.001), with a median interval of 2.6 s (0.38 Hz; n = 8 mice).
First of all, as evidenced in Fig. 3a,b, there was a transient interruption of the ongoing up state by the emerging IS. The presence of ISs in Hem 2 was associated with a change in the statistics of the slow-wave oscillations in Hem 1; in particular, as shown in Fig. 3c,d, the frequency of up states increased from 0.70 ± 0.06 Hz to 0.81 ± 0.06 Hz (paired t-test, n = 8, p = 0.007), while their duration decreased from 0.74 ± 0.04 s to 0.64 ± 0.03 s (paired t-test, n = 8, p = 0.003).
These data suggested a bidirectional interaction between up states and ISs: the onset of an up state caused an IS, which in turn, fragmented the up state in the contralateral cortex. To verify this hypothesis, we computed the distribution of the time lag between each IS and the middle point of the nearest up state in Hem 1 (Fig. 3e). The lag distribution showed a peculiar double peak and a strong decrement in correspondence of the zero-lag point. This suggests that the majority of ISs occurred immediately after the onset of an up state and that the ISs in Hem 2 caused the interruption of the up state in Hem 1. The random temporal shuffling of the up state sequence led to completely different distributions of lags (green lines in Fig. 3e), demonstrating the correlation between the two processes.
To analyze this relationship further, we divided the LFP recorded in Hem 1 into short segments centered on the onset of each IS. The raster plots in Fig. 4a represent the cropped LFP recorded from the two hemispheres of one representative mouse. The interruption of the up states in Hem 1 following the IS in Hem 2 was a consistent phenomenon (see red bar in Fig. 4a, lower left), as it occurred in every fragment. The average field shown above the raster plot clearly reports the temporal dynamic of the interplay between up states and ISs: each IS was preceded by an up-state onset and in turn caused a brief, transient interruption of the up state. On average, the lag between the up state and the following IS was about 200 ms, and the up-state interruption caused by the IS lasted for about 300 ms. A characteristic small deflection of the LFP was observed in Hem 1 in correspondence with the IS. Figure 4b reports the mean of the IS-locked LFPs of every recorded animal (thin lines) and the population mean.
We analyzed the LFP segments in the frequency domain by computing their average spectrogram (Fig. 4b, top). The up-state interruption was clearly visible as a reduction in all frequencies up to 100 Hz (red bar), and in correspondence with the interictal peak there was a sharp increase in the power of all frequencies.
We supposed that at this time Hem 1 was receiving a transient excitatory stimulation from the contralateral cortex, with an increased probability of neuronal firing.
ISs alter the neuronal firing probability in the contralateral cortex. In order to verify that contralateral ISs changed neuronal firing probability, we performed loose-patch recordings of neurons in Hem 1 (Fig. 5a,b). As previously described 41 , before superfusion of bicuculline each neuron fired up to a few action potentials only during each up state (Fig. 5a). When bicuculline was added in Hem 2, we observed a biphasic effect on the firing probability in Hem 1 (Fig. 5c,d). Interestingly, the temporal pattern of the firing density is perfectly mirrored by the temporal evolution of the gamma-band power (Fig. 5d). In correspondence with the IS peak, the firing probability was briefly boosted, and this facilitation was immediately followed by a window of decreased firing of about 0.3 s, which overlapped perfectly with the silencing observed in the field recordings. Between 0.3 and 0.8 s after the IS, the firing rate increased again, following the reappearance of the up state observable in the field potential (Fig. 5e). These data showed that contralateral ISs modified the temporal pattern of spontaneous discharges in the control hemisphere. This happened without a large increase in the overall number of spikes (Cnt: median 1.72 spikes/s, n = 28 neurons from 12 mice; BMI: 2.00 spikes/s, n = 20 neurons).

Contralateral interictal spikes interfere with visual processing. We evaluated the processing of visual stimuli by recording the visual evoked potentials (VEPs) in V1 in response to the periodic reversal of a checkerboard (2 s period; Fig. 6a). The probability distribution of the timing of the ISs relative to the checkerboard reversal clearly demonstrated that visual stimuli did not influence IS statistics (upper panel of Fig. 6b). To evaluate the effects of ISs on sensory responses, we sorted the records containing the VEPs according to the distance of the stimulus onset from the nearest IS occurring in Hem 2. A raster plot of the records from one single animal is depicted in the lower panel of Fig. 6b. The checkerboard reverses at 0 s and the VEPs are represented by the vertical blue band that appears with a latency of about 150 ms from the stimulus onset; the dotted line marks the timing of ISs. As exemplified in this plot, ISs had an effect on the VEPs depending on their relative timing: ISs falling shortly before the stimulation caused an increment in the recorded potential compared to the VEPs recorded in baseline conditions, while ISs occurring immediately after the stimulation suppressed the evoked potential (see Fig. 6b,c). We quantified this effect by calculating the VEP amplitude in response to a high-contrast checkerboard as a function of the temporal distance between the IS and the stimulus presentation (Fig. 6d). In every experiment we observed an enhancement of the response (blue band) followed by a strong suppression (red band). When the distance between the reversal and the closest IS was larger than about 350 ms, the response was similar to the response measured before BMI superfusion (green bands). These temporal windows are reported in Fig. 6e together with the effect of ISs on the firing rate (replotted from Fig. 5d). The firing increase caused by ISs masked the external stimulus, interrupting the ongoing VEP and causing a brief window of functional blindness. On the other hand, the following silencing appeared to increase the sensitivity of the cortex to the visual stimulus.
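The firing-density analysis around each IS (Fig. 5d) amounts to a peri-event time histogram; a minimal sketch follows, with the window and bin width as assumed values.

```python
# Sketch: peri-event time histogram of unit firing around IS onsets.
import numpy as np

def peri_event_rate(spike_times, is_times, window=(-0.5, 1.0), bin_s=0.02):
    """Average firing rate (spikes/s) in bins aligned to each IS onset."""
    edges = np.arange(window[0], window[1] + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for t0 in is_times:
        rel = spike_times - t0
        counts += np.histogram(rel[(rel >= window[0]) & (rel < window[1])],
                               bins=edges)[0]
    rate = counts / (len(is_times) * bin_s)   # mean rate per bin
    return edges[:-1] + bin_s / 2.0, rate
```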
A similar behavior has been observed for sensory stimulations elicited during the down state, especially for high-intensity stimuli 42,43 , strengthening the idea that during this time window the contralateral cortex switched to a down state.
To assess the ability of V1 to extract significant features of visual stimuli in the presence of contralateral IS activity, we measured the contrast sensitivity by recording the evoked potentials in response to checkerboards of variable contrast 44 . We divided the recordings into three groups, according to the interval between the stimulus and the closest IS. The first group comprised records in which the stimulus was farther than 350 ms from any IS (green zone in Fig. 6d,e). The second group (blue zone) included records in which the IS immediately preceded (−0.35 to 0.05 s) the stimulus. Finally, the third group (red zone) included records in which the IS followed (0.05 to 0.35 s) the stimulus presentation.
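Assigning each stimulus to one of the three windows is a simple bookkeeping step; in the sketch below the sign convention (lag = t_IS − t_stim for the nearest IS) is assumed from the quoted intervals.

```python
# Sketch: group VEP trials by the lag of the nearest IS.
import numpy as np

def group_trials(stim_times, is_times, split=0.05, edge=0.35):
    is_times = np.sort(np.asarray(is_times))
    groups = {"pre": [], "post": [], "far": []}
    for i, t in enumerate(stim_times):
        lag = is_times[np.argmin(np.abs(is_times - t))] - t
        if -edge <= lag < split:      # IS immediately before the stimulus
            groups["pre"].append(i)
        elif split <= lag < edge:     # IS shortly after the stimulus
            groups["post"].append(i)
        else:                         # stimulus > 0.35 s from any IS
            groups["far"].append(i)
    return groups
```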
The contrast sensitivity curves in the three conditions are shown in Fig. 6f and were calculated by measuring the integral of the VEPs in a window of 0.4 s from their onset (0.1 s after stimulus presentation). As can be seen in Fig. 6g, the occurrence of an IS before or after the presentation of a visual stimulus significantly altered the detection of different contrasts. In our experimental conditions, visual processing was disrupted intermittently for about 35% of the time (Fig. 6d). These data demonstrated that each focal IS transiently influenced signal processing in wide cortical territories, following a complex biphasic modality dependent on the relative timing between the IS and the stimulus, in homotypic regions of different hemispheres.
Discussion
Epilepsy is a multiform pathology with variable etiology and clinical development 1 . As a consequence, it is difficult to reproduce in animal models: transgenic or pharmacological models exist for many of the diagnosed forms of epilepsy, while many other types of the disease remain to be investigated. The aim of our study is to investigate how abnormal acute activity induced in a previously normal brain affects information processing in cortical areas far from the epileptiform focus in the anesthetized animal.
It is known that epilepsy produces cognitive impairment at different levels according to the magnitude and frequency of seizures and the areas involved; these deficits are usually temporary, but their severity correlates with the developmental time of onset 45 , being more severe in children. Interestingly, it has been shown that chronic interictal activity in children can be comparable to epilepsy with seizures in terms of cognitive deficits, even if its effects can be less pronounced in the number and extension of the areas involved 6 .
Here, we describe two different mechanisms that can affect cortical function in areas connected to foci of epileptiform activity. Firstly, we observe that ISs interact with slow-wave activity, fragmenting up states and potentially interfering with their role in the homeostasis of cortical circuitry. Secondly, ISs interfere with cortical computation by introducing an external non-physiological modulation of neuronal excitability in correspondence with every contralateral IS.
Cortical slow-wave oscillations are critical for memory consolidation and brain homeostasis 46,47 , and alterations of this rhythm potentially lead to long-term cognitive dysfunctions. Our data suggest that the same cellular mechanism causing the up-state onset in Hem 1 triggers an IS in Hem 2 after a brief and variable lag. Thus, slow-wave activity promotes ISs, as also demonstrated in human patients 48 . In turn, the IS prematurely closes the up state, forcing the cortex into a brief down state and thus fragmenting the slow-wave cycle. Recently, a similar alteration has been found in a chronic rat model of hippocampal interictal epileptiform discharges, suggesting the generality of this mechanism 49 . There is also clinical evidence pointing toward the importance of a disruption of the sleep slow oscillation in the emergence of pathological conditions; for example, in children affected by Landau-Kleffner syndrome 50-52 , a form of partial epilepsy with continuous spikes and waves during sleep (> 85% of the time), there is regression of cognitive function, long-standing developmental delay and loss of acquired language skills, which become irreversible two years after the appearance of symptoms. In this disease, the putative role of horizontal connectivity is supported by the ameliorative effects of intracortical resection 53,54 .
The second question we address in our study regards the impact of epileptiform activity on interconnected regions of the brain not directly involved in the focus. Although early reports have already shown that neurons in the contralateral cortex may display altered activity after the induction of an interictal focus 18,19 , the relationship of this alteration with up and down states, as well as its effects on visual processing, are still unknown. Our experiments reveal a biphasic effect of ISs on contralateral activity: after a brief period of increased firing in correspondence with the IS, a window of strongly reduced firing appears. The effects on the responses to visual stimuli mirror this dual effect of ISs on spontaneous firing. Indeed, the subdivision of VEPs according to their temporal relationship with ISs demonstrates that ISs can enhance or suppress visual responses depending on their temporal relation with the stimulus. Given the average frequency of ISs in our model, the visual cortex operates in an altered state for about 35% of the time. This would be hardly compatible with a proper control of visually driven behavior. Indeed, several papers have shown that electrophysiological estimations of visual properties correlate with behavioral analysis [55][56][57][58][59] . However, further studies could be performed to better investigate perception in these mice in the future.
Furthermore, we can speculate that if interictal spike activity is generated in one associative area, its propagation to the contralateral one may affect both local associative functions and, possibly, related sensory perception. Conversely, propagation of interictal events into primary sensory cortices might be responsible for sporadic neglect or alteration of bottom-up sensory coding. These phenomena might be triggered by focal epileptiform activity localized millimeters away from the control region, leading to a revision of the concept of "focal activity" and possibly promoting a more brain-wide approach to epilepsy treatment.
Methods
Mouse preparation. Adult (age > postnatal day 60) C57BL/6J mice were used (n = 26). Animals were reared in a 12 h light/dark cycle, with food and water available ad libitum. All experimental procedures conformed to the European Communities Council Directive n° 86/609/EEC and were approved by the Italian Ministry of Health.
Recordings were performed as described previously 60,61 . Mice were anesthetized by intraperitoneal injection of urethane (0.8 ml/hg in 0.9% NaCl; Sigma). Additional doses (10% of the initial dose) were intraperitoneally administered to maintain the anesthetic level when necessary. The head was fixed in a stereotaxic frame. Body temperature during the experiments was constantly monitored with a rectal probe and maintained at 37 °C with a heating blanket. The depth of anesthesia was evaluated by monitoring the pinch withdrawal reflex and other physical signs (respiratory and heart rate). A portion of the skull overlying the visual cortex (0.0 mm anteroposterior and 2.7 mm lateral to the lambda suture) was drilled on both hemispheric sides and the dura mater was left intact. A double chamber was created with a thin layer of a synthetic resin (Paladur, Heraeus Kulzer GmbH & Co.) around the edges of the craniotomy. The cortex was kept constantly wet with artificial cerebrospinal fluid (ACSF) solution: NaCl 132.8 mM, KCl 3.1 mM, CaCl2 2 mM, MgCl2 1 mM, K2HPO4 1 mM, HEPES 10 mM, NaHCO3 4 mM, glucose 5 mM, ascorbic acid 1 mM, myo-inositol 0.5 mM, pyruvic acid 2 mM, pH = 7.4. In order to induce interictal spikes, the solution inside one chamber was replaced by ~40-60 μL of the GABA-A receptor antagonist bicuculline methiodide (BMI, 100 μM in ACSF solution; Sigma). The drug superfusion was restricted to a single hemisphere and verified by means of paired local field potential recordings. BMI was occasionally added in order to keep the interictal activity pattern constant. Animals deeply anesthetized under urethane were sacrificed by cervical dislocation without regaining consciousness at the end of the experiment.
Local field potential and loose-patch recordings. To record local field potentials (LFPs) in the two hemispheres, two glass micropipettes (impedance ~2 MΩ, filled with ACSF) were positioned in the visual cortex at a depth of 250-300 μm (layer II/III) with a motorized micromanipulator (MPI electronic). A common Ag-AgCl reference electrode was placed on the cortical surface.
For the loose-patch recordings 62 , glass micropipettes (4-8 MΩ resistance, filled with ACSF; Sigma) and an Axopatch-1D amplifier were used. The pipette was inserted through the pia by applying about 300 mbar of positive pressure until layer II/III was reached 63 . Cells were searched for in voltage-clamp mode, with the positive pressure lowered to 30 mbar, while the tip resistance was monitored with a square-wave pulse (test stimulus 20 mV). On approaching a cell, the pressure was relieved and light suction was applied. Voltage responses were recorded in current-clamp mode (I = 0) with a 10-fold gain.
Electrophysiological signals were amplified 1000-fold (EXT-02F, NPI), band-pass filtered (0.1-1000 Hz), and sampled at 10 kHz with 16-bit precision by a National Instruments (NI-usb6251) AD board controlled by custom-made LabView software. 50 Hz line-frequency noise was removed by means of a linear noise eliminator (Humbug, Quest Scientific).
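The band-pass and line-noise filtering here were performed in hardware during acquisition; purely for illustration, a rough offline MATLAB analogue might look like the sketch below. The variable raw and the filter orders are assumptions, not the authors' implementation.

    % Rough offline analogue of the analog conditioning chain described above.
    % 'raw' is an assumed variable holding the digitized trace.
    fs  = 10000;                                        % sampling rate (Hz)
    [b, a] = butter(2, [0.1 1000]/(fs/2), 'bandpass');  % 0.1-1000 Hz band-pass
    sig = filtfilt(b, a, raw);                          % zero-phase filtering
    [bs, as] = butter(2, [49 51]/(fs/2), 'stop');       % remove 50 Hz line noise
    sig = filtfilt(bs, as, sig);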
Two-photon calcium imaging combined with LFP recordings. Imaging was performed on adult mice (P > 90) obtained by crossing a homozygous B6;Cg-Tg(CaMKIIa-cre)T29-1Stl/J female (Jackson Laboratory, stock number #005359) with a homozygous B6;129S-Gt(ROSA)26Sor^tm95.1(CAG-GCaMP6f)Hze/J male (Jackson Laboratory, stock number #024105). In this animal (CaMKII-GCaMP6f), expression of the genetically encoded Ca2+ sensor GCaMP6f was restricted to pyramidal cells of layer II/III. The mouse was head-fixed and a craniotomy of 2-3 mm in diameter was drilled over the visual cortex as for the electrophysiological experiments. A perforated glass coverslip was glued over the craniotomy to facilitate drug penetration after acquisition of baseline brain activity and to allow LFP recordings. Imaging was performed with a two-photon microscope (Ultima IV, Prairie Technology) equipped with an 18 W laser (Chameleon Ultra 2, Coherent) tuned at 890 nm that delivered about 30 mW at the sample. Images were acquired 200-250 μm below the cortical surface with a water immersion objective (Olympus XLUMPLFLN-W 20X, numerical aperture 1.0) at a resolution of 512 × 512 pixels at 10 Hz (spiral scan). The same field was acquired in the baseline condition and after administration of BMI 100 μM.
Visual evoked potentials (VEPs). VEPs in response to alternating checkerboards modulated at different contrasts were recorded at the same depth as LFPs. All visual stimuli were computer-generated on a display (mean luminance at maximum contrast, 3 cd/m2) by a custom MATLAB script that exploits the Psychophysics Toolbox. The luminance of the checkerboard was calibrated with a photometer (Konica Minolta). Contrast values were calculated with the Michelson formula: (Imax − Imin)/(Imax + Imin) 64 . Transient VEPs were recorded in response to the reversal (0.5 Hz) of the checkerboard (spatial frequency 0.04 c/deg). The response to a blank stimulus (0% contrast) was also recorded to estimate noise.

Data analysis. Collected data were visually inspected, and traces presenting drift or other artefacts were excluded (~1% of the data). All subsequent analysis was performed in MATLAB (The MathWorks Inc.). We designed custom software for the automatic detection and measurement of up and down states based on the computation of spectral power in the gamma band 65 . The procedure for unbiased detection of up states is shown in Supplementary Fig. 1. In brief, we computed the short-time Fourier transform (STFT) in the 40-90 Hz frequency band with an overlapped Blackman window of 0.2 s. Spectrograms were normalized to the mean FFT in order to make different frequencies comparable in power 65 . Gamma-band activity (GBA) was estimated as the sum of the normalized STFT components in the considered frequency range. GBA was smoothed with a sliding window of 80 ms and logarithmically scaled. The time histogram of log(GBA) was bimodal, reflecting the distribution of gamma-band activity during up and down states. The threshold for the discrimination of up states was set to 50% of the peak-to-peak interval. A cut-off of 100 ms was set for the minimum up (down) state duration, and up (down) states shorter than the cut-off were assigned to the ongoing down (up) state (Supp. Fig. 1d).
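As a rough illustration, the main steps of this detection pipeline could be sketched in MATLAB (the paper's analysis language) as below. The variable lfp, the STFT overlap, and the simplified peak-to-peak threshold are assumptions rather than the authors' exact implementation.

    % Minimal sketch of the up/down-state detection described above.
    % 'lfp' (column vector, fs = 10 kHz) is an assumed input.
    fs  = 10000;
    win = blackman(round(0.2*fs));                    % 0.2 s Blackman window
    [S, F, T] = spectrogram(lfp, win, round(0.9*numel(win)), [], fs);
    P   = abs(S) ./ mean(abs(S), 2);                  % normalize each frequency by its mean
    gba = sum(P(F >= 40 & F <= 90, :), 1);            % gamma-band activity, 40-90 Hz
    dt  = mean(diff(T));                              % STFT frame step (s)
    lg  = log(movmean(gba, max(1, round(0.08/dt))));  % ~80 ms smoothing, log scale
    thr = min(lg) + 0.5*(max(lg) - min(lg));          % 50% of the peak-to-peak interval
    isUp = lg > thr;                                  % candidate up states
    % Up/down states shorter than 100 ms would then be merged into the
    % surrounding state, as described above.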
To detect ISs in the LFP signal, a threshold was fixed at 3 SD from the mean. Peri-IS spectrograms were calculated using the MATLAB STFT function with an overlapping Blackman window of 200 ms. Spectrograms were then normalized by the average LFP spectrum far (> 2 s) from IS events.
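A comparable sketch for the IS detection and the baseline normalization of the peri-IS spectrograms, under the same assumptions about variable names:

    % Threshold-based IS detection on the LFP (assumed 'lfp', fs = 10 kHz).
    fs   = 10000;
    dev  = abs(lfp - mean(lfp));                       % deflection from the mean
    isOn = find(diff([0; dev(:) > 3*std(lfp)]) == 1);  % 3 SD threshold crossings
    % Peri-IS spectrograms would then be computed on cutouts around each
    % event with a 200 ms Blackman window and divided, frequency by
    % frequency, by the average spectrum of epochs > 2 s away from any IS.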
For spike detection in the loose-patch recordings, the trace was high-pass filtered (200 Hz); a threshold of 3 SD was used to detect spike events. Cell recordings that included fewer than 100 interictal spikes were excluded from subsequent analysis.
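The spike-detection step admits the same treatment; again, the variable vm and the filter order are assumptions:

    % Spike detection on the loose-patch trace (assumed 'vm', fs = 10 kHz).
    fs = 10000;
    [b, a] = butter(3, 200/(fs/2), 'high');             % 200 Hz high-pass
    hp  = filtfilt(b, a, vm);                           % zero-phase filtering
    spk = find(diff([0; abs(hp(:)) > 3*std(hp)]) == 1); % 3 SD threshold crossings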
VEP amplitudes were calculated as the mean voltage in a window of 400 ms starting from 100 ms after stimulus presentation. The temporal distribution of amplitude values was smoothed with a Gaussian kernel of 100 ms to generate the temporal evolution of VEP amplitudes reported in Fig. 5d; two animals were excluded because of the low number of events. | 2018-04-03T01:28:05.433Z | 2017-01-10T00:00:00.000 | {
"year": 2017,
"sha1": "23a7864775d4bb8d5dd8a48bd81338285c455436",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep40054.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be4cf81e2711d43baaf112a1cdff3f7cb757fdc8",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
19431546 | pes2o/s2orc | v3-fos-license | Long-term Combination Therapy With α-Blockers and 5α-Reductase Inhibitors in Benign Prostatic Hyperplasia: Patient Adherence and Causes of Withdrawal From Medication
Purpose To investigate long-term therapeutic effects and patient adherence to a combination therapy of a 5α-reductase inhibitor and an α-blocker and to identify causes of withdrawal from medication in patients with clinical benign prostatic hyperplasia (BPH). Methods BPH patients with lower urinary tract symptoms (LUTS) receiving combination therapy with follow-ups for 1–12 years were retrospectively analyzed. Therapeutic effects were assessed at baseline and annually by measuring the International Prostate Symptom Score, quality of life index, total prostate volume (TPV), maximal flow rate, voided volume, postvoid residual volume and prostate-specific antigen level. Causes of discontinued combination therapy were also investigated. Results A total of 625 patients, aged 40–97 years (mean, 73 years), were retrospectively analyzed. All measured parameters showed significant improvements after combination therapy. Three hundred sixty-nine patients (59%) discontinued combination therapy, with a mean treatment duration of 2.2 years. The most common reasons for discontinued treatment were changing medication to monotherapy with α-blockers or antimuscarinics (124 patients, 19.8%), receiving surgical intervention (39 patients, 6.2%), and LUTS improvement (53 patients, 8.5%). Only 64 patients (10.2%) were lost to follow-up and 6 (1.0%) discontinued combined treatment due to adverse effects. A smaller TPV after short-term combination treatment contributed to withdrawal from combination therapy. Conclusions BPH patients receiving long-term combination therapy showed significant improvement in all measured parameters. Changing medication, improved LUTS and choosing surgery are common reasons for discontinuing combination therapy. A smaller TPV after short-term combination treatment was among the factors that caused withdrawal from combination therapy.
INTRODUCTION
Benign prostatic hyperplasia (BPH) is a progressive disease commonly associated with bothersome lower urinary tract symptoms (LUTS). It may result in complications, such as acute urinary retention, and require BPH-related surgery [1][2][3]. Treatment with α-1 blockers has been found to rapidly improve the maximal flow rate (Qmax) and quality of life index (QoL-I) [4,5]. Previous trials have revealed that 5α-reductase inhibitors (5ARIs), such as finasteride or dutasteride, can reduce the total prostate volume (TPV) and surgical risk in long-term follow-ups [6,7]. Combination therapy with α-blockers and 5ARIs has been proven effective in reducing LUTS, decreasing TPV, and reducing the risk of disease progression compared to treatment with a single medication or placebo [8,9]. Combination therapy with α-blockers and a 5ARI is recommended for patients with moderate-to-severe LUTS and enlarged prostates in the guidelines for BPH/LUTS management [10,11].
BPH with LUTS is not life-threatening, and life-long medical treatment has become the main management strategy in recent decades [10]. Long-term reports on patient compliance with combination treatment for LUTS due to BPH are scarce. This study aims to assess the long-term therapeutic effects of combination therapy at a single center in Hualien County, Taiwan. The causes of discontinuing one or both medications were also assessed.
MATERIALS AND METHODS
Patients with clinical BPH (TPV ≥ 30 mL) receiving combination treatment for at least 1 year were retrospectively analyzed. All patients were treated with a combination of a 5ARI (dutasteride 0.5 mg once a day) and an α-blocker (tamsulosin 0.4 mg or doxazosin 4 mg once a day) from the beginning and had regular follow-ups. These patients were investigated for LUTS, prostate indicators, and uroflowmetry parameters annually after starting combination treatment. Data were retrospectively collected between 2003 and 2015. The inclusion criteria were male sex, age 40 years and over, a diagnosis of BPH associated with LUTS, and a first prescription of a combination of an α-1 blocker and a 5ARI for at least one year. Patients with neurological lesions, recurrent urinary tract infection, or prostate cancer confirmed by biopsy were excluded from this study. This study was approved by the Ethics Committee of the Buddhist Tzu Chi General Hospital, Hualien, Taiwan (approval number: 102-85). Informed consent was waived by the Ethics Committee because the chart review involved routine treatment and the study was performed retrospectively.
Routine clinical assessments of BPH at baseline and annual follow-ups included digital rectal examination, the International Prostate Symptom Score (IPSS), QoL-I measurement, transrectal ultrasound measurement of TPV and the transition zone index (TZI), uroflowmetry (Qmax, voided volume [Vol], and postvoid residual [PVR]), and prostate-specific antigen (PSA). Data were collected every 6 months for the first 2 years and then annually, from the time the patient was enrolled in the study until the discontinuation of combination treatment. If patients died or did not refill their medication, they were considered to have dropped out of the follow-up analysis. If medications were changed during the follow-up period, patients were assigned to the discontinued-medication group and their subsequent parameters were not included in the follow-up analysis. Reasons for discontinuing treatment were determined through a face-to-face interview or telephone call by the same research assistant. Patient characteristics and parameters that might be associated with discontinuation of combination treatment were also analyzed.
In the analyses, means and standard deviations were calculated for continuous variables, and numbers and percentages for categorical data. Continuous therapeutic-outcome data were compared between the groups at different time points. Paired t-tests and Pearson chi-square tests were used, as appropriate, to compare the measured parameters from baseline to each time point and to test the relationship between QoL-I and adherence to medication. A P-value of less than 0.05 was considered statistically significant.
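The statistical software used is not stated in the paper; purely as an illustration of the comparisons described above, a MATLAB sketch (Statistics and Machine Learning Toolbox) might look like the following, with all variable names hypothetical.

    % Paired t-test: a parameter at baseline vs. a follow-up time point.
    [~, p_paired] = ttest(ipss_baseline, ipss_year1);  % paired-sample t-test
    % Pearson chi-square: QoL-I category vs. adherence status.
    [~, chi2stat, p_chi] = crosstab(qol_grade, adherent);
    significant = p_paired < 0.05;                     % alpha = 0.05, two-sided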
RESULTS
A total of 625 patients were recruited for this study. The mean age at baseline was 72.9 ± 9.0 years (range, 40-97 years). The patient number at each follow-up year is listed in Table 1. The mean treatment duration of the recruited patients was 3.16 ± 2.94 years (range, 1-12 years). There were 256 patients (41%) who continued taking combination therapy, with a mean treatment duration of 4.59 ± 3.34 years (range, 1-12 years), whereas 369 patients (59%) discontinued combination therapy, with a mean treatment duration of 2.20 ± 2.16 years (range, 1-12 years).
Changes in Parameters From Baseline to Different Time-Points in Patients Who Received Combination Therapy
From baseline to the longest period of 12 years of ongoing treatment, combination therapy showed sustained and continued improvements in all parameters. In general, improvement became less prominent after 6 years of follow-up. The distribution of the measured parameters at each time point is shown in Table 1. Interestingly, the 12-year follow-up with combination therapy resulted in statistically significant continuous improvement in the IPSS-T and QoL-I over all 12 years (P < 0.018 and P < 0.001, respectively). Qmax, TPV, TZI, and PSA levels all showed significant improvement compared to baseline levels in the first 4-8 years of the follow-up period.
Causes of Discontinuation of Combination Therapy
Among the 625 recruited patients, 369 (59%) discontinued the combination medication. Of these 369 patients, 124 discontinued combination treatment due to changing medication: to monotherapy with α-blockers only (n = 54, 8.6%), to antimuscarinics only (n = 17, 2.7%), or to a combined α-blocker and antimuscarinic (n = 53, 8.5%). Thirty-nine patients (6.2%) discontinued due to receiving surgical intervention, and 53 (8.5%) due to improved LUTS. Only 64 (10.2%) were lost to follow-up and 6 (1.0%) discontinued combined treatment due to adverse effects (Table 2). The treatment duration was divided into 3 groups: short-term (< 2 years), middle-term (2 to 5 years), and long-term (> 5 years). In the short-term group, the most common reasons for early termination of combination therapy were death (35 patients, 5.6%), loss to follow-up (34 patients, 5.4%), and improvement of LUTS (33 patients, 5.3%). In the middle-term group, the most common reason was loss to follow-up (27 patients, 4.3%), followed by conversion to monotherapy with α-1 blockers (25 patients, 4.0%) and conversion to α-1 blockers plus antimuscarinic agents (24 patients, 3.8%). In the long-term group, however, most of the patients adhered to combination therapy; the rates of medication conversion and surgical intervention were less than 2%.
Differences in Parameters Between the Discontinued and Continued Treatment Groups Over Time
Over 12 years, there was consistent improvement in the IPSS and QoL-I. The most prominent improvement among all the parameters occurred in the first four years of follow-up. There was no significant difference in age between the continued and discontinued treatment groups (P = 0.484). Between the continued and discontinued medication groups, most of the parameters showed no significant difference at the follow-up time points, except for IPSS-T, QoL-I, Qmax, and Vol, for which the continued group had greater improvements at 4 or 5 years of follow-up. However, TPV in the first and second year of follow-up was significantly smaller in the discontinued group than in the continued group. The distribution of the measured parameters between the 2 groups is shown in Fig. 1. After excluding 52 deceased patients, the remaining 573 patients were included in the analysis. The Kaplan-Meier survival curve of patients who continued combination therapy is shown in Fig. 2. There was a high drop-out rate in the first 2 years, followed by a consistent but slow rate of withdrawal from combination treatment during the follow-up period.
DISCUSSION
This single-hospital study, based on prospectively collected and retrospectively analyzed data, demonstrated significant improvements in all analyzed parameters, including TPV, TZI, PSA, Vol, PVR, Qmax, IPSS-T, and QoL-I. Interestingly, except in the cases of the IPSS-T and QoL-I, improvements in each variable were not consistently maintained beyond 4 to 8 years of follow-up, indicating that the dynamic characteristics of the prostate and flow rate parameters change over time. BPH is one of the major factors leading to clinical LUTS. Elevation of PSA concentrations and TPV indicates deterioration of LUTS, an increased risk of acute urinary retention, and the need for surgical intervention [12,13]. TPV and PSA have become important parameters for further treatment. The CombAT study revealed that a combination of dutasteride and tamsulosin was more effective than monotherapy against the overall progression of LUTS in BPH [9]. The European Association of Urology guidelines currently show a benefit of 5ARIs in improving LUTS in patients with moderate-to-severe LUTS and a TPV > 40 mL or elevated PSA (> 1.4-1.6 ng/mL) [11]. However, there are still no studies showing the long-term follow-up results of combination therapy in BPH patients; previous studies have only included up to four years of follow-up [14]. The results of this study are comparable with the CombAT study. All measured variables improved after 4 years of combination therapy and only 1% of patients discontinued treatment due to surgical intervention, indicating high patient compliance with, and tolerability of, combination therapy. Moreover, our study showed that combination therapy provides effective and long-lasting benefits over a time span of more than 4 years. Among the reasons for discontinued combination therapy, the most common were changing medication and improved LUTS. The main reason for converting to a single medication was the presence of adverse effects of the 5ARI, such as sexual dysfunction (impotence, ejaculation problems) and gynecomastia [15]. Patients who changed from the combined medication to antimuscarinics did so on the basis of their main symptoms. Some patients who were bothered by both voiding and storage LUTS at the beginning complained of residual storage LUTS after combined medication for a period of time. Because those patients had already been relieved of their voiding LUTS, an antimuscarinic agent was prescribed. However, the number of patients who discontinued treatment due to changing medication decreased at follow-ups beyond 5 years, indicating that patients who are free of adverse effects, have good compliance with oral medication, and have improved LUTS can usually continue combination therapy for a long time. The number of patients who converted to surgical intervention also decreased after more than 5 years of follow-up, suggesting that patients who can adhere to combination therapy may not need BPH surgery because their condition is stable.
Regarding treatment duration, this study showed that the mean duration of combination therapy was 4.6 years in patients who adhered to the combination treatment, but only 2.2 years in those who discontinued it. In our results, the most important factor in discontinued combination therapy was adverse effects, which contributed to conversion to monotherapy. This finding is similar to that of previous trials, where higher discontinuation rates were observed in the combination therapy group than in the monotherapy group at the first 2-year follow-up [16].
Because the mean age in this study was over 70 years, and the IPSS and QoL-I improved compared to baseline, patients tended to discontinue combination therapy, especially in the first 2 years of follow-up. This suggests that patients might already have felt satisfied with the effects of combination therapy and tended to stop the medication on their own. This is why a high percentage of patients discontinued treatment within 2 years and at the 2- to 5-year follow-up. Subjective satisfaction with combination therapy is an important factor in choosing long-term medical treatment with combined medications. The results clearly show that patients' perceptions strongly influenced adherence to the combination therapy.
Although TPV correlates only weakly with BPH symptoms and QoL-I, TPV has been considered a predictor of long-term medication adherence [17]. In this study, we found that patients with a smaller TPV tended to discontinue the combination therapy within the first 2 years. These patients might also have had improved LUTS in addition to a smaller TPV after short-term combination therapy; therefore, they might discontinue combination therapy when their symptoms improve. Some patients with a small TPV might not benefit from combination therapy and request surgical intervention or a change in medication to relieve their severe LUTS. A good patient-doctor relationship and education about the benefits of long-term combination therapy for BPH are very important contributors to better adherence to combination therapy [18].
Limitations of this study include the lack of a placebo control group, the absence of blinding, uncontrolled prescription of α-blockers, and substantial loss of follow-up data. However, in real-world practice, patient loss during a long-term treatment period is expected. Improved patient education may increase patient compliance with medical treatment of BPH.
In conclusion, combination therapy with an α-1 blocker and a 5ARI led to significant improvements over baseline in LUTS, uroflowmetry parameters, and prostate variables over time. This was observed not only in the short term but also over a long-term follow-up period of 12 years. Changing medications, improved LUTS and choosing surgery are common reasons for discontinuing combination therapy. A smaller TPV after short-term combination treatment was among the factors that caused withdrawal from combination therapy. | 2018-04-03T04:02:10.907Z | 2016-12-01T00:00:00.000 | {
"year": 2016,
"sha1": "d59493fdcbda995e898ab54d7666328b1f4af85b",
"oa_license": "CCBYNC",
"oa_url": "http://www.einj.org/upload/pdf/inj-1632526-263.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d59493fdcbda995e898ab54d7666328b1f4af85b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237718022 | pes2o/s2orc | v3-fos-license | Between Sea and Land: Geographical and Literary Marginality in the Conversion of Medieval Frisia
Ancient and medieval Frisia was an ethno-linguistic entity far larger than the modern province of Friesland, Netherlands. Water outweighed land over its geographical extent, and its marginal political status, unconquered by the Romans and without the feudal social structure typical of the Middle Ages, made Frisia independent and strange to its would-be conquerors. This article opens with Frisia's encounters with Rome, and its portrayal in Latin texts as a wretched land of water-logged beggars, ultimately unworthy of annexation. Next, the early medieval conflict between the Frisians and the Danes/Geats, featured in Beowulf and other epic fragments, is examined. Pagan Frisia became of interest during Frankish territorial expansion via a combination of missionary activity and warfare from the seventh century onward. The vitae of saints Willibrord, Boniface, Liudger and Wulfram provide insights into the conversion of Frisia, and the resistance to Christianity and Frankish overlordship of Radbod, its last Pagan king. It is contended that the watery terrain and distinctive culture of Frisia (pastoralism, seafaring, Pagan religion) as noted in ancient and medieval texts rendered it "other" to politically centralized entities such as Rome and Francia. Frisia was eventually tamed and integrated through conversion to Christianity and absorption into Francia after the death of Radbod.
Introduction: Rome and Frisia
The modern province of Friesland in the Netherlands is 5741 square kilometres of thinly populated, low-lying land, the occupants of which possess a language and customs separate from those of the Dutch. Five uninhabited and two inhabited West Frisian Islands are included in the province. Much terrain is below sea level, and the highest point is approximately 15 m above sea level. Friesland in the north consists of canals, lakes and other waterways; the south consists of fens and heath, marshes with clayey soil, and polders (Niederhöfer 2010). There is only one major town, the capital Leeuwarden, and one notable port, Harlingen. Modern Friesland is smaller than medieval Frisia, which included the province of Groningen in the Netherlands and the provinces of Ostfriesland and Nordfriesland in Germany. This article reviews Frisia's interactions with ancient Rome and medieval Francia to demonstrate the otherness of the Frisians' geographical environment, religion and culture. The sources employed include Tacitus, Pliny the Elder, Beowulf, Gregory of Tours, and the vitae of saints Willibrord, Wulfram, Boniface and Liudger. It is argued that Rome's failure to incorporate Frisia and Francia's long and hard-fought conversion of the Frisians were substantially due to the watery terrain and decentralized society of the Frisians.
The Roman general Nero Claudius Drusus conquered Frisia in 12 BCE, and Roman texts shed some light on Frisian culture and politics from that date onward. Drusus taxed the hides of cattle farmed by Frisians leniently, and in 29 CE Frisia revolted against Olennius, an official who raised the tax. The Frisians expelled the Romans, humiliating Emperor Tiberius, and retained their independence until the general Corbulo sailed up the Rhine in 47 CE (Springer 1953, pp. 109-11). Pliny the Elder took part in Corbulo's victorious campaign against the neighbouring Chauci, which the Frisians greeted with conciliation. In Book XVI of his Natural History, Pliny appears puzzled that people living what he regards as a miserable existence in the far north were unwilling to accept conquest by Rome: a wretched race is found, inhabiting either the more elevated spots of land, or else eminences artificially constructed, and of a height to which they know . . . that the highest tides will never reach. Here they pitch their cabins; and when the waves cover the surrounding country . . . so many mariners on board ship are they: when . . . the tide recedes, their condition is that of . . . shipwrecked men . . . [Y]et these nations, if this very day they were vanquished by the Roman people, would exclaim against being reduced to slavery! (Pliny the Elder 1855, Book XVI, sct. 1) Emperor Claudius ordered Corbulo to retreat to the west bank of the Rhine (Vandermeulan 1998, p. 2), ending direct Roman interference in Frisian affairs. Tacitus states that the Frisians were resentful of Rome's claims to lands in their vicinity; the occupation of the Rhine riverbank opposite Cologne in 58 CE by Frisians under Verritus and Malorix, of whom Tacitus says they "exercised over the tribe such kingship as exists in Germany" (Tacitus 1937, pp. 94-95; Potter 1992), demonstrates this. Nero gave the leaders Roman citizenship and ordered them to vacate the land. They refused and the Romans expelled them; another tribe, the Ampsivarii, then occupied the site, which suggests resentment of Roman overlordship was potentially widespread.
Much of Frisia was approximately 150 km north of the limes dividing Germania Inferior from the 'barbarians' outside, but Roman artefacts are found throughout Frisian territory, clustered in the artificial mounds (called wierden in the north and terpen in the south, from the terms for 'village' or 'hamlet') which were constructed from c. 600 BCE to enable successful settlement of the low-lying land (TeBrake 1978, p. 5). These finds are mostly Samian Ware and coins, and are possibly linked to the trade with Romans in Friesian cattle (Galestin 1999-2000). Winsum, a site on the saltmarsh that was settled in the sixth century BCE and may have become a Roman military support station, yielded Roman objects from the Augustan period onwards (Galestin 1999-2000). The archaeological data are complicated by objects being located in later strata, the ground having been disturbed through "constructing foundations for new houses, raising sections of the terp, or building dams and dykes" (Galestin 2010, p. 71). Michael Erdrich claimed Roman finds were imported briefly and at specific times, as there was no regular trade between Rome and Frisia (cited in Derk de Weerd 2004-2005, pp. 339-64). In contrast, Marjan C. Galestin argued that archaeological and textual sources support regular contact between Frisians and Romans through Frisians serving in the Roman army and commerce involving both groups. Military campaigns are described in first-century Latin texts; later inscriptions on gravestones and altars indicate trade as the main relationship. An altar dedicated to a local goddess, Hludana, from Beetgum terp referred to Romans who had leased fisheries in the region (Galestin 2010, p. 75). Frisian veterans brought Roman artefacts home, and trade with Rome decreased as the empire faded in the fourth and fifth centuries.
Frisia's geography impacted Rome's lack of interest in permanent conquest. Cassius Dio noted that Drusus' ships ran aground in the 'lake' when he invaded the Chauci after pacifying the Frisians (Cassius Dio 1917, pp. 365-67); this refers to the treacherous Waddenzee (Wadden Sea), a vast, shallow body of water, mud flats and intertidal sands covering more than a million hectares (Krauss 2005). Due to geography, Frisia was a decentralized society lacking unified government, and settlements were small and separated by waters; the construction of dykes began in the eleventh century, but land reclamation was not successfully achieved until the fourteenth century or later. Adriaan Verhulst argues that late antique Frisia lacked distinctions between farmers and traders, and "settlements were morphologically very close to the later, small, one street towns" of the early medieval era (Verhulst 1994, p. 370). Trade by sea with Rome is confirmed at Feddersen Wierde near Bremerhaven, Germany, approximately 350 km from the limes (rendering an overland route unlikely). This site was "settled during the last half of the 1st century B.C. and was abandoned during the 5th century A.D." (Parker 1965, p. 1). Roman finds include pottery, glassware, and fibulae. Additionally, the route "by land from Italy to the Frisian coast" (following the Rhine) was in use from Roman to pre-Carolingian times, as attested by "finds of Ostrogothic silver coins and those of the Exarchate of Ravenna in the middle Rhine region" (Adelson 1960, pp. 277-78). The ephemeral qualities of Frisia's watery landscape, and the rebellious independence of the Frisians who rejected incorporation into the Roman Empire, facilitated Frisian separateness and maintained a distinctive Frisian culture until the seventh century, when Frankish coins appear in archaeological sites. From then on, Frisia is usually viewed (questionably, as will be seen) in terms of Frankish overlordship (Croix and Ijssennagger-Van Der Pluijm 2019).
Early Medieval Frisia in Literature and Archaeology
Early medieval Frisia covered a large expanse of territory, "roughly from Bruges in Belgium to the estuary of the Weser in northern Germany" (Stein-Wilkeshuis 1997, p. 18). The Frisians of this era are visible in literary and historical texts and the archaeological record. The Anglo-Saxon poem Beowulf (c. 800 CE) devotes approximately one hundred lines to the "Finnsburg Episode", in which King Hrothgar the Dane's scop (bard) sings of a feud between the Frisians and the Danes. This tale is told on the second of three nights the Geatish hero Beowulf spends in Heorot, Hrothgar's hall, during which he kills the monstrous Grendel and his mother (Bremmer 2004, p. 4). The Dane Hnaef is killed by Finn, son of the Frisian king Folcwalda and husband of his sister Hildeburh, while a guest in Finn's hall. In the resulting melee, there are many casualties on both sides, including Finn's son (Bremmer 2004, p. 5). Finn later met his end at the hands of Hengest, one of Hnaef's band (Hewett 1879, pp. 6-7), and Hildeburh was taken back to Denmark. This conflict is the topic of the 48-line Finnsburh Fragment, and Finn is mentioned in another poem, Widsith, an idealised treatment of a scop and the rulers he has known (Hostetter 2021). Rolf H. Bremmer notes that in Beowulf, Finn's hall is portrayed as filled with treasures, and after Finn's death, the Danes took "all the household-property of the king of that country, such jewels, skilfully wrought gems as they were able to find in Finn's home" (Bremmer 2004, p. 6).
Beowulf also contains four references to a raid on Frisia by King Hygelac, in which Hygelac was killed and his champion Beowulf, after slaying the Frisian champion Daeghrefn, had to swim home to Geatland (Bremmer 2004, p. 7). 1 The motive for this raid also seems to be treasure and glory, and Bremmer notes that in the sixth century, Frisia expanded its reach and became prosperous through trade, with Dorestad emerging as a crucial entrepôt (van Es and Verwers 2010). The Frisians controlled "all the major river estuaries of North-Western Europe; the Scheldt, the Meuse, the Rhine, the Ems and the Weser", which meant that "all kinds of luxury goods-including glass, wine, pepper, pottery, querns, jewellery and woolen fabric-passed from the Rhineland through Frisian staples and were shipped by Frisians to England and Scandinavia" (Bremmer 2004, p. 13). Stéphane Lebecq observes that Frisian traders returned home with grains, timber, slaves, precious stones, and other valuable materials such as amber, and wool to make cloth (Lebecq 1983). Hygelac's raid is an important part of Beowulf, in that it has been used to narrow the dating of its composition. Craig R. Davis proposes that the emergence of the Danes as a polity and the powerful Scylding dynasty is a concern for the Beowulf poet, and an interest in Danish identity is thus discernible in his audience (Davis 2006, p. 116). This points to a late ninth-century origin at the court of Alfred the Great (849-899), as Alfred "traced his father's ancestry to Scyld [and] his mother's ancestry through the Jutish kings of Wight to Goths and Geats" (Davis 2006, p. 111).
The Danes are mentioned in historical sources from the sixth century onwards, including Procopius, Jordanes, Gregory of Tours, and Bede. Gregory names Chlochilaichus (cognate with Hygelac) as a Danish (not Geatish, as in Beowulf) king who invaded Frankish lands (Frisian, as in Beowulf) in c. 520 CE in Book 3, Chapter 3 of the Historia Francorum: The Danes sent a fleet under their king Chlochilaich and invaded Gaul from the sea. They came ashore, laid waste one of the regions ruled by Theuderic and captured some of the inhabitants. They loaded their ships with what they had stolen and the men they had seized, and then they set sail for home. Their king remained on the shore, waiting until the boats should have gained the open sea, when he had planned to go on board. When Theuderic heard that his land had been invaded by foreigners, he sent his son Theudebert to those parts with a powerful army and all the necessary equipment. The Danish king was killed and the enemy fleet was beaten in a naval battle and all the booty was brought back on shore once more. (Gregory of Tours 1974, p. 163) Hygelac, in Gregory's Historia, died in northern Gaul; two centuries later, the Liber Historiae Francorum states that he invaded the lands of the Attoarii/Chattuarii (the Hetware in Beowulf) and died near Nijmegen (Currie 2020, p. 393). Bremmer concludes the prominence of Hygelac's raid in Beowulf and other texts meant that "Frisia played a remarkable role in the political realities of both the Danish and the Geatish courts" (Bremmer 2004, p. 11). The Liber Monstrorum (Book of Monsters), an Anglo-Saxon text from the late seventh or early eighth century, identifies Hygelac as a Geat (as in Beowulf) and a giant, and claims that "[h]is bones are preserved on an island in the river Rhine, where it breaks into the Ocean, and they are shown as a wonder to travellers from afar" (Burbery 2015, p. 319). The discovery of Doggerland, the submerged land bridge between England and the Netherlands, which is rich in Pleistocene mammoth fossils, offers a possible explanation for this spectacular phenomenon (Burbery 2015). This remarkable modern find neatly links the remote, watery geography of the Frisians with the monstrous and the marvelous.
Missions to the Frisians: Wilfrid and Willibrord
Missions were core activities for Christians: peoples that did not convert to Christianity were damned. Christian rulers and missionaries thought Pagan customs and laws were at best invalid, and at worst, demonic (Cusack 2013, pp. 65-80). In 797, Alcuin of York wrote to Bishop Speratus, "Quid Hinieldus cum Christo?" (What has Ingeld, another hero from Beowulf, to do with Christ?) (Bullough 1993, pp. 93-125). Anglo-Saxon aristocrats enjoyed heroic poems about Pagan ancestors, but clerics tended to dismiss them as irredeemable. Bremmer argues that literacy reached Frisia in approximately 1200 CE, though he notes that Frisians were familiar with books as objects due to missions, and that the Lex Frisionum, the oldest Frisian law code, was drafted in Latin before 802 (Bremmer 2014, pp. 3-4). There are no early medieval texts by Frisians (Mostert 2010, p. 450); 2 what Frisians thought of missions and of political absorption into Francia and Christianisation must be extracted from hagiographies and annals produced by pro-Frankish clerics (Bartlett 1998, p. 56). 3 Thus, the history of medieval Frisia is written by the state that sought to obliterate it, whose rulers promoted Christianity and campaigned to eliminate the traditional religion of the Frisians (Van der Pol 2015, p. 21). The conquest of Frisia began in Pepin II's reign (680-714) and continued through that of his son Charles Martel (714-741) and his great-grandson, Charlemagne (768-814). The region south of the Weser and Rhine Rivers was absorbed in 695 by Pepin; then Charles Martel extended Frankish territory "from Friesland to Vlieland, and the areas around Utrecht and the Veluwe to the Ijssel River, in the middle of the present-day Netherlands" (Van der Pol 2015, p. 21). When Pepin died in 714 CE, a large-scale rebellion against the Pippinids broke out, uniting its enemies. Pepin's heir Grimoald, who married Theudesinda, daughter of the Frisian King Radbod (Redbad), also died in 714 (Bremmer 2020, p. 4). Radbod sailed up the Rhine as far as Cologne and re-took Dorestad in 717. Charles Martel subsequently restored order, and when Radbod died, he had no successor to resist Frankish colonisation and Christianity. Yet parts of Frisia east of the Zuider Zee and extending to the River Weser stayed Pagan and were made Christian by the Franks only in Charlemagne's brutal campaigns against the Saxons (772-809 CE), which ended the independence of the last two Pagan peoples in Carolingian lands (Cusack 2011, pp. 44-45).
Richard E. Sullivan's study of Carolingian missions from c. 687 to 900 CE notes two incompatible ideas that underpinned Christian missions: first, that Christianity was a superior religion and persuasion alone would prevail in "making pagans receptive to Christianity as a substitute for their existing religion" (Sullivan 1994, chp. I, p. 273); and second, that the evils of Paganism justified coercion, including political pressure and military action. The first mission to the Frisians was that of Wilfrid of Hexham, who according to Bede and Wilfrid's hagiographer Eddi visited Frisia en route to Rome in 678 CE. Eddi says Wilfrid was well received by King Aldgisl and "he baptised all but a few of the chiefs and many thousands of the common people" (Eddi 1983, p. 132). 4 Wihtberht, an English monk living in Ireland, was sent by Egbert of Ripon to Frisia in 680 and evangelised for two years, but the new ruler, Radbod, was hostile to Christianity; his hostility was probably enabled by the assassination of Dagobert II on 23 December 679. No trace of Wilfrid's mission remained (Levison 1946, p. 53). Utrecht became a centre for the production of texts for the Frisian mission. Marco Mostert, discussing written culture in Frisia, records King Aldgisl's response to a letter he received from the Frankish duke Ebroin during Wilfrid's tenure in Frisia, offering a reward for Wilfrid's head: "the king ripped apart the parchment and threw the pieces into the fire . . . Aldgisl communicated in a telling visual image to the Frankish messengers that he did not accept the proposition" (Mostert 2010, pp. 461-62). This anecdote is of interest as it confirms Frisian hostility to the Franks prior to the reign of Radbod and indicates that Christianity (in the form of Wilfrid, an Anglo-Saxon), divorced from Frankish territorial ambitions, was perhaps not unacceptable. After Willibrord received the pallium in 695, he used Utrecht as a base for the mission to the Danes and Frisians. Willibrord met Radbod after a failed venture to Denmark, when he landed on an island dedicated to Fosite (Maclear 1869). Cattle were reserved for the deity, and water from the sacred spring had to be collected in silence. Willibrord was unintimidated; he baptised three people (probably ransomed Danish boys) in the spring and instructed his company to kill and eat the cattle as required. The Frisians were shocked: "they expected that the strangers would become mad or be struck with sudden death. Noticing, however, that they suffered no harm, the pagans, terror-stricken and astounded, reported to the king what they had witnessed" (Alcuin 1954, p. 10). 5 Radbod cast lots thrice daily for three days hoping to mandate Willibrord's execution, but was unsuccessful: The lots of death never fell upon Willibrord nor . . . his company, except in the case of one of the party, who thus won the martyr's crown. The holy man was then summoned before the king and . . . upbraided for having violated the king's sanctuary and offered insult to his god.
[Willibrord] replied 'The object of your worship, O King, is not a god but a devil, and he holds you ensnared in rank falsehood in order that he may deliver your soul to eternal fire . . . Be baptized in the fountain of life and wash away all your sins . . . ' [Radbod] was astonished and replied: 'It is clear to me that my threats leave you unmoved and that your words are as uncompromising as your deeds'. However, although he would not believe the preaching of the truth, he sent back Willibrord with all honour to Pippin, King of the Franks. (Alcuin 1954, pp. 10-11) Willibrord and Wulfram were unsuccessful in their attempts to convert Radbod: both explained Christian theological ideas about the afterlife of damned Pagans to him, but the king was unmoved. He did, however, permit both to depart alive.
Radbod and Frisia: Politics and Religion
Radbod appears in the hagiographies of several missionaries (Willibrord, Boniface, Wulfram, and Liudger) and in numerous historical texts, including the Liber Historiae Francorum, the Continuations of the Chronicle of Fredegar, the Annales Mettenses Priores, and some minor chronicles. He played an important role in the rise of the Pippinids, and the eclipse of Neustria by Austrasia (Meens 2015, p. 578). He is referred to both as 'duke' and 'king', and while he gave his daughter in marriage to Grimoald son of Pepin II, after both died in 714, he joined forces with their enemy, the Neustrian mayor of the palace Ragamfred, and opposed Pepin's illegitimate son Charles Martel (Broome 2015, p. 161). Radbod is portrayed as a great seafaring commander who inflicted the only recorded defeat on Charles Martel when he re-took Dorestad in 717. After this victory, he experienced an illness and died in 719 (Broome 2015, p. 179). Charles Martel then brought "all of Friesland west of Lake Flevo definitively . . . under Frankish dominion" (Mostert 2013, p. 121). Certain sources accord greater significance to Radbod; for example, the Continuations omits Pepin II's campaigns against the Suevi to focus on the Frisians and their formidable ruler, who acted as a religious and a military leader (Wood 1995, p. 258).
Sources for the Frisian missions portray Radbod primarily as a Pagan and an opponent of conversion to Christianity (Broome 2014). This is always contextualized in terms of Frankish political ambitions and patronage of missions; Bede describes the missions of Wilfrid, Wihtberht and Willibrord, and notes that as "Pippin had recently occupied Frisia citerior and driven out King Radbod, he sent Willibrord and his companions there to preach; and he assisted them with his imperial authority so that no troubles would interfere with their preaching" (Colgrave and Mynors 1969, Book X). The implication is clear; Frankish overlordship signifies Christianity, and areas that the Franks do not control are outside of the Christian faith (Bremmer 1992). 6 Willibald's Vita Bonifatii, written approximately a decade after Boniface's martyrdom at the hands of Frisian Pagans at Dokkum in 754 CE, records Boniface's arrival in Frisia in 716, in the midst of the conflict between Radbod and Charles Martel. Radbod expelled Christians from Frisia, and restored idols and temples, that is, he reinstated the Pagan religion. Alcuin, writing in c. 796 CE, regards Radbod as an obstacle to the conversion of the Frisians, such that his death was a boon, but also as a ruler with whom Willibrord had civil dealings (Broome 2015, p. 159). The Vita Vulframni, portraying Radbod at a time prior to his meeting with Willibrord in 696 (the likely year of Wulfram's death), shows the Frisian king permitting Christian preaching and allowing Wulfram to make converts among those he redeems from sacrificial death (Glaister 1878). Jonas of Fontenelle actually states, "nor did the aforementioned chieftain [Radbod] forbid that the word be preached . . . even the son of the same Radbod [was baptised] . . . clean, as is believed, he passed over out of the world" (Jonas of Fontenelle 2021, chp. IV); see Supplementary Materials. 7 The Vita Vulframni was written at Fontenelle in the late eighth or early ninth century; the author identifies himself as Jonas, a monk of that foundation (Effros 1997, p. 272). The text has had a checkered history as a source for missions to the Frisians. Wilhelm Levison, editor of the Monumenta Germaniae Historica, termed Wulfram's travels to Frisia "a famous legend" (Levison 1946, p. 56), yet favoured a relatively early date of between 788 and 811 for the text. Stéphane Lebecq regards Wulfram as "le principal animateur" ("the principal driving force") of evangelism to the Frisians, more significant than Willibrord, and follows Levison in dating the saint's life to 788-811, when Fontenelle enjoyed a renaissance under Abbot Gervold (Lebecq 2000, p. 75). Lebecq argues the Vita is a compendium of earlier compositions; for example, the dedication to Bainus refers to the abbot of Fontenelle from 701-710, and the core of information regarding pre-Christian Frisian religion comes from Frisian informants such as Ovo, a boy saved by Wulfram from sacrificial death (Lebecq 1996, p. 776). Wulfram was born at Milly near Fontainebleau and, after being ordained priest, won the favour of Theodoric III (c. 651-691) of Neustria and became bishop of Sens (Mershman 1912). Lebecq considers that this elevation occurred in 687/8 and that Wulfram resigned the pallium and became a missionary to the Frisians in approximately 690, the same year Willibrord was sent to Frisia by Egbert (Lebecq 2018, pp. 555-68).
Wulfram retired to Fontenelle and died in 696/7 (though it has been proposed he may have lived to 703, which suggests his body was translated one year after his death) (Howe 2001, p. 153).
By contrast, Ian Wood has argued that the text is a fraudulent answer to Alcuin's Vita Willibrordi, asserting a Neustrian claim against Boniface's Austrasian connections, which nevertheless provides reliable anthropological data about Radbod and Frisian customs, or about how later authors such as Alcuin understood Radbod and Frisian customs (Wood 2013). Lebecq rejects Wood's contention that the Vita Vulframni was deliberately composed to challenge the Vita Willibrordi, in that Wulfram was a Neustrian and Willibrord was effectively in service to the Austrasian Pippinids (Wood 2001, pp. 92-94). Wood's argument, which unlike Lebecq's has not been developed at any length, but rather exists as a briefly expounded aside in studies about other aspects of Germanic Christianisation (Wood 1995, 2001, 2008, 2013), turns on the chronological anomalies in the text, and simply rejects its authenticity without further ado. Admittedly, there are issues with the text; the most obvious is that the date of Wulfram's death is given as 720, and Bainus' translation of his relics is accordingly shifted to 729 (Howe 2001, p. 158). Lebecq argues that the author of the Vita Vulframni was acquainted with Jonas of Bobbio's Vita Columbani, Gregory of Tours' Decem Libri Historiarum, Bede's Historia Ecclesiastica, the Vita Sancti Amandi and the Miracula Amandi (Lebecq 2000, p. 77). 8 For Lebecq, the composite text we have potentially consists of: an original biography, perhaps composed for Abbot Bainus' translation of Wulfram's body in 704; two miracle stories (healing the sick and rescuing a paten from the sea) which he attributes to Abbot Wando, who was supposedly in Frisia with Wulfram (he became abbot in 742 and died c. 756); and the descriptions of the saint's missionary activities in Frisia, which derive from Ovo, the Frisian eyewitness who became a monk of Saint-Wandrille (Lebecq 2000, pp. 83-84). Considering these elements, Lebecq concludes that the accounts of Wulfram's mission have "un caractère d'authenticité indéniable et que leur originalité absolue oblige l'historien à reconnaître l'existence d'une connexion entre le monastère neustrien et la Frise lointaine" ("an undeniable character of authenticity, whose absolute originality obliges the historian to recognise the existence of a connection between the Neustrian monastery and distant Frisia") (Lebecq 2000, p. 87).
Radbod also features in the lives of two saints, Boniface and Liudger, much of whose activity postdated his death. Unsurprisingly, Willibald's Vita Bonifatii emphasizes Radbod's Paganism: Boniface opposed Pagans during his mission, although there was a political dimension to his activities and heterodox Christianity was as much of a target as Paganism. For example, Boniface's encounter with the Thuringian rulers Theobald and Heden, who allowed people to regress to "rustic" or "heretical" practices, is reported without reference to Heden's support of Willibrord and his monastery at Echternach (Broome 2015, p. 162). Richard Broome argues that Willibald's depiction of Radbod, which "present[s] him as a hostile, pagan king", tends to "overlook the pro-Christian connotations of his relations with both Pippin II and the Neustrians Chilperic II and Ragamfred" (Broome 2015, p. 161). Altfrid's Vita Liudgeri, a hagiography of an Utrecht-born saint, contains an account of Radbod's last illness, and it is clear from the changed historical context (Liudger's dates are c. 742-809) that Christian missions went more smoothly after the Pagan Frisian ruler's death.
The Vita Vulframni: The Missionary and the Sea
The Vita Vulframni has generally been regarded as a reliable source of information about pre-Christian Frisian religion; for example, Wood, who has repeatedly argued that it is a "forgery", nevertheless accords credibility to its descriptions of lot casting and sacrificial death by drowning (Wood 1995, p. 260). In 2015, Rob Meens took a moderate stance regarding its reliability; he argued that even if the content was not true or accurate, it nevertheless informed readers about "the insecurity about the fate of pagan forefathers" and the "way in which Wulfram and Radbod were remembered at the time of composition of the Vita" (Meens 2015, p. 580). The most significant element of the portrayal of human sacrifice in the service of Pagan gods in the Vita Vulframni is the role played by the sea, which is linked to the watery geography of Frisia itself. Chapter V of the Vita Vulframni tells how Wulfram of Sens celebrated Mass on board ship while en route to Frisia; during the ceremony the paten fell into the sea. Wulfram prayed and "from the deep of the sea the same paten, divinely carried back, stuck fast to the hand of the same attendant" (Jonas of Fontenelle 2021, chp. V). Arguably, this episode contrasts the sacrifice of the Mass with the Pagan space of the waters, and Wulfram's prayers overcoming the loss of the paten in the sea prefigure the triumph of Christianity over Pagan rituals involving the treacherous waters.
Wulfram's Vita mentions victims "led to a certain place surrounded by water in the manner of two seas to be swallowed wretchedly by waves when the current of the sea overwhelmed the same place . . . at the time of high tide" (Jonas of Fontenelle 2021, chp. VIII). Wulfram's rescued victims are given names and individual characteristics to make them personalized and meaningful to readers. Chapter VI records that he rescued a lad named Ovo; "[o]n a certain day a certain boy, born of the very nation of the Frisians, was led to the noose to be sacrificed to the gods . . . the boy is hanged on the gallows for a period of nearly two hours" (Jonas of Fontenelle 2021, chp. VI); but Wulfram's prayers burst his chains and Ovo was delivered. Ovo later became a monk of Saint-Wandrille and, according to Lebecq, an important eyewitness source for the Vita Vulframni (Lebecq 1994, p. 144). Chapter VIII tells of "a certain widow woman who had two most dear sons, who, by the cast of the lot had been going to be sacrificed to daemons and destroyed in the whirlpool of the sea" (Jonas of Fontenelle 2021, chp. VIII). Wulfram's prayers result in God drying up the land, a direct 'crossing the Red Sea' motif from the Biblical book of Exodus. These boys, too, are baptised after their rescue. These instances of conversion establish the sea as waters that lead to death, whereas the holy waters of baptism lead to eternal life. There is seemingly a link between lot casting and sacrifice, and a reference in the Lex Frisionum reinforces the practice of sacrifice by rising water that is attested in the Vita Vulframni (Lebecq 2000, p. 86).
The Lex Frisionum is a late Carolingian capitulary dated to 785-793/4 (Popkema 2014, p. 369) that has in the last two decades been studied from the viewpoint of violence and the punishments and reparations it occasions. For example, Han Nijdam has researched the relationship between the measurement of wounds and the fines levied to compensate for them (Nijdam 2000), and Przemysław Tyszka situates the Lex Frisionum in the context of Germanic law codes and sexual violence against women, free and enslaved (Tyszka 2011). More salient to the argument of this article, Bremmer notes that the most extraordinary linking of the sea and death in the text is the "remarkable punishment to which the desecrator of pagan sanctuaries (fanum) was subjected: he was led to the sea shore, castrated and drowned" (Bremmer 2010, p. 531). This suggests that the sea is a holy place associated with the Frisian gods, and that those who profane them have their lives extinguished in the sea, a natural element that carries out the divine will. The boys rescued by Wulfram in the Vita Vulframni are presumably not being punished for desecrating sanctuaries (though there are interesting possibilities in the relation between physical castration and the Christian embrace of celibacy in monasticism), but the Lex Frisionum confirms that death by drowning was an authorised Pagan form of execution.
Chapter IX of the Vita Vulframni gives the account of the "failed baptism" of Radbod (Meens 2015, p. 579), and Chapter X relates how Wulfram destroyed "a house of shining gold and unbelievable beauty" that "his god has vowed that he will bestow on Prince Radbod after his death" (Jonas of Fontenelle 2021, chp. X). The baptism scene is the most important in the hagiography and has attracted interest for many years (Maclear 1863, pp. 175-80; Brown 1910, pp. 141-44). Wulfram and Radbod discoursed as the Frisian king prepared to enter the baptismal font and receive the Christian waters of eternal life: Furthermore the aforementioned chieftain Rathbod, since he was inspired to receive baptism, inquired of the holy bishop Vulfram, binding him by vows through the name of the Lord, where were the greater number of kings and princes or nobles of the race of the Frisians, namely in that heavenly region, which if he believed and were baptised [Vulfram] promised he would attain, or in that [region] which he was calling infernal punishment. Then blessed Vulfram said, 'Do not make a mistake, noble prince, the number of his saved is sure in the hands of God. For it is certain that those of your princely predecessors of the race of the Frisians who passed away without the sacrament of baptism received the sentence of eternal punishment; truly whoever from now on believes and is baptised will rejoice with Christ in eternity'. Hearing this the unbelieving chieftain-for he had gone forward to the font-even, as is related, withdrew his foot from the font, saying that he could not lack the society of his princely Frisian predecessors and dwell with a small number of paupers in that heavenly realm; nay rather that he could not easily show agreement to the new words, but that he would rather be going to remain in these [words] to which for a long time he, with the whole race of the Frisians, had paid heed. But the blessed bishop of Christ said: 'Alas, ah sorrow, I see that you have been tricked by the misleader who deceives humankind! But unless you pursue penitence and believe and are baptised in the name of the holy trinity, you will not enter the gate of the eternal kingdom, but will be punished by the pain of eternal damnation'. When the holy bishop said these things, many of the Frisians believed and were baptised although the aforementioned king persisted in paganism. (Jonas of Fontenelle 2021, chp. IX). As Meens emphasizes, the main issue is the fate of unbaptised ancestors. Meens notes that although the Vita Vulframni foregrounds this issue, other missionary saints had to deal with it too. For example, Meens argues convincingly that the correspondence between Pope Gregory and Boniface regarding "liturgical offerings for the deceased" indicates some were making liturgical offerings for the non-Christian deceased, that is, they were seeking to honour their Pagan ancestors (Meens 2015, p. 583).
To achieve a fuller interpretation of the failed baptism of Radbod, it would be useful to engage in a detailed comparison with other baptisms of Pagan Germanic leaders, for example, that of the Danish ruler Harald Gormsson, nicknamed Bluetooth, by the German missionary Poppo (Cusack [1998] 2001), which succeeded only because Poppo baptised the exhumed corpses of Harald's parents and re-interred them in the church he erected at the royal site of Jelling. However, for the purposes of this article, the loyalty to ancestors expressed in the Vita Vulframni by Radbod is understandable in cultural terms. As James C. Russell puts it, "[s]ocial structure influences ideological structure" and Radbod would therefore prefer an afterlife feasting collectively with his kin to an individual Christian afterlife (Russell 1994, p. 102).
Conclusions
This article has argued that the sea was the dominant geographical reality for the Frisians in the late Roman and early medieval periods (McManus et al. 2013, pp. 255-77). This was a factor in the potential integration of the Pagan Frisians into the Roman Empire (which did not eventuate) and into the Pippinid/Carolingian Christian empire, which happened only after more than three decades of war waged by Charlemagne from 772 onwards (Cusack 2011, pp. 44-46). Evidence of hostility to both Roman and Frankish overlordship has been cited. The watery lands of Frisia were viewed as treacherous by the Romans, and featured as a site of Pagan religion in Jonas of Fontenelle's Vita Vulframni. This article has contrasted the sea (which covers much of Frisia) as a site of sacrifice by drowning to Pagan gods (Wood 2008, p. 726; Bremmer 2010, p. 531) with the waters of baptism that confer Christian salvation. The Christian God's power over the sea is exhibited in the incident of the recovered paten, and in the rescue of the boys to be sacrificed by drowning. This is a form of conversion of nature and the landscape, in which Christians exercised powers of control and transformation over previously Pagan terrain (Siewers 2003, p. 2).
Due to the importance of marginal landscapes/seascapes in all eras and all characterisations of the Frisians, the idea of transformation of the landscape is important. Early medieval Frisia has largely been understood in terms of Christian Francia in much contemporary and historical research; this is valid given the Christian Franks' sustained efforts to conquer and absorb it, and the fact that the transformation of Frisian and Saxon territories included the resettling of ethnic Franks in these marginal areas and the removal of Frisians and Saxons to distant regions of the Frankish empire (Cusack 2011, p. 45). However, it is potentially more useful to see Frisia as a territory that is contiguous with Jutland and linked (like Saxony) with the Scandinavian polity that met aggressive Carolingian evangelism and territorial expansion with sustained sea raiding and hostility (Croix and Ijssennagger-Van Der Pluijm 2019, p. 3). This kinship renders the concerns of Radbod at the font, that integration into the Christian afterlife would deprive him of the society of his ancestors, both more poignant and more urgent, in that the Franks were a colonialist power that sought the obliteration of the Pagan religion and distinctive culture of the peoples they conquered. In Jonas' Vita Vulframni, Wulfram is depicted destroying the marvelous palace the Pagan gods promised Radbod, in effect nullifying the Pagan idea of the afterlife as a banquet in the company of gods and ancestors (Jonas of Fontenelle 2021, chp. X). Yet the Frisian way of life endured in their watery homeland, unchanged by the Franks and Christianity, until the Vikings, a people equally at home on the sea, took over "their own maritime niche" (TeBrake 1978, p. 28), spelling the end for a distinctive and marginal lifestyle built on the reclamation of land from the sea and on the self-determination asserted by the Frisian people.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/ 10.3390/rel12080580/s1. The translation of the Vita Vulframni that accompanies this article is by Andrew Gollan.
Funding: This research received no external funding.
Acknowledgments: I am grateful to Andrew Gollan for translating the Vita Vulframni so expertly and inspiring me to write this article. My thanks are due to Keagan Brewer (Macquarie University) who in the past was both my doctoral student and research assistant. His contribution to the research for this article is considerable. My colleagues at the Australian Early Medieval Association Conference at the University of Sydney, 11-12 February 2016 provided helpful feedback on my presentation which covered some of the same material as this article. All errors are my own.
Conflicts of Interest: The author declares no conflict of interest.
Notes
1 The relevant sections are Beowulf 1202-14a, 2354a-66, 2501-08a, and 2910b-21.
2 Marco Mostert notes "the Frisian language seems to have been spoken by only some of those we encounter as 'Frisians' in the (early) medieval sources . . . next to nothing is known of the language spoken by the 'Frisians' mentioned in Roman sources", p. 450.
3 Robert Bartlett's Raleigh Lecture on History, "Reflections on Paganism and Christianity in Medieval Europe", Proceedings of the British Academy, 101 (1998), 55-76, opines that "To seek to understand native paganism from missionary literature is a little like attempting to form a picture of twentieth-century British socialism from the speeches of Margaret Thatcher. A hostile, sometimes highly ideological and tactically inspired viewpoint is the one we have to deal with", p. 56.
4 Bremmer (1992) notes that Paul the Deacon's Historia Langobardorum contains an obituary for Pepin II that states that "He also courageously waged many wars with the Saxons and especially with Ratpot, King of the Frisians (Book VI, Chapter 37)", p. 6.
. . . is anonymous (despite the author identifying himself as Jonas of Fontenelle in the opening lines), and repeats the same brief summary he has made in a number of publications, that is, that the text is unreliable, though he admits the desire of Radbod to be with his kin after death is a credible motivation. | 2021-09-09T20:48:09.407Z | 2021-07-28T00:00:00.000 | {
"year": 2021,
"sha1": "f8c56adf10e0b299f8bfef13c2d7f25c498290d4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1444/12/8/580/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1a74db1cb06f9e77ab5a9d4532b62d84aff84cd1",
"s2fieldsofstudy": [
"History",
"Geography"
],
"extfieldsofstudy": []
} |
251022258 | pes2o/s2orc | v3-fos-license | A Big Role for microRNAs in Gestational Diabetes Mellitus
Maternal diabetes is associated with pregnancy complications and poses a serious health risk to both mother and child. Growing evidence suggests that pregnancy complications are more frequent and severe in pregnant women with pregestational type 1 diabetes mellitus (T1DM) and type 2 diabetes mellitus (T2DM) compared to women with gestational diabetes mellitus (GDM). Elucidating the pathophysiological mechanisms that underlie the different types of maternal diabetes may lead to targeted strategies to prevent or reduce pregnancy complications. In recent years, microRNAs (miRNAs), one of the most common epigenetic mechanisms, have emerged as key players in the pathophysiology of pregnancy-related disorders including diabetes. This review aims to provide an update on the status of miRNA profiling in pregnancies complicated by maternal diabetes. Four databases, Pubmed, Web of Science, EBSCOhost, and Scopus were searched to identify studies that profiled miRNAs during maternal diabetes. A total of 1800 articles were identified, of which 53 are included in this review. All studies profiled miRNAs during GDM, with no studies on miRNA profiling during pregestational T1DM and T2DM identified. Studies on GDM were mainly focused on the potential of miRNAs to serve as predictive or diagnostic biomarkers. This review highlights the lack of miRNA profiling in pregnancies complicated by T1DM and T2DM and identifies the need for miRNA profiling in all types of maternal diabetes. Such studies could contribute to our understanding of the mechanisms that link maternal diabetes type with pregnancy complications.
INTRODUCTION
Maternal diabetes is associated with an increased risk of pregnancy complications and is a significant cause of morbidity for both mother and child (1-3). The prevalence of diabetes during pregnancy is increasing globally, paralleling the obesity and type 2 diabetes mellitus (T2DM) epidemics (4). According to recent estimates, ~16.7% of live births (21.1 million) are associated with maternal diabetes, of which 80.3% are due to gestational diabetes mellitus (GDM), 10.6% due to pre-existing type 1 diabetes mellitus (T1DM) or T2DM, and 9.1% due to T1DM and T2DM first detected in pregnancy (5). All types of maternal diabetes are associated with pregnancy complications, with several studies reporting that the frequency and severity of adverse pregnancy outcomes are related to the degree of hyperglycaemia (6,7). Women with pregestational T1DM and T2DM have a higher risk of pregnancy complications, including fetal and neonatal loss, congenital malformations, preterm delivery, macrosomia, preeclampsia and caesarean deliveries, compared to women with GDM (8,9). The more severe effects of pregestational diabetes compared to GDM are most likely attributed to the pre-conceptual hyperglycaemic environment, longer intrauterine exposure to hyperglycaemia, and the different pathophysiological mechanisms that underlie the different types of maternal diabetes (10,11).
MiRNAs are short, highly conserved, non-coding RNA molecules that are approximately 22 nucleotides in length. They were first identified in Caenorhabditis elegans in 1993 (12) and have emerged as powerful epigenetic mediators of diverse biological processes including development, proliferation, differentiation, apoptosis and metabolism (13). To date over 2,500 miRNAs have been identified in humans (14,15), which together regulate ~60% of genes in the genome (Zhang and Wang, 2017). MiRNAs regulate gene expression through post-transcriptional mechanisms, by binding to the 3' untranslated region (UTR) of messenger RNA (mRNA) and inducing degradation or by translational repression of the mRNA transcript (16). Furthermore, recent studies have proposed an important role for circulating miRNAs in cell-to-cell communication, suggesting that these extracellular miRNAs may similarly regulate biological processes (17,18). The dysregulated expression of miRNAs is associated with the development of metabolic diseases and conditions including cancer, obesity, T2DM and cardiovascular disease (19,20).
In recent years, miRNAs have been identified as key regulators of metabolic adaptation during pregnancy (21-23). They regulate several biological processes that are critical during pregnancy and may reflect the physiological state of the pregnancy and fetal development. A growing body of evidence has reported on the association between maternal miRNAs and pregnancy complications, including placental weight (24), placental abruption (25), placenta previa (26), preeclampsia and gestational hypertension (27), intrauterine growth restriction (28), macrosomia (29) and GDM (30). Therefore, miRNA profiling may aid in elucidating the pathophysiological mechanisms that underlie the different types of maternal diabetes. This review aims to provide an update on the status of miRNA profiling in pregnancies complicated by maternal diabetes. Four databases, Pubmed, Web of Science, EBSCOhost, and Scopus, were searched to identify published studies reporting miRNA profiling during maternal diabetes from the date of inception to January 2022. The search terms "type 1 diabetes", "type 2 diabetes", "gestational diabetes mellitus", "pregestational diabetes", "maternal diabetes", "microRNA", and "pregnancy", including corresponding synonyms and associated terms for each word, were used. Studies were considered eligible if they were original articles, investigated miRNA patterns during maternal diabetes, and if the study was published in English. Reference lists of included studies were also searched to identify other potentially eligible studies.
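A search of this kind can be scripted against PubMed via NCBI's Entrez interface. The sketch below, using Biopython, is illustrative only: the boolean term string, the date limits and the record cap are assumptions standing in for the authors' registered query, and NCBI requires a contact e-mail.

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.org"  # required by NCBI; placeholder address

# Illustrative boolean query combining the review's stated search terms;
# the authors' exact query string is not given in the text.
term = ('("gestational diabetes" OR "maternal diabetes" OR '
        '"pregestational diabetes" OR "type 1 diabetes" OR "type 2 diabetes") '
        'AND (microRNA OR miRNA) AND pregnancy')

handle = Entrez.esearch(db="pubmed", term=term,
                        mindate="1900", maxdate="2022/01/31",
                        datetype="pdat", retmax=2000)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records; first PMIDs: {record['IdList'][:5]}")
```

The same query would then be adapted to the syntax of each of the other three databases before deduplication and screening.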
CHARACTERISTICS OF INCLUDED STUDIES
A total of 1800 articles were identified from the search strategy, of which 53 met the inclusion criteria and are included in the review (Figure 1). The 53 included studies were case-control studies on GDM conducted between 2011 and 2022 (Table 1). Studies were conducted across five continents (Africa, Asia, Australia, Europe and North America) and in different countries: Australia (n = 2), Canada (n = 1), China (n = 33), Estonia (n = 1), Egypt (n = 1), Germany (n = 1), Mexico (n = 3), Iran (n = 1), Italy (n = 2), Italy/Spain (n = 1), South Africa (n = 1), Spain (n = 1), Turkey (n = 3), United States of America (USA) (n = 1) and different places in Europe (n = 1). The sample size of studies ranged from three to 204 women. Studies profiled miRNAs in different biological sources including human umbilical vein endothelial cells (HUVECs) (n = 2), omental adipose tissue (n = 1), plasma (n = 10), placenta/plasma (n = 2), placenta (n = 9), placenta/plasma exosomes/skeletal muscle tissue (n = 1), placenta/whole blood (n = 2), placental-derived mononuclear macrophages (n = 1), serum (n = 16), serum/placenta (n = 1), skeletal muscle tissue (n = 1), urine (n = 1) and whole blood (n = 7). Different measurement platforms and techniques were employed across studies. Studies profiled miRNAs using quantitative real-time PCR (qRT-PCR) with SYBR Green (n = …). Three studies reported on the expression of miR-9. Of the three studies, one study reported higher levels of miR-9 in the serum of Mexican women with GDM compared to controls (49). In contrast, two studies profiling miRNAs in placental tissue of Chinese women (46) and in plasma samples of Turkish women (32) reported lower expression of miR-9 in women with GDM compared to controls. Of the seven studies reporting on miR-16, three studies demonstrated higher expression in serum and plasma samples of Chinese and European women with GDM compared to controls (34,62,83). In contrast, Herrera et al. (42) reported lower expression of miR-16 in urine samples of Mexican women with GDM compared to controls in the third trimester, and higher expression in the first and second trimesters. Three studies conducted in Turkey and South Africa reported no difference in miR-16 expression in women with GDM compared to controls (40,41,55). Four studies investigated miR-17 during GDM. Of these, three studies reported that miR-17 expression was higher in plasma samples of Chinese and Turkish women with GDM compared to controls (32,34,83). In contrast, Pheiffer et al. (55) showed no significant difference in miR-17 expression in the serum of South African women with GDM compared to controls. Five studies profiled miR-19a and miR-19b during GDM. Two studies reported that miR-19a and miR-19b expression was higher in serum and plasma samples of Chinese women with GDM compared to pregnant women without GDM (66,83). However, two studies reported that miR-19a and miR-19b expression did not differ in plasma and serum samples of women with GDM compared to controls (34,55). Stirm et al. (60) demonstrated higher expression of miR-19a and miR-19b in the whole blood of German women with GDM compared to controls in the screening group, however, this difference was not validated in a larger sample. Three studies profiled miR-20a, of which two studies reported higher expression of miR-20a in Chinese women with GDM when compared to controls (34,83), while Pheiffer et al.
(55) reported lower expression of miR-20a in South African women during GDM compared to controls. For miR-21, Wander et al. (64) reported higher expression in plasma samples of American women with GDM compared to controls, while two studies reported lower expression of miR-21 in whole blood and plasma samples of Turkish women with GDM compared to controls (32,40). MiR-29a was investigated in seven studies. Of these, three studies showed higher serum expression of miR-29a during GDM in women from Canada, Mexico, and different regions in Europe (38,49,62). Two studies reported lower levels of miR-29a in serum and plasma of Chinese and Turkish women with GDM compared to controls (32,79), and two studies reported no difference in miR-29a expression in serum and plasma samples of American and South African women with GDM compared to controls (55,64). Of the three studies that reported on miR-29b expression during GDM, two studies reported higher expression in serum and plasma samples of Canadian and Turkish women with GDM compared to controls (32,38), while Sun et al. (61) reported lower expression of miR-29b in Chinese women with GDM compared to controls.
Three studies investigated miR-30d during GDM. Of these, two studies reported higher expression of miR-30d in plasma and placenta of Estonian and Chinese women with GDM compared to controls (63,78), while one study reported lower expression in placenta samples of Chinese women with GDM compared to controls (46). Three studies reported on miR-92a during GDM. Of these, two studies reported higher expression of miR-92a in plasma during GDM (52,63), while Lie et al. (46) reported lower expression in the placenta of Chinese women with GDM compared to controls. The two studies that investigated miR-96 both reported lower expression in plasma/placenta/whole blood of Chinese women with GDM compared to controls (47,76). Two studies reported contradicting results for miR-125b. Lamadrid-Romero et al. (45) reported higher expression of miR-125b in serum samples of Mexican women with GDM compared to controls, while Balci et al. (32) reported lower expression in plasma samples of Turkish women with GDM compared to controls. Of the five studies that investigated miR-132, only one reported higher expression, in the serum samples of Canadian women (38). In contrast, three studies reported lower expression of miR-132 in serum and plasma samples of Chinese and Turkish women with GDM (32,79,82), and Pheiffer et al. (55) observed no significant change in expression of miR-132 in serum samples of South African women with GDM when compared to controls. All three studies that investigated miR-137 reported lower expression in plasma and placenta samples of Chinese and Turkish women with GDM compared to controls (32,46,53). Two studies reported on the expression of miR-142 and miR-143 during GDM. One study reported higher expression of miR-142 and miR-143 in plasma of Turkish women with GDM compared to controls (32). Stirm et al. (60) reported higher expression of miR-142 and miR-143 in the whole blood of German women with GDM in the screening group, however, these findings were not validated in a larger sample. Of the three studies that investigated miR-155, one study reported higher expression in plasma samples of American women with GDM compared to controls (64). Hocaoglu et al. (40) reported no change in the expression of miR-155 in whole blood of Turkish women with GDM compared to controls. However, a more recent study by the same authors reported lower expression of miR-155 in whole blood of Turkish women with GDM compared to controls (41). Both studies that investigated miR-195 reported higher expression in plasma samples of Estonian and Chinese women with GDM compared to controls (63,68). Contradicting results were reported for the expression of miR-197. Nair et al. (51) reported higher expression of miR-197 in placenta, exosomes and skeletal muscle tissue samples of Australian women with GDM compared to controls, while Balci et al. (32) reported lower expression of miR-197 in plasma samples of Turkish women with GDM compared to controls.
Four studies reported on the expression of miR-210. Of these, one study reported higher levels of miR-210 in serum samples of Canadian women with GDM compared to controls (38), while lower levels of miR-210 were observed in placental and plasma samples of Chinese and Turkish women with GDM (32,35). Wander et al. (64) observed no difference in miR-210 expression in plasma samples of American women with GDM compared to controls. Of the seven studies that reported on miR-222 expression during GDM, two reported higher expression of miR-222 in omental adipose tissue and plasma of Chinese women with GDM compared to controls (58,63), while three studies observed lower expression of miR-222 in serum of Chinese, South African and Turkish women with GDM compared to controls (32,55,79). Wander et al. (64) observed no difference in the expression of miR-222 in plasma of American women with GDM compared to controls. Herrera-Van Oostdam et al. (42) demonstrated higher expression of miR-222 in urine samples of Mexican women with GDM compared to controls in the first trimester, no significant difference in the second trimester, and lower expression in the third trimester. Two studies reported increased levels of miR-223 in serum and plasma of women during GDM from Italy/Spain and Egypt (31,75); however, Wander et al. (64) reported no difference in the expression of miR-223 in plasma of American women with GDM compared to controls.
All four of the studies that profiled miR-330 reported higher levels in serum and plasma of Italian, Mexican, Spanish and Turkish women with GDM compared to controls (49,54,56,74). All three studies that profiled miR-342 demonstrated higher expression in serum and plasma of Estonian, Canadian and Turkish women with GDM compared to controls (32,38,63). Sebastiani et al. (56) reported higher expression of miR-483 in plasma of Italian women with GDM compared to controls, while Gillet et al. (38) showed no difference in expression of miR-483 in serum samples of Canadian women with GDM compared to controls. He et al. (39) reported lower expression of miR-494 in whole blood samples of Chinese women with GDM compared to controls, while Gillet et al. (38) demonstrated no difference in the expression of miR-494 in serum samples of Canadian women with GDM compared to controls. Of the three studies that investigated miR-517, Herrera-Van Oostdam et al. (42) demonstrated higher expression in women with GDM compared to controls in the first and second trimesters but lower expression in the third trimester. The other two studies that profiled miR-517 showed no difference in expression in serum of women with GDM compared to controls (35,64). Both studies that profiled miR-520h reported higher expression in serum of Canadian and Chinese women with GDM compared to controls (38,72). Two studies investigated miR-657 during GDM, and both reported higher expression in placental and placental-derived mononuclear macrophages of Chinese women with GDM compared to controls (65,67). Both studies reporting on miR-1323 observed higher levels in the serum of Canadian and Chinese women with GDM compared to controls (38,48). Two studies reported on the expression of let-7g. Tagoma et al. (63) reported higher expression of let-7g in plasma samples of Estonian women with GDM compared to controls. Stirm et al. (60) reported conflicting results: higher expression of let-7g in whole blood samples of German women with GDM compared to controls in the screening group, but no difference in the validation group. Other articles included in this review reported differential miRNA expression, yet these miRNAs were identified in single studies only (36,37,43,44,48,50,57,59,69,71,74,76,77,80,81).
DISCUSSION
MiRNA profiling in pregnancies complicated by diabetes may aid in elucidating the pathophysiological mechanisms that underlie T1DM, T2DM, and GDM (21-23,84). This review provides an update on the status of miRNA profiling in pregnancies complicated by maternal diabetes. The main finding of this review is the lack of studies that have profiled miRNAs in pregnant women with pregestational T1DM and T2DM. All the included studies investigated GDM only. Of these, six miRNAs [miR-195 (n = 2), miR-330 (n = 4), miR-342 (n = 3), miR-520h (n = 2), miR-657 (n = 2) and miR-1323 (n = 2)] were similarly differentially expressed in pregnant women with GDM compared to controls in two or more studies (Table 2). The consistency of expression of these miRNAs across diverse populations and gestational ages, and using different methodologies and measuring platforms, supports their candidacy as biomarkers of GDM.
miR-130a, miR-150, miR-20b, miR-21 and miR-720 were unique to T1DM. Five miRNAs, miR-140-3p, miR-199a-3p, miR-222, miR-30e and miR-451, were unique to T2DM. Ten miRNAs, miR-101, miR-1180, miR-1268, miR-181a, miR-181d, miR-26a, miR-29a, miR-29c, miR-30b and miR-595, were unique to GDM (89). Specific miRNAs may represent biological markers for each type of diabetes, warranting further investigation as potential mechanisms that underlie the different diabetes types. Collares et al. (89) assessed miRNA expression in both females and males, and included non-pregnant individuals, thus their results do not reflect placental-derived miRNAs and pregnancy pathophysiology. Ibarra et al. (90) profiled miRNAs during pregestational T1DM and T2DM. This research was reported as a conference abstract only and was not included in this review. Data from the abstract report that miR-19a, miR-125b, miR-20a and a miRNA on Chr11-134 were unique to placenta samples of women with T1DM and were not expressed in pregnant women with T2DM (90). Our review highlights the lack of studies profiling miRNAs in pregnancies complicated by pregestational T1DM and T2DM. We propose that future studies on miRNA profiling include all types of maternal diabetes, which may contribute to elucidating the different pathophysiological mechanisms that underlie pregestational T1DM and T2DM, and GDM. Studies on miRNA profiling during GDM were mainly related to biomarker discovery. These studies identified six miRNAs that were consistently expressed at higher levels in serum, plasma, placenta and placental-derived mononuclear macrophages in women with GDM compared to controls in different populations, using different methodologies and measurement platforms, and during different gestational ages. These include miR-195 (n = 2), miR-330 (n = 4), miR-342 (n = 3), miR-520h (n = 2), miR-657 (n = 2) and miR-1323 (n = 2) (Table 2). MiR-195 levels were reported to be consistently higher in the serum and plasma samples of women with GDM compared to controls across two studies conducted in China and Estonia using the miScript miRNA PCR array Human T and B cell activation, and SYBR green qRT-PCR (63,68). Previous studies observed high levels of miR-195 associated with fatty acid biosynthesis and metabolism, the insulin signaling cascade and glycogen synthesis (63,85), suggestive of miR-195's candidacy as a biomarker for GDM. Furthermore, upregulation of miR-195 in women with GDM was shown to be associated with the development of T2DM (85) and obesity (63,68). Interestingly, circulating levels of miR-330 were consistently higher in the serum and plasma samples of women with GDM compared to controls across four studies conducted in Italy, Mexico, Spain and China using TaqMan microarray, TaqMan and SYBR green qRT-PCR (49,54,56,73). MiR-330 regulates genes involved in beta-cell (b-cell) function and glucose homeostasis, suggesting that increased miR-330 expression may lead to impaired b-cell proliferation and insulin secretion (56). Furthermore, upregulation of miR-330 was shown to be associated with caesarean delivery in women with GDM (54,56). MiR-342 levels were reported to be higher in serum and plasma of Estonian, Canadian and Turkish women with GDM compared to controls using TaqMan qRT-PCR, the MiScript miRNA PCR array Human T & B cell activation and SYBR green qRT-PCR (32,38,63). MiR-342 has been associated with the regulation of fatty acid biosynthesis and metabolism (63), impaired insulin secretion (38) and b-cell development (32).
Furthermore, upregulation of miR-342 in women with GDM was shown to be associated with obesity (63) and cardiovascular disease in children born to mothers with GDM (86). MiR-520h levels were reported to be higher in serum of Canadian and Chinese women with GDM compared to controls using miScript miRNA array and SYBR green qRT-PCR (38,72). MiR-520h is implicated in impaired insulin secretion in pancreatic b-cells (38), and has been demonstrated to inhibit cell viability and promote apoptosis (72). Furthermore, the upregulation of plasma miR-520h during the first trimester was associated with the onset of preeclampsia (28). MiR-657 levels were reported to be higher in placental-derived mononuclear macrophages and placenta samples of women with GDM in a Chinese population using SYBR green qRT-PCR and TaqMan qRT-PCR (65,67). MiR-657 regulates inflammation by targeting the interleukin-37/nuclear factor-κB signalling axis, which is responsible for the regulation of inflammatory responses (65). Furthermore, the upregulation of miR-657 was associated with the pathogenesis of T2DM (87). MiR-1323 was expressed at higher levels in serum samples of women with GDM compared to controls in studies conducted in Canada and China using SYBR green qRT-PCR (38,48). MiR-1323 regulates insulin secretion (38) and trophoblast cell activity crucial for placental cell development (48). MiR-1323 was implicated in patients with preeclampsia (88). MiRNAs that are commonly expressed across diverse populations and gestational ages, biological samples and measurement platforms present opportunities as biomarkers for GDM. Although it could be argued that miRNAs offer little advantage over measurement of glucose concentrations, the oral glucose tolerance test, the gold standard for GDM diagnosis, is associated with several disadvantages, which include the requirement for fasting, multiple blood draws, and association with nausea, vomiting and bloating, which lead to decreased patient compliance (30). Furthermore, as discussed above, these miRNAs have been reported to be associated with adverse pregnancy outcomes, supporting their use as biomarkers to predict pregnancy outcomes.
Findings from this review show heterogeneous miRNA expression across studies with a general lack of reproducibility. MiRNA heterogeneity may be attributed to factors such as diet, physical activity, medication use, population differences such as ethnicity, socioeconomic status, environmental factors and viral infections (91-95), and differing gestational ages between women (42). Furthermore, different GDM diagnostic criteria and glucose cut-off values across studies may have also contributed to miRNA variability. Pre-analytical and analytical factors such as sample collection and storage, miRNA isolation procedures, measurement platform, and normalisation methods (96-98) affect miRNA expression analysis. The development of optimized protocols for standardizing sample collection, transport, and storage, as well as miRNA isolation procedures and data analysis for the diversity of technological methods used, is important to improve reproducibility across studies. Importantly, the non-specificity of miRNAs is another factor that may limit their clinical applicability. MiRNAs are able to regulate multiple genes across different biological pathways in different diseases (99,100); therefore, miRNA signatures based on a pool of miRNAs may have more clinical applicability than individual miRNAs. Although rapid technological advances could facilitate the use of miRNAs as inexpensive, point-of-care biomarkers in the future, at present, miRNA profiling during GDM remains inconclusive, largely due to poor reproducibility between studies. Many pre-analytical, analytical and biological challenges must be addressed before miRNAs can become clinically applicable.
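The sensitivity of reported results to the normalisation method is easy to see in the comparative Ct calculation that most of the included qRT-PCR studies rely on: the choice of reference RNA enters the fold change directly. The sketch below is generic, with hypothetical Ct values, and is not the pipeline of any particular included study.

```python
def ddct_fold_change(ct_target_case: float, ct_ref_case: float,
                     ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Comparative Ct (2^-ddCt) fold change of a target miRNA in cases
    versus controls, normalised to a reference RNA. Inputs are mean
    cycle-threshold values; a shifted reference shifts the result."""
    dct_case = ct_target_case - ct_ref_case
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(dct_case - dct_ctrl)

# Hypothetical example: a miRNA amplifying 1.2 cycles earlier in GDM
print(ddct_fold_change(24.8, 20.0, 26.0, 20.0))  # ~2.3-fold higher in GDM
```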
CONCLUSION AND FUTURE PERSPECTIVES
This review highlights the lack of studies profiling miRNA expression in pregnancies complicated by pregestational T1DM and T2DM. Future studies should prioritise miRNA profiling in all types of maternal diabetes, which may aid in identifying the mechanisms that underlie the different types of diabetes during pregnancy. Such studies could contribute to unravelling the link between diabetes type and pregnancy outcomes. Furthermore, this review confirms the growing evidence supporting the potential of miRNAs to serve as biomarkers of GDM. Six miRNAs with similar expression in women with GDM compared to controls in two or more studies, across different populations and gestational ages, and using different methodologies and measuring platforms, are highlighted. These six miRNAs represent candidates as future GDM biomarkers and should be prioritized in future studies.
"year": 2022,
"sha1": "d988bf90b476ae70990e54484442e0b9c107bc61",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d988bf90b476ae70990e54484442e0b9c107bc61",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118639362 | pes2o/s2orc | v3-fos-license | Scalable ion-photon quantum interface based on integrated diffractive mirrors
Quantum networking links quantum processors through remote entanglement for distributed quantum information processing (QIP) and secure long-range communication. Trapped ions are a leading QIP platform, having demonstrated universal small-scale processors and roadmaps for large-scale implementation. Overall rates of ion-photon entanglement generation, essential for remote trapped ion entanglement, are limited by coupling efficiency into single mode fibres 5 and by scaling to many ions. Here we show a microfabricated trap with integrated diffractive mirrors that couples 4.1(6)% of the fluorescence from a $^{174}$Yb$^+$ ion into a single mode fibre, nearly triple the demonstrated bulk optics efficiency. The integrated optic collects 5.8(8)% of the π transition fluorescence, images the ion with sub-wavelength resolution, and couples 71(5)% of the collected light into the fibre. Our technology is suitable for entangling multiple ions in parallel and overcomes mode quality limitations of existing integrated optical interconnects. In addition, the efficiencies are sufficient for fault tolerant QIP.
A small chain of trapped ions, confined along the node of an oscillating electric field in a Paul trap, provides a well controlled quantum system that can be cooled to the quantum ground state and precisely manipulated with lasers and microwaves. The ions are simultaneously strongly coupled to each other through the Coulomb force, and decoupled from the surrounding environment. The strong mutual coupling is critical for implementing deterministic multi-qubit entangling gates 10,11 while the external decoupling enables memory coherence times approaching one minute 2 and single qubit gate error rates 12 below 10 −4 . Recent developments include implementation of the Shor factoring algorithm 13 and a programmable quantum computer module based on five ions 14 .
In these experiments the ions' fluorescence was collected using complex multi-element bulk optics, which are unsuitable for scaling to massively parallel systems.
Ion fluorescence plays two complementary roles in quantum information processing: state readout through the collection of multiple photons, and the creation of remote entanglement through entanglement swapping of photons coupled into single optical modes. The fidelity and readout speed for qubit state detection in trapped ion QIP depend only on the fluorescence collection efficiency 15 . Efficient coupling into single-mode structures, including optical fibres or arrays of waveguides for collection from multiple ions 16 , meanwhile requires a collection apparatus that is also capable of producing a high quality ion image. Collection efficiencies in free space of up to 54.8% 17 have been realised in a custom fabricated parabolic mirror with very large solid angle coverage. However these devices are highly labour intensive to construct and not scalable. A more scalable approach incorporated reflective curved surface optics into microfabricated ion traps 8 , but the ion image quality of such systems remains insufficient for good single mode coupling. In fact, for both local processing and remote networking, a scalable optical interface that can efficiently couple multiple ions to single mode guiding structures is necessary to achieve large, massively parallel implementations.

[Figure 1 caption fragment: ... g(2). Ion fluorescence is analysed with a linear polariser (P) to filter out σ photons, leaving only π photons to be divided by a 50/50 non-polarizing beamsplitter (NPBS) between two photomultiplier tube detectors (DET1, DET2). Arrival time statistics are accumulated by a digital interval analyser. D. Measured coincidence counts and second-order correlation g(2). Peak spacing corresponds to 3.25 µs experimental repetition period.]
Diffractive optics offers a solution to this problem since such elements are scalable, can engineer out geometrical aberrations at the design phase for efficient coupling into single mode structures, and are compatible with several platforms such as neutral atom chip traps 18 or crystal colour centres 19 . Previous efforts using diffractive lenses with traditional macro traps have demonstrated a 4.2% collection efficiency 20 in free space as well as near diffraction-limited imaging in both fluorescence 21 and absorption 22 modalities, the latter being important for implementing quantum photonic receivers.
Here we exploit the potential of diffractive optics and demonstrate a scalable photon-ion interface realised on a multi-zone micro-fabricated surface trap with integrated diffractive mirrors. Our trap is based on a proven design 23 and is shown in Fig 2A. The substrate under the central grounding electrode was patterned and etched before coating with 100 nm of aluminium to also act as an array of diffractive mirrors (see Methods). We tested the properties of the integrated mirror by measuring its collection efficiency with a protocol for the creation of triggered single photons and acquiring a near diffraction-limited image of a single ion.
With such image quality we were able to obtain an overall coupling efficiency from ion into single mode fibre that doubled what was previously achieved with micro-traps and multimode fibres by direct collection 6 and with the use of diffractive lenses 7 , respectively.
The diffractive mirror has a focal length of 59.6 µm, which corresponds to the ion height necessary for high resolution imaging when the collimated fluorescence is refocused by an external lens at the desired magnification (Fig 2C). The integrated optic has a design diffraction efficiency of 50%, which includes 92% aluminum reflectivity in the UV, and is 80 µm wide by 127 µm long. The width of the mirror is limited by the centre ground electrode size and the length is set to match the industry standard 127 µm pitch of V-groove fibre arrays for simultaneous coupling of multiple ions. The diffractive mirror has a numerical aperture of 0.55 in the 80 µm direction and 0.73 in the 127 µm direction parallel to the RF rails, capturing 13.3% of the total solid angle. This is equivalent to a circular NA of 0.68, which maximises the overall rate of entanglement generation per unit surface area of the trap 16 .
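The relation between the 13.3% solid-angle fraction and the quoted "equivalent circular NA" of 0.68 follows from elementary spherical-cap geometry: a circular aperture of numerical aperture NA subtends a solid-angle fraction (1 − cos θ)/2 with θ = arcsin(NA). A short check of the paper's numbers (our own sketch, not the authors' code):

```python
import math

def solid_angle_fraction(na: float) -> float:
    """Fraction of the full 4*pi steradians subtended by a circular
    aperture of numerical aperture `na` (half-angle theta = arcsin(na))."""
    theta = math.asin(na)
    return (1.0 - math.cos(theta)) / 2.0

def equivalent_circular_na(fraction: float) -> float:
    """Circular NA whose spherical cap subtends the given fraction."""
    return math.sin(math.acos(1.0 - 2.0 * fraction))

print(f"NA 0.68 covers {solid_angle_fraction(0.68):.1%} of 4*pi")     # ~13.3%
print(f"13.3% of 4*pi <-> NA {equivalent_circular_na(0.133):.2f}")    # ~0.68
```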
The trap is loaded with 174 Yb + ions by isotope selective photo-ionisation from an effusive Yb beam passing through a slot in the chip surface (Fig 2A farthest left ion image). Trapped ions are Doppler cooled and shuttled from the loading zone to the various diffractive optic focus sites by varying the potential in the segmented DC electrode arrays. A conventional bulk optics imaging system (Fig 2B) allows the ion to be observed at all points along the RF rail for diagnostic purposes. As an indication of the robustness of diffractive optics as an imaging solution, a test diffractive mirror patterned around the Yb oven loading slot was successfully used to image ions, despite the large central void which reduced its collection efficiency. A magnetic field coplanar with the chip, at 45 • to the RF rails, sets the direction of the quantisation axis. The laser's direction is aligned with the magnetic field such that circularly polarised light excites either the σ + or σ − transitions.
We measured the collection efficiency of the integrated optics using a single photon generation protocol based on optical pumping (see Fig 1 and Methods). The protocol relies on selective illumination with σ + and σ − polarised lasers followed by emission of a π photon which pumps the atom into a dark ground state (Fig 1A,B). Differences in the radiation distribution patterns result in our optic collecting 17.4% of the fluorescence from π polarised and 11.3% from σ polarised transitions. In the direction perpendicular to the quantisation axis and parallel to the collection optic axis, π and σ photons are orthogonally polarised. With this geometry single π photons were selectively detected by placing a linear polariser before a photomultiplier tube (PMT). To verify the integrity of our single photon generation protocol we measured the second order correlation g (2) (τ ) of the fluorescence collected through the integrated mirror with a 50/50 non-polarising beam splitter and two detectors (Fig 1C). Our large numerical aperture results in polarisation blurring 24 of the σ photons far from the collection optic axis. To reduce the transmission of σ photons through the polariser we used an iris to temporarily decrease the NA from 0.68 to 0.48 and measured a g (2) (0) of 0.12(2) (Fig 1D). This value is well below the threshold of 0.5 for a single photon emitter 25 , but larger than our expected residual value for g (2) (0) of 0.069. The difference is likely due to residual polarisation blurring, imperfections of the polarisation purity of the excitation laser and background scattering of the incident laser light off the chip's surface. The 308 kHz repetition rate of this protocol is comparable with the present state of the art for remote entanglement ion trapping experiments 5 .
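The g(2)(τ) statistic shown in Fig 1D is, in essence, a normalised histogram of coincidence delays between the two detectors: for a pulsed source, g(2)(0) is the ratio of coincidences in the zero-delay window to the mean count in the side peaks spaced by the 3.25 µs repetition period. A minimal estimator is sketched below; the variable names, window width and toy data are illustrative assumptions, not the experiment's analysis code.

```python
import numpy as np

def pulsed_g2_zero(t1, t2, period=3.25e-6, window=0.5e-6, n_side=10):
    """Estimate g2(0) for a pulsed source from two arrays of photon
    arrival times (seconds). Coincidences within +/- window/2 of zero
    delay are compared with the average over side peaks located at
    integer multiples of the repetition period."""
    delays = np.subtract.outer(t2, t1).ravel()  # all pairwise delays

    def peak_counts(center):
        return np.count_nonzero(np.abs(delays - center) < window / 2)

    zero = peak_counts(0.0)
    side = np.mean([peak_counts(k * period) for k in range(1, n_side + 1)])
    return zero / side  # a value below 0.5 indicates a single-photon emitter

# toy usage with random timestamps (replace with real detector records);
# uncorrelated light gives g2(0) ~ 1
rng = np.random.default_rng(0)
t1 = np.sort(rng.uniform(0, 1e-2, 300))
t2 = np.sort(rng.uniform(0, 1e-2, 300))
print(pulsed_g2_zero(t1, t2))
```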
In order to measure the collection efficiency of the diffractive mirror, we ran the triggered single photon protocol 184,000 times and counted 770 photons. Adjusting for known loss processes (i.e. 19% detector quantum efficiency, 50(5)% transmission through the iris, and 24% losses through other optical elements including the polariser) we measured a 5.8(8)% collection efficiency for our diffractive mirror on the π transition. In comparison, our theoretical efficiency of 8.7% is the product of the 17.4% solid angle coverage for the π transition and the 50% expected diffraction efficiency. This value indicates that our diffraction efficiency is 33(7)% and not the expected 50%, most likely due to fabrication imperfections in the optics. This non-ideal behaviour is driven by increased divergence in the beam due to astigmatism from the aperture and the fundamentally non-Gaussian distribution of the ion's emission.
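The quoted 5.8(8)% follows directly from the raw counts once the stated losses are divided out, and comparing it against the 17.4% solid-angle coverage reproduces the 33(7)% diffraction efficiency. A back-of-envelope check using only values stated in the text:

```python
# Back out the mirror's collection efficiency from raw counts,
# dividing out the loss budget quoted in the text.
trials = 184_000            # protocol repetitions
counts = 770                # detected pi photons
qe = 0.19                   # PMT quantum efficiency
t_iris = 0.50               # iris transmission, 50(5)%
t_optics = 1 - 0.24         # other optics incl. polariser (24% loss)

detected = counts / trials                       # ~0.42% raw rate
collection = detected / (qe * t_iris * t_optics)
print(f"collection efficiency ~ {collection:.1%}")            # ~5.8%

# Dividing by the 17.4% solid-angle coverage of the pi transition
# yields the realised diffraction efficiency:
print(f"diffraction efficiency ~ {collection / 0.174:.0%}")   # ~33%
```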
We benchmarked the collection and imaging capabilities of the integrated mirror by coupling the ion's fluorescence into a single mode fibre. We removed the iris and polariser from the set-up and used a mode matching telescope to adjust the average radius of the ion fluorescence image to that of a single fibre mode, with an estimated spatial mode overlap of 98%. The ion image's average M 2 of 1.45 reduces the predicted coupling efficiency to 68%. We measured a transmission from the ion through a single mode fibre of 57(4)%, which combined with an 80% fibre transmission due to propagation and Fresnel losses corresponds to a single mode coupling efficiency of 71(5)%. This value is in good agreement with the estimated 68% coupling efficiency and, combined with the 5.8(8)% π transition collection efficiency from the diffractive optics and 8.3% losses through other optical elements, gives us a total measured coupling efficiency from ion to fibre of 4.1(6)%. This ion-fibre coupling efficiency is nearly triple the previous best of 1.4% using a conventional lens 5 , corresponding to a 8.6 times gain in entanglement generation rate. This efficiency could be substantially improved with larger NA optics or by modifying the optic's design to mode-match a specific transition's intensity distribution to a single mode fibre. More sophisticated multi-level diffraction gratings 26 could improve the diffraction efficiency towards 99%.
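These fibre-coupling numbers compose multiplicatively. The arithmetic below uses only values from the text; note that the product of collection and single-mode coupling alone already reproduces the quoted 4.1%, so the 8.3% ancillary optical loss appears to be absorbed in the measured transmission rather than entering as an extra factor (our reading, not an explicit statement in the text).

```python
overlap = 0.98       # estimated spatial mode overlap with the fibre mode
m2 = 1.45            # average beam quality (M^2) of the ion image
predicted = overlap / m2
print(f"predicted single-mode coupling ~ {predicted:.0%}")        # ~68%

t_measured = 0.57    # measured ion-to-fibre-output transmission, 57(4)%
t_fibre = 0.80       # fibre propagation + Fresnel losses
coupling = t_measured / t_fibre
print(f"coupling into the fibre mode ~ {coupling:.0%}")           # ~71%

collection = 0.058   # pi-transition collection efficiency of the mirror
print(f"overall ion-to-fibre efficiency ~ {collection * coupling:.1%}")  # ~4.1%
```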
We have realised a scalable architecture for interfacing a single ion with single mode optical fibre and free space based on integrated diffractive mirrors. Using a triggered single photon generation protocol we measured a collection efficiency for the integrated optics of 5.8(8)% and a coupling efficiency into a single mode optical fibre of 71(5)%. These mirrors are monolithic with the metallic trap electrode, cover a large solid angle, and can be within a few tens of microns from the ion without distorting the RF trapping potential. They are therefore an ideal platform for the implementation of quantum networks and remote entanglement sharing, not only with trapped ions but also with other quantum light sources such as neutral atom chip traps 18 or fixed emitters such as crystal colour centres like NV − in diamond 19 .
Trap Fabrication
The micro-fabrication procedure 23 for the surface trap includes depositing multiple layers of aluminium (R=92%, λ=370 nm) separated by silicon dioxide insulating layers of various thicknesses. In order to incorporate the diffractive optical elements into this procedure, the oxide surface separating the control electrodes from the ground layer was patterned using e-beam lithography and reactive ion etching of the oxide. The patterned area was subsequently metallised with a 100 nm thick layer of aluminium and the chip was inspected to ensure that the contours of the diffractive optic had not compromised the electrical integrity of the ground electrode. The groove step design is a hybrid of a four-level grating near the centre, with low spatial period, and a two-level grating near the edges, where there is a high spatial period. The grating height step size is 45 nm for the 4-level area and 90 nm for the 2-level area. Approximating small chunks of the grating as 1D structures, the blazing profile was optimised in GSolver to account for the finite height of the grating and vector diffraction effects.
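The quoted step heights are close to what the idealised reflective-grating phase condition predicts: an N-level reflective grating needs a 2π phase ramp per period, and reflection doubles the optical path, so each step is h = λ/(2N). A quick check (ours, and idealised; the 45/90 nm design values presumably also fold in the finite grating height and vector effects that GSolver handles):

```python
def reflective_step_height(wavelength_nm: float, levels: int) -> float:
    """Ideal step height for an N-level reflective phase grating.
    Reflection doubles the path, so a full 2*pi phase ramp per period
    needs a total depth of lambda/2, split into N equal steps."""
    return wavelength_nm / (2 * levels)

for n in (4, 2):
    print(f"{n}-level: {reflective_step_height(370, n):.1f} nm")
# 4-level: 46.2 nm (design: 45 nm); 2-level: 92.5 nm (design: 90 nm)
```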
Fabrication imperfections can cause a mismatch between the optic focal point and the expected RF node altitude of 58.6 µm above the surface of the trap. This is a critical matching problem due to the small depth of focus in low aberration, large aperture ion imaging 21 . To guard against this possibility our five collimating optics were fabricated with focal lengths ranging from f = 58.6 µm to 62.6 µm in 1 µm steps. Stray electrical fields from the neighbouring oven loading zone precluded the use of the nominal f 0 =58.6 µm mirror site and instead experiments were performed on the f +1µm =59.6 µm focal length collimator. A DC potential was applied to shift the ion off the RF node and position it at the focal point. We observed it was more reliable to pull the ion towards the surface of the trap using a DC potential, rather than pushing it away from the trap. While we did not observe a mismatch, for higher numerical apertures this parameter will become even more stringent. In future iterations this could be corrected dynamically with additional RF lines or statically by laser trimming the RF electrodes.
Triggered Single Photon Generation Protocol
To measure the collection efficiency of the integrated mirror we implemented an optical pumping based single photon generation protocol (Fig 1). The protocol relies on decay into a dark optically pumped state requiring the emission of a terminal π polarised photon rather than a σ polarised photon (Fig 1B). The ion was first Doppler cooled for 500 ns with two laser beams of σ + and σ − polarisations with 7 µW and 100 µm diameter at 370 nm, detuned -10 MHz from resonance. Each scattered photon has a 2/3 chance of returning to its original ground state via a σ decay, or a 1/3 chance of being optically pumped into the other ground state via a π transition. The ion was then optically pumped into the 2 S 1/2 m F = +1/2 state by 500 ns illumination with just the σ − light. 750 ns of wait time was introduced to allow for acousto-optic modulator switching and to ensure all laser beams were off. The detection window was then activated for 1000 ns with a 250 ns pulse of σ + occurring 250 ns into the detection window. On average this process takes three scattering events (2 σ and 1 π) and the whole duration of one cycle is 3.25 µs. Since the collection optic was oriented perpendicular to the quantisation axis, the σ and π photons had perpendicular linear polarisations allowing us to filter out the single π photon with a linear polariser and detect it on the PMT. Occasional decays (0.5%) into the 2 D 3/2 dark state were repumped out with 650 µW of 935 nm light and did not have a meaningful impact on the experiment. | 2016-07-01T02:57:22.000Z | 2016-07-01T00:00:00.000 | {
"year": 2016,
"sha1": "281af7e10fb7cd87dff2afc07e3eda72829d1d89",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41534-017-0006-6.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "53e7cfb940d6508f6916c3dd97492ad99b65e45d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
3188281 | pes2o/s2orc | v3-fos-license | UNIQUENESS AND COMPARISON THEOREMS FOR SOLUTIONS OF DOUBLY NONLINEAR PARABOLIC EQUATIONS WITH NONSTANDARD GROWTH CONDITIONS
The paper addresses the Dirichlet problem for the doubly nonlinear parabolic equation with nonstandard growth conditions:

u_t = div( a(x,t,u) |u|^{α(x,t)} |∇u|^{p(x,t)-2} ∇u ) + f(x,t)

with given variable exponents α(x,t) and p(x,t). We establish conditions on the data which guarantee the comparison principle and uniqueness of bounded weak solutions in suitable function spaces of Orlicz-Sobolev type.
Problem (3) will be the subject of the further study. Equations of the types (1) and (3) with constant exponents α and p arise in the mathematical modelling of various physical processes such as flows of incompressible turbulent fluids or gases in pipes, processes of filtration in porous media, and glaciology - see [5,6,16,17,22,33] and further references therein. The questions of existence and uniqueness of solutions to equations like (1) and (3) with constant exponents of nonlinearity α and p were studied by many authors - see [6,14,15,16,24,28,29] for equations of the type (1) and [17,21] for equations of the type (3) with the prescribed function B ≡ B(x, t) independent of the solution v. Existence, uniqueness, and qualitative properties of solutions for parabolic equations with variable nonlinearity, corresponding to the special cases α(x, t) = 0 or p(x, t) = 2, were studied in [1,2,3,4,8,9,10]; see also [7] for a review of results concerning elliptic equations with variable nonlinearity. The Cauchy problem for doubly nonlinear parabolic equations with constant exponents of nonlinearities is studied in [30,31,32]. The existence of a weak solution to problem (3) (and correspondingly problem (1)) is proved in [11]. Existence of bounded solutions for elliptic equations of this type is proved in [12].
In the present work we prove the comparison principle and uniqueness of weak solutions for the Dirichlet problem (3), in which the exponents α and p are allowed to be variable. We also consider localization properties of weak solutions.
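For orientation, a comparison principle for (3) typically takes the following shape; this is the generic form of such statements, offered as a reading aid rather than a quotation of the paper's Theorems 3.3-3.4:

```latex
% Generic shape of a comparison principle for problem (3):
% ordered data force ordered solutions.
\noindent\textbf{Comparison principle (generic form).}
Let $v_1, v_2$ be bounded weak solutions of problem (3) with data
$(v_{0,1}, f_1)$ and $(v_{0,2}, f_2)$. If
\[
v_{0,1} \le v_{0,2} \ \text{a.e. in } \Omega,
\qquad
f_1 \le f_2 \ \text{a.e. in } Q,
\]
then $v_1 \le v_2$ a.e. in $Q$. In particular, taking
$v_{0,1} = v_{0,2}$ and $f_1 = f_2$ yields uniqueness.
```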
The paper is organized as follows. In Section 2 we prove several auxiliary assertions and collect some known facts from the theory of Orlicz-Sobolev spaces. The precise assumptions on the data and main results are given in Section 3. In Section 4 we derive formulas of integration by parts. In Sections 5 and 6 we give the proofs of the main comparison theorems. The comparison principle and uniqueness are proved for solutions subject to some additional restrictions, but under weaker assumptions on the data, and the proof is independent of the proof of the existence theorem. To be precise, the comparison principle and uniqueness are true for the weak solutions with ∂ t Φ 0 (z, v) ∈ L 1 (Q). In order to ensure that this class of solutions is nonempty, in the final Section 7 we show that the already constructed solution belongs to this class, provided that the data of the problem satisfy some additional conditions.
2.2. Parabolic spaces L p(·,·) (Q) and W(Q). Let p(z), z = (x, t) ∈ Q, satisfy condition (4) in the cylinder Q. For every fixed t ∈ [0, T ] we introduce the Banach space and denote by V t (Ω) its dual. By W(Q) we denote the Banach space W ′ (Q) is the dual of W(Q) (the space of linear functionals over W(Q)): Since V + (Ω) is separable, it is a span of a countable set of linearly independent functions {ψ k (x)} ⊂ V + (Ω). We will need two elementary inequalities.
which is formally equivalent to problem (1). Throughout the paper we assume that the coefficient a(z, r) and the exponents of nonlinearity p(z), α(z) satisfy the following conditions: • a(z, r) is a Carathéodory function such that there exist a ± such that The solution of problem (10) is understood in the following sense.
The main existence result is given in the following theorem.
Let conditions (11), (12) be fulfilled. Then for all weak solutions v 1 , v 2 such that The uniqueness is proved in a narrower class of functions than the existence, but since the proofs of Theorems 3.3, 3.4 are practically independent of the proof of Theorem 3.2, the conditions on the exponents α(z), p(z) are less restrictive. For the sake of completeness of presentation, at the end of the paper we present the conditions on the data of problem (10) which guarantee that the corresponding solution satisfies the conditions of the comparison and uniqueness theorems.
4. Formulas of integration by parts. Let ρ be the Friedrichs mollifying kernel. Given a function v ∈ L 1 (Q T ), we extend it to the whole R n+1 by a function with compact support (keeping the same notation for the continued function) and then define Take For every k ∈ N and h > 0 The last two integrals on the right-hand side exist because v h , w h ∈ L 2 (Q T ). Letting h → 0, we obtain the equality In the same way we check that By the Lebesgue differentiation theorem Proof. Let u h ∈ C ∞ (Q) be the mollification of u ∈ W(Q) and Since u and u h are bounded by a constant 1 + K 0 , and γ(z) ≥ γ − > −1, it follows from Propositions 1, 2 that Explicitly calculating the primitive, in the same way we check that for every s > 1 Let ψ k (z) = χ k (t) γ + 2 with the function χ k introduced in (15). Following the proof of Lemma 4.3, we find: Since u ∈ W(Q) ∩ L ∞ (Q) and γ − > −1, v ∈ W(Q) for every ε > 0. Indeed: since u L ∞ (Q) ≤ M , we have the estimates which provide the inclusion |∇v| ≤ (ε + |u|) γ(z) |∇u| + |∇γ| ∫ 0 |u| (ε + |s|) γ(z) | ln (ε + |s|)| ds ∈ L p(·) (Q).
We may now pass to the limit as h → 0 in every term of (17), following the proof of Lemma 4.3: Letting k → ∞ and applying the Lebesgue differentiation theorem, we arrive at (16).
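Several displays in this section have been lost in extraction; for reference, the kernel invoked at the start of Section 4 is presumably the standard Friedrichs mollifier, which in one variable reads:

```latex
% Standard Friedrichs mollifying kernel and mollification (assumed form;
% the original display is missing from the extracted text).
\rho(s) =
\begin{cases}
  C \exp\!\left(\dfrac{1}{s^{2}-1}\right), & |s| < 1,\\[4pt]
  0, & |s| \ge 1,
\end{cases}
\qquad \int_{\mathbb{R}} \rho(s)\,ds = 1,
\qquad
v_h(t) = \frac{1}{h}\int_{\mathbb{R}} \rho\!\left(\frac{t-\tau}{h}\right) v(\tau)\,d\tau .
```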
Under the foregoing conditions on the exponents p(z) and γ(z) the following formula of integration by parts holds: ∀ a.e.t 1 , Let us introduce the function space with Φ 0 defined in (2) and define the functions with It is easy to see that For a.e. θ ∈ (0, T ) there exists the limit Proof. Since w ∈ L ∞ (Q) and φ = φ k,δ,θ are uniformly bounded, it follows from the dominated convergence theorem that and, because sign v = sign w, On the other hand, repeating the same arguments with the test-function φ k,δ,θ ≡ χ k,θ (t) T δ (w), we find that The straightforward computation shows that Letting k → ∞, δ → 0 and applying the Lebesgue differentiation theorem, we find that for a.e. θ ∈ (0, T ) By (14) for every test-function φ ∈ W(Q) Taking for the test-function φ k,δ,θ defined in (18) and applying Lemma 4.5 we have that for a.e. θ ∈ (0, T ) there exists the limit of the first term on the left-hand side of (19): The second term on the left-hand side of (19) with φ(z) = χ k,θ (t) T δ (v(z)) is represented in the form Let us denote (3)). Passing to the limit as k → ∞, for every fixed δ and θ we obtain the equality Making use of the well-known inequality dz. Next, To estimate J (1) (δ) we make use of the following elementary lemma.
Proof. The assertion follows from Young's inequality. Applying Lemma 5.1 we obtain the first bound; by Young's inequality we obtain the second. Gathering (21), (22) and (23), we arrive at the inequality in which, choosing ε ≡ ε(p^±) sufficiently small, we then have the estimate with a positive constant C ≡ C(p^±). It remains to show that the right-hand side of the last inequality tends to zero as δ → 0. We will use the following lemmas.
7. Existence of solutions u ∈ V(Q): L¹-estimate for ∂_t Φ(z, v). Let us check that problem (10) indeed admits solutions in V(Q), which means that the class of uniqueness is nonempty. Following [11], we construct a solution as the limit of a sequence of solutions of the regularized problems, with the coefficient A_{ε,K}(z, u_ε) depending on the given parameters ε > 0, K > 0. For every ε ∈ (0, 1) and 1 < K < ∞ the coefficient A_{ε,K}(z, u_ε) is bounded away from zero and infinity, so that problem (27) can be regarded as the Dirichlet problem for the evolution p(z)-Laplacian.
Theorem 7.1 ([10]). For every u_0 ∈ L²(Ω), f ∈ L²(Q), ε > 0, K > 0, problem (27) has at least one weak solution u_ε ∈ L^∞(0, T; L²(Ω)) ∩ W(Q) such that ∂_t u_ε ∈ W'(Q), and the weak identity holds for every test function φ ∈ L^∞(0, T; ·). Moreover, if u_0 ∈ L^∞(Ω), f ∈ L¹(0, T; L^∞(Ω)), this solution belongs to L^∞(Q) and obeys the estimate (28). As a byproduct we also have, for every φ ∈ W(Q) (see [10]), the identity (29). The solution of problem (27) is obtained as the limit as m → ∞ of the sequence of Galerkin approximations, where the family {ψ_i(x)} is dense in V_+(Ω) and forms an orthogonal basis of L²(Ω). Estimate (28) makes the coefficient A_{ε,K}(z, u_ε) independent of K, provided that K ≥ K_0 + 1. Problem (27) is then considered as a problem with the single regularization parameter ε. Passage to the limit as ε → 0 is justified in [11, Sec. 5] in the proof of Theorem 3.1. To this end, problem (27) is replaced by the formally equivalent problem (31). The proof is based on the uniform a priori estimates for the functions v_ε, ∇v_ε and ∇v_ε + B(v_ε) in the variable Lebesgue spaces L^{p(z)}(Q), the integration-by-parts formulas (see Lemma 4.4), and the monotonicity of the elliptic part of equation (31). The proof of integrability of ∂_t Φ_0(z, v) ≡ ∂_t u is thus reduced to checking that for the solutions v_ε^{(m)} of the regularized problems (31) the norms ‖∂_t Φ_ε(z, v_ε^{(m)})‖_{1,Q} are bounded uniformly with respect to ε and m. By virtue of (30) and (32), the coefficients c_{i,m,ε}(t) are defined as the solutions of a system of ordinary nonlinear differential equations, where u_{0i} and f_i(t) are the Fourier coefficients of the functions u_0(x) and f(z) in the basis {ψ_i}. The function u_ε^{(m)} = Φ_ε(z, v_ε^{(m)}) defined by (30) is a weak solution of problem (31) with the data u_0^{(m)}, f^{(m)} and satisfies (29) with an arbitrary φ ∈ W(Q). Let us fix some ε > 0, m ∈ N, and introduce the function V; integrating by parts in t, we write the equation for V in the form (33). A straightforward calculation gives the equalities which, combined, yield (33). Let us introduce the functions h_µ and H_µ. According to the definition, h_µ(σ) ≥ 0, lim_{µ→0} σ h_µ(σ) = 0, |H_µ(σ)| ≤ 1, lim_{µ→0} H_µ(σ) = sign σ, and the corresponding primitive tends to |σ| as µ → 0.
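Before continuing the estimate, we illustrate the Galerkin construction invoked above. The Python sketch below is our own toy version, not the paper's scheme: it treats the model equation u_t = (|u_x|^{p−2} u_x)_x + f with a constant exponent p, a sine basis, and zero boundary data, whereas the paper's regularized problem (27) carries the coefficient A_{ε,K}(z, u_ε) and a variable exponent p(z).

import numpy as np
from scipy.integrate import solve_ivp

p = 3.0                      # constant exponent (illustrative; the paper allows p = p(z))
m = 8                        # number of Galerkin modes
xs = np.linspace(0.0, 1.0, 401)

def basis(i, x):             # psi_i(x) = sin(i*pi*x), orthogonal in L^2(0,1)
    return np.sin(i * np.pi * x)

def basis_dx(i, x):
    return i * np.pi * np.cos(i * np.pi * x)

def rhs(t, c):
    # u_x of the Galerkin approximation u^(m) = sum_i c_i(t) psi_i(x)
    ux = sum(c[i - 1] * basis_dx(i, xs) for i in range(1, m + 1))
    flux = np.abs(ux) ** (p - 2) * ux          # the p-Laplacian flux |u_x|^(p-2) u_x
    f = np.ones_like(xs)                       # toy source term
    # Weak form gives c_i' = -2 * int flux * psi_i' dx + 2 * int f * psi_i dx
    # (the factor 2 comes from int_0^1 sin(i pi x)^2 dx = 1/2).
    return np.array([
        -2.0 * np.trapz(flux * basis_dx(i, xs), xs)
        + 2.0 * np.trapz(f * basis(i, xs), xs)
        for i in range(1, m + 1)
    ])

c0 = np.zeros(m)             # Fourier coefficients of the initial datum u_0 = 0
sol = solve_ivp(rhs, (0.0, 0.5), c0, rtol=1e-8)
print(sol.y[:, -1])          # the Galerkin coefficients c_{i,m}(T)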
Multiplying (33) by H_µ(V) and integrating by parts in t, we arrive at the corresponding equality. Let us consider the simple case p_t = 0, γ_t = 0, Φ_ε ≡ Φ_ε(x, v). In this case the previous equality becomes (35). Let us write (35) in the form (36) with the term I_1 as indicated. Dropping the nonpositive term I_1 on the right-hand side of (36), letting µ → 0 and using (34), we finally obtain the required bound. Since the right-hand side of this inequality is independent of m and ε, the needed estimate follows by passing to the limit as m → ∞ and ε → 0. | 2017-09-14T21:07:33.322Z | 2012-11-01T00:00:00.000 | {
"year": 2012,
"sha1": "50780a7b59eb38e9e80dfb12f137dfb035476160",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3934/cpaa.2013.12.1527",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "c4b3bda582298c4da271ac74dab14fd4e012252b",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
14402093 | pes2o/s2orc | v3-fos-license | Lessons learnt during the process of setup and implementation of the voucher scheme in Eastern Uganda: a mixed methods study
Background In spite of the investments made by the Ugandan Government, the utilisation of maternal health services has remained low, resulting in a high maternal mortality (438 maternal deaths per 100,000 live births). Aiming to reduce poor women’s constraints to the utilisation of services, an intervention consisting of a voucher scheme and health system strengthening was implemented. This paper presents the lessons learnt during the setup and implementation of the intervention in Eastern Uganda, in order to inform the design and scale up of similar future interventions. Methods The key lessons were synthesised from a variety of project reports, as well as qualitative data drawn from six focus group discussions and four in-depth interviews conducted in the Buyende and Pallisa districts during the implementation phase of the voucher scheme. Results and Conclusions To promote the successful implementation of interventions with demand and supply side initiatives, such as voucher schemes, the health system should be able to respond to the demand created by providing the additional required resources such as health workers, essential supplies and equipment. Involving a diverse, multi-sectoral group of stakeholders is important for addressing the different barriers experienced by women when seeking maternal health services. Voucher schemes should have a mechanism of detecting unintended consequences and mitigating them. Sustainability plans should be built into such interventions to maintain the gains achieved. Lastly, health policy planners can use this information to develop follow-up programmes to test modified versions that are more sustainable. Such programmes could use locally existing community structures for management and resource mobilisation for self-sustainment.
Background
Although Uganda has achieved reductions in maternal mortality over the last decade [1], the current maternal mortality rate of 438 per 100,000 live births remains unacceptably high. The underutilisation of maternal care services like skilled delivery is one of the factors contributing to that high mortality [1]. The underutilisation of maternal care services is due to a combination of demand and supply side problems. The demand side problems include long distances to facilities, lack of transport, unaffordable transport costs, lack of power to make decisions among women, preference for traditional delivery positions and lack of knowledge [2][3][4][5][6]. The supply side barriers include unaffordable care costs, lack of skilled care, drug stock outs, lack of equipment, poor health worker attitudes towards clients and late referrals [2,5,7,8].
Over the years, the government invested in supply side interventions, such as constructing health facilities, recruiting trained personnel, and supplying drugs and medical equipment [9,10]. Despite such efforts, the utilisation of maternal services has persistently remained low. Less than half (48 %) of pregnant women attend the fourth antenatal care visit, and fewer than two thirds (57 %) deliver in health facilities [1]. The quality of care in public facilities has remained unsatisfactory, with frequent shortages of equipment, supplies and drugs [11,12]. Supply side incentives provide little inducement to encourage patients, especially those from vulnerable communities, to utilise facility-based services. They also do not give service providers an incentive to provide better services beyond the basic essentials. However, evidence shows that combining demand and supply side incentives leverages the strengths of each to substantially improve the utilisation and quality of care services [13].
In an attempt to address the above constraints, Makerere University School of Public Health, in collaboration with Johns Hopkins University, and with funding from the Bill & Melinda Gates Foundation and the United Kingdom Department for International Development (DFID), piloted the Safe deliveries project. The project applied a voucher scheme with both demand and supply side initiatives.
The demand side initiative included two vouchers, one for transport and one for maternal care services. A transport voucher was provided to pregnant women and recently delivered mothers from poor communities. The transport voucher could be used with local transport providers based in the communities where the beneficiaries resided. The services voucher covered the costs of antenatal, delivery and postnatal care. These demand side incentives subsidised the costs incurred while seeking maternal care services, thus protecting households from the catastrophic expenditure that can occur when getting treatment for medical emergencies [14]. Demand side incentives also gave pregnant mothers a chance to choose which provider to seek services from, thereby encouraging health providers to deliver quality care through increased competition [15]. The supply side incentives served as a complement to the demand side in order to improve the quality of care services. The supply side incentives provided by the project included training health workers in how to conduct safe deliveries, the provision of basic essential equipment and drugs, and support supervision.
During the implementation period, the intervention evolved in response to changes in the political and economic context, as well as to stakeholders' needs. As a result, the implementation of the intervention did not always occur according to plan because of several anticipated and unanticipated factors that influenced its delivery. Thus, subsequent outcomes were both positive and negative. Systematically documenting these factors can provide lessons about the process of implementation and adaptations which can inform the replication and successful scale up of similar interventions in different settings. Despite the value of documenting adaptation, this is rarely done in practice or published in the peer-reviewed literature. This paper contributes to filling this significant gap in the implementation research literature. We provide a brief description of how the Safe deliveries project was setup. We then draw attention to the lessons learnt during the process of project setup and implementation.
The Safe deliveries project study
The Safe deliveries project was implemented for 2 years (2009 to 2011) in four districts of Eastern Uganda: Buyende, Kamuli, Pallisa and Kibuku [16]. The area has an estimated population of 1,293,990, with a large proportion (25 %) earning less than a dollar a day through subsistence farming [17].
Initially, this project was supposed to be implemented in four health sub-districts (HSD) of Kamuli and Pallisa districts. Each district was supposed to have an intervention and a control HSD. However, after the study was designed and initiated, the Ugandan government enacted another district-splitting effort, which resulted in Kamuli being split into Kamuli and Buyende districts, and Pallisa into Pallisa and Kibuku districts. Buyende and Pallisa districts were the intervention areas, while Kamuli and Kibuku districts were used for comparison. The criteria used to select the study districts included having health facilities and health workers that could offer emergency obstetric care. There were 104 health facilities from the public, private not for profit, and private sectors that participated in the intervention (Table 1). The facilities were selected by the district health departments based on the standard requirements from the Ministry of Health.
The project was first piloted for 3 months in 14 health units in Buyende district and 2 hospitals in Kamuli district. It was then scaled up and implemented in both Buyende and Pallisa districts for a 1-year period. In Pallisa district, it worked with nine health facilities.
The beneficiaries were identified using universal targeting in the intervention areas. Therefore, all the pregnant women residing in the intervention area were entitled to vouchers. The distribution of vouchers was done by the health workers during antenatal care, with the assumption that over 90 % of pregnant women in Uganda attend at least one antenatal care session at the health facility [1].
The health care providers were reimbursed after submitting evidence of providing services to the beneficiaries. The payments were delivered in cash to the health facilities. Providers in private not for profit and private for profit health facilities were reimbursed the full cost of their services. Public providers received 75 % of their costs from the project because they already received funding from the government (Table 2). These funds were meant to be used by providers to improve the quality of services by ensuring adequate medical supplies, repairing facilities and facilitating support staff.
During the pilot phase, the voucher for care services covered four antenatal visits, facility delivery, emergency obstetric care, such as caesarean sections at district hospitals, and one postnatal visit within the first 7 days. The transport voucher covered transport costs to and from the health facility during the visits. The transport voucher also covered referral costs from the lower level health facility to the district hospital for emergency care.
During the pilot phase, the transport voucher was valued at a flat rate of US$2.5 per trip within the intervention area. This was based on negotiations with the transport providers. However, the response overwhelmed the scheme, resulting in high voucher costs. Consequently, the benefit packages were revised: the transport voucher value was set according to the distance from the beneficiaries' village to the health facility, and ranged from US$1.5 to US$2.5 per trip.
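For concreteness, the sketch below encodes these payment rules in Python. The distance bands used to assign the revised voucher value are illustrative assumptions on our part; the project documents specify only the overall range (US$1.5 to US$2.5 per trip) and the 75 %/100 % reimbursement split between public and private facilities.

def transport_voucher_value(distance_km):
    # Distance-banded value of one transport trip, in US$ (bands assumed).
    if distance_km <= 5:
        return 1.5
    if distance_km <= 10:
        return 2.0
    return 2.5

def facility_reimbursement(cost_usd, sector):
    # Public facilities are reimbursed 75 % of costs; private not for profit
    # and private for profit facilities receive the full cost.
    rate = 0.75 if sector == "public" else 1.0
    return rate * cost_usd

print(transport_voucher_value(7.0))            # -> 2.0
print(facility_reimbursement(20.0, "public"))  # -> 15.0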
During the implementation phase, the voucher for maternal care services covered facility delivery, one postnatal visit, sick newborns and caesarean sections. However, the pregnant women enrolled during the pilot phase continued receiving antenatal services during the implementation phase.
Sensitization was also done within the communities to create awareness about the intervention. The methods used included district level meetings, posters, mobile film vans, radio talk shows and radio messages about maternal care in local languages. Further details about the design of the Safe deliveries project are provided in a paper by Ekirapa-Kiracho et al. [16].
Methodology
Two sources of secondary data were used for this study. First, we reviewed study documents, including research team field reports, the study report, and the project implementation manual from the pilot and implementation phases, in order to document how key intervention elements changed over time.
Second, transcripts from focus group discussions and in-depth interviews that were done with study participants and stakeholders were analysed to explain the adaptations that occurred.
The focus group discussions were done with women, men and transporters while the in-depth interviews were done with community opinion leaders. These leaders included male village council representatives from two villages in Buyende, a female local council representative from a sub-county in Pallisa district and a transporters' representative from Buyende district. The focus group discussions were conducted between 28 September and 1 October 2010 while in-depth interviews were conducted between 6 and 16 December 2010. The women interviewed were beneficiaries of the programme. The men interviewed were a mix of those whose spouses benefited from the vouchers and those whose spouses did not. The transporters interviewed were directly involved in the programme because they provided transport services to pregnant women.
The key lessons were then synthesised from the project reports, the field reports made by the project team members and the district health team during implementation, and the qualitative data drawn from the above interviews. The qualitative data were analysed using thematic analysis, which involved extensive reading of transcripts, coding of the data and categorisation into themes. Table 3 below summarises the data sources we selected to identify the lessons learned during the implementation of the Safe deliveries project voucher scheme.
Ethical considerations
Approval to conduct this study was sought from the Makerere University Higher Degrees Research and Ethics Review Board and the National Council of Science and Technology. Informed consent was obtained from the study participants before they were interviewed.
Results and discussion
A number of lessons related to the implementation process of the voucher scheme were identified and organised into key themes.
Theme 1: Engaging community and service providers to foster buy-in for the voucher scheme. In keeping with previous work [18], one of the lessons learnt was that it is important to raise awareness about the scheme. In areas where this was not well done initially, uptake of the scheme was poor. The use of local community leaders to mobilise community support and collaboration for implementing the scheme was an effective way of engaging with the community. This was because local leaders are trusted by their communities, which were more inclined to listen to and trust messages from them. One of the local leaders commented that: "When you sensitize women on the scheme ..., involve community leaders because the people listen more to their leaders or elders." (In-depth interview with female council representative from Kamuge Sub-County, Pallisa district, during the implementation phase.) Feedback from the stakeholders revealed that the radio talk shows and radio messages were more effective in raising awareness if they were aired at regular times. This encouraged the listeners to tune in for the programme.
Another lesson learnt was that it is important to maintain continuous dialogue with the service providers. Constant communication with the transporters was achieved through continuous dialogue meetings and a phone line for open communication. For the health workers, regular stakeholder meetings were held and support supervision visits were conducted when delivering cash payments. This open communication allowed the development of trust between the providers and the programme implementers. These channels of communication motivated the providers because they provided a venue for sharing their concerns. They also made the providers own the intervention and encouraged innovative mechanisms for sharing best practices that could lead to improvements in access to transport and maternal health services. Lastly, they provided an opportunity to communicate and resolve problems amicably. For example, when there were delays in payment, the providers agreed to wait without disrupting the programme.
Theme 2: District health system response to the increased demand for services. As mentioned in the introduction, the demand for services generated was higher than expected and added new challenges to the existing ones, especially during the pilot period [16]. A local transport provider observed that: "The problem we find is that we wait for long hours; we have few health workers in the health facilities and there are big numbers of women coming to get maternal health services. So we request you to address this problem so that it can be solved." (FGD for transporters, Kidera village, Buyende district.) This demonstrated that when demand is generated, the health system needs to be able to respond by increasing the supply of services. When the patient numbers increased, the number of district health staff remained the same in some facilities. Therefore, the few existing health workers became overstretched. This resulted in long waiting times at health facilities. The government ban on the recruitment of new health workers, due to a lack of funds to pay their salaries at the time, meant that districts could not respond immediately to the staff shortages.
The increased demand also led to increased utilisation of drugs and medical sundries. Despite the supplies from the National Medical Stores and additional supplies provided by the voucher scheme, stock-outs of essential drugs were observed. A mother who benefited from the transport intervention noted that: "They can bring drugs today and when you come the next day you don't find anything. They just write for you medicine to buy from private drug shops." (FGD for women in Nkondo village, Buyende district.) When mothers came to the health units and waited for a very long time, or did not receive services or drugs, they became discouraged. In some cases this may have led to the shunning of health facilities.
Lastly, the existing infrastructure was also strained due to a lack of adequate maternity space and delivery beds in the lower level facilities. Construction of new buildings is a capital investment which could not be met from the voucher funds. As a result of this limited space, in some cases the mothers could not stay at the facility for the recommended 24 h after giving birth.
Theme 3: How to target the beneficiaries. A universal distribution system was used to target all pregnant women located within the intervention areas of these rural districts, since they were all considered poor. It was difficult to distinguish the very poor from the well-off because this would have required the study to develop its own ranking system, an endeavour that would have called for additional financial resources which had not been budgeted for. Although this method was easy to implement, it may have contributed to some wastage of resources since both the very poor and the least poor benefited. Distribution of the vouchers at health facilities was beneficial because it encouraged more women to attend antenatal care (ANC) services, but it increased the workload for the health workers. Furthermore, it could have led to the exclusion of women who were not able to come to the facility. However, in this programme, such women could still receive vouchers at the time of delivery, which reduced their possible exclusion.
Theme 4: Identifying the right incentives for providers
The programme also showed that in a voucher scheme like this one, it is not only the type of incentive that matters but also its amount. It was necessary to keep negotiating with providers, health workers and transporters to ensure that the programme offered the right amount of incentives; failure to do this could have diminished the effects of the incentives. Similar findings were reported by Sengooba [19]. The transport cost using a motorcycle was US$2.5 per trip during the pilot, irrespective of the distance covered. However, during the implementation phase, this rate was reduced to between US$1.5 and US$2.5. During the pilot phase, the transporters were very active, and they travelled even to villages that were very far away to search for the mothers. However, when the rates were revised, the transporters preferred to transport pregnant women who were nearer to the health facility, neglecting those who stayed far away in remote villages. This suggests that private for profit partners are sometimes driven by profit, so attention should be paid to this when negotiating prices with them. The prices for the health facility vouchers were also changed twice. The initial change after the pilot lowered the prices; thereafter, feedback from the health workers after a few months of implementation indicated that the amount allocated was not enough to achieve the intended benefits (money for allowances, supplies and equipment). In response, the amount was increased.
Theme 5: Benefits of working with local private providers
Such interventions bring to light other partners who can contribute to improving the utilisation of maternal and child health care services; for example, the transporters became agents of change. The transporters moved from household to household in villages searching for pregnant women, whom they encouraged to seek care at the health facilities. This was because the women with vouchers became potential clients for the transporters. This trend was confirmed by one of the community leaders: "In fact the transporters used to search for these women from the villages and bring them to the health centres." (In-depth interview of male council representative in Iringa village, Buyende district.) Theme 6: Challenges with voucher verification using facility registers. The verification team used health facility registers to verify both transport and maternal and child health (MCH) service vouchers presented for payment. However, some community members, conniving with some health workers and transporters, tried to take advantage of the voucher system by distributing forged vouchers, inflating registers, not recording in registers, and reclaiming unused vouchers from women who delivered at home. This finding is consistent with the literature, which acknowledges fraud as one of the problems encountered by voucher schemes [20][21][22]. Therefore, the verification team selected a number of women randomly from facility registers and conducted household visits in villages to follow up women who had benefited from the vouchers. The team discovered that the maiden names used by some pregnant women in the health facility register were not known by the locals in the villages. Instead, the women were known by their husbands' first names, which were not recorded in the facility registers. This made the follow-up exercise difficult in some cases, underlining the difficulty of tracking patients in areas where there is no active national system of identification. This underscores the need for a strong system of detecting fraud and confirming the provision of services.
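The random register-sampling step described above can be sketched in a few lines of Python. The record fields and sample size below are hypothetical, since the paper does not describe the register schema:

import random

register = [
    {"claim_id": 1, "village": "Nkondo", "voucher": "transport"},
    {"claim_id": 2, "village": "Kidera", "voucher": "services"},
    {"claim_id": 3, "village": "Iringa", "voucher": "transport"},
]

def sample_for_follow_up(records, k, seed=0):
    # A fixed seed keeps the audit sample reproducible for later review.
    rng = random.Random(seed)
    return rng.sample(records, min(k, len(records)))

for claim in sample_for_follow_up(register, 2):
    print(claim["claim_id"], claim["village"])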
In this study, the unused vouchers could not be easily traced. A clear way of tracking vouchers distributed and those used is also important to avoid having excess vouchers that could be misused. Household distribution or distribution by community health workers may help curb misuse; however, this should be weighed against the financial costs of using such a system.
Theme 7: Sustainability of the intervention beyond donor funding
The intervention relied on external donor funding; when the funding ended, the intervention ceased as well. This shows that reliance on external funding alone cannot sustain such an intervention in the long term. The community leaders suggested that mobilising community contributions through different participatory mechanisms could help sustain such interventions in the long term. A community leader from one of the villages that benefited from the intervention suggested that: "I would suggest that you continue with this project because women here are ready to contribute support, and I have never heard of any woman these days saying that let me abort because I will not get support. So you continue to help them." (In-depth interview of male council representative in Iringa village, Buyende District.) Therefore, to sustain the gains achieved by the intervention, two other programmes were developed. The first, the Maternal and Newborn Study (MANEST), aims to target the vouchers according to distance so that the scheme is less costly, while the second, Maternal and Newborn Implementation of Equitable Systems (MANIFEST), seeks to identify local systems that can generate financial resources and structures that can be used to manage them. Such interventions should not end when funding ends; they should be developed further to make them sustainable in the local context.
Theme 8: Unintended consequences
There is evidence that suggests that voucher schemes may encourage increased fertility if the number of children does not limit entry into the scheme [14]. In this particular project, all pregnant women benefited regardless of the number of pregnancies or children they had.
Anecdotal evidence suggested that in a few cases, the scheme may have contributed to changes in fertility decisions. During the focus group discussions, it was reported that some women decided to change their fertility decisions and have more children because of the support provided by the programme. This is demonstrated in the quotation below: "I am grateful about this benefits we have got from this project. Continue providing us with the same good services. One has to just call a boda boda cyclist, and he will come and take you to the hospital very fast to get treatment. Even those who were on family planning have now stopped and are now conceiving because there is now free transport to hospital." (FGD of women from Nkondo village, Buyende District.) Thought should be given to how to avoid this happening. This could be done through careful detection, through health education about family planning services, and through partnering with other programmes that provide family planning services.
Limitations
It is important to note that the effectiveness of the different methods of communicating the programme messages was not comprehensively evaluated in this study. The effect of vouchers on changes in fertility decisions also needs to be studied. The experience of mothers who were beneficiaries when aged 18 years and below was not captured, as there were no data. Since we used secondary data from the interviews, some information could have been lost during the analysis process, which involved merging similar data under themes. However, we reviewed the project report and field reports to verify the findings and minimise the loss of important learning. This study had no control over the selection of the participants for the interviews or over how they responded. However, interviews with different groups from the participating districts were analysed, and findings from the interviews were corroborated by a review of study field reports. We believe this minimised any response bias. | 2017-06-28T04:52:42.214Z | 2015-08-06T00:00:00.000 | {
"year": 2015,
"sha1": "22dcadd3892ff7e084edb358e48933766da990fe",
"oa_license": "CCBY",
"oa_url": "https://implementationscience.biomedcentral.com/track/pdf/10.1186/s13012-015-0292-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9a1482a6787cd987d492252d0e3cb8ab520dde4",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257632348 | pes2o/s2orc | v3-fos-license | D-Module Techniques for Solving Differential Equations in the Context of Feynman Integrals
Feynman integrals are solutions to linear partial differential equations with polynomial coefficients. Using a triangle integral with general exponents as a case in point, we compare D-module methods to dedicated methods developed for solving differential equations appearing in the context of Feynman integrals.
Introduction
Differential equations are ubiquitous in physics. Their solutions are often expressed in terms of special functions. Well-known examples are the hypergeometric function, or the Hermite polynomials that appear in the solution of the quantum mechanical hydrogen atom. In perturbative quantum field theory, a crucial bottleneck is the evaluation of Feynman integrals. In particular, obtaining analytic or numerical results for higher-loop Feynman integrals is paramount for deriving more precise theory predictions for processes at the Large Hadron Collider. Because of this, substantial research efforts are dedicated to evaluating Feynman integrals. These integrals can be seen as solutions to linear partial differential equations (PDEs) with polynomial coefficients. Analyzing these PDEs and extracting useful information from them is an important problem in this field. In this paper, we analyze this problem from two perspectives. The first one consists of D-module techniques from algebraic analysis in mathematics, and the second one consists of dedicated tools developed for solving the specific differential equations satisfied by Feynman integrals.
In the first approach, one uses the fact that systems of linear PDEs generate ideals in the Weyl algebra, which is denoted by D. Powerful D-module techniques [55, 56] allow us to extract precious information from the PDEs. For instance, the singular locus of a D-ideal I describes where the solutions to the system of PDEs encoded by I may have singularities. The notion of a D-ideal being regular singular is inherently linked to its solutions having moderate growth, which is expected for Feynman integrals. The holonomic rank of a holonomic D-ideal tells us about the number of linearly independent solutions. It corresponds to the number of master integrals in the physics terminology. Finally, Gröbner basis methods allow us to compute canonical series solutions. These are solutions of a very special form: they are polynomials in the variables x_i and log(x_i), with coefficients given by formal power series in the x_i. In the regular holonomic case, the solution space is fully spanned by solutions of that particular form. We compute these solutions with the help of algebro-geometric properties and the Gröbner data of the D-ideal via an algorithm of Saito, Sturmfels, and Takayama (SST). Given a regular holonomic D-ideal I and a real weight vector w, this algorithm allows us to compute all terms up to a specified w-weight k in the canonical series solutions to I. These truncated series take the form shown in (1.1), where we use the multi-index notation $x^a = x_1^{a_1} \cdots x_n^{a_n}$, and similarly for the logarithms. The index set C*_Z over which the sum in (1.1) runs can be read off completely from the D-ideal; this will be made precise in the article.
The second approach builds on various techniques that have been developed for evaluating Feynman integrals. The latter satisfy Picard-Fuchs equations [45, 52], which are higher-order linear PDEs for individual Feynman integrals, or, equivalently, systems of first-order equations for master integrals; see [6, 38] for reviews. The Fuchsian nature of the singularities, which is expected for Feynman integrals, allows one to derive asymptotic expansions around singular points using a method of Wasow [65].
There have been various interactions between the relevant mathematics and physics communities already, resulting in several interesting works, for example on determining the number of master integrals [1, 9, 47], on deriving Picard-Fuchs equations for Feynman integrals [45, 52], and on relating Feynman integrals and GKZ systems [3, 21, 27, 42, 43, 64]. However, to our knowledge, a systematic synthesis and comparison of D-module techniques and methods employed in high energy physics has not yet been done. In this work, we initiate such a comparison, with the aim of providing insights for both communities. For this initial study, we focus on a particular system of physically motivated PDEs and show how one obtains canonical series solutions of the form (1.1), both using the SST algorithm and Wasow's method. In the latter, we capture the weight vector from the Gröbner techniques via an auxiliary variable.
Outline. Section 2 introduces the systems of PDEs we are investigating here, together with their origin in physics. Section 3 provides background on ideals in the Weyl algebra and their (multi-valued) holomorphic solutions that will be needed throughout this article. It also outlines how to compute canonical series solutions of regular holonomic D-ideals via the SST algorithm, which is based on Gröbner basis computations in the Weyl algebra. In Section 4, we compute canonical series solutions for the three-particle case using the SST algorithm. In Section 5, we recover these solutions via a method of Wasow for computing solutions of systems that are in Fuchsian form. We conclude with an outlook to future work in Section 6. Appendices A and B contain some background on conformal symmetry as well as on integrable connections. In Appendix C, we provide an application of this method to a four-loop Feynman integral.
Running example: a triangle integral and conformal differential equations
As a test case for this work, we study a particular class of Feynman integrals: the one-loop "triangle" Feynman integrals, which correspond to the Feynman graph shown in Figure 1. These integrals are relevant for describing the scattering of three particles with momenta p_1, p_2, p_3 ∈ R^d (with p_3 = −p_1 − p_2 due to momentum conservation) in d-dimensional Minkowski spacetime. We denote by |v|² the Minkowski norm of the R^d-vector v, i.e., $|v|^2 := v^\top \cdot g \cdot v$, where g = diag(1, −1, . . ., −1) is the metric tensor of Minkowski spacetime. The triangle Feynman integrals depend on the momenta of the scattering particles only through three independent variables x = {x_1, x_2, x_3}. This becomes apparent in the Feynman parameterization.
Figure 1: The Feynman graph representing the one-loop triangle Feynman integrals defined in Equation (2.1). Due to momentum conservation, p_1 + p_2 + p_3 = 0. Next to the internal edges, we record the corresponding exponent ν_i as well as the loop momentum.
We approach the computation of these integrals by exploiting symmetries, which are encoded via differential equations. The triangle integrals are in fact solutions to a system of linear second-order PDEs originating from conformal symmetry, where c_0 = d is the number of spacetime dimensions and c_1, c_2, c_3 are the conformal weights. We give some background on conformal symmetry and derive these PDEs in Appendix A. In particular, the triangle integrals J^{triangle}_{d; ν_1, ν_2, ν_3} are solutions to the conformal PDEs (2.5). Let us review a few useful properties of the operators P_i in (2.6). First of all, the operator P_3 implies that the solutions are homogeneous of degree (2c_0 − c_1 − c_2 − c_3)/2. Secondly, the system (2.5) is symmetric under permutations of the variables x_i (together with the corresponding conformal weights c_i). While the operator P_3 is manifestly symmetric, the permutations map P_1 and P_2 into Q-linear combinations of themselves. As a result, the solution space is symmetric as well. For computing the solutions, we will focus on the physically motivated case c_0 = 4, c_1 = c_2 = c_3 = 2, which corresponds to the (classically) conformal φ⁴-theory in four spacetime dimensions. In this case, the operators from (2.6) become those in (2.8). Later on, we will consider the D_3-ideal generated by the operators in (2.8) and will denote it by I_3. Through Equation (2.7), we see that the relevant triangle integral has d = 4 and unit exponents, ν_1 = ν_2 = ν_3 = 1. A convenient analytic expression for the latter, given in (2.9), is from [67], where the function Li_2 denotes the dilogarithm (cf. [36]) and λ is a polynomial in the variables x. The function in Equation (2.9) is a solution of I_3. It is a multivalued function and its analytic continuations are solutions of I_3 as well. The C-vector space of the analytic continuations of J^{triangle}_{4;1,1,1} is spanned by four linearly independent functions f_1, . . ., f_4, listed in (2.12). The functions f_2, f_3, f_4 are called discontinuities of f_1 in physics, i.e., they are differences of f_1 and an analytic continuation of f_1. This can be shown conveniently using the symbol method [33] (for a review, see e.g. [30] and the references therein). We will see in Section 4 that the holonomic rank of I_3 is 4 and that this implies that the four functions in (2.12) span the whole space of holomorphic solutions to the system of PDEs encoded by I_3. All these functions have moderate growth when approaching points of the singular locus, and we will argue later that the D-ideal I_3 is indeed regular singular. As anticipated, the solution space is symmetric under permutations of the variables x: the functions f_1 and f_4 are invariant, while f_2 and f_3 transform into Q-linear combinations of themselves. In Section 5.2, we will recover these solutions by solving the Pfaffian system associated with I_3.
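For the reader's orientation: the polynomial that conventionally plays the role of λ for the one-loop triangle in the variables x_1, x_2, x_3 is the Källén polynomial. We record it here as an assumption about the normalization used in [67]:

\lambda(x_1, x_2, x_3) = x_1^2 + x_2^2 + x_3^2 - 2 x_1 x_2 - 2 x_2 x_3 - 2 x_3 x_1 .

In this convention, the expression (2.9) carries an overall factor 1/λ, so the function is singular where λ vanishes.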
D-ideals and canonical series solutions
We here recall basics about ideals in the Weyl algebra and their (multi-valued) holomorphic solutions that will be needed for the computation of canonical series solutions later on. This theory will also allow us to derive crucial information about our system of PDEs, such as the singular locus and the number of holomorphic solutions, before even computing the solutions.
For further details about the theory of D-modules and holonomic functions, we refer our readers to [40,55,56] and the references therein.
D-ideals and their solutions
The (n-th) Weyl algebra, denoted D_n or just D, is the free C-algebra generated by x_1, . . ., x_n, ∂_1, . . ., ∂_n modulo the following relations: all generators are assumed to commute, except ∂_i and x_i. Their commutator is $[\partial_i, x_i] = \partial_i x_i - x_i \partial_i = 1$. Elements of D are linear differential operators with polynomial coefficients. The rational Weyl algebra $R_n = \mathbb{C}(x_1, \dots, x_n)\langle \partial_1, \dots, \partial_n \rangle$ is the ring of linear differential operators with coefficients in the field of rational functions C(x_1, . . ., x_n) = {p/q | p, q ∈ C[x_1, . . ., x_n], q ≠ 0}. We will denote the action of a differential operator P on a function f(x) by the symbol •. For instance, ∂ • f denotes ∂f/∂x.
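The defining relation can be made tangible by letting the operators act on a symbolic function. The following small sympy check (our own illustration; sympy has no built-in Weyl-algebra type, so we realize the operators by their action) verifies that [∂, x] acts as the identity:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

dx_then_x = sp.diff(x * f, x)   # the operator (∂ x) applied to f
x_then_dx = x * sp.diff(f, x)   # the operator (x ∂) applied to f
print(sp.simplify(dx_then_x - x_then_dx))   # prints f(x), i.e. [∂, x] = 1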
By ord_{(u,v)}(P), we denote the largest (u, v)-weight among the monomials appearing in P. The characteristic ideal of a D-ideal I is the ideal in_{(0,1)}(I) ⊂ C[x_1, . . ., x_n][ξ_1, . . ., ξ_n] generated by the parts of highest (0, 1)-weight of all differential operators in I, where (0, 1) denotes the vector (0, . . ., 0, 1, . . ., 1) ∈ R^{2n}, i.e., one puts weight 0 on all x_i and weight 1 on all ∂_i. The characteristic ideal lives in the polynomial ring, where one replaces ∂_i by ξ_i to stress that they are now commuting variables. This is due to the fact that the commutator of two operators P, Q has (0, 1)-weight strictly smaller than deg(P) + deg(Q). For n = 1, the initial in_{(0,1)}(P) is precisely the principal symbol of the differential operator P. The characteristic variety of I is the vanishing set $\mathrm{Char}(I) = V(\mathrm{in}_{(0,1)}(I)) \subseteq \mathbb{C}^{2n}$. Here, C^{2n} is the affine 2n-space C^n_x × C^n_ξ with coordinates x_1, . . ., x_n, ξ_1, . . ., ξ_n. A D-ideal is holonomic if dim(Char(I)) = n. By Bernstein's inequality, all components of Char(I) have dimension at least n; hence, a D-ideal I is holonomic if the dimension of its characteristic variety is smallest possible. The Zariski closure of the projection of Char(I) to the x-coordinates is the singular locus of I and is denoted by Sing(I) ⊂ C^n_x. Algebraically, Sing(I) is computed as an elimination ideal of the saturation (in_{(0,1)}(I) : ⟨ξ_1, . . ., ξ_n⟩^∞), where (I : J^∞) denotes the saturation of an ideal I by an ideal J, the definition of which we recall now. Let I, J be ideals in a polynomial ring C[x_1, . . ., x_n]. For k ∈ N, the ideal quotient is (I : J^k) = {f | f·g ∈ I for all g ∈ J^k}. Then, the saturated ideal (I : J^∞) with respect to J is the C[x_1, . . ., x_n]-ideal $(I : J^\infty) = \bigcup_{k \ge 1} (I : J^k)$. The holonomic rank of I is rank(I) = dim_{C(x_1, . . ., x_n)} R_n/R_n I. If I is holonomic, it follows that rank(I) < ∞. The reverse implication is not true. The holonomic rank can be computed as the number of standard monomials of a Gröbner basis of I in the rational Weyl algebra; see Section 3.2 for more details. Theorem 3.3 (Cauchy-Kovalevskaya-Kashiwara). Let I be a holonomic D_n-ideal. On a simply connected domain U ⊂ C^n \ Sing(I), the C-vector space of holomorphic solutions to I on U has dimension rank(I). Remark 3.4. In Theorem 3.3, finite holonomic rank is sufficient for the statement to hold. One arrives at a holonomic D-ideal by taking the Weyl closure (see Definition 3.5).
Given an annihilating D-ideal of some functions, there is a straightforward way to slightly enlarge it, which is at the same time of theoretical interest: the Weyl closure.Hence, techniques that are valid for holonomic D-ideals can be applied to non-holonomic D-ideals of finite holonomic rank by passing to their Weyl closure.
Pfaffian systems
If I is a holonomic D-ideal, the D-module D/I gives rise to a vector bundle of rank rank(I), say rank(I) = m, with an integrable connection induced by the action of D on D/I.We refer the interested readers to Appendix B, where we explain this geometric interpretation.
In particular, the origin of the integrability conditions of Pfaffian systems becomes visible there. Here, we stick to what is needed for the sake of our computations in this paper. Let S = {s_1, s_2, . . ., s_m} be the set of standard monomials of I in R_n for a chosen term ordering ≺, i.e., those monomials ∂^b, b ∈ N^n, that are not contained in the initial ideal of I; cf. [55, p. 33] for more details. Without loss of generality, we can assume s_1 = 1. Let f ∈ Sol(I) be a solution to I and let F = (f, s_2 • f, . . ., s_m • f). Since rank(I) = m, there exist unique matrices P_1, . . ., P_n ∈ C(x_1, . . ., x_n)^{m×m} such that $\partial_i \bullet F = P_i F, \quad i = 1, \dots, n. \quad (3.12)$ The matrices P_i can be computed via a Gröbner basis reduction (cf. [56, p. 23]). The system in (3.12) is the Pfaffian system of I for the chosen term ordering on the Weyl algebra. The matrices P_i obey the integrability condition, which translates as $P_i P_j - P_j P_i = \partial_i \bullet P_j - \partial_j \bullet P_i. \quad (3.13)$ On the right-hand side of (3.13), entry-wise differentiation of the matrices P_i is meant.
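The condition (3.13) is straightforward to check in practice. The sympy utility below is our own sketch; the 2×2 example family is made up for illustration and is not the Pfaffian system of the triangle ideal I_3.

import sympy as sp

def is_integrable(P, variables):
    # P: list of m x m sympy Matrices P_i with entries in C(x).
    # Checks d_i P_j - d_j P_i = P_i P_j - P_j P_i for all pairs i < j.
    n = len(P)
    for i in range(n):
        for j in range(i + 1, n):
            lhs = P[j].diff(variables[i]) - P[i].diff(variables[j])
            rhs = P[i] * P[j] - P[j] * P[i]
            if sp.simplify(lhs - rhs) != sp.zeros(*P[i].shape):
                return False
    return True

x1, x2 = sp.symbols('x1 x2')
# A trivially integrable example: P_i = (d_i g) * Id for a scalar potential g,
# so the matrices commute and the mixed derivatives of g agree.
g = sp.log(x1 * x2 + 1)
P = [sp.eye(2) * g.diff(x1), sp.eye(2) * g.diff(x2)]
print(is_integrable(P, [x1, x2]))   # True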
Indicial ideals and the Nilsson ring
We first recall some results from [55, Sections 2.2-2.6] in Theorems 3.6-3.16 below, in order to present the overall strategy for computing solutions of our D-ideals. Here and throughout the rest of the paper, weight vectors for the Weyl algebra are allowed to be taken from a set W of admissible weight vectors. Among others, W contains the set {(−w, w) | w ∈ R^n}. For a weight vector (u, v) ∈ R^{2n} and P ∈ D_n, the initial form in_{(u,v)}(P) of P denotes the part of P of maximal (u, v)-weight.
For weights of the form (−w, w), one denotes the initial form simply by in_w(P). Each weight vector (u, v) induces an increasing filtration F^•_{(u,v)}(D_n) of the Weyl algebra by the (u, v)-weight. For weights of the form (−w, w), the associated graded ring is isomorphic to the Weyl algebra itself, cf. [55, p. 4]. For (u, v) = (0, 1) ∈ R^{2n}, the associated graded ring gr_{(0,1)}(D_n) is the polynomial ring C[x_1, . . ., x_n][∂_1, . . ., ∂_n], in which case one typically replaces ∂_i by ξ_i to highlight the commutativity of the variables. The initial ideal in_{(u,v)}(I) of a D_n-ideal I is the ideal generated by in_{(u,v)}(P) for all P ∈ I. It naturally lives in the graded ring gr_{(u,v)}(D_n). The initial ideal for the weight (0, 1) ∈ R^{2n} is exactly the characteristic ideal of the D_n-ideal, which was introduced in Section 3.
In what follows, we focus on weights of the form (−w, w). The following theorem summarizes Theorems 2.2.1 and 2.5.1 of [55] about the holonomic rank of the initial ideal. Theorem 3.6. Let I be a holonomic D_n-ideal and w any weight vector in R^n. Then the initial ideal in_{(−w,w)}(I) is also holonomic and rank(in_{(−w,w)}(I)) ≤ rank(I). If I is regular holonomic, then rank(I) = rank(in_{(−w,w)}(I)).
(1) I is torus-fixed if and only if in_{(−w,w)}(I) = I for all w ∈ R^n.
(2) If w is generic for I, then in (−w,w) (I) is torus-fixed.
For the precise definition of what "generic" means here, we refer to the later Section 4.1.
We are going to exploit the fact that distractions of torus-fixed ideals (such as indicial ideals) take on a very special form. The indicial ideal of I with respect to w is the distraction of the initial ideal, i.e., the C[θ_1, . . ., θ_n]-ideal ind_w(I), where θ_i = x_i ∂_i denotes the i-th Euler operator.
Definition 3.8. The zeros of the indicial ideal are called the exponents of I with respect to w.
A D_n-ideal F which is generated by elements of C[θ_1, . . ., θ_n] is called a Frobenius ideal. Frobenius ideals hence are of the form F = D_n J with J an ideal in C[θ_1, . . ., θ_n]. A Frobenius ideal F = D_n J is holonomic if and only if J is Artinian, i.e., if C[θ]/J is a finite-dimensional C-vector space. Now assume J is Artinian. The primary decomposition of J is of the form $J = \bigcap_{A \in V(J)} Q_A(\theta - A)$, where Q_A is an Artinian ideal, and Q_A(θ − A) denotes the ideal obtained by replacing each θ_i by θ_i − A_i in Q_A. Solutions of Frobenius ideals take on the very special form x^A • g(log(x)), which can be read off from the primary decomposition of J. Namely, A ∈ V(J) and g runs over a finite-dimensional C-vector space, the orthogonal complement of the Artinian ideal Q_A. Since we will make use of this strategy regularly later on, we summarize what was said above in the following proposition.
Proposition 3.9. Let F = D_n J, where J ⊂ C[θ], be a holonomic Frobenius ideal. The solution space of F is spanned by the functions x^A • g(log(x)), where A runs over the points of the variety V(J), and g runs over the orthogonal complement of Q_A. Proposition 3.10. Let I be a holonomic ideal and w ∈ R^n generic for I. Then ind_w(I) is a holonomic Frobenius ideal whose rank equals rank(in_{(−w,w)}(I)).
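As a toy illustration of Proposition 3.9 (our own example, not taken from [55]): for n = 1 and J = ⟨θ²⟩, the variety is V(J) = {0}, Q_0 = ⟨θ²⟩, and the orthogonal complement of Q_0 is spanned by the polynomials 1 and t. The solution space of the Frobenius ideal D_1 J is therefore

\mathrm{Sol}(D_1 \langle \theta^2 \rangle) = \mathbb{C} \cdot 1 \, \oplus \, \mathbb{C} \cdot \log(x),

as one checks directly: θ • log(x) = x · (1/x) = 1, hence θ² • log(x) = θ • 1 = 0, and both basis solutions have the predicted form x^0 · g(log(x)).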
By N, we denote the ring of functions of the Nilsson class, i.e., those functions which can be represented by a series as in [55, (2.31)] for suitable vectors u_1, . . ., u_n, β_1, . . ., β_n ∈ C^n. The coefficients lie in the ring C[[x^{u_1}, . . ., x^{u_n}]] of formal power series in the x^{u_i}.
Definition 3.11. The w-weight of a monomial x^A log(x)^B is the real part ℜ(w · A) of w · A.
The initial series of a function f = Σ_{A,B} c_{AB} x^A log(x)^B ∈ N, denoted in_w(f), is defined to be the finite subsum of all terms of minimal w-weight. Note that a weight vector w ∈ R^n induces a partial order on functions of the Nilsson class as in (3.21). Since the w-weight does not give a monomial ordering, one needs a monomial order ≺ as a tie breaker and denotes the resulting monomial order by ≺_w. We will take ≺ to be the lexicographic ordering on N obtained as the restriction of the lexicographic ordering on C^n ⊕ N^n. The set of starting monomials of I with respect to ≺_w consists of the monomials in_{≺_w}(f) = x^A log(x)^B for some A ∈ C^n and B ∈ N^n, as in (3.22) and (3.23). If x^A log(x)^B is a starting monomial, then A is an exponent of I with respect to w. For each exponent A, the number of starting monomials of the form x^A log(x)^B is the multiplicity of A as a root of the indicial ideal ind_w(I).
Theorem 3.13 ([55, Theorem 2.5.12]). For each starting monomial x^A log(x)^B, there exists a unique f ∈ N with the following properties: (1) f is annihilated by I, i.e., f ∈ Sol(I). (2) in_{≺_w}(f) = x^A log(x)^B. (3) The monomial x^A log(x)^B is the only starting monomial that appears in f with non-zero coefficient.
Solutions to I as in the theorem above are called canonical (series) solutions of I with respect to ≺_w. The solution functions f in Theorem 3.13 can be shown to actually live in the Nilsson ring N_w ⊂ N, which is the content of the next proposition.
Proposition 3.14 (see [55]). The elements of C_{C_w(I)*_Z} are power series in x whose exponent vectors lie in C_w(I)*_Z. More precisely, the canonical solutions to I with respect to ≺_w have the form x^A • g, where A is an exponent of I and g is an element of C_{C_w(I)*_Z}[log(x_1), . . ., log(x_n)], such that the degree of each log(x_i) in g is at most rank(I) − 1 (see [55, Theorem 2.5.14]). Definition 3.15. Let w ∈ R^n. A finite subset G of a D-ideal I is called a Gröbner basis with respect to w if I is generated by G and in_{(−w,w)}(I) ⊂ D is generated by the initial forms in_{(−w,w)}(g), where g runs over the set G.
Moreover, if G is a Gröbner basis of I with respect to ≺ w , where ≺ denotes any term order on D (see [55, p.5]), G is also a Gröbner basis of I with respect to w.
The Saito-Sturmfels-Takayama algorithm
We now state the theorem which allows us to compute canonical solutions starting from a Gröbner basis of I with respect to a generic weight vector w ∈ R^n. This is [55, Theorem 2.6.1], and we here recall it together with its proof, which contains the algorithm to lift solutions of the indicial ideal to canonical series solutions of I up to arbitrary w-weight. We are going to refer to this algorithm as the SST algorithm.
Theorem 3.16. Let I be a regular holonomic ideal in Q[x_1, . . ., x_n]⟨∂_1, . . ., ∂_n⟩ and let w ∈ R^n be a generic weight vector for I. Let I be given by a Gröbner basis G with respect to w. There exists an algorithm which computes all terms up to a specified w-weight in the canonical series solutions to I with respect to ≺_w.
For the convenience of our readers, we summarize the main steps of the SST algorithm as a procedure before turning to the proof. We also give an example of the algorithm running on a one-variable hypergeometric system. We here already mention Gröbner fans; the definition is given in Section 4.1 below.
Procedure 3.17 (Computing canonical series solutions of a D-ideal up to a chosen order). Input: A regular holonomic D_n-ideal I, its small Gröbner fan Σ in R^n, a weight vector w ∈ R^n that is generic for I, and the desired order k ∈ N.
Step 1. Compute the indicial ideal ind_w(I) and its rank(I)-many solutions. They are of the form x^A log(x)^B with A ∈ V(ind_w(I)), and will be the starting monomials of the canonical series solutions. For each starting monomial, carry out Steps 2-5.
Step 2. Determine a Gröbner basis G of I with respect to the weight w.
Step 3. Create the recurrences. By a recurrence, we mean a way of writing each element h_1, . . ., h_r as matrices acting on the vector spaces L_p, L_{p−β(1)}, . . ., L_{p−β(r)}, which contain the coefficient vectors c_p of x^p log(x)^b, 0 ≤ b_i < rank(I).
Step 4. Apply the recurrences. Assuming one has the coefficient vectors c_{p−β(1)}, . . ., c_{p−β(r)}, this amounts to solving the system of matrix equations to obtain c_p.
Step 5. If the matrix of f is singular, then one may need to use the condition that for i ≠ j, the series expansion s_i must have coefficient 0 for the starting monomial of s_j.
Output: The canonical series solutions of I with respect to w, truncated at w-weight k.
Example 3.18. We illustrate the procedure on the one-variable hypergeometric ideal I = D_1 · P with P = f − h, where f = θ(θ − 3) and h = x(θ + a)(θ + b), and the weight w = 1. Step 1. The order of x with respect to w = 1 is −1. Thus in_{(−w,w)}(I) = ⟨θ(θ − 3)⟩. The initial ideal is already torus-fixed, so ind_w(I) = in_{(−w,w)}(I). Solutions to the indicial ideal are x^0 = 1 and x^3. We select the starting monomial x^3. Step 2. The ideal is principal, hence its generator is a Gröbner basis of the ideal.
Step 3. We write P = f − h, where f = θ(θ − 3) and h = x(θ + a)(θ + b). It suffices to compute the action of θ on each element of L_p and extend it k[θ]-linearly. We have θ • x^{p+3} = (p + 3) x^{p+3} and θ • (x^{p+3} log(x)) = (p + 3) x^{p+3} log(x) + x^{p+3}. Thus, the matrix of the operator θ in the basis {x^{p+3}, x^{p+3} log(x)} is $\begin{pmatrix} p+3 & 1 \\ 0 & p+3 \end{pmatrix}.$
Step 4. Let c_{p,1} and c_{p,2} be the coefficients of x^{p+3} and x^{p+3} log(x) in the power series expansion. Then we can write our operators as matrices, and our recurrence as a matrix equation with initial values c_0 = 1, d_0 = 0. Solving the recurrence yields the explicit formulae, where $(a)_p = a(a+1) \cdots (a+p-1)$ is the Pochhammer symbol.
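The recurrence of Step 4 can be checked mechanically. In the Python sketch below (our own verification, with illustrative rational values for a and b), the log-coefficient d_p vanishes identically because d_0 = 0 and the matrix of θ is triangular, so the recurrence collapses to p(p + 3) c_p = (p + 2 + a)(p + 2 + b) c_{p−1}. Its solution matches the closed form c_p = (a + 3)_p (b + 3)_p / (p! (4)_p), i.e., the coefficients of x^3 · 2F1(a + 3, b + 3; 4; x).

from fractions import Fraction

def rf(a, p):
    # rising factorial (Pochhammer symbol) (a)_p = a (a+1) ... (a+p-1)
    out = Fraction(1)
    for k in range(p):
        out *= a + k
    return out

a, b = Fraction(1, 2), Fraction(1, 3)   # illustrative parameter values
c = [Fraction(1)]                       # c_0 = 1, the coefficient of x^3
for p in range(1, 8):
    # recurrence: p(p+3) c_p = (p+2+a)(p+2+b) c_{p-1}
    c.append((p + 2 + a) * (p + 2 + b) / (p * (p + 3)) * c[-1])

for p in range(8):
    closed = rf(a + 3, p) * rf(b + 3, p) / (rf(Fraction(1), p) * rf(Fraction(4), p))
    assert c[p] == closed
print("recurrence matches the closed form up to order 7")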
Step 5. If we choose the starting monomial x^0 = 1 instead, the matrix of f is singular for p = 3. We leave it to the reader to find the series expansion, or to see [55, pp. 98-99].
⋄
We now turn to the proof of Theorem 3.16, making the steps of Procedure 3.17 precise. Proof and algorithm. Let w ∈ R^n be generic for I. Compute the roots of ind_w(I) and extend the field of coefficients by them. Denote the resulting, computable field extension of Q by K. To compute the canonical solution of I whose starting monomial is x^A log(x)^B, one proceeds as follows. For p ∈ Z^n, denote by L_p the K-vector space spanned by the monomials x^p log(x)^b with 0 ≤ b_i < rank(I). The monomials of L_p are a K-basis of it. They are ordered by the term order ≺_w on the Nilsson ring, starting with the smallest. The matrix of f in this basis is an upper triangular square matrix. Let L'_p denote the set of monomials in L_p that are not contained in Start_{≺_w}(I). Now let {f_1, . . ., f_d} be any generating set of ind_w(I) and restrict f_i : L_p → L_p to L'_p; this corresponds to deleting some of the columns in the associated matrix. Denote the resulting matrix by F_i. Then the map in (3.27) is injective and is represented by the matrix obtained as the vertical concatenation of F_1, . . ., F_d. Now let G = {g_1, . . ., g_d} be a Gröbner basis of I with respect to w; its Gröbner cone in R^n is denoted by C_w. For each g ∈ G, choose a Laurent monomial x^α such that condition (3.28) holds, where ord_{(−w,w)}(h) denotes the largest w-weight of a monomial appearing in h. Then the set of operators f_1, . . ., f_d obtained this way generates ind_w(I) and, as maps as in (3.27), they are injective. The Laurent monomial x^α for a Gröbner basis element g is obtained by taking a highest-weight term x^a ∂^b m in g, where m is a monomial in the θ_i and, for all k, at least one of {a_k, b_k} is zero. Intuitively, this corresponds to pulling out as many θ's as possible into m. Then x^α = x^{b−a}. The h_i may have terms of different weights; in this case, we get a recurrence that involves L_p for more than two different p's. The coefficients of the canonical series solution are now computed by induction on the w-weight k. We start from a canonical series solution x^A log(x)^B + · · · and assume the coefficients c_{pb} are already known for all terms of w-weight at most k. Let M_k be the space of terms with w-weight greater than k, i.e., M_k = ⊕_{p·w>k, p∈C*_Z} L_p. Then, by definition, (3.30) holds. Assuming we know F_k(x) for some k, we are going to construct a recursion which allows us to determine the additional terms which are needed to lift F_k(x) to F_{k+1}(x). The starting point of that recursion will be the starting monomials. We hence look for an element E_{k+1}(x) of the appropriate weight. To achieve this, observe that ord_{(−w,w)}(h_i) < 0 implies that h_i • x^ℓ has higher w-weight than x^ℓ. We can show this on monomials as follows. Suppose that h_i = x^q for some q ∈ N^n with q · (−w) < 0. Then q · w > 0, so h_i • x^ℓ = x^{ℓ+q} has higher w-weight than x^ℓ. Similarly, suppose that h_i = ∂^r where r · w < 0. Then (−r) · w > 0. Thus h_i • x^ℓ = C x^{ℓ−r} for some constant C, and x^{ℓ−r} has higher w-weight than x^ℓ. Together with (3.28) and (3.30), this implies (3.31), which gives the desired recursion relation. By the injectivity of the map F from (3.27) and the existence of a canonical series solution, there exists a unique solution E_{k+1}(x) to (3.32), and this lifts F_k(x) to F_{k+1}(x).
Remark 3.19. The choice of α in (3.28) is in general not unique, and sometimes there might not be an α at all which satisfies all of the conditions. For example, consider w = (−1, 0) and the operator.

4 Computing solutions of I_3 with Gröbner techniques

In this section, we compute the canonical series solutions of the D-ideal I_3 generated by the operators in (2.8) using the SST algorithm from the proof of Theorem 3.16. We begin in Section 4.1 by showing that I_3 is holonomic, and by computing some preliminary data, such as its holonomic rank, its singular locus, and its Gröbner fan. In Section 4.2, we discuss thoroughly the computation of the canonical series solutions for one of the cones of the Gröbner fan. Thanks to the symmetry of the system, the computations for the remaining two cones work analogously; we give more details for that at the end of the section.
4.1 The Gröbner fan of I_3
In order to employ the SST algorithm for computing the canonical series solutions, we first need to certify that the ideal is holonomic. Let I_3(c) be the D-ideal generated by the operators in (2.6), with c = (c_0, c_1, c_2, c_3). A computation in Singular [26, 35] confirms that I_3(4, 2, 2, 2) is holonomic. The output of the Singular code proves that I_3(c) is holonomic also for generic c. For that, we use the Weyl algebra with the degree reverse lexicographical order. The code uses the D-module libraries [5], and also computes the holonomic rank, the singular locus, and the characteristic variety of I_3(c). Since we compute over the field of rational functions in the parameters c, the results are valid for generic c. The characteristic ideal in_{(0,1)}(I_3(c)) is generated by 8 operators. The cones C_1, C_2, and C_3 correspond to cones in R^3. The initial ideal in_{(−w,w)}(I) is constant as w ranges over any of the cones of the fan. The small Gröbner fan is the restriction of the Gröbner fan to the subspace of weight vectors of the form (−w, w). Hence, the cones of the small Gröbner fan of I are in one-to-one correspondence with the initial ideals in_{(−w,w)}(I), with w ∈ R^n.
Definition 4.1. A weight vector is generic for I if it lies in an open cone of the small Gröbner fan of I.
To compute the small Gröbner fan of our system I_3, our strategy is to first approximate the fan by computing the initial ideals for various lattice vectors, and finding where they change. Once we have a good enough approximation to guess our fan, we can verify our guess by computing initial ideals along the candidate rays. The system I_3 is homogeneous with respect to the vector (1, 1, 1). Thus, our small Gröbner fan really lives in R^3/(1, 1, 1). More specifically: if our Gröbner fan is Σ, then Σ = Σ' × (1, 1, 1), where Σ' is a fan in R^2. We choose the orthogonal basis v_1 = (1, 0, −1) and v_2 = (−1, 2, −1) of R^3/(1, 1, 1). By explicitly computing the initial ideals of I_3, we obtain Σ' as depicted in Figure 2. Note that this picture is not canonical; it depends on the choice of basis of R^3/(1, 1, 1).
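As a quick sanity check of this choice of coordinates, the following Python lines (a minimal sketch; the helper name coords is ours) compute the coordinates of a weight vector in the basis (v_1, v_2) by orthogonal projection, the component along (1, 1, 1) dropping out. For the weight vector w = (−1, 0, 1) used below, this reproduces the coordinates (−1, 0).

import numpy as np

v1 = np.array([1, 0, -1])    # orthogonal basis of R^3/(1,1,1)
v2 = np.array([-1, 2, -1])

def coords(w):
    # orthogonal projection onto span(v1, v2)
    w = np.asarray(w)
    return (w @ v1 / (v1 @ v1), w @ v2 / (v2 @ v2))

print(coords((-1, 0, 1)))    # -> (-1.0, 0.0)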
The D-ideal I_3 is moreover regular singular, as we will read later from the system (5.28).
Remark 4.2. The triangle integral, read as a function of the coefficients of the graph polynomial, is a solution of a GKZ system [27, Section 3.5], in which case the SST algorithm simplifies substantially. The conformal ideal I_3(4, 2, 2, 2) is a restriction of that GKZ system to a subspace of special values of some of the coefficients. Our further investigations will not rely on that fact; we here provide a general implementation of the SST algorithm. ⋄
4.2 Solutions to the indicial ideal
We will compute the initial and indicial ideal of the D-ideal I_3 from (3.16) with respect to the weight vector w = (−1, 0, 1) ∈ C_1. In the basis v_1 = (1, 0, −1), v_2 = (−1, 2, −1) of Section 4.1, w is (−1, 0). From Figure 2, we read that w is contained in the relative interior of the cone C_1 in the Gröbner fan; hence w is generic for I_3. The initial ideal in_{(−w,w)}(I_3) is generated by the three operators displayed in (4.3), and from it we compute the indicial ideal. Users of Singular may compute the initial ideal and a Gröbner basis with respect to the weight vector w with the following code.
LIB "dmod.lib";
int c0 = 4; int c1 = 2; int c2 = 2; int c3 = 2;
ring r = 0,(

This returns the initial ideal as in (4.3). We deduce that the indicial ideal ind_w(I_3) is generated by the three operators in (4.5). They are a Gröbner basis of in_{(−w,w)}(I_3), and in this case in_{(−w,w)}(I_3) = ind_w(I_3). One reads that the only zero of the indicial ideal is the point (−1, 0, 0). For the next step of the algorithm, we need to compute solutions to ind_w(I_3). These will become the starting monomials of the canonical series solutions to I. Recall that ind_w(I_3) = DJ for J the C[θ_1, θ_2, θ_3]-ideal generated by the operators in (4.5). We need to compute the primary decomposition of J. In Singular, one obtains the primary decomposition with the library primdec by running the following code, where we encoded the Euler operators θ_1, θ_2, θ_3 as variables th1, th2, th3.
LIB "primdec.lib";
ring r = 0,(th1,th2,th3),dp;
ideal J = th1 + th2 + th3 + 1, th2^2, th3^2;
list pr = primdecGTZ(J);
pr;

From the output, one reads the ideals coinciding with the Gröbner basis in (4.5). Recalling that V(J) = {A} with A = (−1, 0, 0), we read that Q_A from (3.18) is the ideal generated by θ_1 + θ_2 + θ_3, θ_2², and θ_3². The orthogonal complement Q_A^⊥ from (3.19) hence is a finite-dimensional vector space spanned by 4 polynomials. Hence, by Proposition 3.9, the solution space to ind_w(I_3) is spanned by 4 functions. In Macaulay2 [34], the generators of the indicial ideal in (4.5) and its solutions are obtained conveniently using the commands distraction and solvePDE. As is explained in [2], the solutions to ind_w(I_3) are then obtained by replacing dthi by log(x_i) and multiplying the resulting functions by x^A = x_1^{−1}. We will take advantage of the homogeneity of the system to write it in fewer variables, and consider the change of variables in (4.11).

Lemma 4.3. Every solution f(x_1, x_2, x_3) to I_3 can be written in the form f̃(y_2, y_3)/y_1, where f̃(y_2, y_3) denotes the function f̃(y_2, y_3) = f(1, y_2, y_3).
Therefore, we can write each canonical series as x_1^{−1} times a series in y_2, y_3 and their logarithms. We will call an element of the form (log y_2)^i (log y_3)^j a monomial. We now encode our D-ideal in the new variables; from here on, D denotes the Weyl algebra in the y-variables. In the y-variables, we thus obtain the following basis of the solution space of ind_w(I_3): 1, log(y_2), log(y_3), log(y_2) log(y_3). (4.15) After substituting back to the x_i variables, these functions are, up to reordering and a sign, exactly the functions found in (4.10). Since each m_i is entirely contained within a single L_p, we have Start_{≺_w}(m_i) = m_i (cf. [55, Lemma 2.5.10]).
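These starting solutions can be verified directly. Since the change of variables (4.11) is not reproduced in the extracted text, the following sympy sketch assumes y_2 = x_2/x_1 and y_3 = x_3/x_1 (consistent with Lemma 4.3 and the overall factor x_1^{−1}), and checks that x_1^{−1}, x_1^{−1} log(y_2), x_1^{−1} log(y_3), and x_1^{−1} log(y_2) log(y_3) are annihilated by the generators θ_1 + θ_2 + θ_3 + 1, θ_2², θ_3² of ind_w(I_3) from the primary decomposition above.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
X = (x1, x2, x3)

def theta(f, i):                       # Euler operator theta_i = x_i * d/dx_i
    return X[i] * sp.diff(f, X[i])

def annihilated(f):
    g1 = theta(f, 0) + theta(f, 1) + theta(f, 2) + f   # theta1+theta2+theta3+1
    g2 = theta(theta(f, 1), 1)                         # theta2^2
    g3 = theta(theta(f, 2), 2)                         # theta3^2
    return all(sp.simplify(g) == 0 for g in (g1, g2, g3))

y2, y3 = x2/x1, x3/x1                  # assumed form of (4.11)
basis = [1/x1, sp.log(y2)/x1, sp.log(y3)/x1, sp.log(y2)*sp.log(y3)/x1]
print([annihilated(f) for f in basis])   # [True, True, True, True]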
4.3 Lifting the initial monomials to solutions of I_3
We proceed using the algorithm described in Theorem 3.16. The space L_p is 16-dimensional and defined as the span of elements of the form x_1^{−1} y^p (log y_2)^m (log y_3)^n, where p ∈ Z² is fixed and 0 ≤ m, n ≤ 3. In order to execute the algorithm, we must first find the matrix of the transformation from L'_p to (L_p)^d. As we will see below, this matrix consists of 3 vertically concatenated blocks, where each block is an upper triangular 16 × 16 matrix. First, we compute that a Gröbner basis G_w of I_3 with respect to w is given by the operators in (4.16). Switching to the y-variables, the operators in (4.16) turn into those in (4.17). Replacing θ_{y_1} by −1 gives (4.18). Observe that θ_{y_1} acts as multiplication by −1 on functions of the form y_1^{−1} f(y_2, y_3); this justifies substituting θ_{y_1} by −1. We point out that the ideals generated by the operators in (4.17) and (4.18), respectively, are not the same ideals; but the quotients by them are isomorphic as D-modules on the algebraic torus, and hence their solution spaces are isomorphic. ⋄

We now check that the order condition on the h_i's is satisfied. Indeed, for w = (−1, 0, 1), one checks directly that ord_{(−w,w)}(h_i) = ord_{(−w,w)}(y_i) < 0 for i = 2, 3, as desired. In other words, g_1 gives rise to a recurrence relation between L_p and L_{p+(1,0)}, while g_2 gives a recurrence relation between the spaces L_p and L_{p+(0,1)}.
We have implemented the SST algorithm from the proof of Theorem 3.16 in Sage for PDEs in two variables. Running it produces the canonical series solutions that are shown in Equation (4.19) at the very end of this section.
In summary, we achieved a basis of holomorphic solutions to I_3 purely from the Gröbner data of I_3. We carried out the computations for a weight vector w from the cone C_1 of the Gröbner fan of I_3. For our computations, we changed to y-variables. To refine the weight-ordering of w to a term ordering on the Weyl algebra, we used the degree reverse lexicographical ordering. In order to compute a basis of the solution space to I_3, one could equally have started from cone C_2 or C_3, resulting in different orderings; one would change the refining order accordingly. The Gröbner bases of I_3 with respect to weights from C_2 and C_3 are then obtained by a relabelling of the indices, and accordingly for the starting monomials and canonical series solutions. In Lemma 4.3, one would factor out x_2^{−1} for C_2, and x_3^{−1} for C_3.
We end this section by giving the canonical series solutions of I_3 for w from C_1 obtained by our implementation of the SST algorithm. We display them from w-weight 0 to 4 on the next page. We recall that these four functions are a basis of the space of solutions to the D-ideal I_3. In order to pick out a specific solution, such as the one corresponding to the triangle Feynman integral from which we started, one needs to prescribe rank(I_3)-many initial conditions. This can be done in a number of ways. For example, we may numerically evaluate the triangle Feynman integral J^triangle_{4;1,1,1} at four arbitrary kinematic points.³ For each cone, the kinematic points x = (x_1, x_2, x_3) must be chosen so that consecutive truncations of the canonical series evaluated at x converge. In this way, the truncation of the canonical series at w-weight k provides an approximation of the solutions, with an error comparable to the size of the omitted terms of w-weight k + 1. For example, in the cone C_1 with weight vector w = (−1, 0, 1), we must have that |x_1| ≫ |x_2| ≫ |x_3|. Comparing the numerical evaluations obtained using AMFlow [51] to a linear combination of the canonical series in (4.19) allows us to determine the coefficients of that linear combination. We validated this against further numerical evaluations, as well as against the analytic solution in (2.9).
5 Series solutions using Wasow's method
In this section, we discuss how to construct series solutions of the ideal I_3 using the toolkit developed for computing Feynman integrals in high-energy physics. We provide an alternative approach for constructing the canonical series solutions discussed in the previous sections without resorting to Gröbner basis techniques, and give a dictionary between the two methods. We also touch on the problem of solving a first-order Pfaffian system of PDEs, namely a system of PDEs of the form ∂_{x_i} F(x) = P_i(x) · F(x), i = 1, ..., n, (5.1) where F(x) is a vector-valued function of x = (x_1, ..., x_n). This is a typical problem that arises in the evaluation of Feynman integrals. Often, one aims to approximate the solution F(x) for small values of some variable, say of x_1. If the Pfaffian system (5.1) is in a manifestly Fuchsian form at x_1 = 0, i.e., if the matrices P_i(x) have at most simple poles at x_1 = 0, there is an algorithm [65] to construct the asymptotic series of the solution F(x) around x_1 = 0; see e.g. [67] for a discussion in the context of Feynman integrals. They will be of the form Σ_k Σ_{m≤m_max} c_{k,m} x_1^k log(x_1)^m, (5.2) up to an arbitrary power of x_1, with m_max a positive integer which depends on the specific Pfaffian system, and where the c_{k,m} are Laurent series in x_2, ..., x_n. The series in (5.2) is a particular case of (1.1): for the weight w = (1, 0, ..., 0), the series in (1.1) is the truncation of (5.2).
A number of issues need to be addressed in order to apply this method to D-ideals. First of all, we need to construct a Pfaffian system associated with the considered D-ideal. In [56, p. 23], it is explained how to achieve this by Gröbner basis computations. In Section 5.1, we present an alternative algorithm which relies on linear algebra only, and apply it to the D-ideal I_3. This method is inspired by the problem of integration-by-parts (IBP) reduction for Feynman integrals [22, 58]. We spell this analogy out, and provide a dictionary of the relevant concepts. As a by-product, this method allows us to determine the holonomic rank and the singular locus of the D-ideal without computing Gröbner bases. The resulting Pfaffian system is in general not in a manifestly Fuchsian form; double or higher-order poles may appear in the DEs, preventing us from applying the algorithm from [65] to construct the asymptotic series of the solutions. Finding a gauge transformation which puts the Pfaffian system into Fuchsian form is a well-known problem in Feynman integrals [37], and a number of strategies have been developed in that context; see e.g. [38, 50]. In Section 5.2, we argue that, if a D-ideal is regular holonomic, for every singular point x* it is possible to find a gauge transformation such that the associated Pfaffian system has at most simple poles at x = x*. In the case of I_3, we show that we can even construct a Pfaffian system which is manifestly Fuchsian everywhere, namely one which has at most simple poles at any singular point. The resulting system is particularly simple; we can solve it analytically, and by doing so, we will recover the solutions to I_3 given in (2.12). Finally, we need to fill an important conceptual gap. The canonical series computed by Gröbner bases are truncated by the w-weight of the terms. The asymptotic series expansions, on the other hand, rely on the notion of a "small parameter", e.g. x_1 in Equation (5.2), and are truncated at a given power of it. To link the two notions, in Section 5.3, we introduce an auxiliary variable t which captures the w-weight of the monomials. We compute the asymptotic solutions of the Pfaffian system around t = 0, and verify that they coincide with the canonical series computed in Section 4.
5.1 Pfaffian systems
In this section, we present a method for constructing a Pfaffian system of PDEs associated with a given D-ideal using linear algebra. We are going to apply that method to the conformal D-ideal I_3. A closely related method to efficiently construct Pfaffian matrices is discussed in [21]. Our presentation builds on the analogy with IBP reduction, to the benefit of the readers who are familiar with Feynman integrals.
In Section 3.2, we have seen that the vector-valued function F(x), which satisfies a Pfaffian system (5.1) associated with a D_n-ideal I with rank(I) = m, is given by F(x) = (∂^{p(1)} • f(x), ..., ∂^{p(m)} • f(x))^T, (5.3) where f(x) is a general holomorphic solution of I, and p(1), ..., p(m) ∈ N^n are such that {∂^{p(1)}, ..., ∂^{p(m)}} are the standard monomials of the left R-ideal RI for a given monomial ordering. We may assume ∂^{p(1)} = 1, so that the first entry of F(x) is a holomorphic solution of I. The standard monomials can be determined by Gröbner bases in R, and the matrices P_i(x) in the Pfaffian system (5.1) can be obtained through reduction in R, as in (5.4). Our goal is to construct a Pfaffian system of I using linear algebra only. Let Q_1, ..., Q_N be generators of I. First of all, the R-monomials ∂^{p(i)} defining F(x) as in (5.3) need not be standard monomials for F(x) to satisfy a first-order Pfaffian system of PDEs. It suffices that they are a C(x)-basis of R/RI. One way to find such a basis is to write down the relations among the monomials ∂^a in R/RI, and solve them. There are infinitely many such relations in R/RI, obtained by multiplying the generators Q_i of I by any ∂^a, as in (5.5). We view this as a linear system of equations in the "variables" ∂^a. In Section 3.2, we have seen that, if I has finite holonomic rank m, the linear system (5.5) has m independent variables. In other words, any ∂^a can be expressed as a linear combination of m ∂^a's in R/RI by solving the linear system (5.5). The issue is that there are infinitely many equations. We cannot solve them all, nor do we need to. We instead adopt an approach inspired by Laporta's algorithm [46] for solving integration-by-parts relations in the context of Feynman integrals.
We introduce an ordering > of the unknowns ∂^a, graded by the total degree of the derivatives. The ordering among unknowns with the same degree is instead arbitrary; it will lead to a different basis of R/RI. For instance, we choose the graded lexicographic order, written out explicitly for n = 2 in (5.6). Next, we multiply the generators Q_i ∈ D by ∂'s from the left, up to a certain degree d_max.⁴ This yields the equations (5.7), where a ∈ N^n runs over all vectors of natural numbers of bounded total degree. We denote the resulting finite system of equations by S_{d_max}. We dub this operation seeding, in analogy with the equivalent step in constructing IBP relations for Feynman integrals. Since we have truncated the system of equations, some unknowns cannot be solved for with the equations in S_{d_max}. The key idea is to solve only for a subset of the unknowns, in particular those with total degree up to some value d_needed below d_max (e.g. d_needed = d_max − 1), and to eliminate the other unknowns through Gaussian elimination. We denote by D the vector containing all unknowns ∂^a appearing in S_{d_max}, sorted according to the ordering defined above, and by D_needed the needed unknowns (5.8). By construction, D has the block form (D_needed, D_extra) with D_extra = D \ D_needed. We represent the equations in S_{d_max} in matrix form as M(x) · D = 0, where M(x) is a |S_{d_max}| × |D| matrix whose entries depend on x. We row-reduce M(x) and drop the null rows. The resulting system of equations has an upper triangular form. Finally, we perform back substitution, but only for the needed unknowns. In other words, we solve the subset of equations (5.12) involving the needed unknowns. As a result, we obtain the expressions of the needed unknowns in terms of a subset of independent monomials ∂^a in R/RI. Let us make a few comments about this algorithm.
Table 1: Analogies between the construction of a first-order Pfaffian system of DEs associated with a given D-ideal and IBP reduction of Feynman integrals.
Construction of a Pfaffian system | IBP reduction with Laporta's algorithm
a C(x)-basis of R/RI | a set of master integrals
(1) The value of d_max must be higher than the maximal degree of the standard monomials.
In practice, we iterate the algorithm for various increasing values of d_max until we see that the resulting independent monomials stay the same. These are a basis of R/RI.
(2) If the D-ideal I has infinite holonomic rank, the algorithm will not terminate. In practice, as we increase d_max, we will obtain more and more independent monomials. Thus, this procedure is a linear algebra approach for computing the holonomic rank of a D-ideal.
(3) Let {∂^{p(1)}, ..., ∂^{p(m)}} be a C(x)-basis of R/RI. If the needed unknowns include ∂_i ∂^{p(j)} for any i = 1, ..., n and j = 1, ..., m, the matrices P_i(x) of the Pfaffian system (5.1) can be read off from the solution of the system of equations discussed above.
(4) This algorithm does not prove that the identified monomials ∂^a are a basis of R/RI. However, the resulting Pfaffian system of DEs must satisfy the integrability conditions (3.13). If the latter are not satisfied, either a higher value of d_max must be used, or the ideal has infinite holonomic rank. If they are satisfied, the solution space has a C-dimension equal to the length of F(x) (see Section 3.2). Since the first component of F(x) is by construction a general solution of the original ideal I, we can conclude that the algorithm has terminated, and that a basis of R/RI has been identified.
(5) The common denominator of the entries of the matrices P_i in the Pfaffian system (5.1) defines a hypersurface which contains the singular locus of I.
(6) There is a strong analogy with IBP reduction in Feynman integrals, which we summarized in Table 1.
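Before turning to I_3, the following Python sketch illustrates the seeding and row-reduction steps on a toy example, the D-ideal generated by the Airy operator P = ∂² − x (cf. Example 3.1). All names are ours, and the reduction is done naively with sympy over Q(x); a serious implementation would order and prune the equations more carefully.

import sympy as sp

x = sp.symbols('x')
d_max = 4                      # seeding degree; unknowns are d^0, ..., d^d_max
n = d_max + 1

def col(a):                    # column of the monomial d^a, highest degree first
    return d_max - a

# Seeding: d^a * (d^2 - x) = d^(a+2) - x*d^a - a*d^(a-1) in the Weyl algebra,
# obtained by commuting d^a past x with [d, x] = 1.
rows = []
for a in range(d_max - 1):
    row = [sp.Integer(0)] * n
    row[col(a + 2)] += 1
    row[col(a)] -= x
    if a >= 1:
        row[col(a - 1)] -= a
    rows.append(row)

M, pivots = sp.Matrix(rows).rref()      # Gaussian elimination over Q(x)
free = sorted(d_max - j for j in range(n) if j not in pivots)
print("basis of R/RI:", ["d^%d" % a for a in free])   # ['d^0', 'd^1'], rank 2

# Back substitution for the needed unknown d^2 yields the Pfaffian system:
# d^2 = x * d^0, i.e. F' = [[0, 1], [x, 0]] * F for F = (f, f').
for i, p in enumerate(pivots):
    if p == col(2):
        print("d^2 =", -M[i, col(0)], "+", -M[i, col(1)], "* d")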
We now apply this algorithm to the conformal ideal I_3. It is convenient to use the variables y defined in (4.11), and to eliminate y_1 from the problem as follows. Lemma 4.3 guarantees that every solution of I_3 has the form f(y_2, y_3)/y_1. Any function of this form is annihilated by the generator P_3 in (2.8). The action of the other two generators can be expressed as in (5.13), where Q_1 and Q_2 are differential operators which depend on y_2 and y_3 only; they are given in (5.14). We denote by I_3^y the D-ideal generated by Q_1 and Q_2. It is holonomic, it has holonomic rank 4, and its singular locus is given in (5.15), where λ̃ is obtained from the polynomial λ of (2.10) as λ̃ = λ|_{x_1=1, x_2=y_2, x_3=y_3}. (5.16) The general solution of I_3 is given by the general solution of I_3^y divided by y_1. Hereinafter, we will thus focus on I_3^y. Since the latter depends on y_2 and y_3 only, we use the shorthand notations y = (y_2, y_3), y^a = y_2^{a_2} y_3^{a_3}, and ∂_y^a = ∂_{y_2}^{a_2} ∂_{y_3}^{a_3}. We compute a basis of R/R I_3^y by running the algorithm described above. We use d_needed = d_max − 1. For d_max = 1, we find only one basis element, namely 1. Starting from d_max = 2 and onwards, we identify four basis elements. We define the vector F(y) as in (5.17), where f(y) is a holomorphic solution of I_3^y. It satisfies a system of first-order Pfaffian DEs, and the 4 × 4 matrices P̃_i satisfy the integrability conditions. For example, P̃_2 is the matrix displayed in (5.19). From Equation (5.19), we see that the common denominator is y_2 y_3² λ̃, which is in agreement with the singular locus computed via Gröbner bases. Furthermore, some entries have double poles at y_3 = 0. The system is thus not in Fuchsian form. In the next section, we will perform a gauge transformation to obtain a Pfaffian system which is in a manifestly Fuchsian form.
5.2 Writing the system in Fuchsian form
If a D-ideal I is regular holonomic, all solutions have moderate growth when approaching the singular locus (cf. Section 3). This implies that the solutions cannot have essential singularities, which has implications for the form of the associated first-order Pfaffian system of DEs. A first-order ODE having poles of order higher than 1 implies that its solution has an essential singularity. The moderate growth condition on the solutions to regular holonomic D-ideals thus implies that the associated Pfaffian system has at most simple poles. The Pfaffian systems may, however, have "spurious" double poles. For example, the system in (5.22) exhibits a double pole at x = 0, yet the solution has no essential singularity at x = 0. Such spurious poles can be removed by a suitable gauge transformation. The goal is to construct the transformation matrix T(x) such that the new vector-valued function G(x) satisfies a first-order system of DEs which is manifestly Fuchsian at x = 0. For this simple rank-two example, this can be achieved explicitly, leading to a system with at most simple poles at x = 0. The problem of finding a gauge transformation which puts a Pfaffian system into a manifestly Fuchsian form is central in the modern methodology for computing Feynman integrals [37]. For our purposes, it suffices to put the Pfaffian system in a form which is manifestly Fuchsian at the singular point around which we compute the asymptotic expansion. If the entries of the required transformation matrix are in the field of rational functions, this problem is solved (see [38, 50] and the references therein). For the Pfaffian system associated with the D-ideal I_3^y constructed in the previous subsection, we perform the gauge transformation with the transformation matrix T(y) given in (5.26).⁵ As a result, G(y) satisfies a linear system of first-order DEs which is in Fuchsian form. It can be elegantly expressed as in (5.27), in terms of the matrix B(y) of (5.28) and the letters of (5.29). In the Feynman integral literature, the arguments l̃_i of the logarithms in the Fuchsian Pfaffian system (5.27) are called symbol letters, and their ensemble {l̃_i} the symbol alphabet [33]. They encode the possible singular points of the solution. Indeed, the letters in Equation (5.29) vanish exactly on the singular locus of I_3^y given in (5.15). The system (5.27) is therefore in a manifestly Fuchsian form, i.e., it has at most simple poles at every singular point. In (5.29), readers with some experience in Feynman integrals may recognize the alphabet of the three-mass triangle Feynman integrals [20], minus an additional letter equal to λ̃. The relation between this alphabet and conformal symmetry was already observed in [23], and is a promising hint that this alphabet may be valid at any loop order. The matrix in (5.28) exhibits the block triangular structure observed in [19] for the differential equations satisfied by finite Feynman integrals. We expect that differential equations corresponding to (5.27) can be obtained using the method of [19].
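Since the toy system and its transformation matrix are not reproduced in the extracted text, here is a minimal analogue of the spurious-pole removal in sympy, with our own choice of a rank-two system. A gauge transformation F = T · G turns ∂F = P · F into ∂G = T^{−1}(P T − T')·G, and a diagonal rescaling removes the double pole.

import sympy as sp

x = sp.symbols('x')
P = sp.Matrix([[0, x**-2], [0, 0]])   # double pole at x = 0, yet the solutions
                                      # (F2 = const, F1 = -F2/x + const) have no
                                      # essential singularity there
T = sp.diag(1, x)                     # gauge transformation F = T * G
B = sp.simplify(T.inv() * (P * T - sp.diff(T, x)))
print(B)   # Matrix([[0, 1/x], [0, -1/x]]): at most a simple pole at x = 0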
Before we move on to constructing the asymptotic series solutions in the next section, we point out that the form of the first-order Pfaffian system in (5.27) is particularly well-behaved; its solutions can be written down explicitly in terms of logarithms and dilogarithms. The matrix B(y) of (5.28) is upper block-triangular, which allows us to solve the system (5.27) iteratively. We obtain the general solution, in which f_1(x), ..., f_4(x) are the solutions to I_3 from (2.12) and k_1, ..., k_4 ∈ C are arbitrary constants. Recalling the relation (5.25) between F(y) and G(y) with T(y) as in (5.26), and that the first component of F(y) divided by y_1 is a general solution to I_3, we have proven that f_1(x), ..., f_4(x) are indeed a basis of the space of solutions to I_3.
5.3 Capturing the weight via an auxiliary variable
We now compute the asymptotic solutions to the first-order Pfaffian system (5.27). Since the latter is in Fuchsian form, we can use Wasow's algorithm [65] to expand around any singular point without resorting to Gröbner bases. In order to build a bridge to the canonical series solutions, which are defined with respect to a weight vector w ∈ R^n, we introduce an auxiliary variable t as follows. Let f(x) be the general solution to the ideal under consideration. We define a new function f_w, which depends both on the original variables x and on t, as f_w(t, x) = f(t^w x), (5.31) where t^w x is a short-hand notation for (t^{w_1} x_1, ..., t^{w_n} x_n).
Remark 5.1. To motivate the construction in (5.31), consider the toy example where the function is a monomial, say f(x) = x^a for some a ∈ N^n. The exponent of t in the auxiliary function f_w(t, x) = t^{w·a} x^a gives the w-weight of the monomial, namely w · a. In this sense, the exponent of the auxiliary variable t captures the notion of weight. ⋄

We then compute the asymptotic expansion of f_w(t, x) around t = 0, of the form (5.32), where m_max is a natural number which depends on the specific ideal. By construction, the monomials in c_{k,m}(x) have w-weight k. By definition, we have that f_w|_{t=1} ≡ f. Hence, we expect that the asymptotic expansion (5.32) around t = 0, truncated at t^k and evaluated at t = 1, equals the canonical series expansion truncated at w-weight k.
Example 5.2. Consider the simplest of the solutions to the ideal I_3 from Equation (2.12), where λ is the homogeneous polynomial of degree 2 defined in Equation (2.10). For the weight vector w = (−1, 0, 1) in cone C_1, we introduce the auxiliary variable t via the substitution (5.31). The asymptotic expansion around t = 0 is given by a Taylor expansion in t, displayed in (5.35). We see that the monomials x^a appearing as coefficients of t^k have w-weight w · a = k.
Truncating the expansion (5.35) at order k and setting t = 1 gives the canonical series solution truncated at w-weight k; for k = 4, the omitted dots denote terms of w-weight 5 or higher. Indeed, this coincides with the series f̃_1(y_2, y_3) in (4.19), upon substituting y in terms of x in the latter, and dividing it by x_1 to obtain a solution of I_3 through Lemma 4.3.
⋄
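Example 5.2 is easy to reproduce symbolically. Since the explicit solution from (2.12) is not displayed above, the sympy sketch below assumes it is the algebraic solution 1/√λ, with λ the Källén-type polynomial; both are our assumptions, and the weight pattern of the expansion does not depend on them.

import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', positive=True)
lam = x1**2 + x2**2 + x3**2 - 2*x1*x2 - 2*x2*x3 - 2*x3*x1   # assumed form of (2.10)

# Substitute (x1, x2, x3) -> (x1/t, x2, t*x3) for w = (-1, 0, 1); then
# lam(x1/t, x2, t*x3) = lam_t / t^2 with lam_t polynomial in t, so that
# f_w(t, x) = t / sqrt(lam_t) for the assumed solution f = 1/sqrt(lam).
lam_t = sp.expand(t**2 * lam.subs([(x1, x1/t), (x3, t*x3)], simultaneous=True))
fw = t / sp.sqrt(lam_t)
print(sp.series(fw, t, 0, 4))
# The coefficient of t^k is a sum of monomials x^a of w-weight w.a = k;
# setting t = 1 recovers the canonical series truncated at that weight.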
The derivatives of f_w(t, x) with respect to x and t can be obtained from the derivatives of f(x) through the chain rule, so that we can straightforwardly construct a system of PDEs to which f_w(t, x) is a solution, starting from that for f(x). In the next section, we will use this approach to compute the canonical series solutions to the Fuchsian system derived in Section 5.2, and verify that they reproduce those computed in Section 4.
5.4 Computations for the cone C_1
We now turn to the asymptotic solution of the Fuchsian system (5.27). We choose the weight vector w = (−1, 0, 1) from the cone C_1 of the small Gröbner fan. The corresponding weight for the y-variables is w_y = (−1, 1, 2). We thus introduce the auxiliary variable t via G̃_w(t, y) = G(t y_2, t² y_3), where G(y_2, y_3) is a holomorphic vector-valued function which satisfies the Fuchsian Pfaffian system (5.27). The auxiliary function G̃_w(t, y) satisfies a Pfaffian system in Fuchsian form, with one equation for each i = y_2, y_3, t. The matrices B̃_i(t, y) are obtained from the derivatives of G(y) through the chain rule. In order to compute the asymptotic expansion of the solutions around t = 0, we need to study the behavior of the matrices B̃_i(t, y) around t = 0. The Laurent expansion of B̃_t(t, y) is given in (5.40). Since the system is in Fuchsian form, there can be no pole of higher order. The residue at t = 0, i.e., the constant matrix B̃_t^res, has the eigenvalue 0 only. The matrices B̃_{y_2}(t, y) and B̃_{y_3}(t, y) are instead non-singular at t = 0.
The first step of the algorithm in [65] is to perform a gauge transformation G̃_w → G̃'_w, G̃_w(t, y) = Ũ(t, y) · G̃'_w(t, y). (5.41) In the new basis, our vector-valued function satisfies the system (5.42) for i = y_2, y_3, t, with the matrices B̃'_i given by (5.43). The key idea is to choose the gauge transformation such that the DEs for G̃'_w(t, y) are simpler than those for G̃_w(t, y) in view of the asymptotic solution around t = 0. In particular, we wish to simplify B̃'_t(t, y) so that it has a simple pole only, i.e., is of the form B̃'_t(t, y) = B̃_t^res / t. (5.44) We can construct the transformation matrix Ũ(t, y) which achieves (5.44) as a series in t,⁶ Ũ(t, y) = Σ_{k≥0} t^k Ũ_k(y). (5.45) We plug this expansion into Equation (5.43) and impose that the resulting B̃'_t(t, y) satisfies (5.44). The first term in the expansion, Ũ_0(y), must commute with B̃_t^res. The simplest way to achieve this is to choose Ũ_0(y) = I_4, the 4 × 4 identity matrix. The higher-order terms of the expansion (5.45) are determined recursively in terms of the lower ones through the solution of a linear system of equations (5.46). As an example, the first few terms of the resulting transformation matrix are spelled out in (5.47). As expected, each order in t involves only monomials whose w-weight matches the power of t. As a result, G̃'_w(t, y) satisfies the simplified system of PDEs (5.48), with B̃'_{y_2}(t, y) and B̃'_{y_3}(t, y) determined by (5.43) and non-singular at t = 0. We can solve the simplified system (5.48) using the path-ordered exponential formalism (see e.g. [14] and the references therein). We integrate this system along a path in the (t, y)-space of the form (0, y^(0)) → (0, y) → (t, y), for some arbitrary values y^(0) of y which do not belong to the singular locus of the ideal. In other words, we pick (t = 0, y = y^(0)) as the boundary point,⁷ restore the dependence on y by integrating along the first piece of the path, and then restore the dependence on t by integrating along the second.
Here, h(y) is the vector-valued holomorphic function which solves the system (5.52); explicitly, it is given in (5.53). We now have all the ingredients to compute the asymptotic expansion around t = 0 of the solution of the first-order Pfaffian and Fuchsian system of DEs (5.27). It is given by G̃_w(t, y) = Ũ(t, y) · e^{B̃_t^res log(t)} · h(y). (5.54) The gauge transformation matrix Ũ(t, y) is given by a Taylor series around t = 0 starting with the identity matrix, and can be computed algorithmically up to any order in t. The matrix exponential e^{B̃_t^res log(t)} is irrelevant for our purposes, as we will eventually set t = 1, and e^{B̃_t^res log(1)} = I_4. The vector-valued function h(y), given in Equation (5.53), is therefore the only source of powers of log(y) in this approach.
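The recursion (5.46) amounts to solving, at each order t^k, a Sylvester-type equation (ad B̃_t^res − k) Ũ_k = (known lower-order data), which is always solvable because B̃_t^res is nilpotent. The following sympy sketch solves the first order of such a recursion for a 2 × 2 toy residue matrix of our own choosing; the 4 × 4 case of (5.45) works in the same way.

import sympy as sp

B_res = sp.Matrix([[0, 1], [0, 0]])    # nilpotent residue matrix (toy choice)
M0    = sp.Matrix([[0, 0], [1, 0]])    # regular part of B_t(t) at t = 0

def solve_order(k, rhs):
    # Solve  B_res*U - U*B_res - k*U = rhs  for U; solvable since the only
    # eigenvalue of ad(B_res) is 0, so ad(B_res) - k is invertible for k >= 1.
    n = B_res.rows
    U = sp.Matrix(n, n, sp.symbols('u0:%d' % (n * n)))
    eqs = B_res*U - U*B_res - k*U - rhs
    sol = sp.solve(list(eqs), list(U), dict=True)[0]
    return U.subs(sol)

# Requiring B'_t = B_res/t at order t^0 gives (ad(B_res) - 1) U_1 = -M0:
U1 = solve_order(1, -M0)
assert sp.simplify(B_res*U1 - U1*B_res - U1 + M0) == sp.zeros(2, 2)
print(U1)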
We can now write down the asymptotic expansions of the solution to the ideal I_3^y. We recall that a solution f(y) to the latter is by construction given by the first component of the vector-valued function F(y) (see (5.17)). The latter is related to the vector-valued function G(y) through the gauge transformation (5.25) with transformation matrix T(y). Putting everything together, we have f(y) = (T(y) · G(y))_1, where the subscript 1 denotes the first component of the vector. The series expansion of G(y) truncated at w-weight k is given by the asymptotic expansion of G̃_w(t, y) around t = 0 up to order k, given by (5.54), setting t = 1. Similarly, the series expansion of the transformation matrix is obtained by the Taylor expansion of T̃_w(t, y) = T(t^w y) around t = 0, truncated at t^k and evaluated at t = 1. Then, the four linearly independent series solutions to I_3^y are given in (5.56). In these functions, the square brackets highlight terms of different weight, and the dots denote terms of w-weight 4 or higher. From these, we see that the starting monomials coincide with the solutions to the indicial ideal in (4.15). The series expansions in (5.56) match those computed by Gröbner bases in (4.19) (with h̃_i(y) = f̃_i(y)), and are related to the known solutions f_i from (2.12) via (5.57). We thus find agreement among the known solutions, the canonical series solutions computed via the SST algorithm, and those computed by solving the Pfaffian system asymptotically.
The computations for the remaining cones of the Gröbner fan follow the same strategy. The results can be obtained by relabelling the indices, as discussed at the end of Section 4.
6 Conclusion and outlook
In this paper, we studied algorithms for obtaining solutions of linear partial differential equations relevant for particle physics. In principle, this applies to all Feynman integrals, as the latter are known to satisfy systems of differential equations (see e.g. [52]), as well as to further special functions appearing as their solutions, such as multiple polylogarithms.
We implemented a method, originally proposed by Saito, Sturmfels, and Takayama [55], to compute canonical Nilsson series expansions. The SST algorithm had been in use already [27, 57] for GKZ systems. To our knowledge, our paper is the first one to implement the algorithmic ideas of SST without requiring hypergeometricity. Using the particular example of conformal differential equations satisfied by a one-loop triangle Feynman integral with general propagator powers, we implemented the SST method and compared it to a method of Wasow [65] that is more commonly used in the context of differential equations for Feynman integrals [38, 50].
Using the example of the triangle Feynman integral, we showed how methods [55, 56] from the theory of D-modules can be used to determine key features of the differential equations, as well as canonical series expansions of the solutions. In particular, for holonomic systems, the holonomic rank corresponds to the number of master integrals in the physics literature. The singular locus determines the set of (possible) singularities of the corresponding Feynman integrals. The solutions to the indicial ideal are the starting terms of the series. Following SST, we use Gröbner basis methods to obtain canonical series expansions. Leveraging the techniques developed for computing Feynman diagrams, we present alternative methods which allow us to extract this precious information from the PDEs without relying on the computation of Gröbner bases.
It is interesting to ask about the scope of the method. Proxies for the complexity of the differential equations are: the number of variables; the order of the differential operators; the number of differential equations. A potential bottleneck in the SST approach could be the reliance on Gröbner bases. However, compared to other situations where Gröbner methods are used (e.g. for analyzing equations of high polynomial degree), one would expect the polynomials appearing in the PDEs of physical interest to have moderate degree. (As a proof of principle, we showed in Appendix C an application to a four-loop ladder integral.) Moreover, our comparison with Wasow's method shows that, in principle, these methods can be replaced by linear algebra operations, which could potentially scale better. Then again, Wasow's method relies on the Pfaffian system being in Fuchsian form at the regular singular point around which we wish to expand. While several strategies to achieve this have been developed, especially in the context of Feynman integrals, they have limitations. This makes it even more interesting to have two complementary approaches.
There are various interesting practical and conceptual questions for follow-up work. On the practical side, the physics literature provides many important classes of Feynman integrals that are relevant for the phenomenology of elementary particles, in the context of gravitational wave physics, or in cosmology, for example. A prerequisite of our method is the knowledge of the relevant differential equations. Obtaining the latter is an interesting active area of research in itself; see e.g. [45] and the references therein. It would be interesting to study the scope of the SST algorithm for the Picard-Fuchs equations obtained in [45].
On the conceptual side, let us mention the following questions. Firstly, when dealing with equations in multiple variables, one may ask how one can restrict the differential equations to lower-dimensional submanifolds. Physical examples include on-shell limits, or restrictions to submanifolds corresponding to (spurious) singularities. Secondly, there are important situations where only a subset of the relevant differential equations is available (e.g. because they are easier to obtain than the complete equations, as, e.g., in [29]). In other words, in these situations, the holonomic rank of the D-ideal is not finite, and hence there is functional freedom in the solutions. This happens, for example, for the differential equations that follow from conformal or Yangian symmetry beyond the three-particle case. It is then interesting to study how holonomic techniques can be used to obtain useful information, such as which additional constraints may be added to make the system holonomic. Finally, the triangle Feynman integral considered here has a finite value, but Feynman integrals typically exhibit divergences in the infrared and ultraviolet regions of the loop integration. In dimensional regularization, these divergences are regulated by analytically continuing to generic d ∈ C spacetime dimensions. The divergences are then manifested as poles in the Laurent expansion around d = d_0 for some integer d_0 (typically d_0 = 4). The coefficients of this Laurent expansion are solutions to regular holonomic D-ideals in the kinematic variables and, as such, can in principle be treated using the D-module techniques discussed here. It would therefore be important to find a way to set up the expansion around d = d_0 within the D-module approach.
Beyond these specific research questions, we believe that the exchange of methods promoted in this work will be fruitful for both the communities of mathematicians and theoretical physicists. Given the ubiquity of PDEs, we envision these methods will be beneficial for many further applications.
A Conformal symmetry
Conformal symmetry underlies many of the quantum field theories which are important to our understanding of fundamental physics: massless quantum electrodynamics, quantum chromodynamics, Yang-Mills theory, scalar φ⁴ theory, and more all have conformal symmetry at the classical level.⁸ Despite its ubiquity, little is known about its implications for on-shell scattering amplitudes. The latter are functions of the particles' momenta which play a central role in describing the interactions of fundamental particles. One of the obstacles which has hindered the study of this topic is that the constraints imposed by conformal symmetry have the form of second-order differential operators in momentum space. Determining the conformally-invariant functions relevant for the scattering of particles thus requires solving systems of linear second-order PDEs. Given the large number of variables which describe the particles' scattering, this problem is challenging from a technical point of view. As such, it is an optimal testing ground for D-module techniques.
The conformal symmetry group is an extension of the Poincaré group SO(1, 3) ⋉ R^{1,3}, which is the semi-direct product of the Lorentz group and the Abelian group of translations. The Poincaré group plays a central role in fundamental physics, as it is the symmetry group of Einstein's theory of special relativity, and hence of all theoretical models which stem from it. The conformal group is obtained by extending the Poincaré group by two classes of transformations: dilatations and conformal boosts (alias "special conformal transformations"). We refer the interested readers to [28] for a thorough discussion of conformal symmetry in field theory, and present here what is necessary to introduce the system of PDEs addressed in this work. We begin by introducing some notation. We denote by z = (z^0, ..., z^{d−1}) the vector of d-dimensional spacetime coordinates, where z^0 is the time coordinate, and z^i, for i = 1, ..., d − 1, are the spatial coordinates. We define the scalar product of two coordinate vectors z_1 and z_2 with respect to the Minkowski metric. The conformal transformations are given by the composition of the coordinate transformations shown in Table 2.⁹ They form a group with respect to composition. A function is conformally-invariant if it is annihilated by the generators of the conformal group. The latter are first-order differential operators in the spacetime coordinates which obey the commutation relations of the Lie algebra associated with the conformal group. The generator of the group of dilatations for n points with coordinates z_1, ..., z_n is for instance given by D = Σ_k (z_k · ∂_{z_k} + c_k), where the c_k ∈ R are parameters called conformal weights, and ∂_{z_k} denotes the vector of partial derivatives. Scattering amplitudes are functions of the particles' momenta. Like the spacetime coordinates z_k, the momenta p_k are d-dimensional vectors. Their zeroth components p_k^0 give the particles' energy, while the other components p_k^μ, with μ = 1, ..., d − 1, are related to their velocity. Coordinate and momentum space are related by a Fourier transform.
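As a small check of the generator reconstructed above (its displayed formula is missing from the extracted text), the following sympy lines verify, in a one-dimensional toy setting, that the function (z_1 − z_2)^{−(c_1+c_2)} is annihilated by D = Σ_k (z_k ∂_{z_k} + c_k); in d dimensions the computation is analogous, with the z_k vectors and the Minkowski scalar product.

import sympy as sp

z1, z2, c1, c2 = sp.symbols('z1 z2 c1 c2', positive=True)
f = (z1 - z2)**(-(c1 + c2))      # toy scalar "two-point function"
D = z1*sp.diff(f, z1) + z2*sp.diff(f, z2) + (c1 + c2)*f   # dilatation generator
print(sp.simplify(D))            # 0: f has the correct conformal weights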
The momentum-space realization of the generators of the Poincaré group is given by first-order operators in the momenta p_k as well. The constraints they impose are particularly simple. The invariance under translations implies the conservation of total momentum, namely that p_1 + ··· + p_n = 0. We can use this constraint to eliminate one of the momenta, say p_n. The invariance under Lorentz transformations instead implies that the invariant functions depend on the momenta only through the scalar products p_k · p_ℓ, called Mandelstam invariants in the physics literature. Therefore, a generic Poincaré-invariant function of n momenta p_1, ..., p_n depends only on n(n − 1)/2 variables, which can be chosen as the Mandelstam invariants.

C A four-loop ladder integral

The four-loop ladder integral is defined by an integral where the integration domain is the Minkowski spacetime (see Section 2). The corresponding Feynman graph is shown in Figure 3. The normalization factor of (|p_1 + p_2|²)^{2(6−d)} |p_2 + p_3|² is chosen to make the integral dimensionless and to simplify the results. We take the momentum p_4 to be off-shell, i.e., |p_4|² ≠ 0. Thanks to the chosen normalization, the ladder integral depends only on y_2 and y_3. The state of the art for Feynman integrals with these kinematics is three loops, cf. [63, 18, 39]. Any information about its analytic structure would therefore be of great interest. For this reason, we focus on its maximal cut in d = 4 dimensions. The maximal cut, which amounts to replacing all propagators 1/|q|² (for some momentum q) with delta functions δ(|q|²), captures precious analytic information about the integral. In particular, it is central in the "method of differential equations" (see e.g. [38] for a review) to obtain the so-called "canonical form" [37], which then allows for a systematic solution in terms of special functions.
In our presentation here, we take a more general approach to derive differential operators which annihilate the integral. Instead of using conformal symmetry, we construct the Picard-Fuchs operators [45, 52]. We do this by following a path which is the reverse of what we discussed in Section 3.2. First, we construct a Pfaffian system of PDEs for the ladder integral. Note that, for the given generators, the D-ideal ⟨P_1, P_2⟩ is already too complicated for Singular to compute a Gröbner basis within a couple of minutes. But one can check that P_2 ∈ W(I) \ I. Indeed, I ⊊ W(I) = W(⟨P_1, P_2⟩) as D-ideals. Their singular loci are Sing(I) = Sing(W(I)) = V(y_2 y_3 (y_2 + y_3)). (C.6) The operator G_1 encodes that the solution functions f(y_2, y_3) of the D_2-ideal I are homogeneous of degree 0, and therefore f(y_2, y_3) = f(y_2/y_3, 1). This is not obvious by looking at the Feynman integral, and is therefore a powerful insight of the D-module approach.
Remark C.2. It is possible to deduce that the system can be rewritten in one variable purely from the geometry of the Gröbner fan. The Gröbner fan has rays generated by the vectors (1, 1) and (−1, −1), and is a one-dimensional fan in R², perpendicular to (1, −1). The only cones in the dual fan are R_{≥0} · (1, −1) and R_{≥0} · (−1, 1). Thus, the SST algorithm tells us that there are two families of series expansions in y_2 and y_3, with exponents only in N · (1, −1) and N · (−1, 1), respectively. From this, we see that we can rewrite these series expansions in the single variable y = y_2 y_3^{−1}. ⋄

We hence reduce the system to a single variable, namely the ratio y := y_2/y_3. To do so, we change variables from (y_2, y_3) to (y, z). For solving the associated differential equation, we can ignore the pre-factor of z^{−1} and hence are in the ODE case, with a single variable y.
Since the system is in a single variable, we will only need to distinguish between w being positive or negative, and can hence restrict to the weight being w = ±1. The starting monomials for w = 1 can be obtained from the initial ideal in_{(−1,1)}(G_{y_2}) = ⟨θ⁴⟩. They are 1, log(y), log(y)², log(y)³. (C.10) As in Example 3.18, we encode the coefficients of our Puiseux series in a vector c_p of length 4, specifying the coefficients of y^p, y^p log(y), y^p log(y)², and y^p log(y)³. As before, we can encode the action of θ on the series as multiplication of the coefficient vector c_p by a matrix, as sketched below. One can easily verify that the resulting series are solutions to the operator G_{y_2}. Similarly, for w = −1 we obtain the exponents {0, −1} and the starting monomials 1, log(y), y^{−1}, y^{−1} log(y). (C.16) We use the analogous recursion. Here, there is a slight complication: with starting monomial log(y), the matrix F_p has a singularity at a = 0 and p = −1. However, we use the definition of canonical series to impose that neither of the terms y^{−1}, y^{−1} log(y) appears in the series expansions of the remaining solutions. Explicitly, we set the vector c_{−1} equal to zero. As before, expressing the recurrences in closed form gives us the Laurent series.
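The matrix encoding the θ-action, whose display is missing above, is easy to reconstruct from θ(y^p log(y)^j) = p y^p log(y)^j + j y^p log(y)^{j−1}. The following sketch (our reconstruction) builds it and checks that for p = 0 it is nilpotent, so that θ⁴ annihilates the span of the starting monomials in (C.10).

import sympy as sp

def theta_matrix(p):
    # Action of theta = y*d/dy on coefficient vectors (c_0, ..., c_3) of
    # y^p, y^p*log(y), y^p*log(y)^2, y^p*log(y)^3.
    M = p * sp.eye(4)
    for j in range(1, 4):
        M[j - 1, j] = j
    return M

print(theta_matrix(0)**4 == sp.zeros(4, 4))   # True: theta^4 kills the p = 0 block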
Example 3.1. The Airy equation f''(x) − x f(x) = 0 (3.4) is encoded by the operator P = ∂² − x ∈ D. The Airy functions Ai and Bi are solutions of P, i.e., P • Ai = P • Bi = 0. They span the 2-dimensional solution space of P. ⋄

A (left) D-ideal is a subset I ⊂ D that is closed under addition and under multiplication by elements of D from the left. D-ideals encode systems of linear PDEs.
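The Airy example can be checked directly in sympy, which knows the Airy functions and their derivatives:

import sympy as sp
x = sp.symbols('x')
print(sp.simplify(sp.diff(sp.airyai(x), x, 2) - x*sp.airyai(x)))   # 0
print(sp.simplify(sp.diff(sp.airybi(x), x, 2) - x*sp.airybi(x)))   # 0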
Definition 3.5. The Weyl closure of a D_n-ideal I, denoted W(I), is the D_n-ideal W(I) := R_n I ∩ D_n. (3.10) Clearly, I ⊆ W(I). Hence, for the singular locus and the space of holomorphic solutions to the system of PDEs encoded by I, one has Sing(I) ⊇ Sing(W(I)) and Sol(I) ⊇ Sol(W(I)). (3.11) Moreover, rank(I) = rank(W(I)), since R_n I = R_n W(I). Since every element Q of W(I) can be written as Q = r • P for some r ∈ C(x_1, ..., x_n) and P ∈ I, we also have the inclusion Sol(I) ⊆ Sol(W(I)). Hence, Sol(I) = Sol(W(I)). In summary, a D-ideal I and its Weyl closure W(I) have the same solution space, but W(I) might contain additional operators that annihilate all solutions of I. If rank(I) < ∞, it does not follow that I itself is holonomic; but there is a rescue: taking the Weyl closure turns I into a holonomic D-ideal.
Program (WASP) funded by the Knut and Alice Wallenberg Foundation. This project received funding from the European Union's Horizon 2020 research and innovation programs Novel structures in scattering amplitudes (grant agreement No. 725110) and High precision multi-jet dynamics at the LHC (grant agreement No. 772099), and from the European Union's Horizon Europe Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 101105486. This research was supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP), which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC-2094, 390783311).
Gamma-Ray Bursts as a Probe of the Epoch of Reionization
GRBs are expected to occur at redshifts far higher than the highest quasar redshifts so far detected. And unlike quasars, GRB afterglows may provide "clean" probes of the epoch of reionization, because no complications (such as the presence of a strong Ly-alpha emission line) or "proximity effects" (such as the Stromgren sphere produced by ionizing photons from the quasar) are expected. Thus NIR and optical observations of GRB afterglows may provide unique information about the epoch of reionization. In particular, the flux at wavelengths shortward of Ly-alpha provides a direct measure of the density fluctuations of the IGM at the GRB redshift, while the flux at wavelengths longward of Ly-alpha provides an integrated measure of the number of ionizing photons produced by stars in the host galaxy of the GRB up until the burst occurs. A comparison of the sizes of the Stromgren spheres produced by stars in the host galaxies of GRBs and by quasars then provides an estimate of the relative contributions of star formation and quasars to reionization. We use detailed calculations of the expected shape of the GRB afterglow spectrum in the vicinity of Ly-alpha for GRBs at a variety of redshifts to illustrate these points.
Introduction
Exactly when and how the universe was reionized are two of the most important outstanding questions in astrophysical cosmology. The answers to these questions are of fundamental importance for understanding the moment of first light, the formation of the first galaxies, and the nature of the first generation of stars and of quasars. Spectral observations of Lyα-emitting galaxies can be used as a probe of the epoch of reionization (Haiman 2002). However, such galaxies are faint and exceedingly rare (Hu et al. 2002), and the ability to draw inferences from the study of such Lyα-emitting galaxies is complicated by the necessity of disentangling the shape of the trough due to the red damping wing of the Lyα resonance from the (unknown) intrinsic profile of the Lyα emission line and the continuum spectrum at nearby wavelengths of the galaxy (Haiman 2002). In addition, scattering of the Lyα photons by a neutral IGM can broaden the line, reducing its visibility and further complicating the task of recovering the shape of the red wing of the Gunn-Peterson trough.
Observations of bright quasars at high redshifts can also be used as a probe of the epoch of reionization. As an example, the recently discovered bright quasar SDSS 1030+0524, which lies at z = 6.28, shows a distinct Gunn-Peterson trough (Becker et al. 2001). The lack of any detectable flux shortward of (1 + z)λ_α = 8850 Å implies a strong lower limit (x_H ≳ 0.01) on the mean mass-weighted neutral fraction of the IGM at z ≈ 6 (Fan et al. 2002). This suggests that the IGM is neutral beyond z ∼ 7.
However, such quasars are rare (∼ 10^{−3} deg^{−2}) and therefore difficult to find. And again, the ability to draw inferences from the study of such quasars is complicated by the necessity of disentangling the shape of the trough due to the red damping wing of the Lyα resonance from the (unknown) intrinsic profile of the bright Lyα emission line (Madau & Rees 2001; Cen & Haiman 2001). Figure 1 places GRBs in a cosmological context. GRBs have several distinct advantages over Lyα-emitting galaxies and quasars as a probe of the epoch of reionization: • GRBs are by far the most luminous events in the universe, and are therefore easy to find.
• Somewhat surprisingly, the infrared and near-IR afterglows of GRBs are detectable out to very high redshifts because of cosmological time dilation (Lamb & Reichart 2000).
• No "proximity effect" is expected for GRBs.
• GRB afterglows have simple power-law spectra and dramatically outshine their host galaxies, making it relatively easy to determine the shape of the red damping wing of the Lyα resonance.
Calculations
A stellar population with a Salpeter initial mass function extending from 0.1 to 120 M_⊙ produces ≈ 4000 ionizing photons per stellar proton over its lifetime. Assuming a (steady) SFR Ṁ_* and an age t_* for the star-forming episode, the total number N_ion of ionizing photons that escape from the host galaxy is N_ion ≈ 4000 f_esc Ṁ_* t_* / m_p. Here f_esc is the fraction of the ionizing photons that are produced by the stars in the host galaxy and that escape from the galaxy. The fraction of ionizing photons that escape from nearby (low-redshift) galaxies is f_esc ∼ 0.1. However, extinction is due primarily to dust, which may play a smaller role at redshifts z ∼ 7, where the metallicity is expected to be far smaller. Pop III stars may produce a factor of ∼ 10 more ionizing photons, but they are not expected to be a major contributor to the starlight from galaxies at redshifts z = 7 − 9.

Figure 1. Cosmological context of very high redshift (z > 5) GRBs. Shown are the epochs of recombination, first light, and re-ionization. Also shown are the ranges of redshifts corresponding to the "dark ages," and probed by QSOs and GRBs. From Lamb (2000).
Consider a GRB host galaxy at redshift z_GRB with a (steady) SFR Ṁ_* and a star-forming episode of age t_*. We assume the source is embedded in an initially neutral IGM with a mean density ρ_IGM = Ω_b ρ_crit (1 + z_GRB)³. The radius R_S of the ionized region around a GRB host galaxy can then be written as R_S = [3 Ṅ_ion t_* / (4π n_H)]^{1/3}, where t_* is the age of the star-forming episode and n_H is the mean hydrogen number density of the IGM. We have equated the number of ionizing photons to the number of hydrogen atoms inside R_S. This is valid for low gas clumping factors and small ages, but recombinations can decrease the size of R_S for C > 10 and t_* > 10⁸ yr. The results presented below take the UV spectrum of the GRB afterglow to have the form F_ν = F_0 (ν/ν_0)^{−0.5}.
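To make the scaling concrete, the following Python estimate (our own sketch; the escape fraction f_esc = 0.1 and the cosmological inputs are assumed values, not quantities quoted for this formula) reproduces the size of the H II region for the fiducial case considered in the next section.

import math

M_sun = 1.989e33          # g
m_p   = 1.673e-24         # g
Mpc   = 3.086e24          # cm

f_esc, Mdot, t_star, z = 0.1, 100.0, 1e8, 6.5    # M_sun/yr and yr
N_ion = 4000 * f_esc * (Mdot * t_star * M_sun / m_p)

n_H0 = 1.9e-7             # mean hydrogen density today in cm^-3 (Omega_b h^2 ~ 0.022)
n_H  = n_H0 * (1 + z)**3
R_S  = (3 * N_ion / (4 * math.pi * n_H))**(1.0/3.0) / Mpc
print(round(R_S, 2), "Mpc")   # ~0.8 Mpc (proper), consistent with the text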
Results
Figure 2 (upper panel) shows the near-IR spectrum of the afterglow of a GRB in a host galaxy lying at a redshift z = 6.5 in which stars have been forming at a rateṀ * = 100M ⊙ yr −1 for t * = 10 8 yrs. The nearly horizontal solid line near the top of the panel shows the adopted intrinsic spectrum, and the bottom solid curve shows the spectrum including absorption in the IGM and by the neutral atoms inside the 0.75 Mpc (proper) H II region surrounding the host galaxy of the GRB. The lower panel shows optical depth as a function of wavelength from within the H II region (short-dashed curve), from the neutral IGM outside the H II region (dotted curve), as well as from the sum of the two (solid curve). In both panels, the long-dashed curves describe an alternative, more realistic treatment of the residual H I opacity within the H II region (see text). The arrow indicates the wavelength of Lyα in the rest frame of the GRB and its host galaxy.
The shape of the afterglow spectrum of the GRB and the optical depth as a function of wavelength near the wavelength of Lyα in the rest frame of the GRB depend sensitively on (1) the clumpiness of the IGM, (2) the product of the star-formation rate Ṁ_* and the age t_* of the star-formation episode, and (3) redshift, because of the rapidly increasing density of the IGM with increasing redshift [ρ_IGM ∝ (1 + z)³]. Thus near-IR observations of GRB afterglows in the vicinity of the wavelength of Lyα in the rest frame of the GRB can provide unique information.

Figure 2. Near-IR spectrum of the afterglow of a GRB lying at z = 6.5.
Conclusions
We have shown that NIR and optical observations of GRB afterglows may provide unique information about the epoch of reionization. In particular, high-resolution near-IR observations of GRB afterglows can provide unique information not only about the properties of the IGM at very high redshifts, but also about the amount of star formation that has occurred prior to the GRB in the host galaxy of the GRB. | 2014-10-01T00:00:00.000Z | 2003-03-01T00:00:00.000 | {
"year": 2003,
"sha1": "404e04ab2fab894804d897bb59d308bd36d15c15",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0d3953336eecef5a259457b3b5072c52d7e9ff03",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
221174632 | pes2o/s2orc | v3-fos-license | Impact of global climate change on livestock health: Bangladesh perspective
The global carbon emission rate, driven by energy-related consumption of fossil fuels and anthropogenic activities, is higher than at any point in human history, disrupting the global carbon cycle and acting as a major cause of planetary warming, with air and ocean temperatures rising dangerously over the past century. Climate change presents both direct and indirect challenges for livestock production and health. With more frequent extreme weather events, including increased temperatures, livestock health is greatly affected by the resulting heat stress, metabolic disorders, oxidative stress, and immune suppression, leading to an increased propensity for disease incidence and death. The indirect health effects relate to the multiplication and distribution of parasites and to the reproduction, virulence, and transmission of infectious pathogens and/or their vectors. Managing the growing crossbred livestock industry in Bangladesh places the country at the coalface of the emerging impacts of climate change, with unknown consequences for the incidence of emerging and re-emerging diseases. Bangladesh is now one of the nations most vulnerable to global climate change. The livestock sector, alongside agriculture, is considered a major part of food security for Bangladesh, and with one of the world's fastest-growing economies, the impacts of this disaster are exaggerated. No direct study has been conducted on the impact of climate change on livestock health and diseases in Bangladesh. This review explores the linkage between climate change and livestock health and provides some guidelines to combat the impact on livestock from the Bangladesh perspective.
Introduction
Climate change is the complex and multidisciplinary change in global or regional climate patterns, which poses a significant risk to human and natural systems. This most intricate multifactorial global challenge, which jeopardizes human and natural systems, similarly threatens livestock production and productive performance. It is projected that the global mean surface temperature will increase by about 3.7°C (likely range of 2.6°C-4.8°C) by 2100 (IPCC, 2013), and changes to the frequency, intensity, and duration of extreme weather events will be evident (Howden et al., 2008). Such changes will directly and indirectly impact the production and health parameters of livestock through a complex suite of interacting biophysical parameters that influence growth performance; meat and milk yield and quality; egg yield, weight, and quality; reproductive performance; metabolic and health status; and carcass traits, among other examples (Nardone et al., 2010;Henry et al., 2012). The direct health impacts for livestock due to climate change, through temperature-related illness, changes in metabolic functions, and morbidity due to extreme weather events, pose a significant challenge (Nardone et al., 2010). Coupled with the indirect impacts on livestock health, this leaves livestock vulnerable to unprecedented diseases whose effects under climate change are hard to predict. A vast number of studies have demonstrated that climate change leads to impaired health and adverse effects on the animal immune system, which can be compromised along with the distribution, growth, and incidence of diseases and reproductive health (Thornton et al., 2009;Lacetera, 2012;Bett et al., 2017;Caminade et al., 2019). Heat load and subsequent heat stress (HS) alone result in economic losses and health management costs in the USA of over $900 million/year for the dairy industry and over $300 million/year for the beef and swine industries (St-Pierre et al., 2003). The world population is projected to grow by 33% from 7.2 to 9.6 billion by 2050 (UNDESA, 2017). Hence, it is estimated that increases in agricultural productivity of up to 70% will be required to meet future demand for living standards and food security (O'Mara, 2012). As an example, global milk and meat production will need to reach an estimated 1,077 and 455 million tonnes, respectively, by 2050 to meet global demand, almost double the 664 and 258 million tonnes produced in 2006 (Alexandratos and Bruinsma, 2012). Global climate change represents a major challenge to achieving the productivity growth required to meet those future demands. An additional factor is the direct contribution of the agricultural sector itself to the livelihoods of the world's poorest populations, as it is currently estimated to employ 1.1 billion people (Hurst et al., 2005); any impact on this sector directly affects the most vulnerable populations. Bangladesh has the eighth highest population density (people per sq. km of land area) in the world, with a unique geographical location, classified as a tropical area located on a deltaic plain.
It is dominated by floodplains, with significant expanses of low elevation, making it vulnerable to rising sea levels and flooding (Mahmood, 2012;World Population Review, 2019). New generations in Bangladesh are currently suffering mounting adverse effects compared with previous generations (Mahmood, 2012). Bangladesh is considered a developing country with lower energy demand and a low carbon footprint, as well as low carbon emissions (World Bank, 2014). However, it is identified as one of the top 10 nations in the world vulnerable to global climate change (UNDESA, 2017). Bangladesh's livestock and agriculture resources are considered an important part of food security for the nation, and with one of the world's fastest-growing economies, the impacts of climate change are exacerbated. The livestock sector in Bangladesh consists of 24.23 million cattle, 1.48 million buffalo, 3.53 million sheep, 26.26 million goats, 289.28 million chickens, and 57.75 million ducks. This sector contributes to animal protein production, predominantly milk (9.923 million metric tons), meat (7.514 million metric tons), and eggs (171.1 nos) (DLS, 2019). A changing climate is considered a threat to livestock production because of its impact on the quality of concentrate and roughage feeds, availability of clean drinking water, meat and milk production, disease prevalence and incidence, reproduction, and biodiversity (Thornton et al., 2009;Nardone et al., 2010;Henry et al., 2012). Globally, it is estimated that livestock disease reduces productivity by 25%, with the heaviest burden falling on the poor (Grace et al., 2015). Every year, several reports from industry and government demonstrate an alarming incidence of infectious, non-infectious, and reproductive health issues of livestock throughout Bangladesh. The objective of this study is to describe the direct and indirect effects of global climate change on livestock health performance in Bangladesh.
Definition of climate change
Climate is defined as the average weather condition of an area, characterized by its own internal dynamics, which can be affected by changes in external factors. The United Nations Framework Convention on Climate Change (UNFCCC) defines climate change as a change of climate attributed directly or indirectly to human activity, over and above the natural climate variability observed over comparable time periods (UNFCCC, 1992). The weather, on the other hand, is the set of all phenomena occurring in a given atmosphere at a given time (Stephenson et al., 2008). Climate change is an association of multidimensional effects on climate, including physical characteristics, causes, and consequences (Visschers, 2018).
Impact of climate change on livestock health
The average increase of surface temperatures projected by the IPCC (IPCC, 2013) is primarily due to increases in global atmospheric temperature and greenhouse gas (mostly carbon dioxide, CO2) concentrations, precipitation variation, and/or permutations of these factors (Henry et al., 2012). Temperature plays a central role for livestock by affecting rainfall, forage, production, reproduction, and health. Forage production is influenced by increased temperature, CO2, and/or their combination with precipitation variation (Sawalhah et al., 2019). Livestock health, however, is mainly affected by increased temperature and precipitation variation. The factors that influence livestock health are extremely complex, involving environmental forces, including ecological, social, and economic interests, and individual and/or community behavior (Forastiere, 2010). Climate change will have many knock-on effects, both direct and indirect. The direct effects of climate change on livestock health include temperature-related increases in disease incidence and death. The indirect effects follow more intricate pathways and include climate influences on pathogen density and distribution, the multiplication of vectors, and vector-borne as well as soil-, food-, and water-borne diseases. These are discussed below in the context of their impact on livestock production in Bangladesh.
Direct effects
Increases in the frequency, severity, and duration of extreme temperatures are anticipated in the near future as a result of global climate change, directly impairing animal production systems through health effects. The direct effect of climate change on animal health has been described as a reduced competence of the host to mount a response to infection (Bett et al., 2017). These effects are compounded by thermal stress or HS conditions. Depending on the degree, duration, and severity of heat exposure, livestock health can be affected through metabolic disorders, oxidative stress, immune suppression, decreased reproductive performance, and death.
Heat stress
HS can simply be defined as the point at which animals cannot dissipate an adequate amount of heat from the body to maintain thermal balance (Mondal and Reddy, 2018). To avoid HS, every animal has an ideal ambient temperature range that maintains a thermoneutral condition. The impact of ambient conditions on animals' performance can be estimated by calculating the temperature-humidity index (THI) (Hamid et al., 2017). It has now been established that higher-producing cows are less tolerant of HS than lower-producing cows (Staples and Thatcher, 2011). Holstein-Friesian dairy cows are renowned for their milk production but are particularly susceptible to HS (West, 2003). When the ambient temperature exceeds 25°C, high-yielding dairy cows become heat stressed, with the primary signs being increased body temperature and respiration rates (Staples and Thatcher, 2011). As body temperature increases, feed intake and milk production concurrently decrease (Du Preez et al., 1990). Staples and Thatcher (2011) reviewed that milk production can decrease from 22.4 to 19.2 kg/day if body (rectal) temperature increases from 38.8°C to 39.9°C. From these studies, it can be inferred that the crossbred cows in Bangladesh will succumb to similar production limitations due to HS.
Metabolic disorders
Livestock are physiologically homeothermic animals; they respond to high ambient temperatures by increasing heat loss while concurrently decreasing internal heat production. This is achieved by increasing respiratory and sweating rates and decreasing feed intake (Lacetera, 2019). Metabolic disorders occur during this physiological process, and their impacts are described as follows:
(a) Lameness: Cook and Nordlund (2009) revealed that HS influences lameness in farm animals through the contribution of ruminal acidosis and increased output of bicarbonate. The heat-stressed animal eats less frequently during the cooler times of the day but consumes more during each feeding (Shearer, 1999). Hence, reduced feed intake in the hotter part of the day, followed by increased intake during the cooler part of the day, is considered a major cause of ruminal acidosis, with increased output of bicarbonate, resulting in laminitis (Stone, 2004). HS also affects animal behavior, such as lying behavior in the pen. The proportion of cows standing increases linearly with ambient temperature, whereas an inverse relationship was found between the proportion of cows lying down and ambient temperature (Zähner et al., 2004). Prolonged standing may aggravate changes in the claw by compromising its structure, and reductions in daily lying time have accordingly been correlated with lameness in dairy cows (Cook et al., 2004).
(b) Ketosis: Ketosis is a metabolic disease characterized by relatively high concentrations of ketone bodies such as acetone, β-hydroxybutyrate, and acetoacetate, with a concurrent decrease in blood glucose levels (Dann et al., 2005). It develops when an animal in a severe state of negative energy balance undergoes forceful lipomobilization and accumulates ketone bodies arising from inadequate catabolism of fat (Lacetera, 2019). During HS, reduced feed intake combined with increased energy requirements to meet physiological demands may cause negative energy balance, which drives the mobilization of adipose tissue and consequently the development of ketosis (Abuajamieh, 2015). Cattle affected by ketosis lose weight and produce less milk. Climate change also affects animal health by hampering endocrine status, liver functionality, glucose, protein and lipid metabolism (Bernabucci et al., 2006), saliva production and salivary HCO3 content, and fitness and longevity (King et al., 2006).
Oxidative stress
Over the past decades, oxidative stress induced in livestock by HS has attracted increasing research attention (Akbarian et al., 2016). In livestock, oxidative stress may be implicated in a number of pathological conditions, including conditions relevant for livestock production and the general welfare of animals (Lykkesfeldt and Svendsen, 2007). It is caused by an imbalance between oxidant and antioxidant molecules, through increased oxidants and/or decreased antioxidants. Mirzad et al. (2017) demonstrated that total serum antioxidant levels decrease during the summer and postpartum periods in heifers and found a correlation with HS; total carotene and Vitamin E levels were also reduced during summer. Finally, HS has been associated with increased activity of antioxidant enzymes such as superoxide dismutase, glutathione peroxidase, and catalase, reflecting an adaptive response to increased levels of reactive oxygen species (ROS) (Trevisan et al., 2001). ROS such as hydrogen peroxide (H2O2), superoxide anions (O2−), and hydroxyl radicals (OH−) drive lipid peroxidation and enzyme inactivation, causing cell damage (Mondal and Reddy, 2018).
Immune suppression
The immune system is a complex physiological defense mechanism that protects individuals from pathogens and has been demonstrated to be compromised by several factors (Lacetera, 2012). Several studies have reported that HS may negatively impact the immune system in livestock. In brief, chronic exposure to HS has been demonstrated to impair immune response in poultry (Regnier and Kelley, 1981), to reduce colostrum immunoglobulins such as IgG and IgA in dairy cows, thereby impairing calf immunity (Nardone et al., 1997), to depress lymphocyte function, hampering the efficacy of vaccinations (Lacetera et al., 2005), and to impair the function of neutrophils, which are important for defense against bacteria (Lecchi et al., 2016). Immune suppression increases the likelihood of infections, with the potential for increased use of antimicrobials, which in turn may increase antimicrobial resistance (AMR), ranked as one of the current global challenges. About 700,000 people die every year due to AMR, and this is expected to rise to up to 10 million deaths per year by 2050 (O'Neill, 2016). Researchers have reported a high prevalence of bacterial resistance to most antibiotics, including reserve-group antibiotics (colistin), in Bangladesh, demonstrating AMR to be an important and growing issue (Ahmed et al., 2019;Rousham et al., 2019). Mastitis is a major health problem in the dairy industry worldwide. Several studies have reported that mastitis incidence increases during summer and is significantly correlated with HS, due to suppressed immunity, thermal injury of the udder, and the increased survival capacity and spread of pathogens in summer (Chirico et al., 1997;Vitali et al., 2016). In Bangladesh, subclinical mastitis is a great concern in the growing dairy sector, being prevalent in 20%-44% of cows tested by the California mastitis test (Islam et al., 2010).
Effect on reproduction
HS affects reproductive performance by changing blood flows and the production of different hormones, with reductions of 20%-30% in conception rates and anoestrus in ruminants during summer seasons (King et al., 2006). Talukder et al. (2018) demonstrated an increased incidence of post-partum anoestrus in cattle during the summer season in selected areas of Bangladesh, reaching up to 14.43%. The pig industry suffers considerably from impaired reproduction during late summer and early autumn compared to spring and winter. This so-called seasonal infertility in pigs is influenced by photoperiod and temperature (Auvigne et al., 2010). HS also impacts semen quality (Nichi et al., 2006), the estrus cycle (Wolfenson et al., 2000), and oocyte and embryo development (Stewart et al., 2011).
Mortality
A number of studies have reported increases in animal mortality once temperatures rise above the average range, as well as increased mortality during extreme weather events (Vitali et al., 2009;Vitali et al., 2015). Temperature increases of between 1°C and 5°C above average have been suggested to give rise to higher mortality in grazing livestock (Howden et al., 2008). Daily maximum and minimum THI values of 80 and 70, respectively, have been identified as thresholds above which dairy cow mortality begins to rise (Vitali et al., 2009), while daily maximum and minimum THIs of 87 and 77 are the values above which the threat of heat-induced mortality becomes maximal. Finally, THIs of 78.5 and 73.6 were identified as the thresholds above which mortality increases significantly during transport and lairage, respectively. Purusothaman et al. (2008) reported increased mortality in Mecheri sheep during the summer season in India due to HS. Many reports have been published on increases in livestock mortality during extreme weather events: in France during summer 2003, pigs, poultry, rabbits, and also over 35,000 people died due to extreme and prolonged heat waves (Lacetera, 2019), and Vitali et al. (2015) described increased mortality of farm animals on heat-wave days compared to days without heat waves. Bangladesh has recorded dramatic increases in maximum summer temperatures over the past 40 years; over the past 63 years, monthly maximum, minimum, and average temperatures have increased by 0.12°C, 0.08°C, and 0.56°C, respectively (Ahmed and Hossen, 2014). These diverse documented effects of increasing temperature on livestock health have not been collected and systematically recorded in Bangladesh; however, considering the evidence of broad health impacts from increased temperatures, the livestock sector of Bangladesh needs to prepare and implement strategies to accommodate the adverse effects of increasing temperatures on animal production systems.
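To make the THI thresholds above concrete, the following minimal sketch (in Python) evaluates one widely used THI formulation from dry-bulb temperature and relative humidity; note that the exact formula is our assumption, as the review does not specify which THI variant the cited studies employed:

def thi(temp_c, rel_humidity_pct):
    # One widely used temperature-humidity index formulation (assumed here;
    # the review does not state which variant the cited studies used).
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rel_humidity_pct) * (1.8 * temp_c - 26)

# Mortality thresholds reported above (Vitali et al., 2009):
# daily max/min THI of 80/70 -> mortality begins to rise;
# daily max/min THI of 87/77 -> heat-induced mortality risk is maximal.
for temp, rh in [(25, 60), (32, 70), (38, 80)]:
    value = thi(temp, rh)
    status = "above" if value >= 80 else "below"
    print(f"T={temp} C, RH={rh}%: THI={value:.1f} ({status} the max-THI onset threshold of 80)")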
Indirect effects
The indirect effects of climate change on livestock health are associated with climate-driven ecosystem changes and physiological adaptations that could alter vector or pathogen virulence and/or genome diversity and vector-pathogen-host exposure (Bett et al., 2017). The emergence and re-emergence of vector-borne pathogens have provided global evidence for the relationship between climate change and effects at the human/animal health interface (Caminade et al., 2019). In the disease process, three epidemiological factors (the agent, the host, and the environment) are closely entangled and persist in ecosystems. Climate change could enhance pathogen replication and/or virulence, which can negatively affect livestock health (Harvell et al., 2002). The indirect effects of climate change on livestock health are described as follows.
Vector-borne pathogens
Global warming and changes in precipitation and humidity positively affect the reproduction and spread of vector-borne pests such as midges, flies, ticks, and mosquitoes (Thornton et al., 2009). This in turn has the potential to increase the geographic spread of vector-borne diseases such as bluetongue, lumpy skin disease (LSD), anaplasmosis, babesiosis, and theileriosis. The IPCC 2007 report warned that global climate change patterns could positively affect the spatial distribution of vectors such as mosquitoes and ticks (IPCC, 2007). The transmission dynamics of vector-borne diseases could be affected in two ways: (i) through changes in vector survival and geographical range, and (ii) through changes in vector activity, efficiency, and susceptibility to infection. Arthropod vectors are very sensitive to fluctuations in temperature and humidity, and several researchers have reported that warmer conditions accelerate disease transmission to the host (Thornton et al., 2009). Simulations by White et al. (2003) suggested that tick infestation under climate change conditions could decrease the body weight of Australian livestock by about 18%. A model by Wittmann et al. (2001) demonstrated that an increase of 2°C in environmental temperature could extensively spread Culicoides imicola, which is responsible for the transmission of bluetongue virus in sheep, cattle, goats, and also wild ruminants. To the best of the authors' knowledge, bluetongue virus has not yet been identified in Bangladesh, but it is frequently identified along the Indo-Bangladesh border (Rao et al., 2016;De et al., 2019). With currently changing weather patterns, it is possible that bluetongue virus may appear in Bangladesh, and epidemiological investigation is needed to monitor for this. This virus has been spreading globally since 1990, and the spread is being enabled by global warming (Lacetera, 2019). LSD is a vector-borne viral disease of cattle (Woods, 1988). The mosquito Aedes aegypti acts as a potential mechanical vector for LSD transmission (Chihota et al., 2001). It has been reported that the density of A. aegypti in Dhaka city was 13 times higher in 2019, when LSD emerged in Bangladesh, than in 2018 (Prothomalo, 2019), and climatic factors can influence the population of Aedes mosquitoes (Karim et al., 2012). Climate change has been demonstrated to affect vector populations and distribution and as such may influence the transmission of LSD in Bangladesh, as well as the risk of spread across a wide geographical area (Tjaden et al., 2018). The outbreak of LSD is a sign of the continuing influence of climate change on livestock in Bangladesh. Deforestation and vegetation clearance also contribute to ecological imbalance by increasing local temperature and humidity, which accelerates the spread of vectors. Between 1930 and 2014, forest cover in Bangladesh declined to about 39.1% of its former extent (from 23,140 km² to 9,054 km²), with annual deforestation continuing at about 0.77% (2006-2014).
Parasites and helminths
Climatic variables directly affect increases in the abundance, prevalence, severity, and geographical distribution of helminths. It has been observed that the development rate of the free-living larval stages of Haemonchus contortus increases in tropical regions with increased temperature (Fox et al., 2015). This worm sucks blood from the stomach of sheep and causes severe anemia and death. Kim et al. (2012) showed that the development of Ascaris suum eggs through enhanced embryonation accelerates when ambient temperature increases from 25°C to 35°C under laboratory conditions. Global climate change could promote the rapid development of parasites in their invertebrate intermediate hosts, such as snails. The lifecycle of lungworms is also reliant on weather conditions, and incidence is higher in summer/autumn than in winter. Fascioliasis, schistosomiasis, and nematodiases, including heterakiasis and various trichostrongyliases, are the helminth diseases most influenced by climatic changes (Fox et al., 2012). Climate change also influences land-use changes, particularly the development of dams and irrigation schemes for agricultural productivity, which can have a positive impact on livelihoods and hence nutritional status but can also have substantial negative effects on human and animal health. Standing water from these systems results in higher localized humidity that can trigger the development of a range of vectors. Pfeiffer et al. (2005) reported Rift Valley fever outbreaks in dam and irrigated areas in Egypt, Sudan, and Mauritania/Senegal. Other parasitic diseases, such as trematodiases (fascioliases and schistosomoses), became endemic in these areas due to standing water masses and increased humidity, which influence the development rates of intermediate hosts (snails) and cercariae. Afshan et al. (2014) reported an increased incidence of fascioliasis in animals in an irrigation area of Punjab province, Pakistan. A similar study in Tanzania found that Fasciola gigantica and paramphistome infections in cattle were higher in irrigated than in non-irrigated areas, and absent in non-grazing areas (Nzalawahe et al., 2014). The overall parasite incidence in cattle in different geographical areas of Bangladesh is 81% in hilly areas (Nath et al., 2016), 64% in Chattogram District, 73% in Sylhet District (Paul et al., 2016), 66% in Sirajganj District (Karim et al., 2015), and 51% in Mymensingh District (Ghosh et al., 2016). These records document the incidence of parasites and helminths in Bangladesh and provide researchers with information to further explore the relationship between increasing incidence and climate change.
Mycotoxins
Most roughage and concentrate feeds are perishable in the environment. Altered environmental temperature and humidity due to climate change can directly affect livestock feed quality and the consistency of nutrient ingredients by altering the growth of molds in feeds. The growth of molds and the associated production of mycotoxins are closely related to temperature and moisture, which depend on weather conditions during harvest and on drying and storage techniques (Mannaa and Kim, 2017). The major mycotoxins are aflatoxin, ochratoxin, T2, fumonisin, and zearalenone, metabolic products of mycotoxigenic molds produced at optimum temperatures of 25°C-37°C and moisture levels of 80%-85% (Coppock et al., 2018). Mycotoxins can cause acute disease when animals consume contaminated feed above critical levels, with negative effects on organs such as the liver, kidney, gastrointestinal tract, and reproductive tract (Nwangburuka et al., 2019). Chronic exposure to these mycotoxins, even at low levels, can dangerously lead to immune suppression and reduce production performance and end-product quality as a result of downgrading (Hoerr, 2017;Bernabucci et al., 2011). Aflatoxin B1 is the most toxic and has a significant public health impact due to its hepatotoxic and carcinogenic properties in humans (McCullough and Lloyd, 2019). Fakruddin et al. (2015) reported that about 50% of nuts and pulses in livestock feeds from Bangladesh are contaminated with Aspergillus flavus isolates containing at least three aflatoxigenic genes, and that 90% of these isolates can produce aflatoxin B1 at 7-22 μg/g in agar, which is alarming for public health.
Other pathogens
Climate change can extend the spread of diseases through pathogen biodiversity, as pathogens cope with new conditions and cause unhabitual, out-of-season outbreaks. Furthermore, climate change plays a significant role in pathogen evolution in general. Temperature and humidity have a substantial effect on pathogens that maintain part of their lifecycle outside the final host's body, such as those responsible for black quarter, dermatophiloses, and anthrax. Anthrax is a deadly zoonotic disease caused by Bacillus anthracis spores, which can remain viable for 10-20 years in pasture land. Temperature and other climatic conditions, such as humidity, rainfall, pH, soil water activity, and nutrient availability, all affect the successful germination of anthrax spores. Heavy rainfall, soil erosion, and drought may stir up dormant spores and increase animal exposure (WHO, 2008).
Along the banks of the Jamuna River in Bangladesh, four districts (Pabna, Shirajgonj, Rajbari, and Bogura) are considered endemic zones for anthrax, and every year a significant number of outbreaks occur in both livestock and humans (as the cutaneous form) (Islam et al., 2018). Global climate change affects the seasonal temperature variations experienced by migratory birds traveling from Arctic areas to tropical areas such as Bangladesh. Evidence exists that, during migration, these birds carry and disperse avian influenza virus through contact with riverside domestic ducks in Bangladesh during the winter season, and they have been identified as a potential source of avian influenza spread in Bangladesh (Haider et al., 2017;Sarker et al., 2017).
Natural disasters in Bangladesh
Bangladesh is among the worst sufferers of global warming and climate change due to its low-elevation geography and disaster-prone areas. According to the CRI 2019 (Global Climate Risk Index 2018) report, Bangladesh ranks eighth among countries at high risk of extreme weather events (Eckstein et al., 2018). Salinity has caused significant negative effects on fodder and feed crop production, as well as on access to fresh drinking water for both human and animal consumption. Saline intrusion is the major cause of salinity in the coastal belt of Bangladesh and is an effect of rising sea levels due to global warming (Chen et al., 2012). A sea level rise of about 1.5-2 m has been reported for the coastal belt of Bangladesh (Ortiz, 1994). The coastal belt of Bangladesh consists of 19 districts covering 32% of the total land area (Alam et al., 2017). In 1973, the salinity level was 0.9-2.1, but it has increased by 26% over the past 35 years in the coastal belt areas, significantly impacting land usage (Iftekhar and Islam, 2004). Salinity has a significant impact on livestock health and production, with many negative consequences such as nutritional deficiency, lack of fresh water, increased incidence of diarrhea, skin diseases, liver fluke, loss of body weight, and breakdown of the immune system (Alam et al., 2017). Additional concerns for Bangladesh include temporal variations in drought characteristics as a result of climate change (Mohsenipour et al., 2018). Climate change has influenced average rainfall, evapotranspiration, and atmospheric water storage, thereby changing precipitation patterns in dry seasons, especially in northern Bangladesh, with downstream impacts on disease distribution and, in turn, on factors causing diseases in livestock (Bett et al., 2017). The incidence and intensity of extreme events such as cyclones, tidal surges, floods, salinity intrusions, and droughts have increased significantly in Bangladesh in recent decades due to global climate change (Dastagir, 2015).
Conclusion
Climate change is now a global concern due to its multidimensional effects and impact on humans, animals, plants, and the environment. Research has established that changes in global or regional climate patterns are affecting livestock health directly and indirectly. Bangladesh is listed as one of the countries most vulnerable to global climate change due to its unique geographical location. The livestock sector of Bangladesh is growing rapidly to fulfill the increased demand for animal protein in the world's eighth most densely populated country. However, the sector suffers from increased disease loads and frequently reports emerging and re-emerging infectious diseases. Climate change is increasing HS through a lingering hot season and more frequent summer heat waves, resulting in increased disease incidence in homeothermic farm animals. This paper reviewed the relationship between climate change and livestock health in Bangladesh and the potential for increased impact on the livestock sector, and identified a lack of systematic research globally and an absence of information on this topic in Bangladesh. Specific recommendations for adaptation and mitigation of the impact of climate change on animal health in Bangladesh are as follows:
a. Research on climate-resilient technology for livestock and their adaptation.
b. Adoption of good farm practice, strict farm biosecurity, and herd health management.
c. Development of updated vaccines and therapeutics for combating endemic and emerging diseases.
d. Design of animal sheds considering HS, animal comfort, animal behavior, and climate change.
e. Conservation and development of local animal genetic resources, with a long-term policy of breeding for disease-resistance traits through continuous challenge and natural selection.
f. Development of saline- and drought-tolerant fodder varieties and modern feeding management practices.
g. Development and application of methodology to link climate data with animal disease surveillance systems.
h. A comprehensive and coordinated study on real-time impacts of climate change on livestock, informing corrective policy decisions. | 2020-08-20T05:06:12.113Z | 2020-05-14T00:00:00.000 | {
"year": 2020,
"sha1": "51b87ec5b3e5750c314bb3aed326fef48d181696",
"oa_license": "CCBYNC",
"oa_url": "https://www.ajol.info/index.php/ovj/article/download/198673/187340",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "51b87ec5b3e5750c314bb3aed326fef48d181696",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
213871459 | pes2o/s2orc | v3-fos-license | Exploring syntactic variation by means of “Language Production Experiments”: Methods from and analyses on German in Austria
Abstract This article presents computer supported "language production experiments" (LPEs) as a method for the investigation of syntactic variation. It describes the setup for the investigation of numerous syntactic phenomena and provides a sample study of the German GET passive across Austria. It also suggests that LPEs offer possibilities for the targeted investigation of linguistic variation in various ways. They may be used to explore speakers' individual linguistic repertoires, and a corresponding corpus setup can be used to examine, e.g., interspeaker patterns of variation. LPEs also enable researchers to investigate which linguistic factors control or influence syntactic variation.
Content and goals
While variationist linguistics has focused primarily on phonetics/phonology since its inception, variationist research is slowly but increasingly starting to focus on syntactic variation. This research requires modifications and expansions of the theoretical and methodological approaches of variationist linguistics (cf. Cheshire, 2005;Labov, 1978;Lavandera, 1978). This contribution primarily aims to provide a methodological enrichment of the discussion by adding an experimental approach to the previously predominant empirical approaches. Additionally, the syntactic "language production experiments" (cf. Section 3) attempt to capture a "syntactic variable" in an empirical manner.
Over the years, a broad spectrum of innovative projects on the variation of syntax has emerged, focusing particularly on the variation of dialect syntax. 1 This research has broadened the empirical basis of modern linguistics and has shown that syntactic variation provides valuable insights into various linguistic disciplines. Regarding the survey methods, many of these projects use oral questionnaires, in which speakers are required to make judgements about the grammaticality, acceptability, and preference of syntactic variants. Research rarely relies upon "uncontrolled" conversational data, as such data can be problematic for the study of syntactic variation (particularly in terms of quantity). This article presents an alternative and effective research method that offers solutions to the quantitative and qualitative problems of many other methodological approaches. This approach consists of computer supported "language production experiments" (LPEs), which aim to evoke syntactic constructions and uncover the linguistic, as well as sociolinguistic, factors that control or influence variation. 2 Our paper attempts to illustrate the potential and the limitations of LPEs when eliciting data on syntactic variation within a large-scale project, namely project part 03 "Between dialects and standard varieties: Speech repertoires and varietal spectra" of the Special Research Programme (Spezialforschungsbereich SFB) "German in Austria. Variation-Contact-Perception" (subsequently referred to as "SFB DiÖ" (Spezialforschungsbereich 'Deutsch in Österreich')). 3 In the context of the SFB DiÖ, the LPEs are used to elicit language data from the (more or less dialectal) non-standard and the (more or less) standard spectrum of individuals' German language repertoires. The experiments facilitate the systematic investigation of syntactic variation, regarding the inter-individual variation across speakers in various regions of Austria, as well as the intra-individual variation on the "vertically" conceptualized "dialect-standard-axis" (cf. Auer, 2005) within one and the same speaker. As well as taking a methodological focus, this article presents comprehensive linguistic and sociolinguistic analyses of one selected syntactic phenomenon (the so-called "GET passive" in German). The contribution is structured as follows: In the overview section (Section 2), the current state of areal-linguistic research on the syntactic level, with a focus on Europe and the German-speaking area, is summarized. Section 3 focuses on methodological aspects of the LPEs used in the SFB DiÖ project. Section 4 provides an exemplary analysis of one selected linguistic phenomenon to illustrate the productivity and validity of LPEs for the analysis of syntactic variation. The article closes with a summary in Section 5.
Syntactic variation-research overview
Taking Kayne's (1996) theory of microvariation as a starting point, a broad spectrum of studies and projects on dialect syntax has emerged in Europe over the past 30 years, primarily with theoretical interests and goals. Besides numerous individual studies (cf., e.g., Poletto, 2000), there are also several dialect syntax (atlas) projects taking place across Europe. Most of these projects are thematically and methodologically connected within the network European Dialect Syntax (Edisyn) (<www.dialectsyntax.org>). Their thematic integration manifests primarily in a shared emphasis on doubling phenomena in all European syntax projects. Their methodological integration is based, in part, on overlapping and standardized methods of data collection, facilitating the comparison of data across the individual studies. In this respect, the Syntactische Atlas van de Nederlandse Dialecten (SAND) ('Syntactic Atlas of Dutch Dialects') has set standards that have become well established (cf. Barbiers, 2005;Barbiers, van der Auwera, Bennis, Boef, de Vogelaer & van der Ham, 2008).
Regarding the current state of research on German variation, research on the areal-social dimensions of syntactic variation previously focused almost exclusively on the syntax of dialects. Research on German dialect syntax emerged at the end of the 19th century, when important studies such as Schiepek (1899, 1908) appeared. However, it is only since the 1980s that research on German dialect syntax has come back into focus, first concerning methodological considerations of its practicability (Patocka, 1989;Tatzreiter, 1989) and later mainly as the object of theoretical linguistic research (cf., e.g., Abraham & Bayer, 1993;Bayer, 1984;Grewendorf & Weiß, 2014;Haider, 1993;Weiß, 1998). In addition to syntactic theory, German dialectology has started to address questions regarding syntactic structure. Around the beginning of the new millennium, a veritable "boom" in research on dialect syntax can be observed (cf., e.g., Fleischer, 2002;Schallert, 2014;Seiler, 2003; among many others).
Besides phenomenon-oriented studies, large-scale investigations of dialect syntax in the form of atlases have also emerged. Here, the Syntaktischer Atlas der Deutschen Schweiz (SADS) ('Syntactic Atlas of Swiss German Dialects') serves as a model for atlases of German dialect syntax. 4 Additionally, syntactic data collected in the context of the Bayerische Sprachatlanten ('Bavarian Language Atlases') should be mentioned (cf., e.g., Eroms & Spannbauer-Pollmann, 2006 for Low Bavarian). Over the past two decades, additional projects on dialect syntax in the German-speaking area have been developed: the project Syntax Hessischer Dialekte (SyHD) ('Syntax of Hessian Dialects'), 5 and the project Syntax des Alemannischen (SynAlm) ('Syntax of Alemannic'). 6 Furthermore, the Siegerländer Sprachatlas (SiSal) ('Language Atlas of the Siegerland') conducted widespread dialect-syntactic surveys by using the SyHD questions. 7 Regarding 'higher' varietal sections in the vertical dialect-standard spectrum of variation, two important and innovative projects should be mentioned: firstly, the Atlas zur deutschen Alltagssprache (AdA) ('Atlas of Colloquial German'), 8 which also surveys at least some syntactic phenomena of 'intermediate varieties' (regiolects; cf. Lenz, 2010) of the dialect-standard-axis; secondly, the project Variantengrammatik des Deutschen ('Regional Variation in the Grammar of Standard German'), 9 which focuses on national and regional differences in the grammar of the (written) German standard language in German-speaking countries and areas. While these two projects concentrate on 'intermediate' and 'higher/highest' varieties, respectively, they neglect the 'lower/lowest pole' of vertical variation, i.e., the dialects.
To date, the research methods of large-scale areal-linguistic projects in Europe with a focus on nonstandard syntax have contained many similarities. The methodological standards often include questionnaire-based surveys, deployed either in written form (by post or online 10 ) without the presence of a field worker, or in the context of oral interviews. In these written or oral questionnaires, different types of questions or tasks are employed. Of these, translation and assessment tasks (acceptability tests) have the longest tradition and are most commonly used. 11 Other types of tasks include puzzle tasks, image description, description of image sequences, and completion tasks, as well as combinations of these (cf., e.g., Lenz, 2016 for SyHD). The SyHD project is the first project to employ LPEs in the context of local face-to-face surveys, in addition to written questionnaires (cf. Fleischer, Kasper & Lenz, 2012;Fleischer, Lenz & Weiß, 2015;Lenz, 2016, 2017). In the SFB DiÖ study, more traditional survey methods for collecting data on syntactic variation are supplemented by computer supported LPEs (with experiment software) and with a pseudo-randomized order of the tasks.
Language Production Experiments (LPEs)-in general and in particular for the SFB DiÖ
As previously mentioned, large-scale projects have used LPEs in a broad sense in the past (e.g., SyHD). However, there is little literature on the details of this method, e.g., the experimental software used in the projects, the underlying conceptions on which the tasks are based, and the exact kind of presentation (described, for example, in Breuer, 2017a;Kallenborn, 2016;Lenz, 2009). This article discusses the fundamental aspects of the experiments: While LPEs can differ both in appearance and in the details of the setting, their standardization is a shared characteristic. As described explicitly in, for example, Cornips & Poletto (2005: 949) and Kallenborn (2016: 64-65), the use of written questionnaires constitutes a standardized procedure, but it does not provide oral language production data and has less controllable contextual or situational parameters. Traditional oral questionnaires, however, are strongly manipulated by the field worker, have less standardized sequences, and work with a minimum of media (images, if at all). Therefore, we define "Language Production Experiments" in the context of the SFB DiÖ as a quasi-experimental test setting in which standardized multimodal stimuli are presented in standardized sequences to evoke spoken language data and to test the factors influencing specific linguistic phenomena (cf. Breuer & Bülow, 2019: 256). 12 This definition entails some restrictions: Because it is a standardized method, it should be replicable, and some (linguistic) influence factors should be effectively controlled in the test design. These restrictions also bring an advantage: Because several factors are controlled, researchers can use LPEs to elicit linguistic data that are comparable between different projects. Kallenborn (2016: 69) emphasizes the high inter- and intra-individual comparability. Therefore, an LPE can be used to compare data between different speakers, or different linguistic reactions of the same person to the same linguistic influence factor or similar stimuli. Thus, LPEs are a suitable method for variationist linguistic surveys, especially for projects that focus on more than one linguistic variety (e.g., standard versus dialect). As previously mentioned, an LPE can comprise any number of different experimental settings in terms of the media used (images, sounds, or videos); LPEs are characterized by the presentation method (digital or computer supported) and the low influence of the investigator, and therefore by their level of standardization.
The following sections focus on the LPEs used in the SFB DiÖ: These LPEs are employed within the framework of a multidimensional research project, which attempts to survey and analyze the complex spectrum of variation and varieties of German in Austria (cf. maps 1-4 below), the predominant part of which falls into the Bavarian dialect area, belonging to the "East Upper German" dialect area. Beyond Austria, the Bavarian dialect area also extends far into South-Eastern Germany and into Northern Italy (South Tyrol) (cf. Wiesinger, 1983). The far West of Austria (Vorarlberg and parts of Tyrol) is part of the Alemannic ("West Upper German") dialect area.
The LPEs are an integral part of the complex methodology of data elicitation used by one of the nine project parts of the SFB DiÖ, project part 03, which investigates language repertoires and variation spectra in rural regions in Austria (https://dioe.at/en/projects/task-cluster-b-variation/pp03/). In contrast to previous variationist linguistic research on German, which focused primarily on (segmental) phonetics/phonology, project part 03 additionally considers morphological and syntactic variation. Besides more open and natural conversational settings (namely interviews and "conversations among friends") in which the speakers are recorded, standardized elicitation methods (like LPEs) are also used (cf. Lenz, 2018b). To put the methodological approaches in a comprehensive context, a classification of the "task sets" (cf. below, Table 1) used in the LPE of the SFB DiÖ is suggested. This experimental setting consists of two runs (e.g., Breuer, 2017a;Kallenborn, 2016): one aiming to evoke variants in the (intended) standard (LPE-S) and another aiming to evoke variants in the (intended) dialect (LPE-D). Between these runs, there is a short break and another test setting that uses translation tasks to target phonological and grammatical (mainly morphological) phenomena, some of which overlap with the phenomena elicited through the LPEs (for a comparison of speaker behavior across settings, cf. Fingerhuth & Breuer, accepted). However, because they are designed to complement each other, these two runs can be considered as one single LPE: For each task targeting a syntactic variable in the standard, there is a corresponding task targeting the same variable in the dialect. These complementary tasks differ in the varieties represented within the audio stimuli. For example, the tasks targeting the standard use recordings of an Austrian newscaster, while those targeting dialect use recordings of a competent speaker of the investigated location's local dialect. Furthermore, the complementary tasks can differ in the image or video presented, 13 but only in ways that are unlikely to influence the syntactic variable under consideration, thereby reducing the repetition effect (e.g., Breuer, 2017a: 99;Breuer & Bülow, 2019: 261). The tasks of both runs (LPE-S and LPE-D) occur in a pseudo-random order.
Within this project, the LPE is embedded in a whole survey setting, which can be described as a face-to-face survey. A field worker is present throughout the entire LPE to control the progress of the tasks. If necessary, the field worker can repeat a task (using the computer, see below) or ask further questions, thereby influencing the answer and behavior of the speaker. It is important to note that any additional inquiries or "influence" from the interviewer are marked as such during data processing. This allows researchers to distinguish spontaneous answers from "influenced" second or third answers in their analyses. Vice versa, the speakers can consult the field worker if they encounter any problems responding to a task. The LPE in the SFB DiÖ is "computer supported" (Breuer & Bülow, 2019: 257) and uses OpenSesame as experimental software. 14 An important feature of the software is the possibility of pseudo-randomizing the tasks in each run of the LPE (e.g., Breuer, 2017a: 97). This randomization helps to avoid serialization effects across the data of different speakers. The order is pseudo-randomized to prevent a task targeting a particular syntactic variable from being immediately followed by another task aimed at that same variable (an illustrative sketch of such constrained shuffling is given below, after the overview of the task sets). OpenSesame documents the chronological order of a given LPE run in a file, which can easily be imported into the analysis tool. This way, the speaker's response to a specific task can be located in the audio recording. However, using OpenSesame requires that all LPE tasks are presented on a notebook computer. All tasks of the SFB DiÖ LPEs use audio stimuli, and most also use visual stimuli such as images or videos. In general, the tasks avoid written text in the visual stimuli. Avoiding written text is intended to enhance the "oral character" of the tasks, as the LPEs aim at eliciting oral language production. In total, each LPE run consists of 109 tasks that target 13 distinct syntactic phenomena. The tasks are organized into sets that target a particular syntactic phenomenon according to their structure. Each task within a set has the same formulation, the same kind of media used as stimuli, and the same sequence of stimuli to target the same syntactic phenomenon. A single task may aim to evoke different syntactic variants or to evaluate different syntactic control factors simultaneously. The entire SFB DiÖ LPE consists of 218 tasks grouped in 50 task sets. The general characteristics of the LPEs (cf. Table 1) apply to all tasks. However, the tasks differ in their details. This includes the media used, the type (completion or question), whether a (narrative) context is given, the level of "suggestion" (e.g., by specifying given words in the stimulus), and whether a task is more open or closed. 15 Table 1 shows a classification of the task sets that target eight exemplary syntactic phenomena in the SFB DiÖ LPEs. The table provides an overview of the tasks and their essential methodological characteristics. The order of appearance mirrors different syntactic fields: firstly, phenomena of complex noun or determiner phrases (NPs, or DPs, depending on the theoretical perspective), specifically the use of indefinite articles with mass nouns, determiner doubling, and the use of alternative constructions for the expression of adnominal possession.
The second group contains phenomena of subordinate clause introduction: variation in the C-domain in the form of doubly-filled COMP and inflected complementizers, and variation in the introduction of relative clauses. The last group concerns verbal syntax and addresses progressive constructions, final infinitival constructions, and German GET passives. To provide deeper insights into the various types of task sets differentiated in Table 1, we illustrate at least one concrete LPE for each syntactic field. A short description of the syntactic phenomenon is followed by a summary of the state of research, with a focus on variationist-linguistic aspects relevant for the SFB DiÖ and therefore for the analyses of the data. The third section portrays the design of the tasks created for the phenomena. In every instance, concrete language examples will be taken from the SFB DiÖ LPE, coming either from the experiment tasks targeting dialect (LPE-D) or from those targeting standard (LPE-S). In addition to the intended variety, the Austrian location of the survey, as well as the generational category of the speakers ("old" versus "young"), will be given.
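To illustrate the pseudo-randomization constraint described above, the following sketch (in Python, the language OpenSesame itself embeds; all identifiers are our own illustrative choices, not the project's actual code) shuffles a task list until no two adjacent tasks target the same syntactic phenomenon:

import random

def pseudo_randomize(tasks, max_tries=10000):
    # Rejection sampling: reshuffle until no two adjacent tasks target the
    # same phenomenon (unproblematic for ~109 tasks over 13 phenomena).
    order = list(tasks)
    for _ in range(max_tries):
        random.shuffle(order)
        if all(a["phenomenon"] != b["phenomenon"] for a, b in zip(order, order[1:])):
            return order
    raise RuntimeError("no admissible order found")

# Illustrative miniature task list: three phenomena, two tasks each
tasks = [{"id": i, "phenomenon": p} for i, p in enumerate(
    ["GET passive", "GET passive", "determiner doubling",
     "determiner doubling", "doubly-filled COMP", "doubly-filled COMP"])]
for task in pseudo_randomize(tasks):
    print(task["id"], task["phenomenon"])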
Example I: LPE on "determiner doubling" LPE-type: Closed completion tasks with image and video stimuli, with written text, without narrative context and with a high level of suggestion. Short description: Determiner doubling refers to the phenomenon of a noun phrase (NP) with an adjectival attribute that is preceded by an intensifying element (particle) and occurs with "two articles." (1.1) provides an example of the phenomenon. This is different from the patterns in (1.2) and (1.3), which feature only one article, placed either in a preceding or intermediate position (cf. Kallulli & Rothmayr, 2008;Lenz, Ahlers & Werner, 2015;Strobel & Weiß, 2017). State of Research: Determiner doubling has seen considerable research over the past decades. Occurrence of the phenomenon is documented primarily in Bavarian and Alemannic (both Upper German areas) and less frequently in Rhine Franconian (a West Central German area) and Westphalian (a West Northern German area) dialects of German (cf. Henn-Memmesheimer, 1986). Research suggests that the occurrence of determiner doubling is dependent on the accompanying intensifying particle. Regarding Bavarian, Merkle (1975: 89) describes doubling of indefinite determiners as being possible with so ('such'), ganz ('completely'), recht ('quite'), viel ('much'), bisschen ('some'), and wenig ('little'), as well as sehr ('very') in urban varieties, although Merkle states that sehr ('very') is not traditionally part of Bavarian varieties. East Upper German data elicited with written questionnaires requesting acceptability judgements of determiner doubling with ganz ('completely') indicate that it is a common phenomenon throughout the entire Bavarian dialect area that may be gaining in use, as intergenerational comparison suggests (cf. Lenz et al., 2015). Determiner doubling has been further explored in Swiss German (a West Upper German area): Steiner (2005) focused primarily on regional and sociodemographic factors regarding doubling in the context of ganz ('completely'). Her results indicate various degrees of acceptability across German-speaking Switzerland, with high acceptability appearing in northern Switzerland. Steiner's findings also indicate a particularly high degree of acceptability among younger speakers in comparison to older Swiss German speakers.
As described for Bavarian, there also appear to be marked differences in the acceptability of determiner doubling in combination with different particles in Switzerland (cf. Richner-Steiner, 2011). Notably, Kallulli & Rothmayr (2008: 105) state that definite determiner doubling in Swiss German is not limited to ganz ('completely'), as described for Bavarian, but also appears with viel ('much'), although the acceptability appears to be rather low. In addition to these empirical studies on the phenomenon in the German language area, there are various approaches to the theoretical interpretation of determiner doubling. Kallulli & Rothmayr (2008) classify the phenomenon as the parallel presence of a quantifier and a determiner rather than the actual doubling of a determiner. Alternative work has suggested there may be a semantic distinction between the different variants of determiner placement and doubling exemplified in (1), including the possibility that duplication indicates emphasis. However, the sparse empirical evidence on the different theoretical interpretations has so far remained inconclusive.
In the SFB DiÖ LPE: Based on existing research, the experiments aim to test the production of determiner doubling (and its constructional variants) in three sets consisting of two prompts each. The sets aim to elicit the intensifiers ganz ('completely'), sehr ('very'), and so ('such') in combination with attributive adjectives. In each set, one prompt targets the adjective böse ('bad'), the other lieb ('lovely'). The prompts for the LPE combine an audio recording with an image of a dog and two moving words (Fig. 1). In each prompt, a drawing of a dog that suggests either a bad or an adorable animal appears in the centre of the screen. The drawings were selected with the goal of potentially triggering an emphatic response. Beneath the drawing, a combination of an intensifying particle and an adjective corresponding to the dog's character appears.
To reduce the suggestion of a particular word order, the two words appear in a video clip with a constant circular rotation. Additionally, a recording is played that begins a sentence with Oh, das ist aber … ('Oh, this is really …'), prompting the participant to complete the sentence using the displayed combination of adjective and intensifying particle, as well as an indefinite determiner.
Example II: LPE on "variation in the C-domain" LPE Type: Closed completion tasks with image stimuli, with written text, with narrative context and a high level of suggestion. Short Description: Subordinate clauses in Upper German dialects (especially in Bavarian) may be introduced by patterns that differ from standard German. One of these patterns is commonly referred to as either "inflected complementizer" or "complementizer agreement" (subsequently referred to as CA; cf. Weiß, 2005), another as "doubly-filled COMP" (subsequently referred to as DFC; cf. Bayer & Brandner, 2008; Fingerhuth & Lenz, accepted; Weiß, 2017). While one would not usually consider subordinating conjunctions (such as ob 'if') and wh-words (such as wann 'when') to be inflecting, the occurrence of these morphemes in dependence on the verbal inflection of the sentence suggests otherwise and therefore inspires the term "complementizer agreement." (2.3) illustrates a case of DFC: The subordinate clause is first introduced by the complex wh-phrase wia viel Leit (standard German: wie viele Leute 'how many people'), and additionally by the subordinating conjunction dass 'that', which appears as a second element in the C-domain, motivating the label "doubly-filled COMP." CA is also possible in cases of DFC, as illustrated in (2.4).
(2) Examples of variation in the C-domain from the SFB DiÖ
State of Research: Both phenomena have been extensively discussed over the past decades. While research has focused less on areal distribution and more on the implications of the phenomena for theories of grammar, there is considerable documentation of their regional spread. Weiß (2005) describes CA as a phenomenon that appears in most Continental West Germanic dialects, although he emphasizes that there is considerable variation between dialects. While some dialects have a minimal system, in which only the 2SG shows CA, other dialects show agreement for all persons. Regarding Austria, the phenomenon appears predominantly in the Central Bavarian dialect area, and there in the 2SG and 2PL (cf. Fingerhuth & Lenz, accepted). In the 2PL, Central Bavarian varieties predominantly show the morpheme -s, but the morpheme -ts, common in Northern Bavarian, has also been documented (cf. Lenz et al., 2015: 13-15). In varieties that show the morpheme -ts, it is phonologically mostly identical to the corresponding verb ending. This is more generally the case with the 2SG, where the CA morpheme -st is phonologically identical to the 2SG verbal inflection (cf. (2.2)). The exact nature of CA is debated (cf. Bayer, 1984; Fuß, 2008; Gruber, 2008). Weiß (2005, 2017) traces the historical origin of CA to the reanalysis of subject clitics as inflectional morphemes. Regarding DFC, a wider distribution can be assumed, extending from mainland and insular Scandinavian (cf. Larsson, 2014) over West Flemish to Alemannic and Bavarian dialects (Penner, 1993; Schallert, Dröge & Pheiff, accepted). Different factors influencing the occurrence of DFC have been discussed, including a frequent co-occurrence with stressed interrogative pronouns, the complexity of the wh-words, and animacy (Bayer & Brandner, 2008; Fingerhuth & Lenz, accepted; Schallert et al., accepted).
In the SFB DiÖ LPE: Existing research suggests that the appearance of CA and DFC is connected to at least two linguistic factors: the grammatical features of the verbal inflection and the type and complexity of the element(s) in the C-domain. Our LPE therefore attempts to elicit subordinate clauses in four sets of three tasks each. Each set aims to elicit a different subordinating element: wann ('when') as a simplex wh-adverb; wie viele Leute ('how many people') as a complex NP with an interrogative wh-determiner; ob ('if') as a simplex subordinating conjunction; and bis wann ('until when') with a wh-element within a PP. Within each of these four sets, three prompts attempt to elicit sentences with a specific verbal inflection (2SG, 2PL, and 1PL). The experiment uses audio-visual prompts with an embedded written context (single words) that correspond to a series of events narrated by a recorded voice (cf. Fig. 2). The first two events provide a context of past and present events (e.g., Letztes Jahr habt ihr 3 Leuten auf dem Hof geholfen. Dieses Jahr habt ihr 5 Leuten auf dem Hof geholfen. 'Last year you helped 3 people on the farm. This year you helped 5 people on the farm.'). The third event is placed in the future by the text in the visual stimulus (e.g., nächstes Jahr 'next year') and marked with a question mark. The narrator stops after uttering Ich frage mich, … ('I wonder, …'), leaving the completion of the sentence to the participant, e.g., with wie vielen Leuten ihr nächstes Jahr auf dem Hof helft ('how many people you will help on the farm next year'). The prompts thus aim to elicit subordinate clauses of this kind for each combination of subordinating element and verbal inflection.

Figure 1. Example of the visual stimulus for the phenomenon group "determiner doubling." It is accompanied by an auditive stimulus Oh, das ist aber … ('Oh, this is really …'). The intensifying particle plus adjective (in this example: so 'such' and lieb 'lovely') appear in a circular motion.

Example III: LPE on "final infinitival constructions" LPE Type: Closed completion tasks with image and video stimuli, without written text, with no narrative context and a medium level of suggestion. Short description: In German, there are different infinitival constructions to express intention, purpose, and goal: While the construction consisting of um zu ('in order to') + infinitive (cf. (3.1)) is accorded the status of standard language (cf. Zifonun, Hoffmann & Strecker, 1997: 829), the variants with für zu(m) ('for to') or zum ('to=the.DAT') + infinitive (cf. (3.2)/(3.3)/(3.4)) are considered non-standard (cf. Demske, 2011: 38). Final clauses introduced by dass ('that') (cf. (3.6)) or damit ('so that') (cf. (3.5)) are alternatives to final infinitival constructions.

(3.6) Final clause introduced by dass 'that'
dass man den Nagel in das Holz schlagen kann (LPE-S, Oberwölz-old)
that one the nail into the wood hammer.INF can
'so that one can hammer a nail into the piece of wood'

State of Research: Until now, few publications have addressed infinitival constructions in German. Regarding the areal distribution of the variants, Donhauser (1989) and Weiß (1998) observe the occurrence of the infinitival construction with zum ('to-the.DAT') in Bavarian, Kallenborn (2016) in Western Central German varieties, and Seiler (2005) in German-speaking Switzerland. The question whether the construction with zum in Bavarian dialects is verbal or nominal is a major issue in the literature on infinitival constructions. 17 In Alemannic dialects, the infinitival construction with zum is interpreted as verbal, since any number of complements can be put between zum and the infinitive, and even a complex direct object is permissible in the construction (cf. Schallert, 2013; Seiler, 2005). Such constructions (cf. (3.4) for Alemannic Raggal) are uncommon in Bavarian, where the strategy for dealing with complements is either to incorporate generic objects only (cf. (3.2)) or to add the complements in the form of a prepositional attribute (cf. (3.3)). While Bayer (1993) and Zehetner (1985) interpret the Bavarian infinitive as nominal, Weiß (1998) concludes that there are two types of constructions with zum in Bavarian: phrases with a nominal infinitive and phrases with a verbal infinitive (cf. Weiß, 1998: 242ff.).
As the map for "um/für zu kaufen" ('in order to' / 'for to buy') in the 'Atlas of Colloquial German' (AdA) reveals, the für zu(m) ('for to') construction seems to be predominant in 'intermediate' colloquial varieties of Western Central and Western Upper German. In addition, a few occurrences can be found in the Bavarian area in the Southeast of Germany and in Austria (cf. <www.atlas-alltagssprache.de/runde-3/f04e/>). In the SFB DiÖ LPE: Within the framework of the SFB DiÖ experiments, a total of 14 tasks (7 in the dialect and 7 in the standard run of the LPE) are used to study the usage frequencies and syntactic-semantic selection parameters of the final infinitival constructions. As concrete linguistic control factors, the valency of the verbs is tested by presenting actions intended to evoke intransitive verbs (wandern 'to hike' and schlafen 'to sleep'), transitive verbs ((den Rasen) mähen 'to mow (the lawn)' and (Geldscheine in Stücke) schneiden 'to cut (banknotes into pieces)'), and ditransitive verbs ((einen Nagel in ein Stück Holz) schlagen 'to hammer (a nail into a piece of wood)' and (einen Knopf an eine Hose) nähen 'to sew (a button onto trousers)').
The design of Kallenborn's (2016) speech production experiment is used in extended form in the SFB DiÖ; it exposes the speakers to a complex stimulus consisting of a video clip, a picture, and an audio recording (cf. Fig. 3). To ensure that the function of the intended object (e.g., hiking boots or a lawnmower) is the focus of the speaker's description, a picture of the object is explicitly shown beside the video. The purpose of the video clip is to portray the specific action for which the object is required. The speakers must complete the sentence Das braucht man, … ('This is required, …').
Case study: LPE on German GET passives
The following section provides a detailed case study of one selected linguistic phenomenon to illustrate the productivity and validity of LPEs for the analysis of syntactic variation. At the same time, this section reflects on potential problems inherent to the method. For several reasons, described in more detail below, the analysis focuses on the German GET passive. Section 4.1 describes the phenomenon in general and reviews previous research. Section 4.2 provides a (predominantly methodological) discussion of the data gathered by the SFB DiÖ via LPEs.
General remarks on the German GET passives
The phenomenon chosen to illustrate the methodology and application of the LPEs has been referred to with different terms in previous research: 18
1. GET passive (or, in German literature, "kriegen/bekommen/erhalten passive"). This term refers to the pattern of auxiliaries used in German. This non-canonical type of passive uses an auxiliary verb from the semantic network of transfer verbs (cf. Lenz, 2013b), usually kriegen 'to get' or bekommen 'to get/receive', less frequently erhalten 'to obtain', that combines with a past participle (cf. (4.1) and (4.2)). Selection of the auxiliary depends on numerous semantic, syntactic, stylistic, and sociolinguistic factors in complex interaction (cf. Lenz, 2013a).
Except for a few areas, kriegen serves as the auxiliary in dialect varieties of German. Only in some (predominantly Western) Upper German dialects (e.g., Swiss German varieties) is bekommen the equivalent auxiliary. However, even where the dialectal GET passive appears with bekommen, it is restricted to specific syntactic-semantic contexts, indicating a presently low degree of grammaticalization of the bekommen passive. When considering areal varieties with greater coverage than the base dialects (i.e., regiolects, cf. Lenz, 2010), kriegen and bekommen co-occur more frequently as auxiliaries. bekommen generally appears more frequently in contexts closer to the standard language than kriegen, which bears stronger associations with non-standard varieties. The auxiliary erhalten, however, appears neither in dialects nor regiolects but only in the context of the standard language, particularly in writing with a rather high degree of formality. 19 In these contexts, erhalten functions as an auxiliary that is stylistically marked as a "higher" variant, while bekommen appears as a rather unmarked, neutral auxiliary (cf. Eroms, 2000: 396; Lenz, 2013a).
2. Dative passive: While the subject of the German werden passive (Vorgangspassiv 'event passive') is equivalent to the direct object (accusative) of the corresponding active sentence (cf. (4.3) versus (4.4)), the subject of a dative passive is (usually) equal to the indirect dative 'object' (either required by the predicate's valence structure or as a free attribute) of the corresponding active sentence (cf. (4.1)/(4.2) versus (4.4)). For the Bavarian dialect area, the term dative passive is to some extent misleading, because many non-standard varieties exhibit syncretism of dative and accusative in a single oblique case.
3. Recipient passive, addressee passive, beneficiary passive: These alternative terms for the construction capture the prototypical semantic roles of the subjects that occur most frequently (and historically first; cf. Lenz, 2012) in German GET-passive constructions.
(4) Examples of GET-passive and alternative constructions from the SFB DiÖ
Over the past two decades, few other syntactic constructions have drawn more attention from both areal variationist linguistics and non-variationist theoretical linguistics. This interest highlights the phenomenon's value for different linguistic approaches and allows the researcher to access a wide range of findings from previous analyses. According to previous areal-linguistic research on the dative passive, the construction's grammaticalization has progressed with considerable regional differences within the Germanic language area. Grammaticalization appears to have progressed the farthest in Luxembourg and the West Central and West Low German areas (Lenz, 2007, 2008, 2009, 2018a). This regional pattern can (to a varying degree) be found in all layers of vertical variation in the German language area, ranging from dialects to the written standard language (Lenz, 2013a). Regarding the Upper German areas (which include Austria), a degree of grammaticalization of the GET passive appears only with ditransitive verbs in combination with subject referents that have the semantic role of beneficiary or recipient. Mono- and intransitive lexical verbs that appear with dative passives in (especially the Western parts of) the Central and Low German areas (e.g., Er kriegt/bekommt/*erhält geholfen. 'He is getting helped.'), or constructions where the subject referent appears in the semantic role of "maleficiary" or "loser" of a transfer (e.g., Er kriegt/bekommt/*erhält etwas gestohlen. 'He is getting something stolen./Something is stolen from him.'), appear not to be possible in Upper German varieties at this point in time (Lenz, 2009, 2013b, 2018a). The following aspects make the GET passive a particularly suitable phenomenon for reflecting upon the potential as well as the limitations of LPEs as a method of data collection:
Substantial previous research: The GET passive has inspired considerable research from different linguistic subdisciplines. This alone speaks to the phenomenon's relevance to a broad linguistic audience. The different theoretical and methodological approaches taken in previous work provide a rich pool for the investigation of research methodology. From this perspective, the phenomenon itself functions as a means to an end.
Complex areal and social distribution: Areal-linguistic research indicates that grammaticalization of the GET passive has (presently) progressed to differing degrees in the German-speaking area. The Upper German area (which includes Austria) is an area in which the construction is in a comparatively early stage of grammaticalization. The regional differences appear (to differing extents) in all varieties of the 'vertical' dialect-standard axis, ranging from dialects over 'intermediate' varieties up to (written) standard varieties.
Linguistic complexity: The complex linguistic (syntactic-semantic) factors that influence the use of a GET passive show variation in (areal and social) space and thus make it possible to investigate its spreading grammaticalization synchronically.
High degree of dynamics: The linguistic and extralinguistic dynamics of the phenomenon are reflected in the changes that can be documented over short periods of time.
Lay-linguistic relevance: Passive constructions are not the most salient object to inspire lay-linguistic discourse. Nonetheless, the use of the passive auxiliaries and their lexical origin is a frequent topic of meta-communicative discourse. 20
SFB DiÖ LPEs on the analysis of the GET passive in Austria
The LPE on German GET passives in Austria consists of five tasks that are used in one setting aiming to elicit the standard language (the LPE-S) and one aiming to elicit the dialect (the LPE-D), thus exposing every participant to 10 prompts. Within the framework of the complex surveys of project part 03, at least 130 speakers in 13 locations in rural areas of Austria took part in the surveys. To ensure a uniform frame of reference, strict selection criteria were defined for the survey locations: locations were selected that a) are small villages in rural areas and b) have enough speakers who fulfil the requisite socio-demographic criteria. Comparable to the SyHD project (in Hesse) and the SynBai (Syntax bairischer Dialekte 'Syntax of Bavarian Dialects') pilot study, 21 the research locations have populations of between 500 and 2,000 inhabitants. These locations are distributed across Austria (considering both the various federal states and the dialect geography of Austria; see Maps 1-4 below). The selection of speakers is directly linked to the central aim of SFB DiÖ project part 03, namely gathering data on the variation and dynamics of the entire vertical spectrum of spoken German in each location, as well as of the individual spectra of linguistic possibility of its inhabitants. Consequently, the types of "autochthonous" speakers represented are as varied as possible, reflecting a broad range of variety competences and variation behavior (i.e., the most divergent "communities of practice," Eckert & Rickford, 2001), which, drawn together in an inter-individual summary, portray the local variety spectrum of each location as authentically as possible. Because language behavior and extralinguistic criteria in Austria are hypothesized to be interconnected in a complex and regionally specific manner, the selection of speakers is based on extralinguistic criteria. 22 In the project, the social variables of age, gender, level of education, and, closely related to the latter, type of occupation (manual or communication-oriented), as well as "regional mobility" (depending on the commute between home and place of work), are applied and examined as selection criteria. 23 One group of speakers (NORMs and NORFs) consists of individuals aged 60 and older who have spent their entire lives in the investigated location, most of whom spent their working lives in a manual profession in that location or its immediate proximity and are retired at the time of the interview. A second group of younger speakers, aged 18 to 35, displays more variation in terms of formal education, (regional) mobility, and type of profession. For reasons of research practicability, the number of speakers has been restricted to 10 persons per location (2 older NORMs and 8 younger speakers: 4 with a lower and 4 with a higher level of formal education). In an ideal scenario, each of the 130 or more participants provides at least one relevant response per prompt, yielding a minimum of 1,300 instances for a type-token analysis of the GET passive and its competing variants. The locations analyzed here cover, among others, the South Bavarian area (Weißbriach, Carinthia), the Alemannic-Bavarian transition area (Tarrenz, Tyrol), and the Highest Alemannic area (Raggal, Vorarlberg). According to previous research, two linguistic factors that significantly influence the realization of a GET passive are the valency of the main verb and the semantic role of the subject's referent. The LPE (similar to Lenz, 2008, 2009) employs a video clip in every task.
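The corpus minimum mentioned above follows directly from the sampling design; as a quick sanity check (plain arithmetic based only on the figures in the text, not project-internal data):

```python
locations = 13
speakers_per_location = 10   # 2 older NORMs + 8 younger speakers
prompts_per_speaker = 10     # 5 tasks, each run once in LPE-D and once in LPE-S

participants = locations * speakers_per_location      # 130 speakers
min_instances = participants * prompts_per_speaker    # 1,300 responses
print(participants, min_instances)                    # -> 130 1300
```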
Each clip in the task set shows a (male) person in a situation where a particular action is performed on him: a tooth is pulled, a pair of glasses is put on his nose, his hair is cut, water is poured into a glass standing on the table in front of him, and a banana is taken from his hand. The main person therefore appears in different roles: as recipient of a transfer (with a neutral facial expression) by which water or glasses (re)enter his possession; as beneficiary receiving a haircut; as beneficiary from whom a tooth is extracted, which, judging from his facial expression, has a remedial effect and is appreciated; and further, in the situation where he is bereft of the banana, as a "maleficiary" who responds to the action with explicit discontent. In each situation, the person doing something to the main person is outside the frame of the video, and only the agent's arms or hands are visible. The still images in Fig. 4 are taken from the video "putting on glasses" and exemplify the focus on the main person and the progression of the action. According to research, the GET passive in the Upper German area only appears in instances that reflect a lower degree of grammaticalization (i.e., with the subject referent in the role of recipient or beneficiary). Instances with transitive verbs where the subject referent appears to be losing something (the "maleficiary" of a privative act, i.e., an act of taking something away from someone) are rare. At this point, GET passives with intransitive dative verbs like helfen ('help') or mono-transitive verbs like schimpfen ('scold') do not appear to be part of Upper German varieties. For this reason, the videos only present actions that have a high potential of evoking transitive constructions: two giving actions (pouring water and putting on glasses), a cutting action, a pulling action, and a taking (away) action. Prior to the video, a recorded voice asks, "What's happening to the man?" (recorded either by an ORF newscaster to elicit standard language usage or by a speaker of the participant's local dialect to elicit the speaker's dialect), prompting the participant to respond. The expected answer is spontaneous and consists of a single, complete sentence. The precise wording of the question and the "hiding" of the agent aim to influence the theme-rheme structure of the speaker's answer in the direction of a response that has the male main person as the subject referent in topic position.
LPE Type: Open questions with video stimuli, without written text, with no narrative context and a low level of suggestion.
Results of the GET passive LPEs from the SFB DiÖ data
The following section presents the first results from the data elicited using the LPEs. As previously mentioned, the data represent responses given by 92 participants (53 female, 39 male) from eight locations in Austria. 25 of the respondents represent an older generation (Ø 71.7 years), 67 a younger generation (Ø 26.9 years). The participants' answers contain 1,205 responses relevant to the investigation of the GET passive, 596 of which occurred in the LPE-D and 609 in the LPE-S. These numbers speak to the productivity of the LPEs in terms of quantity. Relevance in this context is judged based on whether the participant's response to the stimulus question ('What's happening to the man?') captures the pictured frame and their description of the scene verbalizes the entities at the core of the frame. 24 E.g., in the case of "putting on glasses," answers were considered relevant if they contained the glasses as a reference object and at the same time described an action of PUTTING (ON). In contrast, answers like Der ist beim Optiker. ('He's at the optometrist.') or Er probiert eine Brille. ('He tries on glasses.') were considered irrelevant. In situations where participants uttered irrelevant responses, the interviewer asked them to try a further response and replayed the video and accompanying stimulus question. Of the 103 instances where participants gave irrelevant responses on their first attempt, the interviewers' intervention yielded relevant responses in 80 of the follow-up responses. The lowest number of relevant responses for a single respondent is nine, meaning that not a single participant gave only irrelevant answers. Even older participants, for whom it was assumed that the length, complexity, and unfamiliarity of the survey (of which the LPEs are only one part) could be problematic, produced an average of 13.8 relevant responses. The extent to which the participants gave relevant responses independently or only after the interviewer's intervention is recorded as a metadatum during data preparation. Fig. 5 shows the relative frequencies of the variants the LPE produced in the two runs, LPE-S and LPE-D. It considers the following three types of constructions:
• Active constructions: 41.30% of the active constructions in the LPE-S and 65.66% in the LPE-D responses contain an agentive subject. In these responses, the participants of the LPE predominantly realize the person at the center of the video as the object of the action, as in (5.1). If the main actor is realized as the subject, this is predominantly in the semantic role of recipient (as in (5.2); 46.38% in LPE-S and 26.26% in LPE-D). A further type of active construction involves lassen ('to let') and presents the subject referent in a passive role (6.52% in LPE-S, 4.04% in LPE-D; cf. (5.3)). Other, less frequent active constructions represent 5.80% (LPE-S) and 4.04% (LPE-D) of the instances.
• werden passive: As with the active constructions, the werden passive is a category that contains different subcategories. In the majority of instances, the main actor, realized as the indirect object of the werden passive construction, appears in the Vorfeld (topic position, (6.1); 90.51% in LPE-S, 95.81% in LPE-D). Less frequently, the main person appears in the Mittelfeld (middle field, (6.2); 9.49% in LPE-S, 4.19% in LPE-D).
• GET passives: The GET passives in Fig. 5 comprise all GET passives regardless of the specific auxiliary used in the construction. In all instances, the main character appears topicalized in the Vorfeld of the utterance (cf. (4.1) and (4.2)).
(5)/(6) Examples of active and werden-passive constructions from the SFB DiÖ LPE
Die Banane wird ihm weggenommen. (LPE-S, Neumarkt/Ybbs-old)
the banana becomes he.DAT away-taken
'The banana is stolen (from him).'
Fig. 5 indicates that the LPE produces primarily passive constructions in both LPE-D and LPE-S, as intended. Among these, the werden passive appears most frequently in both settings, i.e., the passive construction in which the main person is realized as an indirect object (cf. (6.1)). Active constructions are less frequent but appear particularly in responses to the videos showing "putting on glasses" and "pouring water." Most of these active constructions contain the pattern Er kriegt/bekommt Wasser (in sein Glas). ('He gets water (in his glass).') or Er kriegt/bekommt eine Brille (auf die Nase). ('He gets glasses (on his nose).'; cf. also (5.2)). Participants therefore use active constructions that offer advantages similar to those of passive constructions: these active constructions of the type "X gets Y," like passive constructions in general, allow for a reduction of arguments (specifically, the omission of the agent referent), and, like GET-passive constructions, allow the recipient to take the role of subject and appear in the Vorfeld as thematic focus (cf. Zifonun et al., 1997). With the other prompts, these types of active constructions are either barely used or difficult to construct (e.g., Er bekommt Hilfe/einen Haarschnitt/*einen Bananenklau. 'He gets help/a haircut/*a banana-theft.'). In addition to the active and werden-passive constructions, the experiment also produced GET-passive constructions. The frequencies in LPE-S and LPE-D are similar. This first glance at the data is notable: while the LPE shows a low frequency of GET passives, the data suggest that the construction is, at least to some degree, established both in registers of spoken German closer to the standard and in the dialects. Unlike previous research on the phenomenon, the LPE provides substantiated data on which constructions can be considered alternatives to the GET passive in situations that require the verbalization and perspectivization of the recipient of a ditransitive verb-action, and in what ratio these constructions are used. Fig. 6 offers a different and closer look at the data. It displays only the auxiliaries participants chose in their GET-passive realizations and further distinguishes the younger and older speaker generations. Contrasting the different experiment settings reveals striking situational differences, primarily concerning the choice of passive auxiliary. The frequency of the GET passive appears to differ only slightly between speaker generations and LPE settings. The kriegen passive is dominant in the LPE-D (49 out of 50, 98.0%), while there is a strong preference for the bekommen passive in the LPE-S (55 out of 57, 96.5%). Passives using erhalten are, at this point, not documented in the SFB DiÖ data. These findings are consistent with previous research on the vertical-social distribution of passive auxiliaries in the East Upper German language area and confirm the assumption that the methodological design of the LPE makes it possible to target aspects of distinct varieties. As Fig. 7 also indicates, younger speakers seem to produce higher frequencies of passive constructions (werden as well as GET passives) than the older generation.
To validate these observations, a statistical analysis of the data elicited through the LPEs is provided. While the experiments proved generally effective in eliciting descriptions of the actions in the video stimuli, there are instances where participants gave responses that strayed from the intended answers. In such instances, and to elicit information on alternative constructions, the interviewers frequently asked participants to provide further responses. This practice can influence the results of a quantitative analysis of the data. At the same time, excluding these additional answers and relying purely on the participants' first responses considerably reduces the available data: while the experiments elicited a total of 1,205 relevant responses across both settings, this number drops to 826 when only the first, spontaneous responses given without interference from the interviewer are considered. To address this, statistical analyses were performed on both datasets. The comparison of the results showed only slight differences between them. In the following section, the findings from the full dataset (1,205 responses) are discussed; where the smaller dataset did not support an interpretation of the same order of magnitude, this is explicitly noted. To assess whether an observed difference between settings or participant groups lies within the range of probable results if the variants were used to an equal extent, p-values based on Fisher's exact test are reported.
A comparison of all data from the experiments revealed a fundamental difference between the responses given in the LPE-D (dialect run) and in the LPE-S (standard run). While passive constructions (werden and GET passives taken together) are used more frequently than active constructions in both runs, there are differences between the two elicitation settings. Active constructions have a higher relative frequency (p < 0.001) in the LPE-D (198 out of 596 responses, 33.2%) than in the LPE-S (138 out of 609, 22.6%). A closer look at the older participants reveals that the difference between experiment settings is even more marked for this group. In the dialect run, active constructions are more frequent (89 out of 166, 53.6%) than passive constructions, and thus markedly more frequent (p = 0.002) than in the standard run, where only about a third of the constructions are active (65 out of 178, 36.5%). 26 While the younger participants use active constructions less frequently than passive constructions in both settings, they appear more frequently (p = 0.003) in the LPE-D (109 out of 430, 25.3%) than in the LPE-S (73 out of 430, 16.0%). 27 A direct comparison of the responses of older and younger participants in both runs confirms the difference between participant groups: younger participants used passive constructions more frequently than older participants both in the LPE-D (p < 0.001) and the LPE-S (p < 0.001). The parallels between the results from the LPE and previous research clearly support the validity of the experimental settings used in the SFB DiÖ. Among the passive constructions, the GET passives are generally less frequent than the werden passives. However, in contrast to the general frequencies of passive constructions, the use of GET passives does not appear to be dependent on age. A comparison of the responses from both speaker generations shows that while older speakers appear to use GET-passive constructions slightly less frequently (10 out of 113, 8.8%) than younger respondents (47 out of 348, 13.5%), this difference is within the margin of probable results (p = 0.2508). This similarity is even more pronounced in the intended dialect, where the relative frequency of GET passives is similar (p = 0.8502) between older (10 out of 77, 13.0%) and younger (40 out of 321, 12.5%) respondents. Across both participant groups, there is a strong preference for one or the other form of the GET passive (kriegen versus bekommen passive) that is dependent on the experiment setting (p < 0.001). Fig. 8 shows the results of the five distinct tasks in further detail within each setting and indicates the constructions used in the responses to every video. As mentioned above, the clips showing "putting on glasses" and "pouring water" evoke particularly high frequencies of active constructions, while the other three videos show these forms more rarely. At the same time, these two videos elicit the highest proportions of GET passives (15.22% to 20.18%) in both LPE runs. These videos depict actions in which the main character takes the role of recipient, which ideally fits the GET passive (cf. above). The third and fourth highest proportions of GET passives occur with the videos "pull tooth" (6.51% to 8.62%) and "cut hair" (1.71% to 3.3%). They suggest that the syntactic-semantic restrictions on the use of the GET passive are softening both in standard and non-standard varieties of Upper German.
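The first of these comparisons can be reproduced from the reported counts with any standard implementation of Fisher's exact test. The sketch below uses SciPy (our choice of tool, not necessarily the software used in the project) and cross-tabulates active versus non-active responses per setting:

```python
from scipy.stats import fisher_exact

# Active vs. other constructions, LPE-D vs. LPE-S (counts from the text)
table = [
    [198, 596 - 198],   # LPE-D: 198 active out of 596 responses
    [138, 609 - 138],   # LPE-S: 138 active out of 609 responses
]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")
# The p-value falls below 0.001, matching the reported result.
```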
In comparison to "cutting hair," the action "pulling tooth" is a transitive action in which a (direct) object is not only modified (like hair that is being cut) but in which an entity "leaves" the subject referent. Therefore, "pulling tooth" is semantically/conceptually closer to a prototypical act of transfer (a prototypical GET event) than a haircut. These semantic-conceptual differences are mirrored in the (at least slightly) divergent frequencies of GET passives evoked by these two videos. The video "steal banana" has at this point prompted only one isolated response that used a GET passive, as shown in Fig. 8. GET passives of ditransitive verbs with privative semantics thus remain rare in Upper German varieties, both in registers close to the standard and in the dialects.
Application of a generalized linear mixed-effects model (Bates, Maechler, Bolker & Walker, 2015) to the data, modeling the occurrence of GET-passive constructions in dependence on the different stimuli, age groups, and experiment settings as fixed effects, and on the participants as random effects, supports the observations on the potential of the different stimuli to elicit GET passives and their competing variants. Further, the statistical analyses suggest that there are differences in the degree to which the different stimuli elicit GET-passive constructions. In comparison to the stimulus "putting on glasses," the category mapped onto the intercept, the stimulus "pouring water" did not show any significant difference (β = 0.214, SE = 0.351, z = 0.611, p = 0.541). However, the three other stimuli elicited GET passives to a significantly lower degree ("pulling tooth": β = −1.863, SE = 0.391, z = −4.770, p < 0.001; "cutting hair": β = −3.288, SE = 0.554, z = −5.939, p < 0.001; "stealing banana": β = −5.311, SE = 1.141, z = −4.656, p < 0.001), supporting the hypothesis that the permissibility of ditransitive verbs with privative semantics lags behind that of verbs with the subject referent appearing as the recipient or beneficiary of the action. These differences appear independent of the participants' age group and of the experimental setting. Overall, the results point to the productivity and validity of the video stimuli used in the LPEs. Results from further descriptions requiring different verbs with different valency patterns would be desirable. However, considering the overall setup of the LPEs, in which the GET passive is only one of several phenomena addressed, and the greater context of the survey, which includes an interview, a conversation among friends, and translation as well as reading tasks, it was not possible to include further tasks aimed at the GET passive.
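The model reported above was fitted with lme4 in R; since a frequentist binomial GLMM is not part of the standard Python stack, the sketch below shows a rough Bayesian analogue using statsmodels. It only illustrates the model structure (GET passive as binary outcome; stimulus, age group, and setting as fixed effects; participant as a random intercept). The file and column names are hypothetical, and the fit is an approximation, not the paper's exact estimation:

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per relevant response, with columns
# get_passive (0/1), stimulus, age_group, setting, participant
df = pd.read_csv("lpe_responses.csv")

model = BinomialBayesMixedGLM.from_formula(
    # "putting on glasses" as the reference level of the stimulus factor
    "get_passive ~ C(stimulus, Treatment('glasses')) + age_group + setting",
    vc_formulas={"participant": "0 + C(participant)"},  # random intercepts
    data=df,
)
result = model.fit_vb()   # variational Bayes approximation
print(result.summary())
```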
In the next step, the regional distribution of the syntactic variants is used to illustrate to what extent the number and choice of locations indicate regional differences in the use of the GET passive across Austria. For this, the responses to the videos "putting on glasses" and "pulling tooth" are used. As shown in Maps 1-4, the LPEs evoked at least some instances of the GET passive, both in LPE-D and LPE-S, in each of the investigated locations. These occurrences attest to the arrival of the GET passive in all dialect areas of Austria, in standard and dialectal varieties. Only weak interregional differences appear. The comparison between the locations in the Bavarian dialect area on the one hand, and Highest-Alemannic Raggal and Tarrenz in the Bavarian-Alemannic transition area on the other, is particularly interesting. In agreement with earlier research (Lenz, 2018a), the GET passive appears generally more frequent in the Bavarian area of Austria. This further supports the hypothesis that the GET passive is most strongly restricted in Alemannic varieties: while some of the Alemannic-speaking respondents realize GET passives to describe a transfer in the direction of the subject referent ("putting on glasses"), there are no such responses to "pulling tooth." Lastly, the data from Raggal indicate that the GET passive is not rooted in the Alemannic dialect but enters the dialect "from above" by way of the standard variety (Lenz, 2013b). In all other locations, the GET passives appear both in LPE-S and LPE-D. Application of a further generalized linear mixed-effects model, modeling the occurrence of GET-passive constructions in the participants' responses in dependence on the respondents' age group and region (divided into three areas: Alemannic, Bavarian, and Alemannic-Bavarian transition area) as fixed effects and on the speaker ID and the stimulus as random effects, did not indicate any significant regional differences in the use of the GET passive. This changed when the model considered only the participants' responses in the intended dialect. In this subset, the data from the Bavarian region showed a significantly higher occurrence of GET passives (β = 3.373, SE = 1.667, z = 2.023, p = 0.043) in comparison to the data from the Alemannic area. The data from the Bavarian-Alemannic transition area, in contrast, appeared similar to the Alemannic data (β = 1.818, SE = 2.048, z = 0.888, p = 0.375). 28 Age did not appear as a significant factor (β = −0.178, SE = 1.059, z = −0.168, p = 0.866).
As a final aspect, the intra-individual dimension of variation is considered. For this purpose, the focus is on one selected survey location, Weißbriach in the South Bavarian area, where eleven speakers took part in the LPE: two older informants and nine younger ones. Their frequencies of constructions are visualized in the situational comparison in Fig. 9, in which all five video stimuli are taken together. The diagram reveals divergent variation patterns across speakers and therefore different types of speakers:
1. There are speakers (0057, 0052, 0301) who in both LPE runs (LPE-S and LPE-D) produce only werden passives besides less frequent active constructions, but no GET passives.
2. There are speakers (0067, 0071) who produce (some) GET passives only in the dialect run (LPE-D), all of which are kriegen ('to get') passives.
3. There are speakers (0056, 0302, 0307) who produce (some) GET passives only in the standard language context (LPE-S), all of which are bekommen ('to get/receive') passives.
4. Finally, there are speakers (0300, 0304, 0308) who produce GET passives in both LPE runs (LPE-S and LPE-D), varying primarily with werden ('to become') passives and less frequently with active constructions. The GET passives of these speakers contain the auxiliary kriegen ('to get') only in the LPE-D, while they produce only bekommen ('to get/receive') passives in the LPE-S.
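This four-way typology can be read off mechanically from per-speaker counts of GET passives in the two runs. The following toy function (our own illustration with hypothetical counts, not project code) makes the classification explicit:

```python
def speaker_type(get_passives_dialect: int, get_passives_standard: int) -> int:
    """Assign one of the four variation types observed in Weißbriach,
    based on whether a speaker produced GET passives in the dialect run
    (LPE-D), the standard run (LPE-S), both, or neither."""
    if get_passives_dialect and get_passives_standard:
        return 4    # GET passives in both runs (kriegen in D, bekommen in S)
    if get_passives_dialect:
        return 2    # kriegen passives in LPE-D only
    if get_passives_standard:
        return 3    # bekommen passives in LPE-S only
    return 1        # only werden passives and active constructions

# e.g. a speaker with 2 GET passives in LPE-D and none in LPE-S
print(speaker_type(2, 0))   # -> 2
```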
To supplement Fig. 9, Tables 2 and 3 show the distribution of the various construction types with respect to the individual speakers and the individual video stimuli. Together the results illustrate that the situational distribution of the GET passives, which can be observed across speakers and regions, is also evident intra-individually. Across all speakers' variation repertoires, the kriegen passive appears to belong to their dialect repertoire, while the bekommen passive is part of their standard repertoire. Furthermore, implicational relations that can be drawn synchronically from the intra-individual variation patterns in Weißbriach can also be interpreted diachronically, providing evidence for the chronological steps of the grammaticalization pathway of the GET passive: regarding the individual video stimuli, Tables 2 and 3 reveal that particularly the actions "pouring water" and "putting on glasses" evoke GET passives. Only among those speakers who produce GET passives with these video stimuli do GET passives also occur with the action "pulling tooth" (but not vice versa). Weißbriach belongs to those locations where the experiments do not evoke GET passives with the other video stimuli ("cutting hair" and "stealing banana"). In summary, the Weißbriach example also shows the fruitfulness of an experimental approach with respect to the intra-individual dimension of variation.
Summary
The intention of this article is to illustrate the potential, as well as the limitations, of computer-supported "language production experiments" (LPEs) for the elicitation of data on syntactic variation within a large-scale project on areal linguistics. The context for the methodological reflection was the Special Research Programme (SFB) "German in Austria. Variation-Contact-Perception," in particular project part 03, "Speech Repertoires and Varietal Spectra," which investigates the vertical-social variation of autochthonous speakers of German in Austria in rural contexts. To investigate the speakers' individual linguistic repertoires, the project uses various survey settings that target different registers of the speakers' individual varietal spectra and that can simultaneously be compared between speakers. The LPEs discussed in this contribution are among the (more) controlled survey settings used in the context of the SFB DiÖ to investigate syntactic variation specifically. The participants perform the LPEs in two runs to elicit syntactic data in both "intended standard" and "intended dialect." The LPEs are used to elicit different syntactic phenomena. We presented the setup of experiments for selected syntactic phenomenon complexes, which represent different subtypes within the classification of LPEs proposed here. To allow for a deeper reflection on the methodology, a case study of the data the LPE provided for German GET passives was presented. The analysis used data from 92 participants from eight locations that represent different linguistic areas of Austria. In every location, two groups of speakers are documented: an older generation of NORMs, contrasted with a younger generation of speakers with divergent sociodemographic profiles. To summarize, the LPEs' findings support the following hypotheses: Regarding quantity, the computer-supported LPEs are a highly productive method for the elicitation of syntactic data on oral language use. Data from (more) open conversations are problematic, as syntactic phenomena (like the GET passive) occur in frequencies so low that they do not allow for the analysis of the factors influencing or restricting the occurrence of the phenomenon. The LPEs provide a way to elicit a high number of instances of a particular syntactic phenomenon and also enable researchers to control the factors that influence the phenomenon. Because the setup of the experiment further enables researchers to keep semantic-pragmatic factors constant, the LPEs allow researchers to identify quantitative preferences between competing variants of a syntactic variable. In this way, the LPE also helps to solve practical problems in the analysis of syntactic variables. Conducting LPEs in two distinct runs enables researchers to elicit intra-individually distinct varieties of a speaker's speech repertoire, while at the same time opening them up for comparison. Although the LPEs are a complex and controlled method of data collection, they provide results that are comparable to those described in the literature relying on (more) free data from open conversations. Even speakers of the older generation investigated by the SFB DiÖ provide sufficient relevant data points for an intergenerational comparison. Relying on the apparent-time hypothesis then allows for statements on language-dynamic tendencies.
Finally, the parallels concerning the intra- and interregional distribution of the GET passive between the findings from the LPEs in the SFB DiÖ and previous research support the hypothesis that the LPEs in the project can indicate interregional differences, despite the limited number of research locations. Thus, the LPE is overall a highly productive research method that provides solutions to the quantitative and qualitative problems of many other methodological approaches to syntactic variation. Particularly for the investigation of syntactic variation in the SFB DiÖ, the LPEs substantially complement the results from (more) open conversations (cf. Section 3).
"year": 2019,
"sha1": "0c1625c168a22e4b3262c585b667dae179108b94",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/8D2818ADB6ECAAB2BACECD4A05398A72/S2049754719000076a.pdf/div-class-title-exploring-syntactic-variation-by-means-of-language-production-experiments-methods-from-and-analyses-on-german-in-austria-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "c74c030f03daa9e98e5f73b10312595550a3ec45",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
The Forgotten Third: A Comparison of China, US, and the European Union's AI Development
In the 21st century, great power competition dominates the field of international relations. Much has been written about the US-China rivalry for technological dominance, specifically in artificial intelligence. But these analyses are missing one essential player: Europe. I ask whether China will use its advancements in artificial intelligence to overtake the United States as a superpower, disrupting US hegemony, just as the United States once did in the post-Cold War era with the USSR. Europe is developing its own strategy and capabilities to rival those of the US and China. I use a cross-country qualitative case study method to examine advancements in artificial intelligence across the US, China, and Europe, specifically France and Germany. To determine each state's leadership and capabilities, I compare them across their AI dreams, hardware, research, and ecosystems. In this comparison I find that whilst China's numbers outcompete those of the US and Europe in total output, there are multiple criteria, notably in top-tier development, where there is still a significant gap China needs to close with its rivals. This provides an opportunity for Europe, specifically France and Germany, to develop and lead in certain criteria of core AI development. This paper contributes to existing scholarship on artificial intelligence and US-China relations by adding the European dimension.
Introduction
Artificial intelligence, together with the rapid increase in Chinese influence and the unexpected rise of the EU, will threaten the USA's world order, thus weakening its standing as a superpower.
Since the Cold War ended with the defeat of the USSR in 1991, US influence has dominated international systems. This has often been referred to as a "unipolar" moment in history (Krauthammer, 1990).
Post-World War II (WWII) reconstruction and the Cold War played a crucial role in defining the USA's position on the international stage. The Cold War was a battle between two key superpowers, the USA and the Soviet Union. Following the devastation of WWII and the end of the Cold War, the United States played a key role in stabilizing Europe, notably in the economic, political, and military spheres (van Heuven, 1993). Whilst the United States experienced exponential economic growth, it also created frameworks for international leadership, like the Marshall Plan and the North Atlantic Treaty Organization (NATO), all of which played an important role in the development of Europe.
At the same time, America was leading the world in the third industrial revolution, which included mainframe computing, personal computing, and the internet, the primary drivers of its rapid economic growth. The post-Cold War era and the fall of the USSR left the United States with an unexpected, inherited unipolarity that further expanded due to its significant advancements in the fields of technology (Wohlforth, 1999). In this unipolar moment, the US led a world order based on principles of economic and political liberalism, a commitment to global open markets, and the promotion of 'free market democracies' (Apeldoorn & Graaff, 2015). "With Moscow's headlong fall from superpower status, the bipolar structure that had shaped the security policies of the major powers for nearly half a century vanished, and the United States emerged as the sole surviving superpower" (Wohlforth, 1999). This inherited order, reinforced by American hegemony, was left temporarily unchallenged. Yet this order is not unchallenged. The recent Chinese 'takeoff' (Huang, 2012) since the late 1970s, with remarkable development in leading industries shaping a path to exponential economic growth, has increasingly gained relevance on the international stage, threatening the USA's established liberal world order and unique position as leader in international politics. This has raised one pertinent question amongst academics and researchers: given China's reemergence as a superpower and its significant advancements in AI and related technologies, will China use the inevitable fourth industrial revolution to overtake the United States as a superpower, disrupting the rules and hegemony the US has formed, just as the US once did in the post-Cold War era? What does this mean for the distribution of global power?
The current debates on the US and China asserting world dominance are missing one essential element: Europe. In this article, I formulate an argument on why it is essential to include Europe in this conversation. Whilst it is true that the US will long remain a superpower, as its "market power is still unparalleled and underpinned by the dollar's status as the unrivalled global reserve currency" (Routledge, 2015), China does represent an emerging threat in the distribution of power, with the ability to challenge the US status as the leading country in the next era of artificial intelligence and related technologies.
Artificial intelligence (hereafter AI) is the driving force of a new round of industrial revolution that will cause significant changes in social efficiency, economic growth, and national security. The AI race is a multisectoral race affecting every industry. Throughout this article I present an analysis of the AI advancements of the US, China, and Europe, specifically France and Germany, and the pivotal role these play in shaping the future of international politics. Moreover, by dissecting the EU-US-China tripartite relationship, I formulate arguments on why it is essential for the European Commission to articulate a clear, unified AI vision centered around developing a strong ecosystem, ensuring AI doesn't become a source of indifference within Europe.
Review of Literature
What is Artificial Intelligence?
Currently, AI is often misperceived as an extremely futuristic, unattainable technology. Due to the vague, 'visionary' description of this concept, there is room for various interpretations and analogies, from the most far-fetched fiction books to academics drawing parallels to their specific disciplines. As these approaches developed alongside AI advancements, the general perception of the technology came to depict a flawed image of what AI is and can be. What few understand is that AI is not new: it is mostly based on algorithms developed from pre-existing knowledge from the '60s and '70s.
In order to understand how AI affects the balance of power and Europe's role, a definition is required; however, defining AI is no easy task. The term artificial intelligence was first coined in 1956 by McCarthy, often referred to as one of the 'founding fathers' of this technology. He defined it as the science and engineering of making intelligent machines, especially intelligent computer programs (McCarthy, 2007). However, as AI evolved, creeping into every sector, the approaches to defining this concept diverged. Fundamentally, it refers to a program whose ambitious objective is to understand and reproduce human cognition, creating cognitive processes comparable to those found in human beings (Meurier, 2018). The main reason why this concept has become so relevant in the 21st century is two vital innovations that are predominantly responsible for AI entering a new era: machine learning 1 and deep learning 2 (both of which are subsets of AI). Whilst this paper focuses on the political implications of this technology, it is worth noting that AI advancements have impacted almost every industry, from health and transport to the development of robots and blockchain, due to drastic increases in computing power and the rapid digitalization of our world. AI has the potential to disrupt every sector of our economy as well as completely change our way of life. Given the extent of the disruption that analysts believe AI could cause in the global economy, it is worth considering AI in the context of other major technological changes like the industrial revolutions (Horowitz, 2018).
1 Machine learning uses algorithms to interpret vast amounts of data, learn from that data, and make informed decisions based on the information gathered.
2 Deep learning is technically a subset of AI that layers structures of algorithms to create an "artificial neural network" that can learn and make decisions independently.
As our society has progressed and evolved, there have been three fundamental occasions, from the 1760s to the 1950s, that catalyzed a shift into a new era and completely transformed modern society: the industrial revolutions. Past industrial revolutions have generated significant changes in the balance of power, international competition, and international conflict (Horowitz et al., 2018). Similarly, future revolutions will have the power to disrupt every sector and generate, once again, significant changes in international relations.
The first industrial revolution originated in Britain in the 18th century, after the invention of the steam engine and the mass production of goods generated huge increases in productivity. The changes it created were not only in technology but also in lifestyle: traditional family life revolving around rural living and farming gave way to factory life in bustling cities with poor working conditions and low wages. Moreover, Great Britain, as the epicenter of this revolution, witnessed a great shift in the balance of power, becoming the leading power in Europe and on the international stage. This competitive edge fueled the growth of the British Empire and created a gap with the rest of the world that took decades to close.
The second industrial revolution, often referred to as the technological revolution, occurred from the late 19th century to the early 20th century and renewed international competition. It brought new energy sources combined with the development of telephones, computing, automobiles, and machinery, leading to a rise in automation. Whilst no single country was at the origin of this revolution, the leading nations were Britain, France, and Germany, competing in Europe alongside the rising United States and Japan. Nations competed on the global scene in the production of steel, electricity, and petroleum. The rising tensions between nations and the race to lead this multipolar environment fueled the escalation into trench warfare, WWI, which completely reshaped warfare through the creation of tanks, trucks, radios, and other technologies that were perfected during WWII.
Finally, from 1950 onwards came a third industrial revolution, also known as the digital transformation, which completely changed the playing field. The combination of transistors, microprocessors, global production chains, and electronics produced a wave of innovation that, once again, increased international competition. This wave created the internet, GPS, 3D printing, and various other technologies that act as a backbone to our modern society. Unlike the previous industrial revolutions, the digital shift was led by American power, with companies such as Google and Apple monopolizing the internet from software to hardware. Rather than merely intensifying international competition, this revolution allowed the United States to further widen the gap and project power, establishing its position as world leader (Hoppe & Bhagat, 2007).
As the famous speculative-fiction writer William Gibson said: "the future is already here - it's just not very evenly distributed". The overall impact of these industrial revolutions on the distribution of power depended not just on the advancement of the technologies as such, but on how companies and governments were able to adapt and use them to become more efficient and consistent. Technologies that forced companies into an 'adapt or die' position tended to be the most disruptive. Thus, it is essential for countries to formulate a coordinated response to AI and ensure strong collaboration between the private and public sectors, with the aim of encouraging industries to transition to and develop AI technologies.
The increasing role emerging technologies play in our daily life will give rise to a new movement that will once again lead to "a series of significant shifts in the way that economic, political and social values are being created, exchanged and distributed" (Philbeck & Davis, 2019), testing the ability of governments and firms to adapt and evolve. The digital revolution of the 1980s laid the groundwork for this 'fourth industrial revolution', powered by robotics, the Internet of Things (IoT), 5G infrastructure, nanotechnology, biotechnology, quantum computing, and AI amongst others, all amplifying each other in a fusion of technologies across the physical, digital, and biological realms. AI is one of the few technologies that can revolutionize every industry and society; however, it relies on vast amounts of data and research to develop. Hence, understanding the relationship between the three major players in global politics, the US, China, and the EU, is essential to formulating an argument about who is going to lead the fourth industrial revolution. The concept was first referred to as the 'Industry 4.0' initiative that emerged in 2011, when three representatives from the spheres of science, politics, and business were assigned to develop a high-tech strategy for the German government (Kagermann et al., 2013). In it, they described "how the paradigm shift will take place in industry" and how "in the next decade, new business models based on cyber-physical systems will become possible". This government initiative, also referred to as I4.0, was originally designed to promote the computerization of manufacturing through a set of implementation recommendations to the German federal government (Federal Ministry of Education and Research, n.d.). These recommendations rested on improving automation technology by introducing digital business models, standardization, new technology and research, legal frameworks, and security of networked systems (Messe, 2019). The term was further developed by academics like Klaus Schwab as 'the Fourth Industrial Revolution', describing the hyperconnectivity between the physical, digital, and biological spheres as "nothing less than a transformation of humankind" (Schwab, 2017). The development of hyper-automation and the fusion of technologies powered by AI will once again catalyze a shift into a new era, completely transforming the way society behaves and generating significant changes in the balance of power and in international relations and cooperation. In contrast to past industrial revolutions, this fusion of technologies will impact every sector, automating and augmenting a broad range of tasks and bringing important "technical and ethical challenges to sectors, stakeholder groups, and social norms" (Philbeck & Davis, 2019).
Thus, the importance of the fourth industrial revolution should be analyzed not in terms of the technological advancements of individual countries but in terms of their flexibility to change and adapt to these advancements, and the impact this will have on the global order. Whilst it is true we are entering an era of exponential economic growth, we are also entering an era of great power competition, characterized by "struggle, change, competition, the use of force, and the organization of national resources to enhance state power", as international relations scholar Paul Kennedy (1988) notes; industrial productivity, science, and technology are critical in this struggle. AI's growing presence plays a vital role in shaping state power, especially as the leading country in this 4IR will also "emerge as the driving force in defining ethical norms and standards [for AI]". Thus, given the nature of AI and the vast differences in values between the US, China, and Europe, it is essential for the EU to formulate a clear strategy to lead the 4IR, reaffirming its seat as the leading industrial region.
Methodology
To compare the AI developments of the US, China, and the EU, focusing specifically on France and Germany, I will be using both a qualitative and a quantitative methodology.
Due to AI's 'omni-use' potential, the thin line between core AI technologies and AI-related technologies is often blurred, so how each country interprets the difference between the two is essential to formulating an accurate comparison. The clearest way to assess a country's AI vision and advancements is to first understand its 'AI dreams'. I will commence by analyzing each nation's goals and strategies, comparing, through governmental documents, each region's objectives for the coming years. Then, to get a sense of their current AI strengths and capabilities, I will use some of the core categories articulated by Jeffrey Ding (2018), notably AI hardware, AI research and development (AI R&D), and strength of AI ecosystem. I have chosen these three categories from Ding's research as they are crucial to formulating, as later explained, a macro-perspective on the development of AI technology as a whole.
Advancement in AI hardware is imperative to becoming the leader in this sector: it means not only complete autonomy in the creation of AI chips, a necessity for supporting AI algorithms, but also faster and more efficient cutting-edge systems. Thus, by analyzing the global market share of semiconductor production, the highest-performing supercomputers, the number of firms developing AI-specific chips, and overall investment in this market, I will be able to construct an overall image of each nation's raw processing power. AI R&D then provides the necessary framework for determining a nation's potential in terms of innovation and capabilities. By further dissecting this section, I will be able to compare each state's potential, and thus competitiveness, on the international stage. The dimensions I will analyze include: general paper output and influence in AI-related research, the number of AI patents, and the amount of AI talent, for both firms and countries.
Finally, for a nation to fully benefit from its vision, hardware, and R&D, it must first and foremost develop a strong AI ecosystem. By understanding the differences between each region's AI ecosystem, focusing primarily on the funding and number of AI startups, a comparison can be made between China, the US, and the EU in terms of their talent and foundations for AI.
Through a cross-country qualitative case study method, I will use these four categories to examine advancements in AI across the US, China, and Europe, specifically France and Germany. This report draws upon specialized databases to understand the current state of the AI industry, notably in AI output (journals, patents, and talent), AI specialists, and policy documents. The data collected draws upon various reliable sources, both previous academic research and databases.
Understanding China's AI Dream
In 2017, China's president Xi Jinping outlined his vision for China to become a global science and innovation leader by 2050. The government has been paving the way for years, and the country's knowledge-intensive, high-tech sectors have been developing at a rapid pace.
At the 19th Party Congress in October 2017, Xi Jinping, General Secretary of the Chinese Communist Party, outlined his vision for China's advancement and his aspiration for China to be "a leading nation in terms of national power and global impact by 2050" (Roberts et al., 2021), notably regarding "propelling China into a leading position in terms of economic and technological strength" (Jinping, 2017). Development in AI has become fundamental to realizing this goal. A clear turning point in China's views on AI, leading to major initiatives to develop this technology, came in March 2016 with the 4-1 victory of Google DeepMind's AlphaGo (a computer program) over Lee Sedol, winner of 19 world titles (Li & Ruiyang, 2016). The victory of the computer algorithm played a vital role in shaping China's views on AI and acted as a sort of 'Sputnik moment': "the event paved the way for a new flow of funds into the discipline", commented two professors who consulted with the government on their AI strategy (Mozur, 2017). In July 2017, the State Council of China released the 'New Generation Artificial Intelligence Development Plan', which officially outlined three strategic objectives, making China's growth in the AI sector a "national priority". These objectives are: (1) by 2020, China will have actualized important progress in AI and AI-related technology, with aspirations to make China the world's primary AI innovation center, cultivating leading AI 'backbone enterprises' with an AI core industry scale exceeding RMB 150 billion and related industries exceeding RMB 1 trillion, further developing its AI ecosystem as well as AI ethical norms; (2) by 2025, China aspires to achieve "major breakthroughs in basic theories for AI" and to have official legal frameworks established for AI; (3) by 2030, China dreams of becoming the major hub for AI innovation and development, with the AI core industry exceeding RMB 1 trillion and related industries exceeding RMB 10 trillion (Graham Webster et al., 2017). Whilst this plan does represent an important milestone for China, it is only a small part of its overall AI strategy, acting as a continuation of previous government initiatives like the '14th Five-Year Plan' or the 'Made in China 2025' industrial plan. In the past, academics and researchers have assumed that China's approach to AI was defined by a "coherent top-down geopolitically driven national strategy, reflecting Chinese leaders' global ambitions" (Zeng, 2021). China, a re-emerging global superpower, is home to 23% of AI companies (Lam et al.); its government plays a guiding role in AI advancements but is not the sole contributor. As scholars like Jinghan Zeng argue, instead of a top-down approach, China's government dictates the overarching goals and is simply designed to help bureaucratic agencies, private companies, academic labs, and subnational governments achieve their own interests, thus also allowing businesses to adopt, through its flexibility, a bottom-up approach (Zeng, 2021).
Understanding the US's AI Dream
In February 2019, the Trump Administration signed Executive Order (EO) 13859, Maintaining American Leadership in Artificial Intelligence, announcing a nationwide plan for developing AI. This launched the American AI Initiative, identifying the USA's long-term vision for AI development under five major strategies (Kratsios & Parker, 2020) that were later codified into law as part of the National AI Initiative Act of 2020. The EO aims to "ensure that technical standards (…) reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies (…) and develop international standards to promote and protect those priorities". These AI strategies, as per the EO, are to: (1) drive technological breakthroughs, (2) develop appropriate technical standards, (3) train the current and future generations of American workers, (4) foster public trust and confidence, and (5) promote an international environment supporting American AI research and innovation (Trump, 2019). In support of this initiative, the Select Committee released an updated version of the 2016 National AI R&D Strategic Plan; accounting for new research and innovations, it outlined eight clear strategic priorities to guide federally funded AI research. The 2020 'Budget for a Better America' provides USD 688 million for the National Institute of Standards and Technology (NIST) to "conduct cutting-edge research, including quantum computing, artificial intelligence, and microelectronics" (Kratsios, 2019). Later, the NITRD program also released a Supplement Report to the President's FY2020 Budget showing the first agency-by-agency budget breakdown for non-defense AI R&D, equating to nearly USD 1 billion for the year (White House, 2020). The document also provides an important baseline that outlines key programs and the administration's strategic priorities, making AI, along with quantum information sciences and strategic computing, the second-highest R&D priority behind national security. In addition to the USD 1 billion pledged by the US government, starting from the 2019 fiscal year the Defense Advanced Research Projects Agency (DARPA) plans to add a further USD 2 billion over the next five years in "new and existing programs" under the 'AI Next' campaign. This campaign aims to bring about a 'third wave' of AI technologies where "systems are capable of acquiring new knowledge through generative contextual and explanatory models" (DARPA, 2019). Whilst the US government does play a role in the development of AI, it has over the years adopted a bottom-up approach, with private companies leading the growth of AI. It is important to consider that government spending represents a small portion of total investment, with tech giants like Nvidia, Alphabet, and Amazon paving the road for AI investment. As a report from CSET argues, major AI companies are often categorized together as general AI leaders but in fact focus on very different subfields within AI, with "considerable differentiation in the areas of research they prioritize in". This essentially positions the US government as a 'gap filler' in research underinvested by the private sector (Gelles et al., 2021).
Understanding the EU's AI Dream
In 2018, the European Council formulated a Declaration of Cooperation on AI, signed by 25 European member states (including the UK). Whilst some of these countries, mainly in the west, had already announced their own AI initiatives, it is essential for the EU to formulate a coordinated approach to increase its competitiveness with major powers like the US and China, as well as to ensure that 'European values' are upheld. This coordinated approach aims to (1) ensure Europe is competitive in the AI landscape, (2) guarantee all countries adapt to this digital transformation, and (3) develop new technologies in accordance with European values. Moreover, in February 2020, the European Commission published a White Paper on AI proposing policy options to achieve the "twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of this new technology". In cooperation with both national governments and the private sector, the EU's holistic approach aims "to increase investment and reach a total of at least EUR 20 billion" (European Commission, 2020). The EU is also actively promoting research and innovation whilst safeguarding the ethical aspects of progress, overseen by the EC-appointed High-Level Expert Group on AI (European Commission, 2019). As argued by scholars like Fabien Merz, the EU has created an 'umbrella approach' whereby it aims to maintain global competitiveness in AI and create the conditions for closing the gap with the US and China whilst simultaneously developing corresponding regulatory and ethical guidelines. France and Germany are arguably two of the most advanced AI countries within the EU; however, they have two distinct strategies.
Understanding Germany's AI Dream

In November 2018, the Federal Government of Germany launched, like France, an AI strategy that presented the progress made, goals for the future, and policy initiatives to be undertaken in Germany. The strategy revolves around three main goals: (1) increasing and consolidating Germany's future competitiveness by making Germany and Europe a leading center for AI, (2) guaranteeing responsible development and deployment of AI which serves the good of society, and (3) integrating AI into society in ethical, legal, cultural, and institutional terms in the context of a broad societal dialogue and active political measures. The FY2019 budget allocated a total of EUR 500 million to 'beef up' the AI strategy for 2019 and pledged around EUR 3 billion for the implementation of the strategy by 2025 (Federal Republic of Germany, 2020). Of the EUR 500 million budgeted in 2019, EUR 190 million was invested in AI research and education. Among the key projects announced were the creation of at least 100 additional AI professorships to ensure AI's presence in the higher education system, and the establishment of a 'teach-and-learn AI' platform dedicated to developing the skills needed to understand, and thus operate, AI. Moreover, in response to the AI talent shortage, the Federal Government also launched large-scale qualification initiatives revolving around 'upskilling' and 'reskilling' workers across industries to make them more future-ready (Federal Ministry of Education and Research et al., 2018). Germany also announced numerous partnerships, both local and international, like the Franco-German research and development network, to ensure cooperation in AI research and innovation. Similarly to France, the strategy also focuses on "integrating AI in society in ethical, legal, cultural and institutional terms" through joint guidelines developed with businesses and dedicated AI platforms.
Understanding France's AI Dream
The report written in 2018 by mathematician and member of parliament Cédric Villani, subtitled 'Towards a French and European Strategy', announced France's ambition to "play a leading role at [a] global level and compete with non-European giants [in AI]", relying heavily on research and innovation (Meurier et al., n.d.). The Mission Villani report addresses seven main objectives: (1) develop an aggressive data policy, (2) target the four 'strategic' sectors (healthcare, environment, transport, and defense), (3) boost the potential of French research, (4) plan for the impact of AI on labor, (5) make AI more environmentally friendly, (6) open up AI 'black boxes', and (7) ensure that AI supports inclusivity and diversity. According to the European Commission, the French government will dedicate EUR 1.5 billion to the development of AI by the end of 2022, including EUR 700 million for research. One of the key issues France faces in AI development is the 'endemic' brain drain of researchers towards the major industrial players, "GAFAM and other unicorns", due to its weak AI ecosystem, which in turn has significant knock-on effects on its competitiveness for AI talent. Thus, one of the main focuses of the French AI strategy, highlighted by French president Emmanuel Macron, is "to improve the AI education training and ecosystem to develop and attract the best AI talent" (European Commission, 2021). In coordination with leading research entities and universities, the National Research Institute for Computer Science and Automation (INRIA) oversees the main strategies proposed. One of Villani's two flagship projects is the establishment of interdisciplinary AI institutes (known as 3IA) across France, creating a network designed to strengthen the AI sector and enhance research and innovation amongst the institutions involved in AI research. The other flagship project is the creation of an Ethics Advisory Committee for Digital Technologies and AI to oversee AI developments. Together these two initiatives are meant to encourage AI development that is environmentally sound and ethical, through collaborations and immediate access to cutting-edge AI research for France and partner countries (Villani, 2018). Promoting collaborative research agreements between renowned research institutions, notably the trilateral French-Japanese-German research projects on AI in partnership with the French National Research Agency (ANR), the German Research Foundation (DFG), and the Japan Science and Technology Agency (JST), will, as outlined by the European Commission, create a network that allows "for an efficient sharing of knowledge associated with AI across various stakeholders and increases their motivation to participate in cutting-edge AI research" (European Commission, 2021). France has also created a National Commission for Information Technology and Liberties (CNIL), which developed a Digital Republic Bill outlining characteristics that must be "at the heart of the French AI model: including respect for privacy, protection of personal data, transparency, accountability of actors and contribution to collective wellbeing" (Lemaire & Mandon, 2017).
Takeaway
With the UK also trying to assert itself as the AI leader of Europe, France is not the only contestant. Before analyzing these countries in a global context, it is important to note that while the German government pledged around EUR 3 billion for implementing its AI strategy by 2025, it initially allocated only EUR 40-50 million to AI (Stix, 2020). Germany's relatively weaker AI initiative, with its limited resources, could thus create an opportunity for France rather than a threat, encouraging partnerships and research collaborations that would be mutually beneficial.
In addition, even though AI is still in its infancy, France, like its European neighbors, has already fallen behind the US and China in numerous categories (explored later). Under the EU's 'umbrella approach', the investments made by individual countries like France and Germany remain insignificant compared to those of their rivals. Even if France doubled its AI budget from EUR 1.5 billion to EUR 3 billion, it would still be no match for the colossal USD 68 billion invested by the US and USD 70 billion by China. Unlike the ecosystem developed in the US, with a bottom-up approach where tech giants shape the ecosystem through their own interests and company values, and China's top-down approach where the government shapes the ecosystem through a single framework, countries like France and Germany are trying to create the conditions for an ethical and sustainable ecosystem through 'multistakeholder collaboration', argues a report by Access Partnership (Williams, 2018). France's position as a potential AI leader could be strengthened by its soft-power agenda, easing concerns and fears about AI integration. Whilst the US needs to secure its position, it remains the AI leader. The US has traditionally led the world in developing and applying AI technologies, with its tech giants dominating the digital world. However, its position is increasingly threatened by its weak government initiatives as other governments aggressively develop new initiatives, all aiming to be the world leader in AI. The US industry is led by its tech giants, with the government acting as a 'gap filler'; this approach may not be sufficient in the long run to outcompete its biggest competitors like China. Considering that AI technology is still in its nascent stage, the USA has the best-established ecosystem, and is thus able to attract the most top AI talent and develop the greatest number of startups and investments; however, other nations like France, Germany, and China are rapidly developing their own ecosystems.
The US, China, and EU have all expressed their desires to be leaders in this shift to the fourth industrial revolution powered by AI. After understanding these countries' goals in AI, this paper now examines their implementation.

Hardware

What do I mean by hardware in AI?
Advancement in AI hardware (also referred to as AI chips or AI semiconductors) is imperative to claiming the title of leading country in AI. As major corporations and governments look to build AI technology into their systems, semiconductor manufacturers have started to design hardware specifically tailored to supporting and developing AI. When defining AI hardware, there are four main chip types involved in training and developing AI technology, which can be divided into two categories.
(1) Chips originally designed for other computing purposes but that can also be adapted for developing AI technology (e.g. CPUs and GPUs), and (2) chips that are specifically designed to accelerate AI, especially in the fields of artificial neural networks, machine vision, and machine learning (e.g. ASICs and FPGAs) (AnySilicon, 2019). Both FPGAs and ASICs consume much less energy and allow for greater flexibility in their design, leading to greater speed and lower costs. Given the broad usage of chips, the best way to analyze a country's AI development in the context of AI hardware is by comparing: (1) the global market share of semiconductor production, (2) the highest-performing supercomputers, (3) the development of customizable chips, and (4) AI chip investment.
(1) The Global Market Share of Semiconductor Production
Many experts believe that chips specifically designed for AI will eventually outperform multipurpose chips like GPUs, providing a competitive advantage for the US tech giants like Google, Alphabet, and Apple, which are already manufacturing their own AI chips. In terms of general chip-making, the U.S. semiconductor industry is the leading manufacturer in the world, capturing 47% of global semiconductor revenue in 2019 and generating USD 41 billion, making semiconductors the fifth-largest export for the US (International Trade Administration). Whilst US companies hold nearly 50% of the global market share, over 80% of their sales take place outside the country (Wübbeke et al., 2016). According to research firm IC Insights, of the USD 143 billion in chips sold in China in 2020, only USD 22.7 billion worth were produced in China, and only USD 8.3 billion of that (36.5% of local production) came from Chinese-headquartered companies, accounting for only 5.9% of the total market. As per its forecasts, even if China-based integrated circuit manufacturing rises to USD 43.2 billion in 2025, it would still represent only 7.5% of the forecast 2025 total market of USD 557.9 billion. Even with significant increases in Chinese-based chip production, it would still likely represent only 10% of the global integrated circuit market, far from China's goal of being the world leader in AI development (IC Insights, 2021).
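The shares above follow directly from the cited IC Insights dollar figures; a quick sketch in Python (figures in USD billions, taken from the paragraph above) reproduces them, modulo rounding.

```python
# Reproducing the IC Insights market shares cited above (USD billions).
total_market_2020 = 143.0   # chips sold in China, 2020
made_in_china     = 22.7    # of which produced in China
chinese_hq        = 8.3     # of which by Chinese-headquartered firms

print(f"Produced locally:           {made_in_china / total_market_2020:.1%}")  # ~15.9%
print(f"Chinese-HQ share of local:  {chinese_hq / made_in_china:.1%}")         # ~36.6%
print(f"Chinese-HQ share of market: {chinese_hq / total_market_2020:.1%}")     # ~5.8%

# 2025 forecast: USD 43.2B of China-based output vs a USD 557.9B market.
print(f"2025 forecast share:        {43.2 / 557.9:.1%}")                       # ~7.7%
```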
(2) Highest Performing Supercomputers

One of the best ways to analyze a country's strength in producing chips is to look at supercomputers: extremely fast, high-performance systems that require powerful chips to process massive amounts of data and calculations (Intel). The benchmark ranking of supercomputer performance is provided by the TOP500 project, which ranks and details the most powerful computer systems in the world. The performance of supercomputers is expressed in floating-point operations per second, also referred to as FLOPS (University Information Technology Services, 2020). China is the leader in this ranking with 219 supercomputers, more than the US with 116 and the combined EU with 92. China's position in the development of the world's fastest supercomputers not only shows its commitment to leading the race for the fourth industrial revolution but also demonstrates its AI capabilities, as chips remain the backbone of any AI ecosystem. Despite China's growth in supercomputers, Intel continues to dominate the TOP500 ranking, with the US company's chips appearing in 95.6% of all systems, clearly showing Chinese and European reliance on American chipmakers (TOP500, 2019). Whilst the USA remains essential to chip production, it is losing global leadership in supercomputers. In 2010, 282 of the TOP500 supercomputers in the world were American; now there are only 116. There are also now nine Europe-based supercomputers in the top 25, the fastest being the Germany-based 'JUWELS'. This shows that whilst the US is still the leading manufacturer of semiconductor hardware, it is slowly losing its position as the most technologically advanced nation in AI technologies.
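For intuition on the FLOPS metric, the sketch below computes the theoretical peak of a hypothetical machine using the standard formula (nodes x cores x clock x FLOPs per cycle). Note that TOP500 ranks systems by measured LINPACK performance (Rmax), which is always below this theoretical ceiling (Rpeak); all parameter values here are assumptions for illustration.

```python
# Theoretical peak FLOPS of a hypothetical cluster (illustrative numbers).
nodes           = 1000        # compute nodes (assumption)
cores_per_node  = 64          # CPU cores per node (assumption)
clock_hz        = 2.5e9       # 2.5 GHz clock (assumption)
flops_per_cycle = 16          # e.g. wide SIMD units with FMA (assumption)

rpeak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Rpeak: {rpeak / 1e15:.2f} PFLOPS")   # -> 2.56 PFLOPS
```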
(3) The number of firms developing customizable chips
As previously mentioned, there are two categories of AI chips. While the first category of 'traditional' chips is currently dominated by US companies, experts expect the second category of 'new', AI-specific chips to take over the chip market, leaving China and other countries in a competitive position.
In the first category of AI hardware, GPUs and CPUs are mainly produced by the US companies Nvidia, AMD, and Intel. Amongst the top 10 American companies, 4 specialize in GPU chip-making; in contrast, none of the top 10 Chinese companies specialize in either (Ding, 2018). According to Foreign Policy, "the only serious Chinese rival to these advanced U.S. chips is the HiSilicon Kirin 9000, designed by Huawei's own in-house 'fabless' chip-design subsidiary" (Babones, 2020). However, amidst the sanctions of the US-China trade war, the business has greatly suffered, as a large share of its parts were acquired from American manufacturers.
To comprehend why firms are shifting to GPU systems incorporating CPUs, we must look back to Nvidia's first GPU chip, the GeForce 256, in 1999, originally designed to increase the processing power of complex computer games. As this technology evolved, during the 2010 Google cat photo challenge Nvidia achieved with 48 GPUs the same performance as 16,000 CPUs, creating a new movement towards GPUs (NVIDIA, 2021).
As per the Centre for International Governance Innovation, Intel was the biggest vendor in the GPU market with 69% in Q4 2020, followed by AMD (17%) and Nvidia (15%). However, GPUs are not the only hardware platform that can train and execute neural networks. The main argument for manufacturing hardware in the second category, FPGAs and ASICs specifically designed for AI, is their customizability and low power costs. Chinese startups like DeePhi, with its deep learning processors, are entirely dependent on western chip companies such as Xilinx's or Intel's FPGA networks. As academics like Laskai and Toner argue, "the technical dependence on western chip company is not an anomaly - it's industry norm". Companies like Horizon Robotics, the world's highest-valued AI chip unicorn (Zhang, 2019), rely on Intel's FPGA network for their main processor. Whilst Chinese companies are currently reliant on Intel, Nvidia, and AMD for GPUs and on Xilinx and Altera for FPGAs, one key chip model remains unexplored: the ASIC. As computer scientist Kai-Fu Lee argues in 'AI Superpowers: China, Silicon Valley, and the New World Order', the AI industry has already experienced the basic innovation in deep learning; now the question is how companies can adapt and apply it to specific cases (Lee, 2018). Many specialized AI chips like ASICs, FPGAs, and other neural network processing units have efficiency, cost, and market-potential advantages over the GPU, offering a relative 'white space' in which China and Europe can compete (Ma, 2019). Research increasingly points to the ASIC as one of the dominant AI chips to come; McKinsey projects that the GPU's share in AI training will fall from 97% (in 2017) to 40% (in 2025), with ASIC chips rising sharply to 50% by 2025 (Batra et al., 2019). Given the customizability and other advantages of ASIC chips, many scholars like Dantong Ma believe they look promising for China's aim to be a global competitor in the paradigm shift to the fourth industrial revolution. China leads among AI-specific chip-making companies heavily focused on ASICs; six of the top ten companies specializing in ASIC production as of 2018 are Chinese (Gullett & Bleykhman, 2020). Thierry Breton, the European Industry Commissioner, described the EU market as "too naïve, too open" and in March 2021 unveiled a strategy with the ambitious aim of doubling the EU's share of chip manufacturing to 20% by 2030. Officials also aim to beat industry leaders in the race from 5-nanometer down to 2-nanometer transistors, making chips more efficient and powerful (Bloomberg, 2021).
The USA still dominates the manufacturing of non-AI-specific chips; however, Europe and China have set ambitious goals to decrease their reliance on US chips. The production of AI-specific chips still has no defined leader, giving China an opportunity to decrease its dependence on US chips and provide more cost-efficient, flexible alternatives. Europe has also pledged to decrease its reliance on foreign chips by doubling its own production. As these regions evolve, the US is slowly losing its position as world leader in chip-making, but will likely maintain its strong position for a while.
(4) Investment in AI chips
Another way of measuring the development of a country's AI chips is by looking at investments. Global VC investment in semiconductor companies in the first 3 months of 2021 set a quarterly record for deal value at USD 2.64 billion, with 70% of the funding going towards Chinese companies, according to Pitchbook data. Some experts argue that China is better positioned to compete in the AI chip market than its biggest rival. The USA currently dominates the manufacturing of non-AI-specific chips due to its lead in the third industrial revolution; however, most VC investments are directed at Chinese startups like Enflame Technology, which develops deep learning chips for AI training and has raised approximately USD 500 million in total funding, including from Chinese tech giant Tencent (Crunchbase, 2021). Similarly, Horizon Robotics, the only company to achieve mass production of AI chips for vehicles in China, has announced plans to raise over USD 700 million in its Series C round, USD 100 million more than its 2018 Series B funding of USD 600 million (Zhang, 2019). In addition, China's tech giants, including Baidu, Tencent, Alibaba, and Huawei, have taken an interest in developing AI chips, with Huawei showing promising results as it competes with Apple in developing 5nm chips (the figure refers to the size of the transistors in a processor; smaller transistors are critical as they are more power-efficient than larger ones). US firms are also developing specialized AI chips, like Google's Tensor Processing Unit (TPU) and Apple's new M1 chip. Similarly, the EU, and specifically France and Germany, aspires to reduce its reliance on overseas chip production. US tech giant Intel is willing to help the EU achieve its 'dream of semiconductor sovereignty' but requires a commitment of at least USD 10 billion (Hetzner, 2021). Apple will also invest over EUR 1 billion in Germany and plans a European Silicon Design Center in Munich, focused on 5G and future wireless technology (Apple, 2021). In terms of investment in chip-making, then, new VC funding is flowing overwhelmingly towards Chinese firms while the US and EU lean on their established giants.
AI R&D
Google Scholar, the largest database of academic papers of its kind, released a ranking of the most highly cited papers, announcing that "AI research dominated again" (Crew, 2020). Governments and firms in China, the United States, and the European Union have all launched initiatives to promote AI R&D, attempting to attract AI talent and thereby increase their competitiveness on the international stage. In this section, AI R&D is defined as any research or innovation initiative conducted by companies or governments to improve and grow existing knowledge in the domain of AI (Kenton & James, 2021). This section analyses the success of each country's AI R&D through (1) AI research output and influence, (2) AI patents, and (3) AI talent. Compiling indicators to compare AI development amongst nations is essential to understanding the influence each has on AI development, and thus its potential in terms of innovation and AI capabilities.
(1) AI Research
As per a report released by Tsinghua University, "it was found that 58.64% of AI papers were proceeding papers, showing that proceeding papers are important sources of AI research output". This section defines AI paper output as any AI-related proceedings paper, article, review, book chapter, or other literary work published between 1997 and 2017. Over the last 20 years, many countries have attempted to develop their own paths towards AI research; China is the current leader in total AI paper output with 369,588 publications, ahead of the USA (327,034) and the EU (combined output of 328,630). Indeed, the combined AI paper output of the leading EU countries, the UK (96,536), Germany (85,587), France (72,261), Spain (58,582), Poland (25,592), and the Netherlands (25,138), is greater than that of the United States. Amongst the top 20 research institutions in AI-specific output, the two leading institutes are the Chinese Academy of Sciences and the French National Center for Scientific Research. Of the 20, 13 are based in the EU, with France, Germany, and the US being the three most featured, each with three entries (Tsinghua University, 2018). Whilst the EU may lead in the number of research institutions producing AI output, of the top 20 companies leading AI research in 2019, 10 were American, including the first 4: Google, Microsoft, Facebook, and Amazon (Chuvpilo, 2020). In terms of AI research, China and the USA are the two leading countries, both showing dedication to research; however, the EU taken as a whole also surpasses the US. Leading European countries like the UK, France, and Germany play a significant role in shaping AI research in the European region and show great AI potential. However, what matters is not the amount of AI-specific output produced, but its impact. As Jeffrey Ding wrote, "just pumping out raw numbers of papers that don't have a lasting impact isn't really useful". Formulating a logical relationship between published articles and published references provides accurate data on a paper's influence: the more frequently a paper is cited, the more influence it has (Tsinghua University, 2018). To ensure a reliable comparison, this report adopts the h-index, widely recognized in the academic community as "an estimate of the importance, significance, and broad impact of a scientist's cumulative research contributions" (Hirsch, 2005). As per a report published by the McKinsey Global Institute on AI, although China produces a large number of widely cited AI-related papers, US and UK research remains much more influential (Barton et al., 2017). While the United States is currently the leader in publication influence, given the nature of the AI race it is essential to promote international cooperation with 'like-minded democracies' (Kerry, 2021). Collaboration in AI R&D has become essential to "liberate silos, protect privacy and security, and maintain efficiency", notes Intel, a leading company in AI. In 2018, Tsinghua University calculated that internationally collaborative research papers represent 23.42% of all AI papers, but as much as 42.64% of all top AI papers, and more than 50% of top AI papers for the top 10 countries with the greatest output of top papers. According to a report released by Stanford, US authors are cited 40% more than the world average in AI papers (Shoham et al., 2019).
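The h-index used in the comparison above is simple to compute: an author (or country) has index h if h of their papers have at least h citations each. A minimal sketch, with invented citation counts:

```python
# Compute the h-index: the largest h such that h papers have >= h citations.
def h_index(citations):
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank      # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Hypothetical citation counts for one author's papers.
print(h_index([25, 8, 5, 3, 3, 1, 0]))
# -> 3 (at least 3 papers have >= 3 citations, but not 4 with >= 4)
```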
It is essential for the European Union to follow an approach similar to that of France, which has the highest share of industry-collaborative papers and is investing heavily in AI research. China produces the most AI research, but in terms of quality it has far less influence than the US and is close to the UK.
(2) AI Patents

Due to differing definitions of AI technology, it is complicated to directly compare aggregate investment (both private and public). An alternative measure of the success of AI R&D is therefore AI patents. A 'patent application' is defined as an application submitted in one jurisdiction and published in that same jurisdiction; a patent can be re-published during processing (e.g. later research, grants, corrections) but is only counted the first time it is published. A 'patent family' is defined as "a group of patent applications made and published in different jurisdictions"; each patent family relates to a single invention (UK Intellectual Property Office, 2019). As per data collected by the UK Intellectual Property Office, the US remains the leading country in overall published AI patent applications with nearly 50,000, despite China's exponential increase to just above 40,000; Germany places 7th and France 17th, both with under 5,000 publications. The role of the IP5 offices is also worth noting, with WIPO and the EPO ranked 3rd and 5th, above any individual European country.
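The counting convention just described can be made concrete with a small sketch: applications sharing a family identifier describe one invention and are counted once as a family, however many offices they are filed in. The records and field names below are hypothetical.

```python
# Distinguishing patent applications from patent families.
from collections import defaultdict

applications = [          # (family_id, office) -- invented example records
    ("F1", "USPTO"), ("F1", "EPO"), ("F1", "CNIPA"),
    ("F2", "CNIPA"),
    ("F3", "USPTO"), ("F3", "JPO"),
]

families = defaultdict(set)
for family_id, office in applications:
    families[family_id].add(office)

print("Published applications:", len(applications))  # 6 filings counted per office
print("Patent families:       ", len(families))      # 3 underlying inventions
```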
However, it is essential to analyze each country's AI applications published in each of the IP5 offices per year to understand which countries are driving the recent 'AI patent boom'. The IP5 are the five largest intellectual property offices in the world, designed to improve the efficiency of the worldwide patent review process: the European Patent Office (EPO), the Japan Patent Office (JPO), the Korean Intellectual Property Office (KIPO), the National Intellectual Property Administration of the People's Republic of China (CNIPA), and the United States Patent and Trademark Office (USPTO). Analyzing the IP5 offices is essential as together they are responsible for 80% of the world's patent applications (IP5 Office, n.d.). According to a report published by WIPO, patents relating to AI skyrocketed in the last 5 years, representing 50% of all AI patents published; AI patent families grew on average by 28% annually between 2012 and 2017 (World Intellectual Property Organization, 2019). As per data from the Intellectual Property Office, China is largely responsible for the growth in published patent applications in recent years. Whilst the US remains the leading country in total published patents since 1998, China is rapidly closing the gap whilst European countries are falling behind.
Another way to understand the recent 'AI patent boom' is to look at the top applicants worldwide in terms of AI patent families. Companies, particularly in Japan, the USA, and China, dominate patent activity. According to a report published by WIPO, companies represent 26 of the top 30 AI patent applicants, with only 4 universities and public research institutions. Across the different AI-related patent fields, Microsoft and IBM rank first and second in most.
(3) AI Talent
The difference in AI research may also be due to AI talent. Researchers are essential for progress in AI development; as Nick Zhang, president of the Wuzhen Institute, an AI think tank, said, "it's a talent war - whoever makes the best offer wins". AI talent is defined as individuals who possess technical and creative expertise encompassing statistical modeling and big-data computational skills, conduct active AI research, and produce innovative outcomes (LinkedIn, 2019). According to LinkedIn, aggregate demand for AI talent grew by 74% annually over the preceding four years (LinkedIn, 2020). This section analyses the AI talent of the three regions through (1) the number of AI researchers, (2) the number of top AI researchers, and (3) the location of AI researchers' graduate degrees. The AI talent pool is scarce and densely concentrated in the United States, China, India, and some European countries. The United States has an estimated 28,536 AI researchers, ahead of China (18,232) and India (17,384). However, Europe's combined total is 43,064, making it a major player in AI talent. The combined AI talent of Germany (9,441), the United Kingdom (7,998), France (6,395), Spain (4,942), and Italy (4,740) alone is 33,516, greater than the USA's. Whilst data for the other EU nations were unavailable, the EU has enough AI talent, and thus AI potential, to outcompete its biggest rival, the USA. While China accounts for 8.9% of the total AI pool, only 5.4% of its AI talent ranks as top talent, behind the USA (18.1%), the UK (14.7%), Germany (11.9%), France (16.5%), and Italy (20.8%). AI talent is clearly maldistributed across Member States, with just three countries accounting for almost half of the EU's total AI talent. To become a global leader in AI technology, the EU should equalize the distribution of AI talent across its members and create a single approach to strengthening its AI ecosystem. China's lack of top AI talent could be due to its relatively recent interest in AI, with only 25% of Chinese AI talent having more than 10 years of experience, compared to nearly 50% in the US (Qingqing, 2017). The worldwide distribution of AI talent by affiliated entity at the end of 2017 showed that 73.3% of all AI talent came from universities and 15% from research institutions; analyzing the development of AI talent through these two lenses is thus essential to understanding a country's total potential (Statista, 2017). Among the academic institutions with the highest concentration of AI talent, China dominates, with 11 of the top 20 universities. However, when ranked by top international AI talent, Tsinghua University, the only Chinese university listed, places 15th of 20, with half of the ranked institutions being American (Tsinghua University, 2018).
Takeaway
Although China is one of the most active countries in AI research, with the highest numbers of AI talent, research institutions, and papers published, the US and Europe are far more influential in their output. This section shows that the US and leading European countries like the UK, France, and Germany have great AI capability due to the high quality of their output. This pattern persists not only in AI research but also in AI development, where China once again surpasses the US and Europe in quantity but stumbles when quality is examined.
AI Ecosystem
In addition to AI research, another important area of exploration is the AI ecosystem. For nations to fully benefit from the development of AI, they must first develop a healthy ecosystem that will attract AI talent, thereby fostering research and innovation. It is important to note that, unlike other ecosystems, the AI ecosystem cannot be associated with a single industry, "but rather it is composed of multiple, interconnected enabling technology sub-industries (henceforth referred to as segments) - such as machine learning, robotics, computer vision, or natural language processing. Each of these technology segments will have their own trajectories, but they must be considered in relation to the other segments in order to get a true systemic understanding of AI ecosystem evolution", writes Rahul C. Basole, Associate Professor in the School of Interactive Computing and a Director of the Institute for People & Technology (Basole, 2021). This section analyses the development of the AI ecosystems of China, the US, and Europe through (1) the funding of AI start-ups and (2) the number of AI start-ups.
(1) Funding of AI Start-ups

New analysis by the OECD, using data from Crunchbase, found that AI start-ups attracted around 12% of all worldwide private equity investments in the first half of 2018, a significant increase from just 3% in 2011 (Breschi et al., 2018). China has seen an upsurge in total investment since 2016 and now appears to be the second-largest recipient of AI equity investment; this growth reflects China's efforts and government initiatives like Made in China 2025, which announced its ambition to dominate key technologies such as AI, 5G, and semiconductors (Wübbeke et al., 2016). The EU accounted for 8% of global AI equity investment in 2017, a significant increase for the region compared to 2013 (only 1%). However, according to data recorded in Crunchbase, investment in AI start-ups varies enormously across member states, with the UK dominating at 55% of the total, followed by Germany (14%) and France (13%), implying that the remaining 25 member states account for less than 20% of all AI investment received in the EU. As expected, the US attracted a significant proportion of all investment deals, while China went from having no deals in 2011 to 60 in 2017, showing its rapid growth, and surpassed the EU in average deal size (Breschi et al., 2018). Digital giants like Google and Baidu are also heavily investing in the acquisition of AI start-ups, aiming to attract and harness as much AI talent as possible. According to CB Insights, of the major merger and acquisition (M&A) deals from January 2011 to February 2017, nearly 60% were conducted by American tech giants, with Google leading at 11 deals ahead of Apple (5), Intel (5), and Microsoft (4). Meanwhile, 9 of the 10 major M&A deals including VC investments were by Chinese tech giants, with Tencent leading at 5, ahead of Baidu (2), Alibaba (1), and JD (1). From 2017 to 2018, U.S. AI firms received the most investment (1,270 deals), ahead of the European Union (660) and China (390). Per one million workers, the United States (8 deals) led the European Union (3) and China (0.5) (Castro, 2019).
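The per-million-workers figures cited from Castro (2019) are a simple normalization of deal counts by workforce size. The sketch below reconstructs them; the deal counts come from the paragraph above, while the workforce sizes are round assumptions chosen only to show the arithmetic.

```python
# Normalizing AI investment deal counts (2017-2018) by workforce size.
deals = {"US": 1270, "EU": 660, "China": 390}              # from Castro (2019)
workforce_millions = {"US": 160, "EU": 230, "China": 780}  # assumed, illustrative

for region, n in deals.items():
    rate = n / workforce_millions[region]
    print(f"{region}: {rate:.1f} deals per million workers")
# With these assumptions: US ~7.9, EU ~2.9, China ~0.5, in line with the
# cited figures of roughly 8, 3, and 0.5.
```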
The three main patterns to remember are that: (1) Chinese start-ups saw unprecedented growth from 2013 to 2017, (2) the USA has the largest volume of M&A deals including VC investments, with steady growth, and (3) the disparity in investment within the EU could fuel a gap between the UK, France, and Germany and the other member states.

(2) Number of Start-ups

As the pace of AI intensifies and businesses race across sectors to integrate AI into their products, the pool of AI talent is becoming ever more scarce. Roland Berger, a global consultancy, and Asgard, a Berlin-based investment firm, have categorized AI start-ups and digital platforms as the "players creating algorithms and developing use cases, they are the brains behind innovation". In their report on AI, they define an AI company as "one that produces a primary product or service utilizing machine learning, deep learning, image recognition, natural language processing or other frontier AI technology". According to the study, the US dominates the race for start-ups, accounting for 1,393 start-ups, approximately 40% of the global distribution. China ranks second with 383 start-ups (11% of the total worldwide) and Israel third with 362 start-ups (10%). However, Europe taken as a whole accounts for 769 AI start-ups, or 22% of the global distribution, led by the UK (245), France (109), and Germany (106). The leading regional hub is San Francisco with 596 start-ups, followed by London (211), Tel Aviv (189), New York (180), and Beijing (150) (Roland Berger, 2018). While the UK will remain the powerhouse of European AI for many years, Brexit has allowed other European countries like France and Germany to extend their influence. This is because a key part of London's tech ecosystem originates from overseas talent: roughly one in five London tech workers is European and one in three is from overseas, making them a big portion of the ecosystem, says an article by Soldo. Data from CB Insights on the top 100 AI start-ups likewise show the US and EU outcompeting China.
In keeping with the AI research section, this examination of the AI ecosystems finds that China leads in terms of quantity but not necessarily quality. In both the start-up and funding space, China does outcompete its rivals in terms of quantity; however, the US and Europe have higher-quality output, especially regarding the strength of their ecosystems. Whilst China does have an advantage over Europe with its tech giants like Baidu and Alibaba, Europe, specifically France and Germany, is prioritizing sustainable growth over rapid growth.
Conclusion and Implications
The development of AI will have an impact beyond the realms of technology and efficiency: it will have both immediate and long-term impacts on society and completely change the balance of power on the international stage. Given AI's 'omni-use' nature, approaches to defining the concept have diverged; fundamentally, it refers to a program whose objective is to understand and reproduce human cognition by interpreting vast amounts of data. AI is one of the few technologies that can revolutionize every sector; however, it relies on large amounts of data, research, and collaboration to develop. Hence, understanding the ambitions and initiatives of the three major players, the US, China, and the EU, specifically France and Germany, is essential to formulating an argument about who will lead this fourth industrial revolution and therefore transform the global distribution of power. In this article, I argue that AI is at the epicenter of the fourth industrial revolution and will, like previous industrial revolutions, completely transform global governance, especially the world order established by the United States after its inherited 'unipolarity' in 1991. The implications of this paper suggest that the international system is moving from unipolarity towards multipolarity, with Europe and China at the forefront of this movement.
Whilst the United States needs to secure its position, it remains the leader in AI technology due to its undeniable strength in core AI development, notably its dominance in the manufacturing of AI hardware and its strong AI ecosystem fostering top AI startups and talent. It is also home to tech giants like Nvidia and Alphabet, which secure large amounts of investment and research, providing the perfect platform for innovation in AI technology. China has been experiencing unprecedented growth, ranking amongst the leading countries in AI development, and is indeed an increasing threat to the USA's leadership in this field. Beijing has become the number one city in AI development and potential in terms of the number of enterprises, talent, research institutions, venture capital, startups and innovations. However, despite its rapid growth, China remains weak in several indicators, especially regarding the quality of its development. Although its numbers are close to those of the United States in multiple criteria such as AI papers or AI talent, when only top-tier output is considered there is a significant gap between China and the US and European countries, notably France and Germany. Thus, China is still far from its goal of being the world leader and remains behind the US overall; however, it outcompetes other countries like Germany and France. In terms of areas for future research: at present, AI still lacks a universal definition, making data difficult to collect and analyze. Although this paper draws on multiple definitions of AI and analyzes the influence of AI through different categories, other papers may be interested in examining different categories. The research can also be expanded to examine other European states and develop a holistic narrative, particularly as the European Union takes on questions of AI regulation. Moreover, given the nature of AI and data availability, this report considers only the commercial usage of AI, excluding its military capacity entirely. The framework examined here has direct implications for states' ability to develop and deploy artificial intelligence assets for military and security purposes. Given the categories examined here, there are also direct implications for ethical and regulatory frameworks, as well as for the sustainable growth of AI development. Whilst nations are racing for AI innovations, the next step will be developing global governance of AI, which plays a prominent role in AI development, risk prevention and the formulation of ethical norms. This paper serves as a grounding point for these questions by adding Europe to the US-China rivalry and utilizing a concise framework to evaluate each state's artificial intelligence capabilities.
In the long term, the leading countries in AI technology will be those that opt for sustainable growth and a coordinated strategy, with a fluid relationship between the private and public sectors and not at the detriment of society. In this respect, the United States and China are clearly at an advantage with their global digital players: Google, Apple, Facebook, Amazon, and Microsoft (GAFAM) in the US, and Baidu, Alibaba, Tencent and Xiaomi (BATX) in China. These companies, with the support of government initiatives, have access to all the components needed to nurture a strong AI ecosystem: the ability to attract top AI talent, access to state-of-the-art technology, and access to large amounts of funding and data. Europe, often forgotten, is also a key player in developing AI technology due to its large potential; however, its fragmented approach may turn AI into a source of differentiation, creating a gap within Europe. The EU combined has the capability to compete with, and even outcompete, the US and China in numerous disciplines, but without a strong, unified ecosystem it will not be able to achieve the critical mass required to maintain its competitiveness on the international stage. Thus, Europe must rapidly develop an ambitious, unified AI policy vision centered, like those of its rivals, on developing a strong ecosystem that keeps startups at its heart. | 2021-10-15T15:35:11.181Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "87d69b927989df3ce78cce56491d80b0b3b48e8d",
"oa_license": "CCBYNCSA",
"oa_url": "https://jsr.org/hs/index.php/path/article/download/1762/816",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "737c0d3ab346dc733b7ea80c1cca93d77c5eb315",
"s2fieldsofstudy": [
"Political Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
236389113 | pes2o/s2orc | v3-fos-license | Ernest Hemingway's Female Consciousness
Hemingway is widely thought to make an issue of "masculinity" in his writings. His early stories have won wide critical praise for their stoic, understated, "masculine" style and their graphic depiction of male pursuits and attitudes. Furthermore, relatively few female characters appear in his writings; in The Old Man and the Sea, he offers a world nearly devoid of real women. Consequently, many critics declare that Hemingway discriminates against women and even regards them with hatred. Especially with the rise of the women's movement in the 1960s and of feminist criticism in literary studies, Hemingway became Enemy Number One for many critics, who accuse him of perpetuating sexist stereotypes in his writings. Edmund Wilson's observation in 1939 of a "split attitude and growing antagonism toward women" in Hemingway's work provided a basic premise of Hemingway criticism for decades to come. He sees Hemingway's heroines as falling into two categories: the "submissive infra-Anglo-Saxon women that make his heroes such perfect mistresses" and "American bitches of the most soul-destroying sort", suspecting that Hemingway's instinct to get the woman down grows out of "a fear that the woman will get the man down" (Wilson; 1939; p. 236-257).
In my view, these assessments go to an extreme. Hemingway was indeed concerned with the female sphere. He elaborated on his feelings about women's virtues: "I have always believed that though both live under strict and rigid standards, it's much easier for the male to survive. Not a man will adhere to those stern standards as women, nor will a man have moral standards as good as women as he imagines, unless he does try hard." (Hemingway; 1967; p. 461). In The Art of the Short Story, he complains that it is very difficult to portray female characters, stating that you need not worry when people doubt the existence of such women as you have created.
It merely shows that your female characters differ from theirs (Hemingway; 1989; p. 135). These remarks display his concern with the female sphere.
However, this paper is not primarily intended to secure justice for Hemingway. It is hoped that by analyzing the female characters in his major works, his depiction of women and gender issues may be fruitfully approached.
Hemingway's Sympathy and Respect for Women
A common but mistaken assumption about Hemingway's fiction is that he automatically sides with his fictional males. Rena Sanderson holds that "In fact, the stories in his first collection are consistently sympathetic to women, who are often revealed to be more mature than their mates" (Sanderson; 2000; p. 176). Although in their own world Hemingway's men have an implied code of stoic manliness by which to define themselves, in their relationships with women that code does not assure success. In these stories the men seem very passive in response to women; they are either indifferent or insensitive, unwilling or unable to take action or to accept responsibility for the way things turn out. Several of Hemingway's finest and best-known stories, such as Indian Camp and Hills Like White Elephants, pivot on this very point. Some critics blamed the Indian woman, which goes against the intention of the author. His intention is very clear in the following dialogue between the doctor and his son: "Why did he kill himself, Daddy?" "I don't know, Nick. He couldn't stand things, I guess." (Hemingway; 1987; p. 69). It is clear that the cause of his death is internal, not external. The weakness in his character is the root cause of his suicide: not being strong enough to stand suffering, he has no choice but to escape. His internal weakness leads to the tragedy.
"Do many men kill themselves, Daddy?" "Not very many, Nick." "Do many women? " " Hardly ever. " "Don"t they ever" "Oh, yes, sometimes" (Hemingway; 1987; p. 69-70) The indirect bearer of the suffering kills himself while the direct bearer stands up to it. In fact, when disaster comes, women stand up to it with great courage. They have suffered more, and what they have suffered make them tougher and more tensile.
Another viewpoint holds that the Indian woman is a destroyer of the man. As a matter of fact, the Indian woman is not a destroyer but a giver of new life: she bears the suffering to bring a new life into the world. To Ernest Hemingway, who loves life deeply, this is a great and glorious cause. What Nick receives is not only the knowledge that the world is one of nihility and that life is painful, but also that he should stand up to nihility and subjugate nihility and suffering. In another short story, Hills Like White Elephants, Hemingway portrays another man who attempts to escape his responsibility and another girl who shows her courage. Trying to persuade the girl to have an abortion, the man picks up the topic repeatedly: "It's really an awfully simple operation." "That's the only thing that bothers us. It's the only thing that's made us unhappy." "I don't worry about that because it's perfectly simple." "I don't want you to do anything that you don't want to do." (Hemingway; 1987; p. 212-214) The more he says, the more he betrays himself: selfish, hypocritical and irresponsible. Being sober-minded, the girl has seen through him and tries to hide her disappointment. In the early years of the twentieth century, abortion seemed very dangerous, especially to a girl who was not yet married. A girl would usually rely on the man when such an incident happened. Having no one to rely on, the girl in the story shows great courage and resolves to face it alone: "The girl smiled brightly at the man." "She smiled at him." "She was sitting on the table and smiled at him." "There's nothing wrong with me. I feel fine." Overall, in Hemingway's eyes, the two male characters are regarded as cowards, far from his code hero, whereas the female characters, strong and mature, are worthy of sympathy and respect. They most probably originate from his actual life: during Hemingway's youth, his most serious relationships involved older, mature women. His first passion, the nurse Agnes von Kurowsky, his first wife Hadley, and his second wife Pauline were all several years older than he was.
Hemingway's Appreciation of the New Woman
The female role was undergoing a transformation in the popular consciousness after World War I, from passive, private creature to avid individual in pursuit of new experiences. The household Victorian nurturer was becoming the modern woman of unprecedented mobility and public visibility. The culture of the 1920s was something new, embracing the first generation of women to smoke, drink and use divorce as a solution to a bad marriage. She was educated, valued her autonomy, and did not automatically subscribe to the values of the family. No longer did she define herself as a domestic being; openly rebelling against nineteenth-century
bourgeois priorities, the new woman rejected the traditional feminine ideals of purity, piety, and submission.
Instead, she insisted on reproductive freedom, self-expression, and a voice in public life. In short, the new woman rebelled against patriarchal marriage and, protesting against a society that was rooted in female biology, refused to play the role of the ethereal other. In addition, the war had given a generation of women an opportunity to test their abilities: service in the nursing or agricultural corps taught women not only that they could work efficiently but that their work was valuable. By the time American women gained the right to vote in 1920, a modern New Woman had appeared.
Quite clearly, the New Woman contributed heavily to Hemingway's own image of the ideal woman (Sanderson; 2000; p. 173). His intimate relationships with such women manifest his appreciation of the New Woman, which can be seen in Lady Brett Ashley, who has stepped off the pedestal and now roams the world.
A traditional woman is in fact a private creature of a man. As a New Woman, Brett Ashley will not submit to the authority or direction of a man. She breaks off when her lovers attempt to claim her, that is, to exercise authority over her. She even leaves the bullfighter Romero, a man to whom she is overwhelmingly attracted, when he shows signs of wanting to domesticate her: he tells her to give up her mannish felt hat, to let her hair grow long, to wear more modest clothes. In traditional courtship situations, the woman's power is the power to be pursued; once caught, she forfeits her opportunity to choose. Brett, however, keeps her options open, diversifies her investment of social and sexual energy, and thereby maximizes her opportunities.
Some critics ignore Hemingway's sympathetic treatment of Brett. In fact, her loose, disordered relationships reflect what she has suffered. She has been a nurse on the Italian front. The war has turned her into the freewheeling equal of any man; it has taken her first sweetheart's life through dysentery and has sent her second husband home in a dangerous state of shock. These blows seem to expose her to the male prerogatives of drink and promiscuity. She has been wounded psychologically and emotionally by the war; in a word, she too is a victim of the war. Trying to relieve herself by indulging herself, she often vacillates between the extremes of self-abnegation and self-indulgence, and her relationships with Mike Campbell, Robert Cohn, and even Jake are filled with ambivalence, anxiety, and frequently alienation. What is more, she is not interested in exploiting her considerable erotic power for economic gain, refusing $10,000 to spend a weekend with Count Mippipopolous. She will not take money for sex, because that would be prostitution.
Hemingway's Admiration for Women
No character in all of Hemingway's fiction has provoked responses so numerous, so contradictory, and so strong as Catherine Barkley in A Farewell to Arms. She has been idealized and reviled; her creator has been reviled for idealizing her. Some readers have objected to her passivity; others have found her malevolently active. Theodore Bardacke declares that her "complete subjection is the core of Hemingway's conception of the ideal woman" (Bardacke; 1950; p. 346). Such critics think that Hemingway's heroine is submissive and ignorant. In restoring the portrait of Catherine Barkley, it is crucial that we keep firmly in mind not only the text of A Farewell to Arms but also its historical context. It must be remembered that World War I was a mechanized horror unprecedented in human history, a war of "stasis and futility" that exacted casualties hideous in their number. Having witnessed the cruelty of the war, Hemingway's worldview changed. He experienced a bitter disillusionment and no longer believed in so-called "glory". Instead, he realized that World War I was "the most colossal, murderous, mismanaged butchery that has ever taken place on earth" (Hemingway; 1942; p. 5). Besides, in assessing Catherine's character, it is critical to remember that A Farewell to Arms takes place in a world in
which the winner takes nothing, and those who play by the rules only lose more and faster than others. World War I was a war that had in effect deserted men, which had defaulted on every human value. Therefore Catherine, more than any other character in the novel, embodies the control of courage and honor that many have called the "Hemingway code". Henry Hazlitt remarked that "the girl Catherine has a fine courage and a touch of nobility" (Hazlitt; 1929; p. 38). Her intelligence and self-awareness are obvious throughout the novel.
Catherine has already grasped the reality of the war before the novel begins. Her faith in traditional values is blown to bits offstage, along with her boyfriend. She already knows what Frederic will learn: that people get blown up while eating cheese; that a good Italian soldier can get shot by Italians for no reason other than starting across an open field; that in the violent and senseless world this novel portrays, humanity is like a swarm of ants on a burning log, at the mercy of an arbitrary and indifferent fate.
She has gone into nursing with the "silly idea" that her boyfriend might come to the hospital where she is: "With a sabre cut, I suppose, and a bandage around his head. Or shot through the shoulder. Something picturesque." "They blew him all to bits." (Hemingway; 1932; p. 20) Later, when Frederic wants to drop the war and get on with the seduction, Catherine remarks, "there is no place to drop it" (Hemingway; 1932; p. 26). Catherine has come to the war already wiser than the young man who happens into it thinking it has nothing to do with him.
It is Catherine who challenges Frederic's statement that nothing ever happens to the brave. When they wonder who said that "the coward dies a thousand deaths, the brave but one", she corrects the statement: he was probably a coward. It is not so much that Catherine is more noble than Frederic; she is simply more experienced. Catherine has lost her true love to the war, but she seems strengthened. In retrospect, she realizes that she "didn't know about anything" before the death of her true love (Hemingway; 1932; p. 12). After that death, she behaves like someone who has been psychologically wounded by the war and by the loss of her first love, but she endures and gradually comes to realize the finality of death and what that implies for the living. Typical of Hemingway's heroic figures, Catherine not only accepts her pain but shares her insights and growth with Frederic (Sanderson; 2000; p. 181). What the priest and Catherine know (before Frederic himself discovers it) is that the only certainty in life is the imminence of death. In contrast to Frederic, the priest and Catherine realize that dissipation equals death and that the only choice is to snatch a fine life out of the jaws of death, to carve meaning out of meaninglessness, spirituality out of worldliness. In managing to snatch a fine life, Catherine has learned to disobey, to avoid obstacles more furiously than others. After learning she is pregnant, she thinks only "how small obstacles seemed that once were so big" and tells Frederic, "Life isn't hard to manage when you have nothing to lose." (Hemingway; 1932; p. 137) Her self-reliance is also far more advanced than Frederic's.
She cares nothing for tradition or convention; her values are private and personal. When she checks into the hospital to have the baby, it is clear that her only allegiance is to herself and her love, which is, indeed, her only religion: "She said she had no religion and drew a line in the space after that word. She gave her name as Catherine Henry." (Hemingway; 1932; p. 313) In the context of the Great War, her willingness, even determination, to submerge herself in a private love relationship can be seen as a courageous effort to construct a valid alternative existence in a hostile and chaotic universe. Again, Catherine's humor is a mark of strength and courage in the face of impossible circumstances.
After a bad moment of feeling like a whore in the hotel on their last night together in Milan, Catherine transforms the hotel room into "home" by an exercise of sheer will. As she pulls herself together, her stiff-upper-lip
determination to put the best face on things is amusing and endearing. During their harrowing escape to Switzerland, she is able to laugh at how silly Frederic looks clutching the inside-out umbrella as he uses it as a sail. Even as she is dying in childbirth, she maintains a sense of humor: "I'm a fool about the gas. It's wonderful." Yet she is hardly a blind romantic retiring from the world at large for reasons of weakness or incompetence; it simply is not her show anymore. Indeed, Catherine's intelligence, resourcefulness and ability to cope in the social world place her in the category of confident and competent characters. We have noted that though A Farewell to Arms was finished long before Hemingway's last two marriages, the virtues of Hemingway's wives (Hadley's faith, Pauline's maturity, Martha's courage and intelligence, Mary's adaptability and bravery) are concentrated in Catherine. In fact, she serves as the epitome of the Hemingway wives. Rena Sanderson observes that "Catherine emerges as a modern, independent young woman-quite possibly Hemingway's definition, at the time, of the ideal woman. Essentially, she is an improved-actually more modern-version [of] Brett" (Sanderson; 2000; p. 180). She is sexually liberated, yet monogamous and faithful; her ethical and moral standards are much more orthodox. Apart from those qualities, she is self-reliant and competent, ready and qualified to run away with the man she loves and to help him domesticate the world of his wishful dreams.
Conclusion
Hemingway is well known as an expert in depicting the male sphere. Yet he also displayed a keen female consciousness, made unremitting efforts to portray the female sphere, and created many respect-worthy female characters. His attitude toward them is positive: for those who are strong and mature, he expresses the greatest esteem; to the New Woman, he shows his appreciation; and to those with great courage, intelligence and loyalty, he conveys his admiration. His female characters somewhat resemble his "code hero", with emphasis on courage and perseverance in the face of inevitable defeat and death.
Nevertheless, Hemingway has his limitations and inadequacies in the characterization of female characters, which may affect our understanding of his female consciousness. Some of his female characters are so physically and spiritually alike that they are almost identical. In addition, with more and more women playing an important role in almost every field, the role of women has been changing rapidly, and women have a variety of occupations to choose from. But Hemingway did not notice that: women in his books usually take up traditional jobs, and some of them are unemployed, which inevitably leads to their lack of financial and psychological independence. These points display his limited understanding of women. Furthermore, an occasional ambivalence and perplexity in his attitude toward female characters is revealed.
However, taking a sweeping view of Hemingway's life and work, it is evident that the mainstream of his female consciousness is positive. Basically, his attitude toward women is fair and just. | 2021-07-27T00:05:39.491Z | 2021-05-25T00:00:00.000 | {
"year": 2021,
"sha1": "7ed6539046efab8c927555e38fb33616c4a90e61",
"oa_license": "CCBYNC",
"oa_url": "https://research-advances.org/index.php/IJEMS/article/download/1601/1278",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bc6fef71a4f2e873397924727a954f1eb2de5fb4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Philosophy"
]
} |
1555401 | pes2o/s2orc | v3-fos-license | Leukocyte TRP channel gene expressions in patients with non-valvular atrial fibrillation
Atrial fibrillation (AF) is the most common arrhythmia in clinical practice and is a major cause of morbidity and mortality. The upregulation of TRP channels is believed to mediate the progression of electrical remodelling and the arrhythmogenesis of the diseased heart. However, there are limited data about the contribution of TRP channels to the development of AF. The aim of this study was to investigate leukocyte TRP channel gene expressions in non-valvular atrial fibrillation (NVAF) patients. The study included 47 NVAF patients and 47 sex- and age-matched controls. mRNA was extracted from blood samples, and real-time polymerase chain reaction was performed for gene expression analysis using a dynamic array system. The low levels of TRP channel expression seen in the controls were markedly potentiated in the NVAF group. We observed marked increases in MCOLN1 (TRPML1), MCOLN2 (TRPML2), MCOLN3 (TRPML3), TRPA1, TRPM1, TRPM2, TRPM3, TRPM4, TRPM5, TRPM6, TRPM7, TRPM8, TRPC1, TRPC2, TRPC3, TRPC4, TRPC5, TRPC6, TRPC7, TRPV1, TRPV2, TRPV3, TRPV4, TRPV5, TRPV6, and PKD2 (TRPP2) gene expressions in NVAF patients (P < 0.05). However, there was no change in PKD1 (TRPP1) gene expression. This is the first study to provide evidence that elevated gene expressions of TRP channels are associated with the pathogenesis of NVAF.
Atrial fibrillation (AF) is the most common arrhythmic disorder and is associated with increased risks of stroke, heart failure, dementia and cardiovascular mortality [1]. The prevalence of AF increases with age, and it is a growing public health problem [2]. The pathophysiology of AF is a complex process involving structural alterations in the atrium and electrophysiological abnormalities. Atrial fibrosis and inflammation make the atrial tissue a substrate prone to AF [3]. Local ectopic firing and multiple wavelets propagating in atrial tissue can initiate and maintain AF [4,5]. The etiology of AF involves a complex interaction of environmental factors with genetic factors [6,7]. Because the utility of conventional antiarrhythmic agents that target cardiac ion channels is limited by inefficacy and side effects, new treatment strategies are required [8,9]. Altered Ca2+ handling is a crucial process in AF pathophysiology and may be a target for antiarrhythmic therapy [10,11].
Transient receptor potential (TRP) channels comprise a large number of nonselective cation channels with variable degrees of Ca2+ permeability. The 28 mammalian TRP channel proteins can be grouped into six subfamilies based on protein sequence homology: TRPC (canonical), TRPM (melastatin), TRPV (vanilloid), TRPP (polycystin), TRPA (ankyrin), and TRPML (mucolipin) [12,13]. The majority of these TRP channels are expressed in different cell types, including both excitable and nonexcitable cells of the cardiovascular system. TRP channels are not voltage gated but are activated by a variety of stimuli including pressure, shear stress, mechanical stretch, oxidative stress, membrane-receptor stimulation, hypertrophic signals, inflammation products, and thermal or sensory stimuli [12,13]. All functionally characterized TRP channels are permeable to calcium except the monovalent cation-selective TRPM4 and TRPM5 [12,13]. TRP channels also contribute to endothelial cell apoptosis and to cardiac fibrosis via fibroblast differentiation [13,14]. Accumulating studies have revealed that TRP subfamilies are involved in the differentiation of cardiac fibroblasts in most cardiac diseases and in atrial electrical remodeling in AF patients [15-17]. In cardiac myocyte and experimental studies, several TRP channels have been shown to be involved in arrhythmogenesis [13]. However, which TRP channels participate in AF is not exactly known in humans. In this study, we aimed to investigate whether peripheral leukocyte TRP channel gene expressions are associated with the development of nonvalvular atrial fibrillation (NVAF), as a reflection of inflammatory status.
Materials and Methods
Patients. A total of 47 NVAF patients followed up in Gaziantep 25 Aralik State Hospital were enrolled in this study. All of the patients had NVAF on surface electrocardiogram. Exclusion criteria were valvular heart disease, heart failure, coronary artery disease, peripheral artery disease, diabetes mellitus, thyroid disorder, kidney failure, autoimmune disorder, pregnancy and cancer. Patients who had undergone any cardiac intervention or an ablation procedure for AF management were also excluded. A total of 47 sex- and age-matched controls were recruited; the control group consisted of healthy individuals with no history of AF or cardiac arrhythmias. Hypertension was defined as systolic blood pressure >140 mm Hg and diastolic blood pressure >90 mm Hg, in a sitting position, on ≥3 different occasions. Dyslipidemia was defined according to the third report of the National Cholesterol Education Program [18]. Subjects stopped taking medications for at least 12 h prior to venous blood sample collection. All blood samples were obtained between 9:00 and 10:00 AM. Medications used by the patients are given in Table 1.

cDNA Synthesis and Gene Expression. mRNA was isolated from leukocytes by using β-mercaptoethanol and stored at −80 °C until use. cDNA was produced with the Qiagen miScript Reverse Transcription Kit according to the manufacturer's protocol. PCR was performed on a BioMark HD system (Fluidigm, South San Francisco, CA, USA) with TRP channel primers and β-actin (ACTB, housekeeping gene). We screened 26 TRP channel genes [TRPA1, TRPC1-7, TRPM1-8, TRPV1-6, MCOLN1-3 (TRPML1-3), and PKD2 (TRPP2)] and PKD1 (TRPP1) for this expression study. Data were analyzed using the 2^(−ΔΔCt) method, according to the formula ΔCt = Ct(TRP) − Ct(ACTB), where Ct is the threshold cycle.
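As a concrete illustration of the 2^(−ΔΔCt) calculation described above, the following minimal Python sketch computes the relative expression (fold change) of one TRP gene against ACTB; the Ct values and replicate counts are hypothetical stand-ins, not data from this study.

```python
import numpy as np

def fold_change(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    dCt = Ct(target) - Ct(housekeeping); ddCt = mean dCt(case) - mean dCt(control).
    """
    d_ct_case = np.asarray(ct_target_case) - np.asarray(ct_ref_case)
    d_ct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    dd_ct = d_ct_case.mean() - d_ct_ctrl.mean()
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for one TRP gene vs. beta-actin (ACTB)
fc = fold_change([26.1, 25.8, 26.4], [17.2, 17.0, 17.3],   # NVAF patients
                 [28.9, 29.2, 28.7], [17.1, 17.3, 17.0])   # controls
print(f"fold change (NVAF vs. control): {fc:.2f}")
```

A fold change greater than 1 indicates upregulation in the NVAF group relative to controls.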
Statistical analyses.
Results are expressed as the mean ± SD, SEM or percentage. For comparisons of the differences between mean values of two groups, the unpaired Student's t test was used. The chi-square test was used to assess the significance of differences in categorical data. The gene expression analysis was performed using the online program QIAGEN GeneGlobe (http://www.qiagen.com/geneglobe). Student's t test was used to compare gene expression data. Statistical analysis was performed using GraphPad Instat version 3.05 (GraphPad Software Inc., San Diego, CA, USA). All probability values were based on two-tailed tests. P values less than 0.05 were considered statistically significant.
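For readers reproducing these comparisons in an open-source environment, a minimal sketch of the two named tests is shown below using SciPy rather than GraphPad Instat; the group arrays and the 2x2 table are hypothetical placeholders.

```python
from scipy import stats

# Hypothetical continuous variable (e.g., a gene's dCt) per group
nvaf = [1.8, 2.1, 1.6, 2.4, 1.9]
ctrl = [0.9, 1.2, 1.0, 1.1, 0.8]
t, p = stats.ttest_ind(nvaf, ctrl)            # unpaired, two-tailed
print(f"t = {t:.2f}, P = {p:.4f}")

# Hypothetical 2x2 table: smokers vs. non-smokers in each group
table = [[12, 35],   # NVAF: smokers, non-smokers
         [10, 37]]   # controls
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")
```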
Results
Demographic and clinical characteristics of the study population are presented in Table 1, along with the prevalence of cardiovascular risk factors, including hypertension, lipid profiles, smoking, and body mass index, for the control and NVAF groups. The NVAF group was similar to the controls in average age, gender distribution, percentage of smokers, and BMI. Blood pressure, total cholesterol, LDL cholesterol, and TG levels were all greater among NVAF subjects, whereas there was no marked difference in HDL cholesterol levels between the groups (Table 1). Eight (17.0%) patients had hypertension, and 6 (12.8%) had dyslipidemia. While about two-thirds of the patients, 61.7% (n = 29), were on a single medication, 34.0% (n = 16) were on two drugs. All the TRP genes studied were upregulated in leukocytes of NVAF patients. Gene expression analysis showed that MCOLN1 (TRPML1), MCOLN2 (TRPML2), and MCOLN3 (TRPML3) mRNA contents in leukocytes were augmented in NVAF patients compared with the controls (P < 0.05, Fig. 1). There were also elevations in TRPM8, TRPC6, TRPV5, TRPV4, TRPA1, TRPC3, TRPV6, TRPC4, and TRPV2 gene expressions in NVAF patients (P < 0.05, Fig. 2). Higher TRPM3, PKD2 (TRPP2), TRPM7, TRPV1, TRPM6, TRPV3, TRPM5, TRPM4, TRPM1, TRPC7, TRPC2, TRPC1, TRPM2, and TRPC5 gene expressions were detected in NVAF patients (P < 0.05, Figs 3 and 4). However, PKD1 gene expression was not changed in patients with NVAF compared with the controls (Fig. 4).
In the present study, marked increases in TRP channel [including PKD2 (TRPP2)] gene expressions in circulating leukocytes were observed in patients with NVAF. To the best of our knowledge, this is the first study to investigate TRP channel gene expressions in relation to NVAF.
Several TRP channels are functionally expressed in immune cells, including lymphocytes, monocytes and macrophages [19,20], and on leukocytes more broadly, but the functional roles of these channels there are still unclear.
Expressions of TRPM4, TRPV5 and TRPV6 in normal human leukocytes were not detected by Northern blot analysis [21-23]. Additionally, Northern blot analysis detected only a faint signal for TRPM6 in leukocytes [24]. However, we were able to detect the expression of these channels in our real-time PCR assay. In the present study, TRPV5 showed the highest expression, whereas TRPV3 demonstrated the lowest, among the TRPV genes in leukocytes. Our findings are in agreement with data from Spinsanti et al. [25], who showed that TRPV3 was the least expressed gene among TRPV1-4 in human leukocytes.
TRPM4 is a monovalent nonselective cation channel permeable to Na+ and K+ but not to Ca2+ [26]. Atrial myocytes from Trpm4−/− mice display a shorter action potential [27]. TRPM7 knockdown suppresses endogenous TRPM7 currents and Ca2+ influx in atrial fibroblasts and inhibits transforming growth factor-β1-induced fibroblast proliferation, differentiation, and collagen production [28]. Since fibrosis is one of the major detrimental factors for AF, TRPM7-mediated Ca2+ signals may play a pivotal role in fibroblast differentiation and fibrogenesis in human AF [28]. We found that both TRPM4 and TRPM7 gene expressions were markedly upregulated in leukocytes of the NVAF patients.
TRPM6 is suggested to be responsible for systemic Mg2+ homeostasis in humans [29]. Also, the function of TRPM7 is modulated by TRPM6, and the TRPM6 kinase may be involved in tuning the phenotype of the TRPM7/M6 channel complex [30]. We have noted significant augmentations in the other TRPM gene expressions in our study. Contributions of these channels to the genesis of AF are currently unknown in humans.
TRPC channels may play a key role in the regulation of cardiac pacemaking, conduction, ventricular activity, and contractility during cardiogenesis [31]. Overexpression of TRPC3 enhances store-operated calcium entry [32,33]. The increased TRPC3 gene expression observed in the present study, together with a consequent increase of calcium influx, may account for the leukocyte activation that has been described in patients with atrial fibrillation [34]. We detected increases in all TRPC channel gene expressions in leukocytes obtained from NVAF patients, but the significance of these upregulations is unknown.
TRPV1 is found to be expressed on the endoplasmic reticulum/sarcoplasmic reticulum and the mitochondria [35]; therefore, intracellular TRPV1 may control calcium levels both inside the organelles and in the cytoplasm. TRPV1 is involved in the systemic inflammatory response, including phagocytosis by macrophages, nitric oxide and reactive oxygen species (ROS) production, and cytokine production [36]. Regulation of the relative expression levels of TRPV5 and/or TRPV6 may affect Ca2+ transport kinetics and Ca2+-dependent functions, such as proliferation and differentiation, in lymphocytes [37]. We detected marked increases in all TRPV channel gene expressions in leukocytes of the NVAF patients.
Our study is the first to show significant TRPML mRNA expressions in leukocytes of NVAF patients, and the expression level of the MCOLN1 (TRPML1) gene was found to be high. TRPML1, a Ca2+-permeable non-selective cation channel that localizes to late endosomes and lysosomes [38,39], is also activated by ROS in vitro to regulate autophagy [40]. TRPML1 appears to be ubiquitously expressed, but it is not known to be specifically involved in AF.
Deletion of Pkd2 (Trpp2) in mice can cause abnormal heart development [13]. PKD2-related proteins form Ca2+-permeable channels with PKD1, an 11-transmembrane protein also known as TRPP1, although it is not a TRP protein. PKD1 is thought to interact with TRPP2, which functions as a receptor for mechanical stimuli such as shear stress [41]. Moreover, PKD1 and TRPP2 can interact with and amplify Ca2+ release from inositol trisphosphate receptors in the endoplasmic reticulum [42]. Although we observed an increase in PKD2 (TRPP2) gene expression, no change was noted for PKD1 (TRPP1) in this study. However, the contribution of PKD2 (TRPP2) to the pathogenesis of AF has not been studied yet.
Current data suggest that inflammation is associated with the development of AF [34,43-45]. Indeed, the white blood cell count is significantly higher in patients with AF, and it is significantly and independently associated with AF [46]. Leukocyte activation is regarded as having a critical role in the pathogenesis of AF [34], and there is evidence that atrial neutrophil infiltration is enhanced in atrial appendage sections of patients with persistent AF [47]. The physiological role of TRP genes in human peripheral blood cells has yet to be determined, but it has been hypothesized that, under pathological conditions, their upregulation may be an indicator of inflammation at a secondary site.
Accumulating evidence suggests oxidative stress may play an important role in the induction and maintenance of AF [43]. Oxidation of the type 2 ryanodine receptor by mitochondria-derived ROS in atrial myocytes leads to increased sarcoplasmic reticulum Ca2+ leak, contributing to the pathogenesis of AF [44]. Furthermore, serum oxidative stress marker levels are elevated in patients with AF [45]. Modification of cysteine residues by ROS has been shown to alter the activity of TRP channels [48,49]. There is evidence that TRPC5, TRPA1, TRPV1, TRPV3, and TRPV4 channels are modulated by covalent modification of cysteine residues by ROS and/or reactive nitrogen species (RNS) [48-50]. TRPA1, TRPV1 and TRPC5 channels are directly activated by oxidizing agents through cysteine modification, whereas the TRPM2 channel is indirectly activated through production of ADP-ribose [49]. TRPM7 overexpression can enhance levels of ROS and nitric oxide [51]. TRPM2, TRPM7, TRPC5, and TRPV1 are activated by ROS and RNS [48], and TRPM7 can contribute to hydrogen peroxide-induced cardiac fibrosis [52]. Wuensch et al. [53] showed that oxidative stress increases the expression of both TRPC3 and TRPC6 mRNA in human monocytes. Collectively, these data may imply that TRP channels are involved in the pathogenesis of AF through activated leukocytes, which promote the inflammatory or immune cascade.
The gene expression profiles in this study were measured in isolated leukocytes, whereas the ideal tissue for studying AF is the left atrium; this is the main limitation of the study. However, it should be pointed out that this group of patients with NVAF had no indication for cardiac operation. Additionally, Lin et al. [54] examined the association of whole-blood gene expression with AF in a large community-based cohort and identified seven genes statistically significantly upregulated with prevalent AF. Raman et al. [55] also evaluated peripheral blood gene expression in patients with persistent AF who underwent electrical cardioversion. In a recent study, peripheral monocyte toll-like receptor (TLR) expression levels were investigated, and higher levels of TLR-2 and TLR-4 expression were detected in patients with AF [56]. The increased leukocyte TRP gene expressions observed in this study should also be examined at the cardiac level.
In conclusion, the results of the present study revealed that TRP channel gene expressions are upregulated in leukocytes of NVAF patients; in particular, our findings showed that TRPML genes are strongly expressed in NVAF patients. These data imply that TRP channels may be effective targets for the prevention or prophylaxis of AF, and the findings of this study may lead to the development of more effective approaches for the treatment of AF. Further investigation is needed to understand the role of TRP channels in AF. | 2018-04-03T05:47:34.565Z | 2017-08-24T00:00:00.000 | {
"year": 2017,
"sha1": "1c0343b57a3035c1e8f9a9f0ed26cf886928be24",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1038/s41598-017-10039-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0490b8f98e77a32b783d41c1a815f0ecd38020d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221761510 | pes2o/s2orc | v3-fos-license | Nanomechanics of Antimonene Allotropes
Monolayer antimonene has drawn the attention of research communities due to its promising physical properties, but the mechanical properties of antimonene are still largely unexplored. In this work, we investigate the mechanical properties and fracture mechanisms of two stable phases of monolayer antimonene -- the ${\alpha}$ antimonene (${\alpha}$-Sb) and the ${\beta}$ antimonene (${\beta}$-Sb) -- through molecular dynamics (MD) simulations. Our simulations reveal that a stronger chiral effect results in greater anisotropic elastic behavior in ${\beta}$-antimonene than in ${\alpha}$-antimonene. In this paper we focus on the crack-tip stress distribution using a local volume-averaged virial stress definition and derive the fracture toughness from the crack-line stress. The calculated crack-tip stress distribution supports the applicability of linear elastic fracture mechanics (LEFM) to cracked antimonene allotropes with considerable accuracy, up to nearly pristine structures. We evaluate the effects of temperature, strain rate, crack length and point-defect concentration on the strength and elastic properties. The tensile strength degrades significantly with increasing temperature, crack length and defect percentage. The elastic modulus is less susceptible to temperature variation but is strongly affected by the defect concentration. The strain rate induces a power-law relation between strength and fracture strain. Finally, we discuss the fracture mechanisms in the light of crack propagation and establish the links between the fracture mechanism and the observed anisotropic properties.
INTRODUCTION
2D materials have attracted wide attention in this era of nanomaterial research due to their unique properties. Since the advent of graphene, researchers have made great progress in predicting and synthesizing other two-dimensional materials, for instance, Group IVA (tetragen) elemental analogues like silicene, [1] germanene, [2] and hexagonal boron nitride (h-BN); [3] Group VA (pnictogen) elements such as phosphorene, [4] arsenene, [5] antimonene, [6] and bismuthene; [7] and monolayer TMDs like MoS2, [8] MoSe2, [9] WS2, [10] TiS2, [11] InSe, [12] etc. Despite being mechanically superelastic [13] and exceptionally conductive, [14] graphene falters in applications where a certain amount of band gap is mandatory. TMDs and pnictogens dominate, and hold further promise, with regard to this requirement in many electrical and optoelectronic nanodevices. [15-18] As soon as the 2D form of phosphorus (black phosphorus/phosphorene) emerged as a field-effect transistor, [19] it took very little time to verify that all other puckered Group VA elements also have stable, freestanding 2D structures (termed nitrogene, phosphorene, arsenene, antimonene, and bismuthene). Among these pnictogens, antimonene has attracted particular prominence because of several distinct features: a fundamental band gap of 0.3-2.28 eV in the semiconducting antimonene monolayer, [20-22] which can be tuned by varying its chirality and width [23] and by applying tensile strain; [24] high stability in ambient conditions [25] with almost no sign of deterioration over months, [26] in blatant disparity to its predecessor, phosphorene; the capability of its single layer to produce binary compounds [27] that demonstrate topologically non-trivial characteristics under certain conditions; and the ability to absorb a wide range of wavelengths with high carrier mobility; [21] and so on.
Antimonene monolayers have already been successfully synthesized from bulk antimony by mechanical isolation of few-layer antimonene flakes, [26] epitaxial growth on PdTe2 [25] and on a Ge substrate, [33] and liquid-phase exfoliation. [34] Remarkably, flat antimonene has also been synthesized very recently on an Ag substrate. [35] However, while there are plenty of works on the electrical, optical and magnetic properties of monolayer antimonene, there are very few on its mechanical properties, [23,36,37] so further inquiry into the mechanical characteristics of single-layer antimonene (SLSb) is wanting. There are also studies reporting the modulation of not only electronic but also magnetic properties by inducing defects and tensile strain, which underscores the necessity of determining the mechanical properties and fracture mechanism of the SLSb nanosheet at the atomic scale. [38-40] Although, from a theoretical standpoint, first-principles studies may reveal the basic structural characteristics of a nanomaterial using only a few sets of atoms, [41] MD investigation is essential to gain meticulous insight into the fracture mechanism of a nanosheet containing many atoms. To our best knowledge, there is hardly any MD study on SLSb that explores its mechanical properties exhaustively.
In this work, we investigated the mechanical characteristics of the monolayer Sb sheet for its two different structures (α and β), varying the temperature, strain rate, crack length and defect concentration from 1 K to 500 K, 10^8 s^-1 to 10^10 s^-1, 0 to 110 Å, and 0% to 5%, respectively. This paper focuses on calculating the stress distribution at critical points, e.g., the crack tip, using the local volume-averaged virial stress. The derived stress distribution is used to assess the applicability of LEFM at the nanoscale and to derive the fracture toughness at different crack lengths.
Although this strain-rate range is quite high compared to real-life scenarios, it is common in MD simulations to qualitatively assess the impact of strain rate at this order. [42] While we changed the temperature, we kept the strain rate constant (10^9 s^-1), and vice versa for fixed temperature (300 K). We kept both the temperature (300 K) and the strain rate (10^9 s^-1) constant while altering the defect concentration. Finally, we demonstrate in detail the fracture mechanisms of pristine and defected Sb nanosheets for both structures considered here.
METHODOLOGY
Monolayer antimonene nanosheets (20 nm x 20 nm) for both α-Sb and β-Sb are generated with a MATLAB [43] script and Atomsk [44] codes. The lattice parameters for Sb are as follows: lattice constant and bond length of α-Sb, 4.12 Å and 2.89 Å, respectively; a1, a2, a3 for β-Sb, 4.73 Å, 4.36 Å, and 11.11 Å, respectively. [45] In order to study the influence of defect concentration on the mechanical behavior of SLSb, sheets with 1% to 5% point defects are generated by deleting atoms from random sites. Four sheets with different defect arrangements are simulated for each defect concentration to obtain statistically sound results. We apply gradually increasing uniaxial tensile strain to the antimonene sheet and record the corresponding virial stress response. OVITO software [46] is used to visualize the fracture mechanism of monolayer Sb.
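As an illustration of the random-site atom deletion described above, the sketch below removes a prescribed fraction of atoms from a coordinate array; the coordinates, counts, and seeds are placeholder assumptions rather than the actual MATLAB/Atomsk scripts used in this work. Running it with different seeds yields independent defect arrangements, as required for the four replicate sheets per concentration.

```python
import numpy as np

def make_vacancies(coords, defect_fraction, seed=0):
    """Delete a random subset of atoms to reach the target vacancy concentration."""
    rng = np.random.default_rng(seed)
    n = len(coords)
    n_remove = int(round(defect_fraction * n))
    keep = np.ones(n, dtype=bool)
    keep[rng.choice(n, size=n_remove, replace=False)] = False
    return coords[keep]

# Hypothetical pristine sheet: stand-in atomic positions for a 20 nm x 20 nm sheet
pristine = np.random.rand(12800, 3) * [200.0, 200.0, 0.0]   # Angstrom
defected = make_vacancies(pristine, defect_fraction=0.02, seed=1)
print(len(pristine), "->", len(defected), "atoms (2% vacancies)")
```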
The LAMMPS [47] simulation package is employed to execute all the simulations. We apply periodic boundary conditions in the lateral directions (X and Y) to avoid finite-size effects, and a non-periodic boundary condition in the out-of-plane direction to avoid interaction between periodic images. The conjugate gradient (CG) scheme is used for energy minimization.
The structure is then relaxed for 100 picoseconds (ps) in an isobaric-isothermal (NPT) ensemble with slow damping of pressure (0.05 ps) and temperature (0.5 ps). A time step of 1 fs is used to ensure proper convergence. The stress-strain relationship is obtained by straining the simulation box uniaxially and computing the average stress over the structure, with the virial stress taken as the basis of the average. The virial stress is given by

$$\boldsymbol{\sigma} = \frac{1}{\Omega}\sum_{i}\left(\frac{1}{2}\sum_{j\neq i}\mathbf{r}_{ij}\otimes\mathbf{f}_{ij} - m_i\,\dot{\mathbf{u}}_i\otimes\dot{\mathbf{u}}_i\right),$$

where the summation runs over all the atoms occupying the total volume $\Omega$, $\otimes$ specifies the tensor (dyadic) product, $m_i$ is the mass of atom $i$, $\mathbf{r}_{ij}$ is the position vector between atoms $i$ and $j$, $\mathbf{f}_{ij}$ is the corresponding interatomic force, and $\dot{\mathbf{u}}_i$ is the time derivative of the displacement of an atom with respect to a reference point. The engineering strain is defined as $\varepsilon = (l - l^{0})/l^{0}$, where $l^{0}$ is the undeformed length of the box and $l$ is the instantaneous length.
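A minimal post-processing sketch of these two definitions follows: it averages per-atom virials into a global stress and converts box lengths into engineering strain. The array shapes and the per-atom virial source (e.g., a LAMMPS compute stress/atom dump, which reports stress*volume with its own sign convention) are assumptions for illustration.

```python
import numpy as np

def average_virial_stress(per_atom_virial, volume):
    """Global virial stress (6 Voigt components) from per-atom virials.

    per_atom_virial: (N, 6) array of per-atom stress*volume contributions,
    summed over all atoms and divided by the total occupied volume.
    """
    return per_atom_virial.sum(axis=0) / volume

def engineering_strain(l, l0):
    """Engineering strain from instantaneous and undeformed box lengths."""
    return (l - l0) / l0

# Hypothetical frame: 12800 atoms, box stretched from 200 A to 210 A
virials = np.random.randn(12800, 6)              # stand-in per-atom virials
sigma = average_virial_stress(virials, volume=200.0 * 200.0 * 10.0)
print(engineering_strain(210.0, 200.0), sigma[0])
```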
We utilized the recently developed Stillinger-Weber (SW) potential of Jiang et al. [45] to describe the interatomic interactions in this study. The SW potential includes a two-body term and a three-body term characterizing bond stretching and bond breaking, respectively:

$$V_2(r) = A\,e^{\rho/(r - r_{\max})}\left(\frac{B}{r^{4}} - 1\right),$$

$$V_3 = K\,e^{\rho_1/(r_{12} - r_{\max,12}) + \rho_2/(r_{13} - r_{\max,13})}\left(\cos\theta - \cos\theta_0\right)^{2}.$$

Here, $V_2$ is the two-body bond-stretching term and $V_3$ is the angle-bending term. The quantities $r_{\max}$, $r_{\max,ij}$, and $r_{\max,ik}$ are the cutoffs, and the angle between two bonds in the equilibrium configuration is symbolized by $\theta_0$. A and K are energy-related parameters established from the Valence Force Field (VFF) model; B, ρ, ρ1, and ρ2 are fitted coefficients. These parameters and their corresponding values can be found in ref. [45]. For the current SW potential parameters, we found a negative out-of-plane Poisson's ratio at strains larger than 15% (Figure S1).
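To make the two-body functional form concrete, the sketch below evaluates V2 near the α-Sb bond length; the numerical parameters are placeholders, not the fitted antimonene values tabulated in ref. [45].

```python
import math

def sw_two_body(r, A, B, rho, r_max):
    """Jiang-style SW two-body term: A*exp(rho/(r - r_max))*(B/r**4 - 1).

    Returns 0 at and beyond the cutoff, where the exponential smoothly vanishes.
    """
    if r >= r_max:
        return 0.0
    return A * math.exp(rho / (r - r_max)) * (B / r**4 - 1.0)

# Placeholder parameters (NOT the fitted antimonene values from ref. [45])
A, B, rho, r_max = 5.0, 60.0, 0.5, 3.8   # eV, A^4, A, A
for r in (2.7, 2.89, 3.2, 3.6):          # around the alpha-Sb bond length
    print(r, sw_two_body(r, A, B, rho, r_max))
```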
METHOD VALIDATION
Stress-strain curves at 1 K for pristine α-Sb and β-Sb are presented in Figure 2. For validation purposes, Table 1 compares their intrinsic mechanical properties with the literature. From the comparison, it is evident that the results predicted in this study are in good agreement with the existing literature [45].
Effect of Temperature
In real-life applications, 2D materials run a frequent risk of exposure to high temperature due to their particularly small dimensions; a small current can easily generate a large temperature rise through Joule heating. [48,49] Deterioration of physical properties is often associated with exposure to high temperature, so it is essential to examine the impact of temperature on the material properties. Here, we investigated the influence of temperature on the stress-strain relationships (Figure 3) to determine the elastic properties of the two monolayer Sb allotropes. α and β antimonene monolayer sheets were equilibrated at temperatures ranging from 1 K to 500 K and strained at a constant strain rate of 10^9 s^-1 separately along the armchair and zigzag directions.
Rising temperature generally causes a significant worsening of many material parameters (e.g., ultimate strength, elastic modulus, and failure strain) regardless of chirality and structural differences. We simulated the mechanical properties of α-Sb and β-Sb at different temperatures; the comparison is presented in Figure 4. A small linear decrease of the elastic modulus with temperature is observed (Figure 4a(iii), 4b(iii)). Such material softening is associated with a small thermal expansion arising from the anharmonic contribution of the potential function.
Elevated temperature also facilitates thermal vibrational instabilities, and this unsteadiness raises the possibility of some bonds exceeding the critical bond length and instigating rapid failure. Moreover, higher temperature engenders higher entropy in the material and expedites crack propagation, which also weakens the material. [50,51] We compared the changes of elastic modulus, ultimate tensile strength (UTS) and fracture strain for the α and β structures under loading along both the armchair and zigzag directions. α-Sb possesses a higher UTS and elastic modulus than β-Sb, whereas β-Sb is softer and shows a larger fracture strain in the armchair direction. Bonds nearly perpendicular to the XY plane in the β-antimonene sheet become inclined when the sheet undergoes armchair tension; thus, stress does not accumulate as much as in the α structure, and the sheet endures more before failure. Consequently, the fracture stress is lower and the fracture strain is higher. In the case of zigzag loading, those bonds do not extend but instead hinder the lengthening of the zigzag bonds. Therefore, stress develops more readily and the sheet becomes harder; hence, the UTS and elastic modulus of the β-antimonene sheet improve while the fracture strain falls relative to the α structure.
We also measured the changes of UTS, fracture strain, and elastic modulus with temperature. In the α structure, they decline by almost 17.9%, 33.4%, and 6.9% (armchair) and 13%, 28.6%, and 22% (zigzag), respectively. In the β structure, the corresponding changes are 27.6%, 42%, and 9.6% (armchair) and 19.5%, 38.6%, and 4% (zigzag) as the temperature rises from 1 K to 500 K. Measuring and fitting the numerical values, we also suggest a linear relationship between the elastic modulus and temperature of the form $E(T) = E_0 - \kappa T$, where $E_0$ is the modulus extrapolated to 0 K and $\kappa$ is a softening coefficient fitted separately for each structure and loading direction.
Effect of Strain Rate
Our MD simulations are performed at a strain rate of ~10^9 s^-1, which is a few orders of magnitude larger than experimentally practiced strain rates. To demonstrate the strain-rate sensitivity of the mechanical properties, we performed uniaxial tension simulations on the monolayer Sb nanosheet at strain rates ranging from 10^8 s^-1 to 10^10 s^-1 (Figure 5). The SLSb sheet is less sensitive to the strain rate than to temperature or defect concentration. While the Young's modulus of the material remains virtually unaffected, the ultimate tensile strength under armchair and zigzag tension increases by about 1.5% and 3.7% for α-antimonene and 5.9% and 4.9% for β-antimonene, respectively, on shifting the strain rate from 10^8 s^-1 to 10^10 s^-1. This implies that the greater the strain rate, the higher the strength: at higher strain rates, the time available for the material to respond and relax is shorter, so atomic thermal fluctuations are not permitted to pass over the energy barrier to break bonds, and bond rearrangement, vacancy growth, and crack propagation are suppressed. Consequently, a higher strain rate raises the fracture stress, and vice versa. [52-54] These factors also hold for the SLSb sheet. Figure 6 illustrates the higher fracture stress at higher strain rates on a logarithmic scale.
We evaluated the sensitivity of the ultimate stress to the strain rate through the power-law relation [52]

$$\sigma_f = C\,\dot{\varepsilon}^{\,m}, \qquad (10)$$

where $\dot{\varepsilon}$ is the strain rate, C is a constant, and $\sigma_f$ and m denote the fracture strength and the strain-rate sensitivity, respectively. On a logarithmic scale the equation can be written as

$$\log_{10}\sigma_f = m\,\log_{10}\dot{\varepsilon} + \log_{10}C. \qquad (11)$$

Estimating the slope m from the linearly fitted data in Figure 6, we propose, for the monolayer α-antimonene sheet:

in the armchair direction: y = 0.0041x + 1.7892, (12)
in the zigzag direction: y = 0.0075x + 1.6866, (13)

and for the β-antimonene sheet:

in the armchair direction: y = 0.0138x + 0.8980, (14)

where y = log10(σf) and x = log10(ε̇).
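A minimal sketch of such a log-log fit is given below; the strain-rate/strength pairs are hypothetical stand-ins for the simulation data of Figure 6.

```python
import numpy as np

# Hypothetical (strain rate [1/s], fracture strength [GPa]) pairs
rates    = np.array([1e8, 1e9, 1e10])
strength = np.array([7.70, 7.78, 7.85])

# Fit log10(sigma_f) = m*log10(rate) + log10(C)
m, logC = np.polyfit(np.log10(rates), np.log10(strength), 1)
print(f"strain-rate sensitivity m = {m:.4f}, C = {10**logC:.3f} GPa")
```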
Crack-Tip Stress Field
Vacancies and extended defects, viz. cracks, act as stress raisers. Here we discuss the stress field generated at the crack tip in the light of molecular dynamics and continuum mechanical models. For brittle materials like antimonene, the stress is almost entirely concentrated at the tip atoms; thus, a very small fracture process zone is perceived. The near-tip solution of continuum fracture mechanics gives the following relation:

$$\sigma_{ij}(r,\theta) = \frac{K_I}{\sqrt{2\pi r}}\,f_{ij}(\theta),$$

so that along the crack line ($\theta = 0$) the opening stress reduces to $\sigma_{yy} = K_I/\sqrt{2\pi r}$, where $K_I$ is the mode-I stress intensity factor and r is the distance ahead of the crack tip. We note that, in this paper, the local volume-averaged virial stress is calculated from the per-atom virial and the local Voronoi volume with a unit nanometer thickness. The per-atom virial was calculated as described in the original paper. [55] Voronoi volumes are calculated by tracing the Voronoi vertices for each atom from the atomic trajectories every few time steps (Supplementary Fig. S2).
Previously, a Voronoi-equivalent volume was used to calculate the local virial stress for unit cells with cubic symmetry. [56] Thompson et al. [55] showed the equivalence of the atom-cell, per-atom and group virial formulas, and predicted that the per-atom virial tensor can effectively represent the contribution of atomic virials to the global virial stress. Figure 7(b) shows the stress distribution immediately before the crack-tip disruption, and Figure 8(c) shows a typical evolution of KI with strain. In Griffith's analysis, fracture toughness is a material constant. However, nanoscale cracks are prone to show deviations from the Griffith prediction, and we observe fluctuations in KIC at smaller crack lengths. Some authors attribute this behavior to crack-tip plastic deformation, which is especially plausible for materials with doped foreign species that superimpose their own stress fields, deforming the tip plastically. [57] However, earlier in this section we observed that for this material, the crack-tip behavior is convincingly described without the need for plastic dissipation.
Thus, it is expected that the observed fluctuation of K_IC is due to the non-singular part of the stress field, i.e., higher-order terms in the expansion of the stress field that are dominant only at smaller crack lengths.
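A minimal sketch of how K_I can be estimated from the near-tip field: fit the opening stress sampled ahead of the tip to the leading term σ_yy = K_I/√(2πr). The sampled radii and stresses below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical opening stresses sigma_yy (GPa) sampled at distances r (nm)
# directly ahead of the crack tip (theta = 0), e.g., from the local
# volume-averaged virial stress field.
r = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
sigma_yy = np.array([4.1, 2.9, 2.4, 2.0, 1.8])

# sigma_yy = K_I / sqrt(2*pi*r): least-squares slope through the origin
x = 1.0 / np.sqrt(2.0 * np.pi * r)
K_I = (x @ sigma_yy) / (x @ x)
print(f"K_I ≈ {K_I:.2f} GPa·nm^0.5")
```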
Effect of Defect Concentration
In practice, harsh chemical environments during the manufacturing process and recurrent exposure to high temperature make the Sb nanosheet vulnerable to the growth and evolution of defects, which eventually degrades the material properties drastically. In many cases, such defects are unavoidable. However, they are sometimes intentionally introduced into nanostructures to achieve preferred properties, particularly electrical and optical properties, and even to enhance the mechanical stability of some nanomaterials. [38,58] To measure the susceptibility of the material properties to defect density, stress-strain curves for the mentioned SLSb structures have been constructed and are demonstrated in Figure 10. The figure suggests that a randomly dispersed and rising defect density can extensively degrade the material integrity. This is because vacancy defects always act as fracture initiation points, and the occupancy of defects at numerous locations of the sheet augments the possibility of nucleation events at those positions. [59] As a result, the material becomes brittle. [60,61] We also perceive a similar trend of fracture strength, fracture strain, and elastic modulus between armchair and zigzag directional loading of the α-antimonene and β-antimonene structures, as stated earlier in the effect-of-temperature section: in the armchair direction, the Young's modulus of the α structure and the fracture strain of β remain higher, while in the zigzag direction the Young's modulus of β and the fracture strain of α remain higher. The comparison is demonstrated in Figure 11.
Fracture Mechanism
To demonstrate the fracture mechanism of the α-antimonene and β-antimonene nanosheets, we generated a single vacancy by deleting an atom at the center of the sheet and applied tension along both the armchair and zigzag directions. In α-Sb, the bond that breaks first is perpendicular to the loading direction and tends to become almost parallel to the XY plane while undergoing tension. When the crack forms in the armchair direction, it faces four evenly inclined bonds (two at each side) situated at the crack tip, which offer two potential crack propagation paths (±60° with the X direction) (Figure 12). Hence, branching takes place during armchair-directional crack propagation (Figure 13). Nevertheless, when a zigzag crack forms, it confronts only two bonds at the crack tip, which are perpendicular to the crack; therefore, it does not generate any branch initially (Figure 12). [62] Yet, along its long propagation path it may encounter bonds weakened by vibrational instabilities in directions other than the perpendicular one; in that case, branching may occur farther along the zigzag-directional crack path as well. This phenomenon is also spotted here and is represented in Figure 13. In the β-Sb structure, some bonds are positioned nearly vertical to the XY plane. Thus, under armchair (X-axis) tension, such a bond tends to tilt down and ultimately breaks once it crosses its critical length. As these vertical bonds lie on the same line along the Y-axis, the broken bond engenders a straight path for the crack to follow (Figure 12); thus, almost no branching arises in the zigzag-directional crack propagation path. Under zigzag-direction (Y-axis) tension, by contrast, the zigzag bonds take the exerted load, so the vertical bond does not extend much (Figure 12); therefore, breaking occurs at the zigzag-directional bonds only, which causes branching. These phenomena are depicted in Figure 13. As a whole, as tensile straining proceeds, stress starts to build up in the structure.
Nonetheless, the stress does not develop homogeneously; rather, it concentrates mostly around the vacancy positions in the direction perpendicular to the loading, and nucleation occurs at those stress-concentrated areas. Following nucleation, the crack forms and propagates perpendicular to the loading direction (in the zigzag direction for armchair loading and vice versa).
CONCLUSIONS
In summary, we conducted molecular dynamics simulations to explore the mechanical properties and fracture behavior of two distinct structures, α and β, of pristine and defective SLSb sheets. We examined the effects of varying temperature, strain rate, crack length, and defect percentage on the structural properties of the material. We showed that Sb allotropes can be modeled using LEFM with reasonable accuracy. Our calculations show that fracture toughness governs the fracture for crack lengths of ~60 Å and above, below which the material strength governs the fracture. Increases in temperature and defect concentration not only deteriorate the fracture strength and strain but also diminish the material soundness and elastic modulus. On the contrary, escalation of the strain rate leads to a rise in fracture strength, while the Young's modulus remains the same. The bond vertical to the XY plane in β-antimonene plays a vital role in determining the anisotropy of the material. Finally, the fracture mechanism reveals that branching prevails under zigzag tension rather than armchair tension for both structures. | 2020-09-18T01:00:25.682Z | 2020-09-17T00:00:00.000 | {
"year": 2020,
"sha1": "efc45f2f296476b9c7f52fefaab29e8d1a5d5ab5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "efc45f2f296476b9c7f52fefaab29e8d1a5d5ab5",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
253031811 | pes2o/s2orc | v3-fos-license | Pregnancy outcomes among pregnant women infected with COVID-19 with and without underlying disease: A case-control study
Abstract Emerging infections have many effects on the health of pregnant mothers and their fetuses. Given the importance of coronavirus disease (COVID-19) during pregnancy, this study aims to evaluate pregnancy and fetal outcomes in pregnant women with COVID-19 on the basis of previous studies. To conduct this study, all studies related to the subject published during the years 2000–2021 were retrieved through a systematic search of internationally available databases, including Web of Science, Science Direct, Scopus, PubMed, and Google Scholar. Finally, 21 closely related studies were selected to investigate the main objective. The results showed that common symptoms of COVID-19 in pregnant women included fever, cough, and muscle aches. The most common laboratory findings were decreased blood lymphocytes and increased blood CRP. Pregnancy and childbirth outcomes in infected women included increased rates of preterm delivery and cesarean section. Based on the results of the reviewed studies, it can be concluded that newborns of mothers with COVID-19 were negative for COVID-19; however, the most common outcome for infants born to mothers with COVID-19 was low birth weight. Clinical signs, laboratory results, and radiographic criteria in pregnant women with COVID-19 are similar to those in non-infected adults. Nevertheless, it is recommended that precautions be taken to prevent transmission of the virus and that preventive health instructions, particularly masking, be followed.
Introduction
Coronavirus (CoV) is a single-stranded ribonucleic acid (RNA) virus of the family Coronaviridae, order Nidovirales, which generally causes respiratory and gastrointestinal infections that may range from mild illness to more severe disorders such as viral pneumonia with systemic involvement. In the last two decades, coronaviruses have been responsible for two major epidemics: severe acute respiratory syndrome (SARS), which killed 8098 people with a mortality rate of approximately 10.5%, and Middle East respiratory syndrome (MERS), which killed 2519 people. [1] After these two major epidemics, coronavirus disease 2019 (COVID-19) was first detected in December 2019 in Wuhan, Hubei Province, China. [2] The virus was initially named nCoV-2019, subsequently renamed SARS-CoV-2, and the associated disease was eventually named COVID-19. [3] In a pandemic of infectious diseases, pregnant women and their fetuses are considered high-risk populations. [2] Pregnant women are more susceptible to infectious diseases than the general population and are particularly vulnerable owing to lowered immune function. In addition, their upper respiratory tract mucosa is swollen owing to high levels of estrogen and progesterone, and lung capacity is limited; in this way, pregnant women are prone to such diseases. [2,[4][5][6] Also, changes in physiological adaptation during pregnancy (e.g., an elevated diaphragm, increased oxygen consumption, and edema of the mucous membranes of the respiratory tract) cause intolerance to hypoxia. [2,5,7] It should be noted that a potential cytokine storm due to infection may cause severe complications and even death in pregnant women. [5] These factors make pregnant women, their fetuses, and their newborns vulnerable to infectious diseases. [2] For example, the influenza pandemic of 1918 killed 2.6% of the general population, but in pregnant women the mortality rate was 37%. It was also reported that in the 2009 H1N1 influenza epidemic, pregnant women were four times more likely to be hospitalized than other patients. [2] In addition, during epidemics of infectious diseases, special attention should be paid to newborns, as infected infants may be asymptomatic or present with mild or severe symptoms. Body temperature in infected infants may rise, but temperature instability may also occur frequently in premature infants. The presence of tachypnea, apnea, and cough is very important in identifying infection in adults, but these symptoms may not be specific signs of the disease in infants. [8] In addition, other manifestations such as poor feeding, lethargy, vomiting, diarrhea, and bloating are common in any seriously ill infant; none of the above findings can be linked exclusively to a specific infection in infancy. [9] COVID-19 affects all age groups, and pregnant women may be more susceptible to the disease. It should be noted that COVID-19 may alter immune responses in both the mother and the fetus and affect the well-being of mothers and infants. [5] Pregnant women with SARS experienced worse outcomes than non-pregnant women of the same age. Spontaneous abortion has been reported in women infected with SARS and MERS in the first trimester of pregnancy.
Also, intrauterine growth restriction and preterm delivery have been reported in pregnancies affected by SARS and MERS in the second and third trimesters. In addition, neonatal intensive care, endotracheal intubation, renal failure, and maternal mortality have been reported in these patients. [3,6,10] However, no evidence of vertical transmission of SARS or MERS from mother to fetus has been reported. [3,11] Likewise, the studies by Yu et al. (2020) [4] and Schmid et al. (2020) [12] did not show any evidence of perinatal SARS or MERS infection in infants born to mothers who were infected during pregnancy. It is important to note that a number of pregnancies have had good outcomes despite maternal infection with SARS or MERS. [6] Given the importance of this topic, the main purpose of this study was to investigate the possible effects of COVID-19 on pregnant women and their infants.
Materials and Methods
The aim of this study was to investigate the possible effects of COVID-19 on pregnant women and their infants. For this purpose, systematic searches of internationally available databases, including Web of Science, Science Direct, Scopus, PubMed, and Google Scholar, were performed between 2020 and 2022. A systematic review was performed using the MeSH terms "Pregnancy outcomes," "Women," "Coronavirus disease," "COVID-19," "Newborn," "Infant," "Embryonic consequences," "Fetal consequences," "Mother," "Outcome," "Symptoms," "Patients," "Giving birth," "Childbirth," "Complications," "Preterm delivery," and other similar keywords. The same MeSH terms were used for the other databases. To ensure the completeness of the search, the references of the retrieved studies were evaluated (reference checking) to minimize the possibility of missing eligible studies, and citation tracking was also carried out. As shown in Figure 1, the literature search was conducted according to the PRISMA guidelines. [13] In addition, unofficial reports, articles in letter-to-the-editor format, unpublished articles, and content posted on Internet sites were excluded. Finally, the results of 21 published articles were investigated in the present review.
Results
It is vital that life-saving interventions for infectious diseases be performed in pregnant women unless there is a compelling reason to withhold them. Like all treatment decisions during pregnancy, it is important to carefully weigh the benefits of interventions for both mother and fetus against their potential risks. Because monitoring systems have been developed for COVID-19 cases, it is important to gather and report information on pregnancy status as well as maternal and fetal outcomes, and to present basic evidence to guide the treatment of pregnant women with COVID-19. The clinical signs of the disease in pregnant mothers, radiographic criteria, pregnancy outcomes (including maternal and fetal outcomes), and vertical transmission of coronavirus from mother to fetus are presented in Table 1.
Discussion
In various studies, no symptoms specific to pregnant women with COVID-19 were observed compared with non-pregnant adults. However, the COVID-19 pandemic has caused stress and anxiety for pregnant women around the world, and anxiety and stress in pregnancy are associated with adverse effects such as preeclampsia, depression, increased nausea and vomiting during pregnancy, premature birth, and low birth weight. [29] Various studies have shown that viral infections and physiological changes in pregnant women with COVID-19 often cause complications, with preterm labor being the most common complication of pregnancy. However, it should be noted that the time and method of delivery should be chosen according to clinical conditions or obstetric factors as usual (not only during the COVID-19 pandemic).

According to studies, COVID-19 was not detected in patients' breast milk. [2,22,25] However, in some studies breastfeeding was contraindicated for infants. Delayed umbilical cord clamping and the presence of vernix caseosa on the infant's body may also play a role in COVID-19 transmission. A definitive diagnosis of COVID-19 cannot be made from an IgM assay alone, because this diagnostic method is prone to false-positive and false-negative results and is usually less reliable than molecular diagnostic tests based on nucleic acid detection. [15]

Regarding intrauterine transmission of the COVID-19 virus from an infected mother to the fetus, various studies have shown that, although the RT-PCR test was positive in a number of newborns, vertical intrauterine transmission may not have occurred, given that the viral nucleic acid tests of amniotic fluid, breast milk, placenta, umbilical cord blood, and vaginal secretions of infected women were negative. There is thus no firm evidence of vertical transmission of COVID-19 so far, and further study is needed. Even if RT-PCR results are negative for fetal COVID-19 infection, the mother's disease may cause secondary neonatal complications such as effects on brain development. For this reason, continuous awareness of the condition of the mother and fetus, with close management and supervision, is essential. [30]

In pregnant women with a suspected or definitive diagnosis of COVID-19, all care should be provided in the hospital until complete recovery. [31] Pregnant women with suspected/probable COVID-19 infection, individuals with asymptomatic confirmed infection, and individuals recovering from mild illness should be monitored with ultrasound evaluation of fetal growth and amniotic fluid volume every 2-4 weeks; in addition, Doppler ultrasound is recommended. [32] A pregnant mother with COVID-19 should be admitted to an isolated room immediately, and only medical care providers should be present. Continuous electronic monitoring of the fetal heart during labor is recommended. The gynecologist, anesthesiologist, pediatrician, and infant nurse should be present during the delivery of a mother with COVID-19 and should be placed at the patient's bedside if necessary. The mother's oxygen saturation should be monitored and should remain above 94%. [31] The mode of delivery depends on the gestational age and the condition of the fetus and mother, [32] and based on the results of some studies there is currently no preference for the type of delivery (cesarean or vaginal) in mothers with COVID-19. There is also no clear evidence on the optimal timing of delivery, the safety of vaginal delivery, or whether cesarean section prevents vertical transmission at delivery.
Therefore, the method and time of delivery should be decided individually based on obstetric indications and the condition of the mother and fetus. [33] If the mother is in preterm labor, she should be allowed to have a vaginal delivery. [32] No unnecessary intervention in the delivery process should be undertaken unless the mother is in urgent need of respiratory support. At present, the results of studies show that no positive cases of the COVID-19 virus in vaginal discharge have been observed. [31] Shortening the second stage of labor with assisted vaginal delivery can be considered, because active pushing while wearing a mask can be difficult for the woman. In septic shock, acute organ failure, or fetal distress, a cesarean section should be performed immediately. "Water delivery" should be avoided to protect the medical team. [32] The use of epidural analgesia at the mother's request has been reported to be safe. If Entonox is used, the breathing circuit must have a filter. In the case of cesarean section, general anesthesia should be avoided as far as possible, and epidural or spinal anesthesia should be used. Precautions should be taken with the use of antenatal steroids (dexamethasone or betamethasone) for fetal lung maturation in patients with critical COVID-19, as these can potentially worsen the clinical condition. A woman with COVID-19 who is in preterm labor should not be given tocolytics to delay delivery in order to administer antenatal steroids. [31]

Table 1. Summary of the reviewed studies: studied groups, studied cases, and key results.
- Díaz et al. (2020) [14]. Studied group: a pregnant woman with definitive COVID-19 and her infant. Studied cases: evaluation of the infant for COVID-19. Key results: the mother underwent immediate cesarean section due to severe preeclampsia.
- Chen et al. (2020) [2]. Studied group: 9 pregnant women with definitive COVID-19 infection in the third trimester of pregnancy and their 9 infants. Studied cases: cord blood, amniotic fluid, breast milk, and neonatal sputum samples after delivery. Key results: all infants were born by cesarean section with good Apgar scores; cord blood, amniotic fluid, breast milk, and neonatal sputum samples were negative for the COVID-19 virus.
- Yu et al. (2020) [4]. Studied group: 11 pregnant women with definitive COVID-19 and 7 of their infants. Studied cases: mothers and infants. Key results: 10 mothers gave birth by cesarean section, and 1 mother gave birth naturally in complete health; the most common symptom (86%) was fever, while the remaining 14% had symptoms such as cough, shortness of breath, and diarrhea.
- Rasmussen et al. (2020) [11]. Studied group: 18 pregnant women with definitive COVID-19 infection in the third trimester of pregnancy and their 18 infants. Studied cases: mothers and infants. Key results: 16 mothers gave birth by cesarean section and 2 mothers gave birth naturally in complete health; the neonates were reported to be negative for COVID-19.
- Chen et al. (2020) [15]. Studied group: placentas of 3 mothers with definitive COVID-19 and their 3 infants. Studied cases: neonatal throat sputum specimens after birth and placental tissue tested for COVID-19 nucleic acid. Key results: neonatal sputum samples and placenta samples were negative for the COVID-19 virus.
- Wang et al. (2020) [16]. Studied group: a 30-week pregnant woman with definitive COVID-19 and her infant. Studied cases: evaluation of the infant for COVID-19. Key results: the mother gave birth in perfect health, and the baby was reported to be negative for COVID-19.
- Zhu et al. (2020) [17]. Studied group: 9 definitively infected pregnant women and their infants (including a case of twins). Studied cases: samples of neonatal throat sputum. Key results: 7 mothers gave birth by cesarean section, and 2 mothers gave birth naturally in complete health; 10 infants were reported to be COVID-19 negative.
- Mullins et al. (2020) [18]. Studied group: 19 pregnant women with definitive COVID-19 infection in the third trimester of pregnancy and their 20 infants. Studied cases: mothers and infants. Key results: 7 mothers gave birth by cesarean section and 2 by natural method; there were 8 cases of preterm delivery and one infant death; 15 infants were tested and reported COVID-19 negative.
- Mullins et al. (2020) [18]. Studied group: a review report of 32 definitively infected pregnant women and 30 of their infants. Studied cases: mothers and infants. Key results: cesarean delivery was performed in 27 cases and normal delivery in 2 cases, and 15 women (47%) had preterm delivery; one stillbirth and one neonatal death were observed; no cases of vertical transmission were observed in 25 infants.
- Zhang et al. (2020) [19]. Studied group: comparison of 16 pregnant women with COVID-19 with 45 healthy pregnant women. Studied cases: mothers and infants. Key results: there was no significant difference between pregnant women with COVID-19 and healthy pregnant women in type of delivery, gestational age, birth weight, preterm delivery, meconium excretion, or fetal distress and asphyxia, but in pregnant women with COVID-19 the preventive use of uterotonic drugs was reported to reduce bleeding during cesarean section; in addition, all infants of mothers with COVID-19 were negative for COVID-19.
- [20]. Studied group: a 35-week pregnant woman with definitive COVID-19 and her infant. Studied cases: infants. Key results: the infant was reported to be COVID-19 negative.
- [21]. Studied group: 15 pregnant women with definitive COVID-19 and their infants. Studied cases: mothers and infants. Key results: eleven patients had a successful delivery (10 cesarean deliveries and one normal delivery), and at the end of the study period four patients were still pregnant (three in the second trimester and one in the third trimester); no cases of neonatal asphyxia, neonatal death, stillbirth, or miscarriage were reported.
Table 1. (continued)
- Fan et al. (2020) [22]. Studied group: 2 pregnant women with definitive COVID-19 (in the third trimester of pregnancy) and their infants. Studied cases: complete evaluation of umbilical cord blood samples, amniotic fluid, placental tissue, maternal vaginal samples, breast milk samples, and postpartum throat sputum samples. Key results: both mothers had a successful cesarean section, and the babies were separated from the mothers immediately after delivery; cord blood samples, amniotic fluid, placental tissue, maternal vaginal samples, breast milk samples, and neonatal throat sputum samples were all reported COVID-19 negative.
- Schwartz et al. (2020) [3]. Studied group: 38 pregnant women with definitive COVID-19 and their infants. Studied cases: complete evaluation of mothers and infants as well as postpartum placentas. Key results: no deaths were reported in the infected mothers; no intrauterine transmission or infant infection was reported; the tested placenta samples were reported COVID-19 negative.
- Qiao et al. (2020) [10]. Studied group: a qualitative study on the effects of COVID-19 on pregnant women and infants. Key results: pregnant women are more prone to respiratory illness and severe pneumonia, which may make them more susceptible to COVID-19 infection than the general population, especially if they have chronic illnesses or maternal complications; therefore, pregnant women and newborns should be considered in strategies focused on the prevention and management of COVID-19 infection in high-risk populations.
- Wen et al. (2020) [23]. Studied group: a 30-week pregnant woman with definitive COVID-19. Studied cases: mothers. Key results: the mother had mild diarrhea (2-3 times) for one day.
- Zeng et al. (2020) [24]. Studied group: all infants born to mothers with COVID-19 at Wuhan Children's Hospital, Wuhan. Studied cases: infants. Key results: the clinical signs of 33 infants at risk for COVID-19 were mild and the outcomes favorable, but three infants with symptomatic COVID-19 were severely affected, with complications including prematurity, asphyxia, and sepsis.
- Dong et al. (2020) [25]. Studied group: a pregnant woman with definitive COVID-19 and her infant. Studied cases: mother and infant. Key results: the infant's cytokine test result was abnormal at 2 hours after birth, but RT-PCR (of nasopharyngeal swabs) was repeatedly negative; in addition, vaginal discharge and breast milk were negative for COVID-19.
- Chen et al. (2020) [26]. Studied group: 9 pregnant women with definitive COVID-19. Studied cases: mothers and infants. Key results: all 9 cases were delivered by cesarean section; pregnancy hypertension, preeclampsia, PROM, and fetal distress were observed in 1, 1, 2, and 2 cases, respectively; cough, shortness of breath, sore throat, diarrhea, chest pain, fever at admission, and fever after childbirth were the most important symptoms; all newborns were negative for COVID-19; fetal outcomes included premature birth (4 cases), SGA (1 case), and LBW (1 case).
- [27]. Studied group: 3 pregnant women with definitive COVID-19 and their infants. Studied cases: mothers and infants. Key results: hypothyroidism and gestational diabetes were the most important maternal outcomes; premature delivery, intrauterine fetal distress, and LBW were among the most obvious fetal outcomes.
- Khan et al. (2020) [28]. Studied group: 3 pregnant women with definitive COVID-19 and their infants. Studied cases: mothers and infants. Key results: all three gave birth naturally (vaginally); premature birth, fever, cough, and chest tightness were the most important maternal outcomes; the prematurity of one infant was the most obvious fetal outcome.

In the management of infants in suspected, probable, and confirmed cases of maternal COVID-19 infection, the umbilical cord should be clamped immediately. The infant should be transferred to the pediatric team for evaluation and resuscitation if needed. There is insufficient evidence as to whether delayed umbilical cord clamping increases the infant's risk of COVID-19 through direct contact. [32] There is currently insufficient evidence on the safety of breastfeeding and the need for mother/infant separation. If the mother's condition is severe, it is best to separate the baby from the mother and maintain milk production by hand expression of breast milk. Precautions should be taken to clean breast pumps. If the patient is asymptomatic or has a mild illness, breastfeeding by the mother should be considered.
Because the main concern is that the COVID-19 virus may be transmitted through respiratory droplets rather than through milk, breastfeeding mothers should wash their hands and wear a mask before contact with the baby. If the mother is cared for in a dedicated room, the baby's bed should be kept at least 2 m away. [32]

Conclusion

Based on the results of the reviewed studies, it can be concluded that clinical signs, laboratory results, and radiographic criteria in pregnant women with COVID-19 are similar to those in non-infected adults. Common manifestations of COVID-19 in infected pregnant women included fever, cough, and muscle aches. The most common laboratory findings were decreased blood lymphocytes and increased blood CRP. Complications of pregnancy and childbirth in infected women included increased rates of preterm delivery and cesarean section. Given that most of the studies were conducted in China, more studies in this area are needed in other countries. In addition, it is suggested that mothers with COVID-19 take precautions to prevent transmission of the virus, and preventive health guidelines, particularly masking, should also be considered.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2022-10-21T15:15:42.097Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "f4f6c251c2b833500f7fa4e8762df6160766ab97",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/jfmpc.jfmpc_1291_21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6beb0534f94cfca16c884fc23e1390311500759f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267214872 | pes2o/s2orc | v3-fos-license | Innovative learning methods of Islamic education subject in Indonesia: a meta-analysis
ABSTRACT
INTRODUCTION
In Indonesia, Islamic education (ISE) is a subject that must be taught to all citizens who are Muslim, not only to those who specifically study religion-related disciplines but also to those who study other disciplines such as the natural sciences, social sciences, and engineering. This applies to students at various levels, i.e., elementary school (Sekolah Dasar/SD), secondary school (Sekolah Menengah Pertama/SMP), high school (Sekolah Menengah Atas/SMA), and undergraduate students at university. This mandatory teaching of ISE is derived from Law of the Republic of Indonesia No. 20 of 2003 regarding the national education system. It is stated that national education is based on Pancasila and the 1945 Constitution of the Republic of Indonesia, is rooted in religious values and Indonesian national culture, and is responsive to the changing times. Furthermore, Indonesian national education aims to develop the potential of citizens to become human beings who believe in God Almighty and who are of noble character, healthy, knowledgeable, capable, creative, independent, democratic, and responsible. The teaching of ISE itself is aimed at preparing students to recognize, understand, appreciate, believe in, and practice Islamic teachings of noble character from the sources of the Qur'an and Hadith, through guidance, teaching, training, and experience.
The learning method of ISE at various educational institutions generally employs the lecturing method [1]. In the lecturing method, ISE teachers deliver and explain learning materials orally to students in the classroom. The main activities of students in this method are listening carefully to the learning material delivered by the teachers and taking notes on the important points of the material [2]. The lecturing method has a number of advantages: it makes classes easier to organize, it suits a large number of students, it requires relatively easy preparation of teaching and learning activities, and it is suitable for delivering difficult topics. For these reasons, it is not surprising that the lecturing method is typically used by ISE teachers in the classroom. However, this method also has a number of disadvantages: it is teacher-centered, it is not easy to assess the extent to which students understand the lecture, students may misinterpret the material, and it tends to make students less creative [3]. Such weaknesses of the conventional learning method, i.e., the lecturing method, need to be overcome or at least reduced in order to enhance students' understanding.
The effectiveness of ISE learning has to be continuously improved in order to achieve learning objectives, and this depends on, among other things, the learning method. Since the conventional (lecturing) method has a number of limitations as described, innovation in ISE learning methods is therefore required. Innovative learning methods that may serve as alternatives to the lecturing method include question-and-answer methods, discussions, demonstrations, experiments, recitations, group work, role playing, field trips, drills, discovery, team teaching systems, problem solving, projects, moral reasoning, mind maps, and quantum methods [4]. This study aimed to evaluate the effects of innovative learning methods on the learning motivation, learning activity, and learning achievement of students in the ISE subject by employing a meta-analysis approach.
RESEARCH METHOD
This study used the meta-analysis method, which is a research method with a quantitative approach [5], [6]. This method has been repeatedly used in the field of educational research and evaluation [7], [8]. The stages of the study consisted of: i) problem formulation; ii) literature search and selection; iii) database development; iv) determination of the effect size method and its integration; and v) publication bias analysis.
Formulation of the research problem was carried out using the population, intervention, comparison, outcome (PICO) model [5]. The population was students in elementary, secondary, and high school education in Indonesia who take the ISE subject. The intervention was the innovative learning methods. The comparison was the conventional learning method (the lecturing method). The outcomes were learning motivation, learning activity, and learning achievement.
The literature search was carried out using the Google Scholar and Scopus platforms with the keywords "learning method", "Islamic education", "learning motivation", "learning activity" and/or "learning achievement". Articles obtained through the search process were then selected based on the following inclusion criteria: i) the research was conducted on students at the elementary, secondary, or high school level in Indonesia; ii) the article directly compared the innovative learning methods with the conventional learning method; iii) the article reported dependent variables in the form of learning motivation, learning activity, and/or learning achievement; and iv) the subject was specifically ISE.
The selected articles were subsequently integrated into a database. The data were the number of samples or respondents from each study and the percentage values, both from the conventional (lecturing) method and from the innovative learning method, in pairs. The response variables were learning motivation, learning activity, and learning achievement. The moderator variables specified in the database were level of study (elementary, secondary, and high school), region (Java, Kalimantan, Nusa Tenggara, Sulawesi, Sumatera), school category (state, private), category of innovative learning methods (direct, indirect, interactive, experiential, independent), ISE topic (Al-Quran, aqidah/belief, ibadah/worship, akhlak/moral, history, general), and learning cycle (first, second). The odds ratio (OR) was employed as the effect size for integrating data from different studies [9], following Equations (1) to (4):

Log Odds Ratio = ln[(A × D)/(B × C)] (1)
Odds Ratio = e^(Log Odds Ratio) (2)
Log Odds Variance = 1/A + 1/B + 1/C + 1/D (3)
Log Odds Standard Error = √(Log Odds Variance) (4)

where A is the proportion in the innovative learning method × number of samples; B is the number of samples − A; C is the proportion in the conventional learning method × number of samples; and D is the number of samples − C. After the effect size (OR) had been calculated for each study, the cumulative effect size was calculated through the integration process. The integration of the effect sizes was carried out using a random-effects model with the DerSimonian-Laird algorithm [9]. The results of the synthesis were displayed in the form of forest plots [10]. Sub-group analysis was conducted based on the pre-defined moderator variables, i.e., level of study, region, school category, category of innovative learning methods, ISE topic, and learning cycle. Publication bias was assessed using the funnel plot and the Egger and Begg tests [11]. An analysis of publication bias is required when conducting a meta-analysis study, since its presence may affect the validity and generalizability of the results obtained.
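To illustrate the pipeline described above, the sketch below computes per-study log odds ratios per Equations (1)-(4), pools them with a DerSimonian-Laird random-effects model, and runs an Egger-type regression for publication bias. The 2×2 counts are hypothetical placeholders, not the study database.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study 2x2 counts (placeholders):
# A, B = "high outcome" / "low outcome" counts under the innovative method;
# C, D = the same counts under the conventional (lecturing) method.
A = np.array([18.0, 25.0, 30.0, 22.0])
B = np.array([12.0, 10.0, 15.0, 14.0])
C = np.array([10.0, 14.0, 20.0, 13.0])
D = np.array([20.0, 21.0, 25.0, 23.0])

log_or = np.log((A * D) / (B * C))   # Equation (1)
var = 1/A + 1/B + 1/C + 1/D          # Equation (3)
se = np.sqrt(var)                    # Equation (4)

# DerSimonian-Laird between-study variance tau^2
w = 1 / var
mean_fe = np.sum(w * log_or) / w.sum()
q = np.sum(w * (log_or - mean_fe) ** 2)
c = w.sum() - np.sum(w ** 2) / w.sum()
tau2 = max(0.0, (q - (len(log_or) - 1)) / c)

# Random-effects pooled estimate, back-transformed via Equation (2)
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / w_re.sum()
print(f"Pooled OR = {np.exp(pooled):.2f}")

# Egger-type regression: standardized effect vs precision; an intercept
# far from zero suggests funnel-plot asymmetry (publication bias).
res = stats.linregress(1 / se, log_or / se)
print(f"Egger intercept = {res.intercept:.3f}")
```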
The sub-group analysis was carried out on the learning achievement variable, as shown in Table 3. The learning motivation and learning activity variables could not be analyzed by sub-group owing to the small amount of available data. Based on Table 3, in the school-level sub-group, innovative learning methods improved students' learning achievement at all levels, i.e., elementary, secondary, and high school (P<0.001). In the area sub-group (where the school is located), innovative learning methods enhanced students' learning achievement in all areas, i.e., Java, Kalimantan, Nusa Tenggara, Sulawesi, and Sumatera (P<0.001). In the school-category sub-group, innovative learning methods elevated students' learning achievement in both public and private schools (P<0.001). In the learning-method category, innovative learning methods in the forms of direct, indirect, interactive, experiential, and independent methods increased students' learning achievement (P<0.001). In the ISE-topic sub-group, innovative learning methods significantly improved students' learning achievement on various ISE topics, i.e., Al-Quran, aqidah, ibadah, akhlak, history, and general topics (P<0.001). In the cycle sub-group, innovative learning methods in both cycle 1 and cycle 2 increased students' learning achievement (P<0.001). There were no significant differences among categories in almost all sub-groups, except for the cycle sub-group, in which cycle 2 showed higher learning achievement than cycle 1 (P<0.05).

Publication bias was analyzed visually using funnel plots and statistically using the Begg's and Egger's tests. The funnel plots for learning motivation and learning activity were symmetrical, as shown in Figures 4 and 5, respectively, while that for learning achievement was asymmetric, as shown in Figure 6. This was reinforced by the results of the Begg's and Egger's tests shown in Table 4, where both were non-significant for the learning motivation and learning activity variables (indicating no publication bias) but significant (P<0.001) for the learning achievement variable (indicating the presence of publication bias).

Increasing students' learning motivation through innovative learning methods is in accordance with the theory that teaching methods are one of the factors that may influence learning motivation [59]. Other factors that also affect learning motivation are the goals or targets to be achieved, the abilities of students, physical and psychological conditions, and environmental conditions such as the family environment, place of residence, friendships, and society [59]. The learning method is an externally originating stimulus and is therefore classified as an extrinsic motivation. Ideally, the learning motivation that arises is an intrinsic motivation originating from the students themselves, so that it is more stable and does not require external stimulation [60]. However, extrinsic motivation is often needed to subsequently generate intrinsic motivation.
There are a number of indicators of students' learning motivation, i.e., i) the desire to succeed; ii) encouragement of and need for learning; iii) hopes and aspirations for the future; iv) appreciation in learning; v) interesting activities in learning; and vi) a conducive learning environment [61]. In this study, the innovative learning methods that increased learning motivation were the make a match method, blended learning, and questions students have. The make a match learning method is one in which students look for partners through cards; students receive a card containing a question or answer and then look for the suitable partner according to the card they hold [62]. The increased learning motivation of students studying ISE with the make a match method arises because this method involves all students actively in the learning process and is fun [26]. The blended learning method is a learning model that combines face-to-face learning with computer-assisted learning or similar technology (in the form of text, audio, video, and/or multimedia), both offline and online, to form an integrated learning approach [63]. The increase in learning motivation with the blended learning method is due to students' interest in using computer-based learning technology, especially students who belong to Generation Z and are sensitive to information and communication technology [64].
With regard to the questions students have learning method, this is one of the active learning methods in the collaborative learning category (learning by working together), which aims to train the ability to work together, listen to the opinions of others, improve memory of the material learned, train a sense of care and a willingness to share, increase respect for others, train emotional intelligence, hone interpersonal intelligence, improve motivation and the learning atmosphere, and increase learning speed and results [65]. The increased learning motivation of students studying ISE through the questions students have method is due to its active and enjoyable learning. Active and interactive learning models have been recommended for application to various subjects since they are considered more effective than conventional learning models, which tend to be passive [66].
Innovative learning methods increased students' learning activity in studying the ISE subject. This indicates that innovative learning methods drive students to be more enthusiastic and passionate about learning, thus increasing their activeness in the learning process. The situation differs when the learning method is conventional, i.e., the lecturing method. In the lecturing method, ISE teachers convey and explain learning materials orally to students in class, and the main activities of students are listening carefully to the learning material delivered by the teacher and noting the important points of the material [2]. Thus, students are relatively more passive, so their learning activity is lower.
Learning activity of students has a number of indicators, i.e., i) participating in carrying out learning assignments; ii) being involved in problem solving; iii) asking other students or the teacher when they do not understand a problem; iv) trying to find the information needed for problem solving; v) carrying out group discussions according to the teacher's instructions; vi) assessing their own abilities and the results they obtain; vii) training themselves in solving problems; and viii) using opportunities to apply what they have learned in completing tasks or problems [67]. In line with this, Naziah et al. [68] described the indicators of students' learning activity as: i) students can carry out learning tasks; ii) students are active in discussions; iii) students are active in asking questions; iv) students are involved in problem solving; v) students actively seek information to solve a problem; and vi) students evaluate the results obtained during learning. Examining these indicators, several of them can only be achieved through innovative learning methods and cannot be achieved by the lecturing method.
The increase in learning activity through innovative learning methods is also related to the enhancement of learning motivation. Empirically, a number of studies indicate a close relationship between learning motivation and learning activity. For instance, Gunawan [69] demonstrated that learning motivation had a positive and significant effect on learning activity. Furthermore, Tegeh and Pratiwi [70] reported a positive correlation between the learning motivation and learning activity of elementary school students in learning science subjects, and these two variables together positively influenced students' learning achievement with a coefficient of determination of 0.721.
The increase in students' learning achievement when implementing innovative learning methods is inseparable from the increased learning motivation and learning activity. Students who possess high learning motivation tend to be more active in participating in the learning process, and as these two variables increase, their learning achievement also increases. Two groups of factors affect learning achievement, namely internal factors and external factors. The internal factors that influence learning achievement are physiological aspects (bodily fitness and condition) and psychological aspects (intelligence, attitudes, talents, interests, motivation, and personality). The external factors include: i) the social environment, comprising friends, teachers, family, and society; and ii) the non-social environment, comprising the condition of houses, schools, equipment, and nature (weather) [67]. Even though innovative learning methods are among the external factors that influence learning achievement, they may stimulate the psychological aspects of students in learning, especially interest and motivation to learn.
CONCLUSION
Innovative learning methods are able to enhance the learning motivation, learning activity, and learning achievement of school students studying the Islamic education subject in Indonesia compared with the conventional learning method (the lecturing method). The increase achieved through innovative learning methods is 38.7% for learning motivation, 48.9% for learning activity, and 80.9% for learning achievement. Innovative learning methods elevate the learning achievement of students at various levels, i.e., elementary, secondary, and high school, and this applies to both public and private schools. In the learning-method category, innovative learning methods in the forms of direct, indirect, interactive, experiential, and independent methods can improve students' learning achievement. In the ISE-topic sub-group, innovative learning methods increase students' learning achievement on various topics, i.e., Al-Qur'an, aqidah, ibadah, akhlak, history, and general topics. In the cycle sub-group, innovative learning methods in both cycle 1 and cycle 2 improve the learning achievement of students, and cycle 2 demonstrates a better result than cycle 1. Future work should address the interaction between each particular innovative learning method and the specific ISE topic.
Table 1. Literature used in the meta-database
Table 2. Classification of innovative learning methods used in the meta-analysis
1. Direct: probing prompting, contextual teaching and learning, guided note taking and minutes paper, quantum teaching, blended learning, demonstration, lecturing variation, giving question and getting answer.
2. Indirect: discovery learning/inquiry, learning cycle 5E, the power of two, problem-based learning.
3. Interactive: peer tutor, group investigation, make a match, team quiz, card sort, call on the next speaker, market place activities, expert group, mind mapping, cooperative script, teams games tournament, jigsaw, group discussion, learning cell, snowball throwing, quick on the draw, questions students have.
Figure 1. Forest plot of the innovative learning method effect on learning motivation of students to study the ISE subject
Figure 2. Forest plot of the innovative learning method effect on learning activity of students to study the ISE subject
Figure 3. Forest plot of the innovative learning method effect on learning achievement of students to study the ISE subject
Table 3. Sub-group analysis of the innovative learning method effect on learning achievement. Different superscripts within the same sub-group are significantly different at P<0.05.
Table 4. Begg's and Egger's test results for the learning motivation, activity, and achievement variables (columns: Variable; Begg's test P-value; Egger's test P-value). | 2024-01-26T16:56:36.582Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "56e648ab4925b6b6b3e621e9a21e394f8357f7e7",
"oa_license": "CCBYNC",
"oa_url": "https://ijere.iaescore.com/index.php/IJERE/article/download/26364/13860",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "39b20dc5fd3daeb103ee3beb981d131cd5dd147e",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
235484352 | pes2o/s2orc | v3-fos-license | Elongational Flow Field Processed Ultrahigh Molecular Weight Polyethylene/Polypropylene Blends with Distinct Interlayer Phase for Enhanced Tribological Properties
Herein, we produced a series of ultrahigh molecular weight polyethylene/polypropylene (UHMWPE/PP) blends using an elongational-flow-field-dominated eccentric rotor extruder (ERE) and a shear-flow-field-dominated twin screw extruder (TSE), respectively, and present a detailed comparative study of the microstructures and tribological properties of UHMWPE/PP under the two processing modes. Compared with the shear flow field in the TSE, the elongational flow field in the ERE facilitates the dispersion of PP in the UHMWPE matrix and promotes the interdiffusion of UHMWPE and PP molecular chains. For the first time, we discovered the presence of an interlayer phase in blends from both processing modes by using Raman mapping inspection. The elongational flow field introduces strong interaction to enable excellent compatibility of UHMWPE and PP and induces a more pronounced interlayer phase with respect to the shear flow field, eventually endowing UHMWPE/PP with improved wear resistance. The optimized UHMWPE/PP (85/15) blend processed by ERE displayed higher tensile strength (25.3 MPa), higher elongation at break (341.77%), and lower wear loss (1.5 mg) compared with the blend created by TSE. By systematically investigating the microstructures and mechanical properties of the blends, we found that, with increased PP content, the dominant wear mechanism of the blends varies from abrasive wear, to fatigue wear, to adhesion wear under both processing modes.
Introduction
Ultrahigh molecular weight polyethylene (UHMWPE) has multiple advantages, including good self-lubricating ability, a low friction coefficient, high impact strength, fatigue resistance, and biological inertness, which make it a potential wear-resistant material for industrial bearings, protective layers, and artificial joints [1]. However, owing to the relatively high average molecular weight of UHMWPE, irregular inter-chain entanglement results in a high regional chain density and a low melt flow rate (MFR), making UHMWPE difficult to process by common injection molding or extrusion. In addition, the low surface hardness, low elastic modulus and bending strength, and poor abrasion resistance of UHMWPE greatly limit its application [2]. It is therefore of great significance to develop industrially viable UHMWPE-based wear-resistant materials through the regulation of polymer composition and the optimization of processing technology [3].
Recently, developing UHMWPE-based blends, i.e., introducing low density polyethylene (LDPE), poly(lactic acid) (PLLA), poly(ethylene oxide) (PEO), etc., into the UHMWPE matrix has become a viable option to improve processability [4][5][6][7]. However, while excessive amounts of additives improve the flowability of UHMWPE-based blends, their mechanical properties may be adversely affected [8,9]. Owing to the high MFR of polypropylene (PP), mixing a certain amount of PP with UHMWPE can also effectively improve the processing properties of UHMWPE-based blends [10]. Unlike high density polyethylene (HDPE), which penetrates the UHMWPE particles, PP acts as a lubricant distributed between the primary and secondary particles of UHMWPE, enhancing the processing properties but with limited improvements in wear resistance and mechanical properties [11]. Moreover, changing the strain type of the forming process can also be used to improve the target performance of the blends [12,13]. The flow field of polymer materials in processing is divided into the shear flow field, where the velocity gradient is perpendicular to the flow direction, and the elongational flow field, where the velocity gradient is along the flow direction, corresponding to the twin screw extruder (TSE) and the eccentric rotor extruder (ERE), respectively. The different flow fields greatly influence the compatibility of the two phases in the blends, giving rise to differences in all aspects of performance [14][15][16][17]. Therefore, the relationship between the structure and the wear-resistant properties of the blends processed in the ERE and the TSE needs to be systematically investigated.
As is well known, one marked difference between polymers and metallic wear-resistant materials is that polymers tend to have higher viscoelasticity, and wear volume loss in polymers is a complex behavior influenced by many factors such as friction type, strength, resistance, temperature, and the geometry of the friction nodes [18,19]. The common wear mechanisms in polymers are adhesive wear, abrasive wear, and fatigue wear, among which fatigue wear differs significantly from adhesive or abrasive wear in that it does not cause significant damage to the surface until a critical number of cycles is reached [20]. The wear-resistant properties of polymers are generally determined to some extent by their chemical structures, while the processing technology exerts a great influence on the chemical structure of polymers [21,22]. In order to ensure that the physical and chemical properties of the products meet the required conditions, it is necessary to develop a lucid understanding of the morphological structure of the products.
In this contribution, we prepared a series of UHMWPE/PP blends with different compositions by the two processing methods, TSE and ERE, respectively, and provide a comparative study of the structures, mechanical properties, and wear resistance of blends processed by the elongational flow field and the shear flow field. At the same PP content, the tensile strength of ERE-85/15 (25.3 MPa) was higher than that of TSE-85/15 (22.7 MPa), while the elongation at break (341.77%) was four times higher. In the sliding wear test, the wear loss of ERE-85/15 (1.5 mg) was less than that of TSE-85/15 (3.0 mg), indicating that the ERE-processed blend has superior mechanical and wear-resistant properties. We also elucidated the enhancement mechanism by Raman spectroscopy and SEM, showing that the elongational flow field facilitates the dispersion of the PP phase in the UHMWPE matrix and promotes the interdiffusion of UHMWPE and PP molecular chains, resulting in the formation of a wide interlayer phase (1~2 µm). This 'soft link' interlayer phase means that the chains of UHMWPE and PP are partially entangled under the elongational flow field; it effectively strengthens the interface between the two phases, endowing the UHMWPE/PP blend with better wear resistance.
Briefly, the UHMWPE/PP blends were processed using the ERE with UHMWPE/PP mass ratios of 100/0, 95/5, 85/15, 75/25, 65/35, and 50/50 at a speed of 25 rpm and a processing temperature of 200 °C. The extrusion die was an 8 mm round-bar die; the samples extruded from the ERE were immediately placed in a stainless-steel die and pressed into sheets at 17 MPa.
The UHMWPE/PP blends were also produced by TSE. Polymers with the same mass ratios as those processed by the ERE were fed into a TSE for melt blending with a screw diameter of 21.7 mm, a speed of 220 rpm, and a melt temperature of 200 °C. The samples extruded by the twin screw were immediately placed in a stainless-steel die and pressed into sheets at a pressure of 17 MPa. As a control, CM-UPE consists of pure UHMWPE processed directly by compression molding (CM) using a flat plate vulcanizer (ZG-80T, Dongguan Zhenggong Mechanical & Electrical Equipment Technology Co., Dongguan, China).
Sample Characterization
The morphology and structure were examined by scanning electron microscopy (SEM: HITACHI Regulus 8100, Tokyo, Japan). Confocal Raman spectroscopy (DXR2xi, Thermo Scientific) with a 532 nm laser was employed to investigate the microstructure and interphase morphology.
The crystallinity of the blends was acquired by differential scanning calorimetry (DSC: Q20, TA Instruments, New Castle, DE, USA). The heating and cooling rates throughout the test were set at 10 °C/min. Dynamic thermomechanical analysis (DMA: Q800, TA Instruments, New Castle, DE, USA) was used to analyze the thermo-mechanical properties of the samples over a temperature range of 30-180 °C, with a heating rate of 3 °C/min, an amplitude of 5 µm, and a test frequency of 1 Hz. Mechanical properties were tested according to the GB/T 1040.2-2006 standard at room temperature, with the tensile speed set at 50 mm/min.
The tensile strength and elongation at break were measured using a microcomputer-controlled electronic universal testing machine (CMT4104, Shenzhen New Sansi Materials Testing Co., Shenzhen, China). An M-200 plastic sliding friction and wear tester was used to determine the sliding friction and wear properties of the material according to the GB/T 3960-2006 standard. The sample size was 30 mm × 7 mm × 6 mm, and the test was carried out under dry friction conditions against 45# steel at a rotational speed of 200 r/min. The temperature of the metal grinding-wheel surface was monitored at 30 min intervals using an infrared thermometer.
Results and Discussion
To image the fracture surface, the sample was completely submerged in liquid nitrogen for a set period; after complete freezing, stress was applied at both ends of the sample, producing a fracture in the central stress-concentration region of the sample.
The crystallinity of the blends was acquired by DSC, and the crystallinity of UHMWPE was calculated according to Equation (1):

$X_c = \frac{\Delta H_m}{\phi_m \cdot \Delta H_m^0} \times 100\%$  (1)

where $\phi_m$ is the mass fraction of UHMWPE, $\Delta H_m$ is the measured melting enthalpy of UHMWPE, and $\Delta H_m^0$ is the melting enthalpy at 100% crystallinity ($X_c$ = 100%), taken as 290 J/g [23]. The crystallinity ($X_c$) of the sample microstructure was also calculated from the relative areas of the crystalline and amorphous bands in the Raman spectra, according to Equation (2) of ref. [24]. The phase morphology of the blends was characterized by the Raman peak-area ratio of Equation (3) [25]:

$I_{PP}/I_{UHMWPE} = I_{804}/I_{1295}$  (3)

where $I$ denotes the peak area at the corresponding Raman shift; $I_{804}$ and $I_{1295}$ are the marker bands of PP and UHMWPE, respectively. UHMWPE/PP blends with mass ratios of 100/0, 95/5, 85/15, 75/25, 65/35, and 50/50 were extruded into sheets using the ERE and TSE, respectively. Figure 1 and Figure S1 (see Supplementary Materials) show typical SEM images of the fracture surfaces of the UHMWPE/PP blends processed by ERE and TSE. As seen from Figure 1, the fracture surface of the TSE-processed UHMWPE/PP exhibits a high density of cavities. Higher-magnification SEM imaging reveals that the cavity walls are smooth and the two-phase interface is clearly visible, which can be attributed to the poor compatibility of UHMWPE and PP and the weak interfacial bonding strength under the shear flow field that dominates in the TSE (Figure 1A,a). In contrast, for the UHMWPE/PP extruded by ERE, the number of cavities on the fracture surface is markedly reduced, the cavity walls become rougher with exposed fibril-like structures, and the two-phase interface is more blurred (Figure 1B,b). This indicates that PP is more uniformly dispersed in the UHMWPE matrix, with stronger interfacial bonding strength induced by the elongational flow field.
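As a quick numerical illustration of Equation (1), a minimal sketch follows. The measured melting enthalpy (120 J/g) and the blend composition are hypothetical values chosen for demonstration; only the 290 J/g reference enthalpy comes from the text.

```python
# Worked example of Equation (1): DSC crystallinity of the UHMWPE fraction.
def dsc_crystallinity(delta_h_m, phi_m, delta_h_m0=290.0):
    """X_c (%) = ΔH_m / (φ_m · ΔH_m⁰) × 100.

    delta_h_m  -- measured melting enthalpy of UHMWPE in the blend (J/g), assumed here
    phi_m      -- mass fraction of UHMWPE in the blend (0-1)
    delta_h_m0 -- enthalpy of 100% crystalline PE, 290 J/g [23]
    """
    return delta_h_m / (phi_m * delta_h_m0) * 100.0

# Hypothetical 85/15 UHMWPE/PP blend with an assumed ΔH_m of 120 J/g:
print(f"X_c = {dsc_crystallinity(120.0, 0.85):.1f}%")  # ≈ 48.7%
```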
As observed in Figure S2 (see Supplementary Materials), the Raman spectrum of pure UHMWPE shows a characteristic peak at 1295 cm−1 assigned to the twisting vibration of CH2, while pure PP shows a characteristic peak at 804 cm−1 attributable to the rocking vibration of CH2 [26]. The distribution of UHMWPE and PP in the blends was obtained by fitting the Raman data according to Equations (1)-(3) above. As shown in Figure 2, red regions in the Raman maps correspond to a larger I_PP/I_UHMWPE ratio and are dominated by the PP phase; blue regions correspond to a smaller ratio and are dominated by the UHMWPE phase; green regions indicate comparable contributions from both phases. Under either the elongational or the shear flow field, when the PP content is below 25%, UHMWPE acts as the continuous phase while PP is dispersed in the UHMWPE matrix to form a sea-island structure (Figure 2a1,a2,b1,b2). As the PP content approaches 25%, the area of the green regions increases and a bi-continuous structure forms (Figure 2c1,c2). When the PP content exceeds 25%, the red regions gradually expand while the blue regions shrink (Figure 2d1,d2,e1,e2), indicating phase inversion, with PP as the continuous phase and UHMWPE as the dispersed phase. These maps establish that the elongational stress dominant in the ERE induces a uniformly dispersed PP phase morphology, whereas under the shear flow field PP is more likely to agglomerate in the UHMWPE matrix and the UHMWPE/PP blend is prone to phase separation.
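The red/green/blue phase assignment described above amounts to thresholding the I_804/I_1295 ratio at each map pixel. A minimal sketch follows; the threshold values and the tiny example map are assumptions for illustration, not values from the paper.

```python
import numpy as np

def classify_phase(ratio_map, low=0.5, high=2.0):
    """Label each pixel of an I_PP/I_UHMWPE ratio map.

    low/high are assumed cutoffs: below `low` -> UHMWPE-dominated (blue),
    above `high` -> PP-dominated (red), otherwise mixed (green).
    """
    labels = np.full(ratio_map.shape, "mixed", dtype=object)
    labels[ratio_map < low] = "UHMWPE"
    labels[ratio_map > high] = "PP"
    return labels

# Toy 2x2 map of peak-area ratios:
ratio_map = np.array([[0.1, 0.8], [1.2, 3.5]])
print(classify_phase(ratio_map))
```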
DSC curves of the UHMWPE/PP blends obtained by the two processing methods are shown in Figure S3a,b (see Supplementary Materials). The UHMWPE/PP blends extruded under the elongational flow field exhibit two melting peaks, near ~135 °C (UHMWPE) and ~164 °C (PP), with the PP melting peak becoming more apparent as the PP content increases. Moreover, the melting enthalpy and crystallinity of UHMWPE decrease with the addition of PP, suggesting that UHMWPE/PP is an incompatible system in which the two phases compete during crystallization. Interestingly, the melting point of PP in the ERE-processed blend (~164 °C) is higher than that of PP in the TSE-processed blend (~155 °C) [27,28]. The cooling crystallization curves show only one crystallization peak for the blends obtained under the elongational flow field, whereas the blends from the shear flow field exhibit a double peak when the PP content is >25% (Figure S3c,d, see Supplementary Materials). Moreover, the crystallinity analysis derived from the DSC results confirms that there is no significant difference in crystallinity between the blends produced by TSE and ERE (Table S1, see Supplementary Materials).
The tensile strength and elongation at break of the UHMWPE/PP blends processed by ERE and TSE are shown in Figure S4 (see Supplementary Materials). In terms of tensile strength and elongation at break, the ERE-processed blends exhibit a variation trend with increasing PP content similar to that of the TSE-processed blends, but display much higher values (25.31 MPa and 341.77% at 15% PP content) than the TSE-processed counterparts (22.71 MPa and 85.13%). This result highlights the distinct advantage of the elongational flow field for mechanical performance in UHMWPE/PP blends. Figure 3a presents the wear loss of UHMWPE and the UHMWPE/PP blends from the two processing methods after 2 h of wear testing. For blends produced by either ERE or TSE, the optimal PP content for tribological performance is 15% (UHMWPE/PP ratio 85/15), and the optimized ERE-processed blend shows lower wear loss (1.5 mg) than the TSE-processed blend (3.0 mg), highlighting the superiority of elongational flow-dominated processing. Notably, these weight losses are far lower than those of pure UHMWPE produced by CM and ERE (13.1 and 12.4 mg) under identical wear-test conditions, verifying the crucial role of the PP introduced into the UHMWPE matrix. On the other hand, as indicated by the weight loss-time profiles in Figure 3b, the wear of the blends produced by either ERE or TSE varies little throughout the whole friction period, in stark contrast to pure UHMWPE, whose wear increases markedly with friction time; the blends are therefore less sensitive to friction time than pure UHMWPE. The friction coefficient-time curves for pure UHMWPE and UHMWPE/PP (85/15) show essentially the same trend for the different processing methods (Figure S5, see Supplementary Materials). In the initial stage (0-2400 s) the surface friction coefficient increases sharply and then rises slowly; after 7200 s, the friction coefficients of the PP-containing ERE-85/15 (0.35) and TSE-85/15 (0.36) are noticeably smaller than those of CM-UPE (0.37) and ERE-UPE (0.38), which contain no PP. This is most likely because the pendant methyl groups of PP give it a higher hardness and modulus than UHMWPE, improving the blend's resistance to plastic deformation and reducing the real contact area of the friction pair, which in turn reduces friction.
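For readers who want to compare these mass losses with wear data reported elsewhere, a back-of-the-envelope conversion to a specific wear rate K = V/(F·L) is sketched below. Only the 2 h duration, the 200 r/min speed, and the 1.5/3.0 mg losses come from the text; the normal load, blend density, and wheel diameter are assumed placeholder values.

```python
import math

RHO = 940.0       # kg/m^3, assumed PE-like blend density
LOAD = 200.0      # N, assumed normal load (not stated in the text)
WHEEL_D = 0.040   # m, assumed grinding-wheel diameter

# Sliding distance for 2 h at 200 r/min on the assumed wheel:
L_SLIDE = (200 / 60) * math.pi * WHEEL_D * 7200  # ≈ 3.0e3 m

def specific_wear_rate(mass_loss_mg):
    """K in m^3/(N·m) from the measured mass loss."""
    volume = mass_loss_mg * 1e-6 / RHO  # mg -> kg -> m^3
    return volume / (LOAD * L_SLIDE)

for name, dm in [("ERE-85/15", 1.5), ("TSE-85/15", 3.0)]:
    print(f"{name}: K ≈ {specific_wear_rate(dm):.2e} m^3/(N·m)")
```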
The surface morphology of a sample after rubbing provides an intuitive assessment of its wear degree and the corresponding wear mechanism [19,29]. As shown in Figure S6 (see Supplementary Materials), the wear tracks of blends with low PP content (<5%) show many furrows, characteristic of abrasive wear; the surfaces of blends with medium PP content (5-25%) feature small, shallow depressions or plastic deformation caused by contact stress, characteristic of fatigue wear; and the wear surfaces of samples with high PP content (25-50%) exhibit tears resulting from adhesion between the sample surface and the counterface, characteristic of adhesive wear. The UHMWPE/PP blends prepared by TSE and ERE have similar friction morphologies, which clearly differ from that of pure UHMWPE: the wear tracks show significantly fewer furrows, indicating that the addition of PP improves the wear resistance of UHMWPE to a certain extent. In addition, under the shear flow field the wear surface of the UHMWPE/PP blend exhibits more fatigue cracks once the PP content exceeds 25%, where the wear behavior combines adhesive and fatigue wear; in contrast, cracks in the ERE-processed blends appear only when the PP content exceeds 35%. Because the interfacial bonding strength of the TSE-processed blends is weaker than that of the ERE-processed blends, the material is more prone to tearing during wear, triggering cracks.
Dynamic thermomechanical analysis (DMA) results in Figure 3c reveal that, with increasing temperature, the storage moduli of all samples, including pure UHMWPE and the UHMWPE/PP blends from both processes, first decrease dramatically and then reach a stable plateau at high temperature. Of note, at lower temperatures the ERE-processed UHMWPE/PP (85/15) blend presents a higher storage modulus than the other samples, indicating better resistance to plastic deformation and a higher yield stress. The corresponding loss factor (tan δ)-temperature curves in Figure 3d indicate that the UHMWPE/PP blends exhibit lower tan δ than pure UHMWPE.
Following the analysis of the dynamic thermomechanical properties of UHMWPE/PP, the surface temperature of the material during wear was examined with an infrared thermometer; the results are shown in Figure S7 and Table S2 (see Supplementary Materials). CM-100/0 and ERE-100/0 show similar temperature evolution, both reaching surface temperatures of around 140 °C after 120 min of testing (above the melting point of UHMWPE, ~135 °C), significantly higher than TSE-85/15 (127 °C) and ERE-85/15 (124 °C). This suggests that the PP component improves thermal conductivity and lowers the friction coefficient, which helps reduce the friction temperature of the UHMWPE/PP surface and mitigates oxidative degradation.
Raman maps of the surface crystallinity distribution before and after the sliding friction test are shown in Figure 4. Before wear, the crystallinities of CM-UPE, ERE-UPE, ERE-85/15, and TSE-85/15 are all distributed around 50%. After sliding wear, the map distributions of all samples shift partly toward red, i.e., the crystallinity increases, which results from orientation-induced crystallization of the surface. For ERE-85/15 and TSE-85/15 the increase in crystallinity is significantly smaller than for CM-UPE and ERE-UPE, demonstrating that the high friction temperature promotes rearrangement of the polymer chains in pure UHMWPE, whereas ERE-85/15 and TSE-85/15 remain stable during friction owing to their low friction coefficients and good thermal conductivity. The above results confirm that the PP content in the UHMWPE matrix plays a crucial role in enhancing the mechanical properties and wear resistance, while the differences in frictional properties induced by TSE and ERE merit deeper exploration. The interfacial microstructure of the UHMWPE/PP blends was therefore further characterized by high-resolution Raman mapping (Figure 5). Clearly, the blend extruded by ERE displays a thicker interphase (1-2 µm) than the TSE-processed sample, which is attributed to the better compatibility of blends processed by ERE. We propose an explanation for the stronger interactions based on the principles underlying the two processes. As shown in Figure 6a, under the shear flow field in the TSE the velocity gradient is perpendicular to the flow direction, which tends to produce layered flow with weak interaction between the layers, making it difficult for the molecular chains of UHMWPE and PP to diffuse into each other. Under the elongational flow field in the ERE, the introduction of a compressive stress perpendicular to the flow direction promotes strong interaction between UHMWPE and PP, so the molecular chains readily interdiffuse and entangle (Figure 6b). Owing to the long relaxation time of the molecular chains, phase separation is difficult during cooling, preserving the entanglement of the interfacial molecular chains, increasing the interfacial bonding strength, and greatly improving the target performance of the ERE-processed material.
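One way the 1-2 µm interphase width in Figure 5 could be quantified is by fitting a sigmoid to the I_PP/I_UHMWPE ratio along a line scan across the boundary and reporting the 10-90% transition width. The sketch below uses a synthetic noisy profile as a stand-in for real line-scan data; nothing in it is taken from the paper beyond the order of magnitude of the width.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, w, lo, hi):
    """Logistic step centered at x0 with characteristic width w."""
    return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / w))

# Synthetic line scan (µm) across a UHMWPE/PP boundary, assumed data:
x = np.linspace(0.0, 10.0, 200)
rng = np.random.default_rng(0)
profile = sigmoid(x, 5.0, 0.35, 0.2, 3.0) + rng.normal(0.0, 0.05, x.size)

popt, _ = curve_fit(sigmoid, x, profile, p0=[5.0, 1.0, 0.0, 3.0])
# 10-90% transition width of a logistic step is 2·ln(9)·w ≈ 4.39·w:
width_10_90 = 2.0 * np.log(9.0) * popt[1]
print(f"interphase width ≈ {width_10_90:.2f} µm")
```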
Conclusions
UHMWPE/PP blends with different component ratios prepared by the TSE and ERE processing techniques were systematically investigated for wear resistance. PP reduced the loss factor (tan δ) and friction coefficient of the UHMWPE matrix, helping the composite maintain good mechanical properties during friction and resulting in less oxidative degradation and a smaller increase in crystallinity of the friction surface. Importantly, the elongational flow field clearly promoted the dispersion of the PP reinforcement in the UHMWPE matrix and facilitated the interdiffusion of UHMWPE and PP molecular chains, and the formation of a wide interphase (approximately 2 µm) was demonstrated for the first time by Raman mapping. This interlayer phase effectively strengthens the UHMWPE-PP interface and disperses the stress from the surface layer over a wider area during wear, giving the UHMWPE/PP blend superior wear resistance compared with the interface created by the TSE process. At the same PP content, the tensile strength of ERE-85/15 (25.3 MPa) was higher than that of TSE-85/15 (22.7 MPa), the elongation at break (341.77%) was four times higher, and the wear loss of ERE-85/15 (1.5 mg) was half that of TSE-85/15 (3.0 mg) in the sliding wear test. This hybrid material developed by ERE processing has excellent tribological properties and can be extended to many new applications, such as advanced structural materials, protective coatings for micromechanical systems, and contact-damage-resistant components.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/polym13121933/s1, Figures S1-S7, Tables S1 and S2. Figure S1: SEM images of cryofractured surfaces of UHMWPE/PP; Figure S2: Raman spectra of UHMWPE and PP; Figure S3: DSC curves of UHMWPE/PP under different processing methods; Figure S4: Mechanical properties of UHMWPE/PP under different processing methods; Figure S5: Friction coefficient as a function of friction time for UHMWPE and UHMWPE/PP under different processing methods; Figure S6: SEM images at the end of the friction test of UHMWPE and UHMWPE/PP; Figure S7: Friction temperature of UHMWPE and UHMWPE/PP under different processing methods; Table S1: DSC data of UHMWPE by different processing methods; Table S2
Data Availability Statement:
The data presented in this study are available upon request from the corresponding authors. | 2021-06-20T08:41:42.463Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "e038b5460911c507ee2a6568deaad65ab1f01485",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/13/12/1933/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e038b5460911c507ee2a6568deaad65ab1f01485",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215782417 | pes2o/s2orc | v3-fos-license | COVID-19 global pandemic planning: Decontamination and reuse processes for N95 respirators
Coronavirus disease 2019 (COVID-19) is an illness caused by a novel coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The disease was first identified as a cluster of respiratory illness in Wuhan City, Hubei Province, China, in December 2019, and has rapidly spread across the globe to more than 200 countries. Healthcare providers are at increased risk of contracting the disease due to occupational exposure and require appropriate personal protective equipment (PPE), including N95 respirators. The rapid worldwide spread of high numbers of COVID-19 cases has created a need for a substantial supply of PPE that is largely unavailable in many settings, creating critical shortages. Creative solutions for the decontamination and safe reuse of PPE to protect our frontline healthcare personnel are essential. Here, we describe the development of a process, begun in late February 2020, for selecting and implementing hydrogen peroxide vapor (HPV) as a viable method to reprocess N95 respirators. Since pre-existing HPV decontamination chambers were not available, we optimized the sterilization process in an operating room after experiencing initial challenges in other environments. Details are provided about the prioritization and implementation of processes for collection and storage, pre-processing, HPV decontamination, and post-processing of filtering facepiece respirators. Important lessons learned from this experience include developing an adequate reserve of PPE for effective reprocessing and distribution, and identifying a suitable location with optimal environmental controls (i.e., an operating room). Collectively, the information presented here provides a framework for other institutions considering decontamination procedures for N95 respirators. Impact statement: There is a critical shortage of personal protective equipment (PPE) around the globe. This article describes the safe collection, storage, and decontamination of N95 respirators using hydrogen peroxide vapor (HPV). This article is unique because it describes the HPV process in an operating room and is, therefore, a deployable method for many healthcare settings. Results presented here offer creative solutions to the current PPE shortage.
INTRODUCTION
Rapid global dissemination of the novel coronavirus disease (COVID-19), caused by the enveloped, nonsegmented, positive-sense RNA virus SARS-CoV-2, has overwhelmed healthcare systems around the world. The rapid increase in clinical cases presenting at healthcare facilities when the disease propagates in a particular geographic region requires a rapid response by the healthcare system. The primary means of protecting frontline healthcare personnel (HCP) from contracting COVID-19 is the proper use of personal protective equipment (PPE), such as N95 filtering facepiece respirators (FFRs). Given the rapid spread of the virus around the globe, there is high-volume demand for a continuous supply of PPE. The consequence of such global demand has been a significant strain on the supply chain of N95 respirators and other PPE. This shortage raises substantial concerns for healthcare facilities and HCP. The Centers for Disease Control and Prevention (CDC) has implemented an ongoing, continually updated release of information to optimize the supply of N95 respirators, with the most recent updates on 4 April 2020 1 . While reuse of N95 respirators (and other PPE) would unquestionably be obviated by an adequate supply, creative strategies are required when supply and demand are imbalanced. Given the current global shortage of PPE, creative solutions are immediately required to mitigate the risk of exposure of HCP to SARS-CoV-2. In anticipation of such a shortage, we began exploring the most viable and safe methods for sterilizing PPE for reuse in late February 2020 at the University of New Mexico (UNM). During this short period, we have learned the importance of concerted, coordinated efforts devoted to the overall workflow for safe collection, storage, decontamination, and distribution of reprocessed PPE, along with the requisite safety training of staff who perform the reprocessing.
Selection and Prioritization of Decontamination Methods:
In preparation for a probable shortage of PPE at our study sites in Kenya, and a possible shortage in the US (including UNM), we began investigating methods for decontaminating FFRs in late February 2020. At that time, it became apparent that several decontamination procedures had been investigated, and that some of the methods (importantly) did not substantially impact the structural integrity (i.e., filter aerosol penetration, airflow resistance, and physical integrity) of N95 respirators after multiple decontamination cycles. In considering the possible options, we used a data-driven approach based on the available peer-reviewed literature, publicly available information, and consultation with subject matter experts. The strategic planning also considered the availability of instruments commonly found in healthcare systems that could be rapidly transitioned and implemented for decontamination of N95 respirators. Prior comparative studies found that all the methods tested maintained optimal levels of filter aerosol penetration (<5%) for all six FFR models examined, except for HPGP, which showed >5% penetration for four of the six FFRs. Neither of the two studies, however, examined organism killing as part of the experimental paradigm.
One published report from an FDA award to Battelle Memorial Institute investigated decontamination of N95 FFRs (3M model 1860) using hydrogen peroxide vapor (up to 50 cycles) delivered from a Bioquell Clarus C HPV decontamination system 4 . The study found that aerosol collection efficiency and air flow resistance were not affected over the 50 cycles of reprocessing. Although no visible degradation of the elastic straps was observed for up to 20 cycles, after 30 cycles the straps showed signs of fragmentation upon stretching. The Battelle study also measured decontamination using a biological indicator (BI), Geobacillus stearothermophilus, a spore-forming organism resistant to HPV decontamination and heat that therefore represents a high-stringency surrogate for pathogen inactivation. Importantly, their work demonstrated that biological aerosol exposure followed by HPV decontamination was effective for up to 50 cycles, with a 6-log reduction in the BI. Battelle recently received FDA approval to incorporate the VHP method into a mobile Critical Care Decontamination System™ (CCDS) for large-scale decontamination of PPE for reuse, including N95 respirators, for up to 20 cycles 5 . In line with the Battelle findings, Duke University and Health System recently evaluated and implemented VHP methods for the decontamination and reuse of N95 respirators for up to 30 cycles 6 . The University of Nebraska Medical Center recently developed a detailed workflow for decontamination of N95 respirators and opted to utilize a UVGI process 7 . Deployment of reprocessed FFRs for some of their HCP has already been implemented.
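The "6-log reduction" criterion used in the Battelle work is simple arithmetic on indicator counts; a minimal sketch follows, with an illustrative 10^6-spore inoculum assumed for the BI (the actual spore load is not stated here).

```python
import math

def log_reduction(n_initial, n_surviving):
    """log10(N0/N); treat zero survivors as at the detection limit of 1."""
    n_surviving = max(n_surviving, 1)
    return math.log10(n_initial / n_surviving)

# A BI assumed to carry 1e6 G. stearothermophilus spores, no growth in culture:
print(f"{log_reduction(1_000_000, 0):.1f}-log reduction achieved")  # >= 6.0
```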
Based on the available literature and consultation with subject matter experts throughout the planning phase, we prioritized VHP decontamination of FFRs as our top choice by mid-February and subsequently began developing our processes. Additional reasons for this choice included: 1) HPV technology is a widely used industry standard for decontamination/sterilization in research and medical facilities, and 2) improved hydrogen peroxide has the lowest EPA acute toxicity category (i.e., category IV), meaning that it is essentially non-toxic and not an irritant by oral, inhalation, and dermal routes of administration 8,9 . As additional validation of our choice of HPV decontamination, the CDC recently released information about FFR decontamination and reuse as a "crisis capacity strategy to ensure continued availability", and HPV was listed as one of the most promising potential methods 10 .
Collection and Storage of used FFRs:
We employed a process in which the HCP removes the FFR following the appropriate institutional guidelines. The FFR is inspected for visible soiling, saturation, or loss of structural integrity, and FFRs that are structurally intact and not visibly soiled or saturated are placed in a designated foot-pedal receptacle containing a biohazard bag. FFRs that do not meet the inspection standards are discarded in a separate receptacle using standard institutional procedures. This process is followed by safe doffing of the gloves and hand hygiene.
Designated personnel retrieve the biohazard bags from the unit when the receptacles become half-full, per communication (telephone call) from the originating unit. Information communicated from the unit to the designated pick-up individual includes the unit name, the location of the bins (e.g., room numbers), and the assigned contact person on the unit. The individual retrieving the material follows the designated institutional guidelines for ensuring safety. The biohazard bag being retrieved is placed in another biohazard bag and closed with a zip tie. A sticker designating the date and unit of origin is placed on the outside of the bag, followed by transport of the material to a locked storage area.

Chemical indicators (CIs) and biological indicators (BIs; supplier in Lakewood, CO) are placed throughout the processing room, with another BI placed immediately outside of the room to serve as a control. Once the aeration phase is complete, a PortaSens III Hydrogen Peroxide Sensor is used to ensure that H2O2 vapor in the room is below 1.0 ppm prior to personnel entry into the room 11 . The CIs were visually inspected immediately after each run and the BIs placed in culture following the manufacturer's instructions. Every run using the conditions listed above has achieved a 6-log reduction on the CIs and negative cultures for the BIs (Figure 3). FFRs are not removed from the racks until the reading reaches 0.0 ppm.
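The two air-monitoring gates in this paragraph (entry below 1.0 ppm, FFR release at 0.0 ppm) can be captured as a small decision rule. The sketch below hard-codes those thresholds from the text and assumes sensor readings are supplied externally; it is not part of the described workflow software, which is not specified.

```python
ENTRY_LIMIT_PPM = 1.0    # personnel entry requires H2O2 below this value
RELEASE_LIMIT_PPM = 0.0  # FFRs leave the racks only at this reading

def room_status(h2o2_ppm: float) -> str:
    """Translate a PortaSens-style ppm reading into an action."""
    if h2o2_ppm >= ENTRY_LIMIT_PPM:
        return "no entry: continue aeration"
    if h2o2_ppm > RELEASE_LIMIT_PPM:
        return "entry permitted; FFRs stay on racks"
    return "entry permitted; FFRs may be removed from racks"

for reading in (3.2, 0.6, 0.0):
    print(f"{reading:.1f} ppm -> {room_status(reading)}")
```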
Post-Processing:
The personnel performing the post-processing wear a procedure mask and gloves. Once the FFRs are removed from the rack, they are visually inspected for any damage, and those with signs of physical damage (mask surfaces, staples, or elastic bands) are discarded. FFRs that pass the physical inspection are marked with a small indelible mark (using a Sharpie pen). The marking pattern on the FFRs for up to 20 cycles, the maximum number of reprocessing runs, is shown in Figure 4. The reprocessed FFRs are then placed into individual bags marked with the processing date and batch run, followed by sorting by size and model for redistribution. All users of reprocessed FFRs should perform a visual inspection of the N95 prior to donning to ensure overall structural integrity, followed by a fit test to ensure that an effective seal is achieved. FFRs that do not meet this integrity check are discarded.
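The indelible-mark scheme is, in effect, a per-respirator cycle counter with a hard 20-cycle retirement limit. A hypothetical bookkeeping sketch follows; the class name, identifiers, and batch labels are illustrative, not part of the described workflow.

```python
from dataclasses import dataclass, field

MAX_CYCLES = 20  # maximum number of reprocessing runs stated in the text

@dataclass
class Respirator:
    ffr_id: str              # assumed tracking identifier
    model: str
    cycles: int = 0          # one indelible mark per completed HPV cycle
    batches: list = field(default_factory=list)

    def can_reprocess(self) -> bool:
        return self.cycles < MAX_CYCLES

    def record_cycle(self, batch: str) -> None:
        if not self.can_reprocess():
            raise ValueError(f"{self.ffr_id}: discard, {MAX_CYCLES}-cycle limit reached")
        self.cycles += 1
        self.batches.append(batch)

ffr = Respirator("A-0001", "example N95 model")
while ffr.can_reprocess():
    ffr.record_cycle(batch=f"run-{ffr.cycles + 1:02d}")
print(ffr.ffr_id, ffr.cycles, "cycles recorded; respirator now retired")
```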
CONCLUSION AND DISCUSSION
A short time ago, the decontamination of FFRs for reuse would have been considered (by most) either unnecessary or non-viable. However, strain on the global supply chain of PPE, in the context of providing a safe working environment for HCP, has fostered creative solutions that are now being considered and implemented at some institutions. The most critical steps in the process are: 1) to treat PPE as a limited commodity with a finite supply, and 2) to begin the safe collection and storage of PPE for potential reuse. Without a reserve of supplies to reprocess, the ability to efficiently create a workflow for decontamination and deployment of reprocessed FFRs (or other PPE) becomes exceedingly limited. Before deciding on the exact decontamination procedure we might need to implement, we created the workflow to safely collect and store FFRs (and other PPE) to build sufficient reserves. This allowed us to focus our efforts on deciding which procedures were viable in our environment and, once determined, to rapidly implement the steps involved in the decontamination process.
Based on the available information at the time, we prioritized HPV decontamination as our first choice, with UVGI as a viable second option. However, since we did not have any pre-existing configurations containing large chambers with external sources of HPV, we started testing HPV generators in different environments. Learning through trial and error, in an iterative process and with open minds, was critical to our eventual success. Initially, we tested the process in a standard room (22' x 8' with 8' ceilings; 131 m3) and met with challenges. For example, the room did not have adequate airflow to cool the environment to an optimal temperature between HPV processing runs. This caused the Bioquell instrument to shut down during the gassing phase due to overheating, thereby reducing the desired levels of H2O2 (ppm). It became apparent that waiting a protracted period for the room to reach the desired temperature before a subsequent run would not achieve the desired efficiency. As such, we eliminated this environment as a viable option and set up the HPV decontamination process in one of four unused operating rooms. Based on their intended use, such environments are constructed with optimized climate control, outside air exchanges, and finishes that are monolithic, scrubbable, and free of crevices and fissures. Sterilization of operating rooms with portable HPV generators, such as the instrument we employed, is an industry standard for no-touch disinfection of the environment to prevent transmission of pathogens. During the HPV exposure application we further isolated the operating room by sealing off the heating, ventilation and air conditioning (HVAC) supply/exhaust ducts and the door with polyethylene sheeting and tape. Upon setting up the HPV process in the operating room, we achieved immediate
success and moved forward in that setting. We have achieved similar efficacy in a second operating room with a different Bioquell system (BQ-50), indicating flexibility in the overall process.
Results presented in this manuscript are meant to serve as an information-sharing tool for other institutions that may wish to set up such processes, particularly those without dedicated HPV chambers already in place. The workflow described here is one of many options for operationalizing the overall process, and different institutions will find creative solutions for their own unique challenges with PPE shortages. The two most important lessons learned from our experience are: 1) develop an adequate reserve of PPE to efficiently implement the reprocessing workflow, and 2) locate a suitable environment for the HPV decontamination procedure, such as an operating room, which has the pre-existing conditions required for conducting the HPV decontamination process. While we certainly face unique challenges with COVID-19 that were not previously imagined, an efficient and safe workflow for reprocessing FFRs, and other PPE, can substantially improve the protection of our HCP during this phase of critical shortages. An efficient and robust reprocessing workflow can also promote re-implementation of the previous (more stringent) standards of PPE use that were common before the current shortage.
FIGURES
Figure 1. FFR placement and spacing on processing rack.
Figure 3. Culture results from biological indicators (Geobacillus stearothermophilus spores), with the control placed outside the room (left, yellow) and 10 BIs placed in the processing room during HPV decontamination (right, 1-10, purple).
Figure 4. Marking pattern on reprocessed FFRs (panels: reprocessed 1x, 6x, and 20x). | 2020-04-16T09:17:34.169Z | 2020-04-14T00:00:00.000 | {
"year": 2020,
"sha1": "ea98adf4d7c46b92f5752e2db72058f74bf3e529",
"oa_license": "CCBYNCND",
"oa_url": "https://www.medrxiv.org/content/medrxiv/early/2020/04/14/2020.04.09.20060129.full.pdf",
"oa_status": "GREEN",
"pdf_src": "MedRxiv",
"pdf_hash": "e165780491152cf6fcadf7f222b6057e76e2858f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Business"
]
} |
4143984 | pes2o/s2orc | v3-fos-license | Segregation of RD-114 and FeLV-related sequences in crosses between domestic cat and leopard cat
TYPE C viruses of the RD-114 (ref. 1) group have been isolated, either spontaneously or after chemical induction, from cell cultures of the domestic cat (Felis catus)2–4. Nucleic acid sequences related to the RD-114 genome are in the DNA of all domestic cats5–8. Thus these viral genomes are transmitted vertically from parent to offspring as integral components of cat cellular DNA. Although the family Felidae consists of closely related animals, only four Felis species have been found to contain RD-114-related sequences. These include the domestic cat, the European wildcat (F. sylvestris), the sand cat (F. margarita), and the jungle cat (F. chaus); other members of the Felidae lack nucleic acid sequences related to RD-114 (ref. 9). The observation that RD-114 is partially related to the endogenous baboon type C viruses10–12 and that sequences related to RD-114 are found in the cellular DNA of all Old World monkeys led to the postulate that this group of viruses originated from an endogenous primate type C virus13 transmitted horizontally to the germ line of ancestors of certain Felis species during the Pliocene or early Pleistocene somewhere in the region of the Mediterranean basin9.
Segregation of RD-114 and FeLV-related sequences in crosses between domestic cat and leopard cat

TYPE C viruses of the RD-114 (ref. 1) group have been isolated, either spontaneously or after chemical induction, from cell cultures of the domestic cat (Felis catus)2-4. Nucleic acid sequences related to the RD-114 genome are in the DNA of all domestic cats5-8. Thus these viral genomes are transmitted vertically from parent to offspring as integral components of cat cellular DNA. Although the family Felidae consists of closely related animals, only four Felis species have been found to contain RD-114-related sequences. These include the domestic cat, the European wildcat (F. sylvestris), the sand cat (F. margarita), and the jungle cat (F. chaus); other members of the Felidae lack nucleic acid sequences related to RD-114 (ref. 9). The observation that RD-114 is partially related to the endogenous baboon type C viruses10-12 and that sequences related to RD-114 are found in the cellular DNA of all Old World monkeys led to the postulate that this group of viruses originated from an endogenous primate type C virus13 transmitted horizontally to the germ line of ancestors of certain Felis species during the Pliocene or early Pleistocene, somewhere in the region of the Mediterranean basin9. A second distinct group of type C viruses, the feline leukaemia viruses (FeLV), has also been isolated from domestic cats14. Although FeLVs are horizontally transmitted among domestic cats, genes partially related to the RNA genome of FeLV are found in F. catus DNA15 and in the DNA of the other Felidae species that contain RD-114-related nucleic acid sequences16. Viruses of the FeLV group are also postulated to have been transmitted to an ancestor of these Felis species, but to have originated from a rodent rather than a primate source16.
The leopard cat (F. bengalensis) is a spotted wildcat found throughout South-east Asia which lacks RD-114 and FeLV-related DNA sequences (RD−, FL−)9. Since leopard cats produce viable offspring when bred with domestic cats (RD+, FL+), we studied the segregation of both sets of virogenes in F1 hybrids and in the progeny of a backcross to the RD−, FL− parent. The cellular DNA of the F1 hybrids contains half the number of copies of each set of sequences. The RD and FL virogenes segregate together in the backcrossed animals in a manner consistent with their localisation at a single chromosomal site.
The reassociation kinetics obtained by hybridising 3H-DNA transcripts of viral RNA to cellular DNA can be used to estimate relative gene frequencies by determination of half-Cot values (the midpoint of the renaturation curve)17. The number of gene copies can also be estimated by plotting reassociation kinetics as the reciprocal of the fraction of unhybridised 3H-DNA against Cot (Wetmur-Davidson plot)18. In such plots, the slope is proportional to the number of copies of the sequences measured. Using RD-114 3H-DNA probes, multiple copies of virus-related sequences can be detected in the cellular DNA of stray domestic cats, domestic cats reared in a germ-free environment (Merck, Sharp and Dohme, West Point, Pennsylvania), and European wildcats (F. sylvestris) (Fig. 1a and Table 1). The Cot½ values ranged from 120 to 170. In contrast, the cellular DNA of the leopard cat completely lacks RD-114-related sequences. The Cot½ values for the self-annealing of non-repetitive domestic cat cellular DNA, and for the hybridisation of the 3H-DNA RD-114 probe to the DNA of a canine thymus cell line infected with RD-114, ranged from 1,800 to 2,000 (Fig. 1, Table 1). Given that Cot½ values of 1,800-2,000 are obtained with genes present in a single copy per haploid genome, domestic cat and European wildcat cellular DNAs contain 10-13 copies of RD-114-related sequences per haploid genome. Since these copies represent a family of diverging gene sequences only partially related to one another19, the calculated number of copies may be an underestimate20,21.
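The copy-number arithmetic used throughout this report reduces to a simple ratio: copies per haploid genome ≈ Cot½(single copy)/Cot½(observed). The sketch below applies it to representative mid-range Cot½ values from the text; these are illustrative midpoints, not exact measurements.

```python
# Copy number from hybridisation kinetics: single-copy Cot½ divided by
# the observed Cot½ of the viral probe. 1,800-2,000 is the single-copy
# range given in the text.
SINGLE_COPY_COT = (1800.0, 2000.0)

def copy_number(observed_cot):
    lo, hi = SINGLE_COPY_COT
    return lo / observed_cot, hi / observed_cot

# Assumed mid-range values: domestic cat Cot½ ~150 (range 120-170),
# F1 hybrid Cot½ ~300 (range 275-350).
for label, cot in [("domestic cat", 150.0), ("F1 hybrid", 300.0)]:
    lo, hi = copy_number(cot)
    print(f"{label}: ~{lo:.0f}-{hi:.0f} copies per haploid genome")
```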
Leopard cat males were mated to domestic cat females and the F1 hybrids studied. These DNAs contain a complete complement of sequences related to the RD-114 probe, but only half the number of copies present in the domestic cat parent (Fig. 1a). The Cot½ values (275-350) represent, as a minimum estimate, five to seven virogene copies per haploid genome. Two kittens obtained from an F1 hybrid female backcrossed to the leopard cat (Fig. 2) were also studied. Figure 1a shows that kitten No. 1 contains all the RD-114-related information, but only half the number of copies (Cot½ ≈ 280), like the F1 parent, whereas kitten No. 2 (from the same litter) lacks RD-114-related DNA sequences, like its leopard cat parent. These results suggest that the multiple copies of RD-114-related sequences are located together at one (or relatively few) chromosomal sites.

Table 1 legend (footnotes): * Cellular DNA was extracted from tissues (…, kidney, lung) and hybridised to RD-114 and FeLV 3H-DNA as described in Fig. 1. Domestic cats Nos 2 and 3 were from the germ-free colony of cats at Merck, Sharp and Dohme; animals from this colony have never been found to be positive for infectious feline leukaemia virus. F1 hybrid cats Nos 1, 2, 3 and 4 belong to three separate litters. † Cot½ values represent the midpoint of the reannealing curves17. ‡ The approximate number of copies per haploid genome of sequences related to either RD-114 or FeLV was estimated from reciprocal plots (Fig. 1). The number of copies is determined by the ratio of the slope of each line to the slope of the line described by the reassociation of non-repetitive domestic cat cellular DNA (Cot½ = 1,800-2,000; see also ref. 19). In the case of RD-114, where two sets of viral sequences can be detected in cellular DNA, the number of copies listed is the average of the two populations. § CCC clone 6 is from a continuous line of domestic cat kidney fibroblasts and is not releasing type C virus, and RD-114/FCf2Th and FeLV/FCf2Th are a dog thymus cell line infected, respectively, with RD-114 (ref. 1) or with the helper virus from the Gardner-Arnstein strain of feline sarcoma virus.

The same cats were examined for FeLV-related genes using 3H-DNA probes prepared from various strains of FeLV22,23. No cross-hybridisation between these probes and RD-114 is detectable15,24. The results parallel exactly the data obtained with transcripts of RD-114 RNA. The domestic cat parent contains multiple copies of FeLV-related virogenes, whereas the leopard cat parent lacks these sequences. The F1 hybrids contain half the number of copies. Backcrossed kitten No. 1 has the same number of virogene copies as its parent (the F1 hybrid), although its littermate has no detectable sequences related to FeLV. Table 1 summarises the hybridisation data; four clear classes of DNAs are evident. The first consists of animals that contain full complements of RD-114 and FeLV-related gene sequences. These cats contain both sets of virogene sequences reiterated a comparable number of times, although there may be fewer FeLV-related copies. The animals in the second class, including all the F1 hybrids and one of the backcrossed kittens, contain half the virogene complement of the RD+, FL+ parents. A third class contains sequences that anneal to the viral probe with reassociation kinetics comparable to those of the most slowly reannealing cellular DNA sequences, and therefore probably contains one viral copy. This class includes cloned heterologous cell lines producing high titres of either RD-114 or FeLV. The fourth class completely lacks sequences related to either RD-114 or FeLV and includes the leopard cats and one of the backcrossed kittens. The observed co-segregation argues against the possibility that each of the multiple copies of RD-114 or FeLV-related virogenes occurs on a different linkage group, as well as against models of non-chromosomal inheritance of the multiple viral copies. Since only certain Felis species contain RD-114 and FeLV-related sequences in their DNA, we propose that both classes of viruses were acquired by cats subsequent to their major radiation, most likely in the Pliocene, and that both sets of sequences have been perpetuated in the germ line9,16. The multiple virogene copies presumably arose as a result of gene duplication and/or unequal crossing-over after infection. The presence of multiple copies of both sets of virogenes in cat cellular DNA seems to be a general property of endogenous mammalian type C viruses; mouse, rat, hamster, pig and baboon DNAs also contain similar numbers of copies of their respective endogenous viruses19. The genetic crosses described here provide a new approach to the study of the evolution of multiple gene systems.
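The single-site interpretation makes a testable Mendelian prediction: an F1 x leopard-cat backcross should produce virogene-positive (half-dose) and virogene-negative kittens in a 1:1 ratio, exactly the two classes observed in kittens No. 1 and No. 2. A toy simulation of that expectation is sketched below; the litter size is arbitrary.

```python
import random

random.seed(1)

def backcross_kitten():
    """One gamete draw from a hemizygous (+/-) F1 parent crossed to -/-.

    If all virogene copies sit at a single chromosomal site, the kitten
    inherits the whole cluster (half dose) with probability 1/2.
    """
    return "positive (half dose)" if random.random() < 0.5 else "negative"

litter = [backcross_kitten() for _ in range(1000)]
counts = {g: litter.count(g) for g in set(litter)}
print(counts)  # expected ~500:500, i.e. a 1:1 segregation ratio
```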
The virogene sequences can be considered one of the groups of moderately repetitive sequences, such as the genes for 5S RNA26, histones26 and feather keratin20. The existence of natural populations of animals that either lack or contain DNA sequences related to both RD-114 and FeLV, and the ability of these cats to interbreed, permits the study of the physiological and potentially pathological role of each of these genetically transmitted gene sequences. Hybrid animals containing half the number of virogene copies, and virogene-negative cats, should allow the study of the effects of gene dose on susceptibility and resistance to diseases mediated by both groups of type C viruses.
Identification of heat-dissociable RNA complexes in two porcine coronaviruses

THE coronavirus genome has been shown to comprise single-stranded RNA 1,2. Examination of the viral nucleic acid synthesised by pig kidney cells infected with transmissible gastroenteritis virus (TGEV) suggested that several molecular species, ranging in size between 18 and 28S, were involved in the viral replicative cycle 3; similarly, Tannock found a wide variation in the size of RNA molecules extracted from avian infectious bronchitis virus (IBV) by a phenol-sodium dodecyl sulphate (SDS) method 4. Extraction of IBV RNA by 1% SDS at 60 °C has, however, revealed a single component of molecular weight 9 × 10⁶, corresponding to 60S by electrophoresis through 2.2% polyacrylamide gels 5.
We have examined the RNA extracted from purified preparations of TGEV and a second porcine coronavirus, haemagglutinating encephalomyelitis virus (HEV), and have found a 60-70S RNA component which dissociates into 35S and 4S material on heating above 60 °C, in a way that closely resembles the genome of the oncogenic RNA viruses.
We had observed that treatment of purified TGEV with 1% SDS at 20 °C disrupted the virions and liberated a high molecular weight complex containing the RNA. On the assumption that this complex might comprise the hitherto undetected ribonucleoprotein, we extracted the material from TGEV preparations radioactively labelled with 3H-uridine or with 3H-leucine to determine which structural polypeptide was associated with the complex. As is shown in Fig. 1, however, the fast-moving RNA complex has no detectable protein associated with it, while polyacrylamide gel analysis of the radioactivity remaining near the top of
"year": 1975,
"sha1": "c3038257447be2d2e08f140639a9463f2ad85f4a",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc7086506?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "23d9759a661ac8679e869223b1c6539c51f7161c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Chiral Cyclobutane-Containing Cell-Penetrating Peptides as Selective Vectors for Anti-Leishmania Drug Delivery Systems
Two series of new hybrid γ/γ-peptides, γ-CC and γ-CT, formed by (1S,3R)-3-amino-2,2-dimethylcyclobutane-1-carboxylic acid joined in alternation to an Nα-functionalized cis- or trans-γ-amino-L-proline derivative, respectively, have been synthesized and evaluated as cell-penetrating peptides (CPP) and as selective vectors for anti-Leishmania drug delivery systems (DDS). They lacked cytotoxicity towards the tumoral human cell line HeLa, with moderate uptake into these cells. In contrast, both γ-CC and γ-CT tetradecamers were microbicidal on the protozoan parasite Leishmania beyond 25 μM, with significant intracellular accumulation. They were conjugated to fluorescent doxorubicin (Dox) as a standard drug; the conjugates were toxic to the parasite beyond 1 μM, whereas free Dox was not. Intracellular accumulation was 2.5-fold higher than with the Dox-TAT conjugate (TAT = transactivator of transcription, taken as a standard CPP). The conformational structure of the conjugates was approached both by circular dichroism spectroscopy and molecular dynamics simulations. Altogether, computational calculations predict that the drug-γ-peptide conjugates adopt conformations that bury the Dox moiety into a cavity of the folded peptide, while the positively charged guanidinium groups face the solvent. The favorable charge/hydrophobicity balance in these CPP improves the solubility of Dox in aqueous media, as well as translocation across cell membranes, making them promising candidates for DDS.
Introduction
Novel drug delivery systems (DDS) have been developed to achieve a more effective and specific delivery of the drugs to the target organ or cell. One of the major advantages of this approach is to avoid, or at least decrease, the side-effects associated with conventional drugs [1]. In this regard, DDS, together with imaging techniques, are key tools in theranostics or personalized medicine [2,3].
Cell penetrating peptides (CPP) [4][5][6] are potential carriers for DDS. They consist of 5-30 amino acid-long peptides which are capable of transporting the cargo molecule into the intracellular space in the absence of a cognate transporter. At a higher specificity level, the inclusion of an import motif in the CPP may even make it possible to direct the cargo into specific organelles. Either covalent or noncovalent bonding between the cargo molecule and the CPP underlies their use as DDS [7][8][9]. CPP possess appealing features such as good biocompatibility, the potential to fine tune their stability and solubility in biological environments, and potential to create new multifunctional DDS [10]. As such, CPP were implemented as a new tool in the therapeutics of a variety of diseases, with special relevance for cancer [11].
Optimization of the performance of CPP aims to improve both their proteolytic resilience and cell uptake, as well as to achieve minimal toxicity. Endosomal escape ability is also an important characteristic of optimal CPP. Imperviousness to peptidases can be tackled by the inclusion of nonnatural or stereochemically modified amino acids, as well as of peptide bond surrogates, in the peptide sequence [12]. Otherwise, this can also be addressed by the introduction of conformational constraints to obtain a more stable secondary structure in the CPP, which is sometimes associated with better cell uptake [12]. For cationic CPP, uptake and toxicity are highly dependent on the number and spatial distribution of the positive charges throughout their sequence. With these two goals in mind, CPP were designed with the inclusion of cyclic amino acids, mainly proline and γ-aminoproline [13,14], helical peptide foldamers [15][16][17], and cyclic peptide backbones [18][19][20].
In recent years, two diastereomeric series of short cell-penetrating hybrid γ/γ-peptides were efficiently prepared through convergent synthesis in solution. These peptides were formed by repetition of a dimeric unit constituted by cis-γ-amino-L-proline, 3, combined with a protected derivative of either (1S,3R)-or (1R,3S)-3-amino-2,2-dimethylcyclobutane-1-carboxylic acid, namely (1S,3R)-γ-CBAA, 1, or (1R,3S)-γ-CBAA, 2, (Figure 1). By high-resolution NMR, these peptides displayed very rigid and compact structures due to the intra-and inter-residue hydrogen-bonded ring formation [21]. Their uptake by HeLa cells increased with the length of the peptide; a five-carbon spacer between the proline N α atom and the terminal guanidinium group of the side chain was optimal for cell uptake, while the stereochemistry of the γ-CBAA was largely irrelevant. A good polar-hydrophobicity balance was achieved by the alternation of the guanidinium groups and the hydrophobicity of the (gem-dimethyl)cyclobutane ring [21,22]. These cyclobutane-containing CPP showed lower toxicity but similar cell uptake to those made exclusively of γ-amino proline residues [23], likely due to their halved number of guanidinium groups compared with the γ-aminoproline peptides of the same length.
Nowadays, the therapeutic use of CPP-based DDS is mainly focused on cancer, running far beyond examples involving infectious diseases and their causative agents [24]. Nevertheless, infective bacteria, fungi and protozoa are an appealing test for CPP. The cell membrane of these pathogens shows an external anionic hemilayer, in contrast to the zwitterionic one of mammalian cells, privileging interactions of cationic CPP with such pathogens [25]. Furthermore, pathogenic organisms are frequently endowed with a high proteolytic armamentarium as part of their virulence program [26].
A case in point is the human protozoan parasite Leishmania, the causative agent of the wide clinical spectrum of leishmaniasis, a disease with a significant impact on global human health [27]. There is an urgent need for new chemotherapy alternatives for leishmaniasis [28][29][30] to reduce the side effects associated with current chemotherapy, despite the existence of some drugs such as paromomycin [31] or miltefosine [32]. Furthermore, Leishmania is a challenging organism for membrane-active peptides [33], including CPP. All the endocytic traffic, a common pathway for CPP uptake, is carried out through the flagellar pocket [34], a specialized region accounting for only 5% of the total surface of the parasite [35]. In addition, the promastigote, responsible for the primary infection in the mammalian host, displays a highly anionic glycocalyx [36], and its main plasma membrane protein is leishmaniolysin, a Zn2+-metalloprotease of broad substrate specificity [37]. The aflagellated amastigote is the pathological form of the parasite in vertebrates, dwelling in the parasitophorous vacuole of the macrophage, a strongly proteolytic and nutrient-demanding environment.
The anthraquinone doxorubicin (Dox) is broadly used in cancer chemotherapy, and its biological activity is preserved once conjugated to CPP through its amino group [11,49]. As doxorubicin also has leishmanicidal activity [50][51][52] and is a fluorescent compound, it is an excellent molecular beacon to test the performance of CPP as functional DDS, as well as to easily track the intracellular fate and accumulation of the conjugate.
The aim of the current work is the validation of [(1S,3R)-γ-CBAA]-based peptides as vectors in DDS for Leishmania chemotherapy, with two major advantages. The first one comprises circumventing the high proteolytic environment associated with this parasite throughout its life cycle; the second is the ability to take advantage of the anionic surface of Leishmania to achieve a higher specific recognition by cationic peptides compared to mammalian cells. To this end, two series of diastereomeric peptides of variable length, formed by the alternation of 1 with Nα-substituted cis- or trans-γ-amino-L-proline, respectively (Chart 1), were synthesized in order to define the length and stereochemistry as the main descriptors for their toxicity and internalization, both on Leishmania and HeLa cells, as well as their ability as DDS by conjugation of Dox.
TAT48-57 (TAT) [53][54][55] was also synthesized as a reference CPP, either with a free or carboxyfluoresceinated N-terminus (see Supplementary Materials).

Chart 1. Hybrid peptides of the γ-CC and γ-CT series constituted by repetition of the dipeptide unit γ-CBAA (1)/(cis- or trans-)γ-amino-L-proline, respectively.
Moreover, the rationale of their cell-transfection ability was examined by studying their structural features through a combination of circular dichroism (CD) spectroscopy and computational calculations.
In all, ((1S,3R)-γ-CBAA)-based peptides afforded better results than TAT (transactivator of transcription) as vectors for anti-Leishmania DDS, opening a new avenue for the implementation of CPP in the chemotherapy against this important protozoan disease.
Design and Synthesis of Peptides and Conjugates
Two series of dodecameric and tetradecameric hybrid oligomers (4-11) were synthesized. They were formed by the repetition of a dipeptide motif made up of (1S,3R)-γ-CBAA, 1, [21] linked to a conveniently functionalized cis- or trans-γ-amino-L-proline (γ-CC series and γ-CT series, respectively) (Chart 1). All of them were prepared using standard protocols of solid phase peptide synthesis (SPPS), either in their free N-terminus form or functionalized with a carboxyfluorescein (CF) fluorescent group, as detailed in Section 3 and the Supplementary Materials (SM).
Dox was covalently conjugated to the γ-CC and γ-CT tetradecameric peptides (Scheme 1), due to their higher internalization rates in Leishmania (Section 2.3). For this purpose (Scheme 1), a cysteine was added to the N-terminal end of the peptides to obtain 12 and 13. Then, they were conjugated to doxorubicin, previously functionalized as 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (MCC), 14 (conjugates 15 and 16). The same conjugation procedure was applied to TAT (see Supplementary Materials for details). Of note, a new stereogenic center (marked with an asterisk) was created in the succinimide ring of the linker in the last synthetic step of the Dox conjugates, giving two epimers that were used as a mixture. Their influence on the preferred conformations of these conjugates is discussed in Section 2.5 and in the Supplementary Materials.

Scheme 1. Conjugation of doxorubicin to γ-CC 15 and γ-CT 16 peptides. The Dox moiety is highlighted in red, the cysteine residue in green, and the linker in blue.
Cytotoxicity and Cellular Uptake in HeLa Cells
As a first step to establish the soundness of γ-CC and γ-CT peptides as therapeutic CPP for Leishmania, peptides 4-11 were assayed for their toxicity on HeLa cells, used as a model of mammalian cells. The cellular viability after 24 h incubation with the respective peptide was assessed by the reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) [56] ( Figure 2).
Even at the highest concentration assayed (50 μM), viability was over 90%, regardless of the peptide stereochemistry ( Figure 2). Peptide toxicity was not dependent either on the number of guanidinium groups in the sequence, or on the carboxyfluoresceination of the terminal amino group. In this sense, these results agree with the low toxicity of polyarginine peptides (R)n, up to n ≤ 10 [57].
By flow cytometry, more than 98% of the HeLa cells increased their respective cell-associated fluorescence after incubation with peptides 6, 7, 10 and 11 at the range of concentrations tested (Figure 3). The mean of the population represented in Figure 3 refers either to CF-TAT as a standard CPP, or to CF as an endocytosis marker.
Figure 3. Cellular internalization of carboxyfluoresceinated peptides 6, 7, 10, and 11 normalized (a) with respect to CF-TAT (100%), and (b) as a fluorescence ratio of the peptides with respect to CF as a control of endocytosis (CF fluorescence = 1 a.u.). HeLa cells were incubated with the respective peptide at 10 (empty column) or 25 μM (black column) for 24 h, and the level of fluorescence associated with the cells was assessed by flow cytometry (λEXC = 488 nm and λEM = 530 nm). Results are expressed as mean ± SD. Samples were made in triplicate (p ≤ 0.05, (*); p ≤ 0.01, (**); p ≤ 0.001, (***)). Three independent experiments were carried out.

The facts that the performance of TAT on HeLa cells does not increase linearly with peptide concentration (10 μM and higher), and that the cell population becomes heterogeneous with respect to peptide uptake [58], could explain the apparently anomalous results obtained when CF-TAT was used as a reference (Figure 3a). In contrast, the cellular fluorescence for both the γ-CT and γ-CC peptide series increased with concentration and with the length of the peptide when using CF as a control of endocytosis (Figure 3b). Interestingly, in all cases, the fluorescence values for the γ-CT peptides were higher than those of the respective γ-CC homologues.
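For clarity, the normalizations used in Figure 3 amount to simple ratios over the two controls. A minimal Python sketch, with made-up mean fluorescence intensities in place of the measured ones:

```python
# Normalisation of flow-cytometry uptake data as used for Figure 3:
# (a) mean cell fluorescence as a percentage of the CF-TAT control, and
# (b) as a ratio over free carboxyfluorescein (CF = 1 arbitrary unit).
# The mean fluorescence intensities (MFI) below are illustrative only.

mfi = {"CF-TAT": 5200.0, "CF": 410.0,
       "6": 2300.0, "7": 6900.0, "10": 3100.0, "11": 8400.0}

pct_of_tat = {p: 100.0 * v / mfi["CF-TAT"] for p, v in mfi.items()}
ratio_over_cf = {p: v / mfi["CF"] for p, v in mfi.items()}

for p in ("6", "7", "10", "11"):
    print(f"peptide {p}: {pct_of_tat[p]:.0f}% of CF-TAT, "
          f"{ratio_over_cf[p]:.1f}x CF")
```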
The cell internalization of peptides 7 and 11 in HeLa cells was evidenced by confocal microscopy under identical conditions to flow cytometry. The intracellular fluorescence showed a spotted cytoplasmic pattern after incubation with the peptides at 25 µM peptide ( Figure 4). Confocal 3D ruled out their mere association with the plasma membrane.
Uptake, Microbicidal Activity and Intracellular Location of Peptides on Leishmania Parasites
Once the lack of toxicity of the γ-CPP and their low uptake in mammalian cells compared to TAT had been demonstrated, their performance in Leishmania was examined as a model of microorganisms with an anionic plasma membrane and endowed with a high proteolytic level. To this end, the peptides were assayed for their effect on viability, as well as for uptake and intracellular location, on the protozoan Leishmania parasites.
The cellular viability in the presence of the peptides was assessed by the MTT reduction at both stages of the parasite after 4 h incubation ( Figure 5).
The loss of parasite viability caused by the peptides was concentration-dependent, with promastigotes (Figure 5b) being more susceptible than amastigotes (Figure 5a). The γ-CT series showed higher toxicity than its respective γ-CC counterparts. This trend was especially noticeable for peptide concentrations over 25 μM. Concerning toxicity, the tetradecapeptides (7 and 11) were slightly more toxic than the corresponding dodecamers (6 and 10), likely due to the increase of their cationic character [59]. Thus, these γ-CPP behave as mild leishmanicidal agents.
Next, the internalization of the carboxyfluoresceinated CPP into promastigotes and amastigotes was measured by flow cytometry, and their values compared to that of CF-TAT (100%). As shown in Figure 6, a similar trend was observed for both parasite systems. The fluorescence values of dodecamers 6 and 10 were relatively similar to, or up to 30% lower than, that of CF-TAT, while parasites incubated with tetradecamers 7 and 11 showed higher fluorescence than CF-TAT.
Next, the internalization of the carboxyfluoresceinated CPP into promastigotes and amastigotes was measured by flow cytometry, and their values compared to that of CF-TAT (100%). As shown in Figure 6, a similar trend was observed for both parasite systems. The fluorescence values of dodecamers 6 and 10 were relatively similar or up to 30% lower than that of CF-TAT, while parasites incubated with tetradecamers 7 and 11 showed higher fluorescence than CF-TAT. Under the same conditions, the intensity of the fluorescence was much higher in promastigotes than in amastigotes (data not shown), likely due to their much smaller cellular size; consequently, the promastigotes were chosen for the studies with confocal microscopy. Under the same conditions, the intensity of the fluorescence was much higher in promastigotes than in amastigotes (data not shown), likely due to their much smaller cellular size; consequently, the promastigotes were chosen for the studies with confocal microscopy.
Thus, these γ-CPP are potential cargo carriers at concentrations ≤ 25 µM; over this value, they behave as leishmanicidal agents on their own. This, together with their nil toxicity and poor uptake in mammalian cells, makes them suitable as DDS.
The internalization of these peptides in Leishmania donovani promastigotes was assessed by confocal microscopy for those peptides with the best results in flow cytometry. Thus, promastigotes were incubated with peptides 6, 7, 10 and 11, as well as TAT (Figure 7), at a final concentration of 10 µM for 2 h at 26 °C. DAPI was used as a nucleic acid dye to stain the nucleus and the kinetoplast (blue fluorescence) (Figure 7).
According to Figure 7, the peptides accumulated inside the promastigote. The pattern was not homogeneous. Some dotted areas showed higher fluorescence intensity, with special relevance near the flagellar pocket, the specialized area of Leishmania in charge of all endocytic and exocytic traffic, indicating endocytic CPP uptake. Under these conditions, the peptides were excluded from the nucleus.
In contrast to HeLa uptake, γ-CPP exceeded TAT uptake on Leishmania. In other words, an unexpected selectivity towards the parasite was found for these peptides, which supports their potential as selective vectors in DDS for this protozoan in its vertebrate host. A higher selectivity threshold for DDS against the parasite is of special relevance, due to the feasible avoidance of the side-effects associated with the current leishmanicidal drugs [60], and the likely improvement of the selectivity index of the payload with respect to its administration as a free drug.
Due to their higher accumulation both in Leishmania and HeLa cells, tetradecamers 7 and 11 were selected as vehicles for Dox-conjugation (conjugates 15 and 16, respectively), in order to be applied as DDS on Leishmania. Figure 8 shows the viability of the parasites when treated with these conjugates. According to Figure 7, the peptides accumulated inside the promastigote. The pattern was not homogenous. Some dotted areas showed higher fluorescence intensity, with a special relevance near the flagellar pocket, the specialized area of Leishmania in charge of all endocytic and exocytic traffic, indicating endocytic CPP uptake. Under these conditions, the nucleus excluded peptides.
Free doxorubicin did not show any toxicity towards Leishmania donovani promastigotes over the full range of concentrations assayed; the short incubation (<12 h) avoided the leishmanicidal effect that free doxorubicin causes at longer incubation times.
In contrast, the Dox-conjugates 15 and 16 showed increasing toxicity at concentrations ≥ 1 µM. The conjugate Dox-γ-CC 15 was slightly more toxic than Dox-γ-CT 16 at 5, 10 and 25 µM; MD simulations could account for such a difference (see below). In addition, Dox-TAT toxicity was similar to that of 15 and 16.
The uptake of free doxorubicin and of its peptide conjugates in L. donovani promastigotes was quantified by flow cytometry at 5 and 10 µM, measured after 2 and 4 h incubations (Figure 9). Dox-conjugates 15 and 16 showed better internalization than Dox-TAT, a trend already observed with the parent peptides 7 and 11 (see Figure 6) despite the modifications introduced (doxorubicin conjugation plus the additional cysteine). Longer incubation did not substantially improve the uptake of 15 and 16 at 5 µM, but it did at 10 µM. Under these conditions, the accumulation of 15 was 2.5-fold higher than that of Dox-TAT.
CD Spectroscopy
The CD spectra of γ-CC 5 and γ-CT 9 showed monosignated curves with λmax at 215 and 212 nm, respectively (see Figure S1 in the Supplementary Materials); nevertheless, these data were insufficient for an accurate insight into their folding, even though defined conformational biases have been reported for oligomers consisting of γ-amino acids [61]. To clarify these aspects, molecular modeling studies were undertaken.
Molecular Modeling
The hydrophobicity of amphipathic CPP [62,63] is crucial for their high-affinity binding to lipid membranes, including those of internalized vesicles [64,65]. In addition, the creation of high-density positive areas in amphipathic cationic peptides favors their specific binding to anionic biopolymers, such as the cell membranes of pathogens [66].
Therefore, molecular dynamics (MD) simulations were carried out on γ-CC 5 and γ-CT 9 peptides under an explicit solvent scenario. The goals were to shed light on the folding of peptides in aqueous solution and to determine how the arrangement of positive charges could be involved in cellular binding. In addition, the role of their respective cargo molecules (doxorubicin and carboxyfluorescein) in the conformation of the conjugates was also examined. For doxorubicin conjugates, both (R) and (S) epimers were considered (see Supplementary Materials for details).
The trajectories of both the CC and CT diastereomeric peptides converged into defined folded states after ~40 ns. In all of them, a hydrophobic core was formed and the polar guanidinium groups of the positively charged residues faced the solvent. A double hairpin motif was predominant for γ-CC 5 (Figure 10a), promoted mostly by diverse inter-residue hydrogen bonds (Figure S7a). These hairpins, and also β-sheet-like structures, have been observed for similar peptides based on cis-γ-amino-L-proline [67]. For the conjugate (R)-Dox-γ-CC, (R)-15, a stable conformation occurred after ~75 ns, with the Dox moiety being partially buried in the main peptide scaffold (Figure 10b). This perturbation of the original double hairpin by the presence of Dox is the consequence of long-range hydrogen bonding between the primary hydroxyl group of the cyclohexane side chain in the Dox moiety and the Nα-side-chain CO of proline residue i = 8 (Figure S7b).
A related conformation was the most stable for epimer (S)-Dox-γ-CC, (S)-15; the Dox moiety was folded towards the peptide backbone cavity with the establishment of a long-range hydrogen bond between the hydroxyl group of the amino sugar ring of the Dox-moiety and proline residue i = 12 (Figures S6 and S7c). The hairpin-like conformation was also obtained, along with a laminar one, when the conjugate CF-γ-CC 7 was considered ( Figures S8 and S9).
On the other hand, peptide γ-CT 9 adopted a stable right-handed helix structure that was massively occupied (77%) along the trajectory (Figure 11a herein and Figure S10). This folding was driven by intra-residue hydrogen bonds formed by the NH and the carbonyl group in the γ-CBAA units, and inter-residue ones formed between the protons of the guanidinium group of a residue i and the proline Nα-side-chain C=O at i + 4 (Figure 11a). Helical conformations were also predominant in both the (R) and (S) isomers of conjugate Dox-γ-CT 16; in both cases, the Dox moiety faced the peptide backbone, becoming partially hidden. Nevertheless, this interaction is dependent on the configuration at the epimeric center of the linker. In the (R) epimer, a relevant long-range hydrogen bond was established between the hydroxyl group of the Dox cyclohexane moiety and the Nα-side-chain C=O of proline residue i = 6 (Figure 11b and Figure S11b). In contrast, this hydrogen bond was not relevant for the (S) epimer (Figure S11a,c), and doxorubicin was buried in the core of the peptide, probably to minimize its hydrophobic interactions with the aqueous medium. This feature could account for the slight difference in toxicity observed between conjugates Dox-γ-CC 15 and Dox-γ-CT 16, as noted in Section 2.3.
Figure 11. Representative MD conformations for (a) peptide γ-CT 9, with inter-residue interactions marked; and (b) conjugate (R)-Dox-γ-CT, (R)-16. The peptide scaffold is highlighted by the green ribbon; Dox is represented in orange; the arrow points out the long-range hydrogen bonding between the tertiary hydroxyl group of the cyclohexane ring in the Dox moiety and the Nα-side-chain CO of proline residue i = 6.
Moreover, the stability of the helical structure of CF-γ-CT 11 was maintained along the 450 ns of the simulation, despite its terminal CF moiety, located at the periphery of the peptide backbone (Figure S12). Therefore, MD highlights the critical role of the cis/trans stereochemistry of the γ-amino-L-proline monomer for the folding of the γ-CC and γ-CT peptide series. Although a specific conformation is not unequivocally associated with a better uptake, good cellular uptake has been reported for helical, hairpin and sheet-like peptides [34,68]. In addition, it is well documented that the delocalized charge of the guanidinium group endows a high membrane-translocation efficiency upon a variety of arginine-rich CPP [68][69][70]. In our case, the conjugation of doxorubicin to the N-terminus of the peptide induced the hydrophobic collapse of the skeleton around the drug, which became buried, regardless of the diastereomeric series or the configuration of the epimeric center. As this folding exposes the guanidinium groups of the side chains to the external aqueous medium, better membrane translocation is assumed.

Synthesis of γ-CC Peptides (4-7), γ-CT Peptides (8-11) and TAT48-57 Peptides
Synthesis of Doxorubicin-MCC (14)
Doxorubicin·HCl salt (3 mg, 5.17 µmol), succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) (2.1 mg, 6.60 µmol) and diisopropylethylamine (5 µL, 7.75 µmol) were added to 450 µL of DMF in a 2 mL round-bottomed flask. The reaction was allowed to run in the dark at room temperature and was followed by RP-HPLC-MS. After 1.5 h, 450 µL of PBS were added and the pH was lowered to 6-7 with 1 M HCl. m/z (ESI): MS calcd for C39…

Synthesis of Conjugates (15) and (16)

1.2 equivalents of peptide 12 or 13, respectively, were added to the reaction mixture containing 14, and the reaction was stirred at room temperature until the reactants were consumed (determined by RP-HPLC-MS). After the reaction, the crude product was immediately purified by RP-HPLC.
Peptide and Conjugate Purification
Semiprep RP-HPLC-MS was used to purify the peptides, on a system formed by a Waters 2545 binary gradient module, a Waters Alliance 2767 sample manager module and an automatic fraction collector coupled to a dual-wavelength UV-vis Absorbance Detector 2487 and a Micromass ZQ mass spectrometer. An XBridge® Prep BEH C18 (19 × 100 mm, 5 µm) column (Waters, Cerdanyola del Vallès, Spain) was used. The eluents were A: ACN/HCOOH (99.93:0.07, v/v) and B: H2O/HCOOH (99.90:0.10, v/v). The column was preconditioned by washing with buffer A for 10 min. Gradient conditions are described below.
Cellular Viability, Internalization, and Localization Experiments with HeLa Cells
The HeLa cell line, derived from a human cervical cancer, was used to perform the biological assays. CPP dissolved in nonsupplemented Minimum Essential Medium (MEM) at their final concentration were sterilized by filtration through 0.2 µm polycarbonate filters (Whatman® Puradisc, Sigma Aldrich, Barcelona, Spain).
Cellular Viability
The MTT assay (Sigma Aldrich, Barcelona, Spain) was used to analyze the cellular viability. This method is based on the ability of living cells to reduce MTT to formazan salts, as a measure of their viability. HeLa cells were seeded into 24-microwell plates at 6 × 10⁴ cells/mL (30,000 cells/well). After 24 h of incubation, the medium was replaced with medium containing the respective peptide concentration and incubated for 24 h. Then, the cells were washed three times with Hanks buffered saline solution (HBSS) (Biowest, Labclinics, Barcelona, Spain), and 500 µL of 0.1 mg/mL MTT in HBSS was added to the cell suspension and incubated for 3 h at 37 °C in darkness. The cell layer was dried in darkness, and the resulting formazan was solubilized in pure DMSO. The absorbance was measured at 540 nm in an X3 Multilabel Plate Reader coupled to Perkin Elmer 2030 Manager control software. At least three independent experiments, with four replicates per peptide and concentration, were performed. Controls (nontreated cells, or cells incubated with CF at identical concentrations) were included. The absorbance values of nontreated cells were taken as 100% cellular viability.
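A minimal sketch of the viability calculation described above, with illustrative absorbance readings; the mean of the untreated wells defines 100% viability:

```python
# Viability from MTT absorbance readings (A540), taking the mean of the
# untreated wells as 100% viability; replicate values are placeholders.
untreated = [0.82, 0.79, 0.85, 0.80]
treated_50uM = [0.76, 0.74, 0.78, 0.75]

baseline = sum(untreated) / len(untreated)
viability = [100.0 * a / baseline for a in treated_50uM]

mean_v = sum(viability) / len(viability)
sd_v = (sum((v - mean_v) ** 2 for v in viability)
        / (len(viability) - 1)) ** 0.5
print(f"viability at 50 uM: {mean_v:.1f} +/- {sd_v:.1f} %")
```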
Peptide Internalization
For flow cytometry experiments, HeLa cells were seeded into 35 mm culture dishes (2 × 10 5 cells/dish). After 24 h incubation under standard conditions, culture medium was removed, and cells were incubated for 2 h with the corresponding peptides at 10 and 25 µM. Next, the culture medium was removed, the cells were washed twice with HBSS and trypsinized with 0.5 mL of 0.25% trypsin-EDTA (Gibco, Thermo Fisher Scientific, Cornellà de Llobregat, Spain). After 5 min incubation at 37 • C, 2 mL of MEM + 10% FCS was added to the cells to stop trypsinization and the mixture was centrifuged (5 min, 300× g). Cells were then additionally washed with 2 mL of HBSS under the same conditions. Finally, the cell pellet was resuspended in 200 µL of PBS at pH = 6 to detach any peptide adhering to the plasma membrane. To exclude dead cells from gating, 5 µg/mL propidium iodide (PI, Sigma-Aldrich, Barcelona, Spain) was added to the cells immediately before the flow cytometric analysis, carried out in a BD FACSCanto cytometer (Bio-Rad, Alcobendas, Spain) coupled to FACSDiva v7.0 software using 488 nm and 635 nm lasers to excite the peptides and PI, respectively.
A total of 10,000 single cells were analyzed per sample, and at least three independent experiments were performed with each peptide and concentration. Untreated cells (autofluorescence control) and cell cultures incubated with TAT as a positive reference CPP, and with CF as a negative reference, were included in every experiment. The fluorescence intensity of cells treated with CF was taken as one arbitrary unit for experiment normalization. For confocal microscopy, HeLa cells were seeded into glass-bottom culture dishes (MatTek, Bratislava, Slovakia) at a density of 2 × 10⁵ cells/dish. After 24 h of incubation, the culture medium was removed, and the cells were incubated for an extra 2 h in the presence of the peptides at 25 µM. Then, cells were rinsed three times with PBS, and nuclei and plasma membrane were counterstained with 1 µL/mL of Hoechst 33342 (10 mg/mL, Thermo Fisher Scientific, Cornellà de Llobregat, Spain) and 1 µL/mL CellMask™ deep red plasma membrane stain (5 mg/mL, Thermo Fisher Scientific, Cornellà de Llobregat, Spain), respectively. Finally, cells were washed with PBS prior to being resuspended in PBS pH = 6.0. The experiments were performed using an Olympus Fluoview FV1000 confocal laser scanning microscope (Olympus Iberia, Hospitalet de Llobregat, Spain) equipped with Olympus Fluoview as control software. The excitation wavelengths used were 405, 488 and 658 nm to visualize the nuclei, the peptides and the plasma membrane, respectively; the emission wavelengths were 460, 510 and 690 nm, respectively. A 3D reconstruction was generated to obtain orthogonal projections using the ImageJ/Fiji software.
Cellular Viability
The viability assays were carried out using the reduction of MTT as described above. The parasites were aliquoted into 96-microwell plates at a final concentration of 20 × 10⁶ cells/mL in HBSS supplemented with 10 mM D-glucose. The parasites were incubated with the peptides at the corresponding concentration for 4 h, at 26 °C or 32 °C for promastigotes and axenic amastigotes, respectively. Afterwards, 0.5 mg/mL MTT in HBSS + 10 mM D-glucose was added and cells were incubated for two additional hours. The resulting formazan was solubilized with DMSO (1% final concentration) and read at 595 nm in a Bio-Rad 640 microplate reader.
Peptide Uptake for Leishmania donovani Promastigotes
Parasites were resuspended in HBSS + Glc and dispensed into 24-microwell plates (2 mL/well) at a final concentration of 20 × 10⁶ cells/mL. After incubation with the peptides for 4 h at 26 °C, parasites were washed twice with 2 mL of HBSS + Glc plus 1% fatty acid-free bovine serum albumin in order to remove the noninternalized peptides, and resuspended in the same medium at 1 × 10⁶ cells/mL. Propidium iodide (PI) at a final concentration of 5 µg/mL was added immediately before the flow cytometric analysis in order to gate exclusively the viable cells. Flow cytometry was carried out in a FC500 flow cytometer, using λEXC = 488 nm and λEM = 525 nm for the fluoresceinated peptides, and λEXC = 488 nm and λEM = 515 nm for free and conjugated doxorubicin.
Circular Dichroism (CD) Spectroscopy Details
CD spectra were recorded on a JASCO-715 spectropolarimeter (Jasco, Madrid, Spain). For all CD analyses, 50 µM solutions of the peptides in PBS were prepared, and the CD spectra were measured after 24 h of equilibration at room temperature. A quartz cuvette of 0.1 mm optical path length was used. Spectra were recorded from 195 to 260 nm with a 1 nm spectral bandwidth, at room temperature, with a time constant of 4 s, a step resolution of 1 nm and 5 repetitions per experiment. Before each measurement, a blank measurement with pure PBS was performed. CD data are given as mean residue molar ellipticities in deg·cm²·dmol⁻¹.
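The conversion from the raw signal to mean residue molar ellipticity follows the standard relation [θ] = θ(mdeg)/(10·c·l·N). A short sketch under the conditions stated above (50 µM peptide, 0.1 mm path length); the θ value and residue count are placeholders:

```python
# Raw CD signal (millidegrees) -> mean residue molar ellipticity,
# [theta] in deg cm^2 dmol^-1. The theta reading below is illustrative.
def mean_residue_ellipticity(theta_mdeg, conc_molar, path_cm, n_residues):
    # [theta] = theta(mdeg) / (10 * c(mol/L) * l(cm) * N_residues)
    return theta_mdeg / (10.0 * conc_molar * path_cm * n_residues)

theta = -12.5                                   # mdeg at 215 nm (placeholder)
print(mean_residue_ellipticity(theta, 50e-6,    # 50 uM peptide
                               0.01,            # 0.1 mm = 0.01 cm path
                               14))             # tetradecamer
```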
Computational Details
The geometries of the unnatural amino acids were optimized with Gaussian09 revision D01 [71] at the DFT level of theory using the B3LYP functional combined with the 6-31G(d,p) basis set. Frequency calculations were carried out for all the structures in order to characterize them as minima in the potential energy surface (PES). Atomic charges were computed with the restrained electrostatic potential (RESP) protocol [72]. The atom types and force field parameters were assigned through antechamber and parmchk2 included in AmberTools18 [73].
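As a hedged illustration of this parameterisation step driven from Python, assuming AmberTools is installed on PATH and that the RESP charges have already been written into the input mol2 file (the file names are placeholders, not the files used in this work):

```python
# Sketch of the atom-typing/parameter-check step described above, using
# the AmberTools command-line programs antechamber and parmchk2.
import subprocess

# Assign GAFF atom types; charges are read from the mol2 file itself,
# which is assumed to already carry the RESP-fitted charges.
subprocess.run(["antechamber",
                "-i", "residue.mol2", "-fi", "mol2",
                "-o", "residue_gaff.mol2", "-fo", "mol2",
                "-at", "gaff"], check=True)

# Write any missing bonded/van der Waals parameters to a frcmod file.
subprocess.run(["parmchk2",
                "-i", "residue_gaff.mol2", "-f", "mol2",
                "-o", "residue.frcmod"], check=True)
```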
Semi-empirical calculations at PM6 theory level, using G09, were carried out on the peptide structures before running MD minimizations in order to obtain an initial structure avoiding steric clashes.
Molecular dynamics (MD) simulations were set up by solvating the peptide in a cubic box of TIP3P water molecules and neutralizing the total charge with chloride ions (ions94.lib). The AMBER99SB force field [74] was used for the standard residues, while the GAFF force field [75] was adopted for the remaining atoms. MD simulations were carried out under periodic boundary conditions with the OpenMM [76] engine using the OMM Protocol [77]. The convergence of the trajectories was analyzed by RMSD, RMSF, the counting clustering method and PCA analysis, taking the full exploration of the conformational space as the criterion of convergence (see the Supplementary Materials) [78]. In particular, for the γ-CC (5) and γ-CT (9) peptides, the simulations were carried out along 200 ns. From a structural point of view, the trajectories were analyzed using CPPTraj as implemented in AmberTools18.
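A minimal OpenMM sketch of a setup along these lines; the file names, thermostat settings and report interval are assumptions for illustration, not the published OMM Protocol inputs:

```python
# Hedged OpenMM sketch: explicit-solvent MD from Amber-format inputs
# (TIP3P box, Cl- counterions, ff99SB + GAFF parameters assumed to be
# baked into the prmtop). File names are placeholders.
from openmm import app, unit, LangevinMiddleIntegrator

prmtop = app.AmberPrmtopFile("peptide.prmtop")
inpcrd = app.AmberInpcrdFile("peptide.inpcrd")   # solvated, neutralised

system = prmtop.createSystem(nonbondedMethod=app.PME,
                             nonbondedCutoff=1.0 * unit.nanometer,
                             constraints=app.HBonds)
integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                      1.0 / unit.picosecond,
                                      2.0 * unit.femtoseconds)

sim = app.Simulation(prmtop.topology, system, integrator)
sim.context.setPositions(inpcrd.positions)
sim.minimizeEnergy()
sim.reporters.append(app.DCDReporter("traj.dcd", 5000))
sim.step(100_000_000)   # 200 ns at a 2 fs timestep
```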
Conclusions
Two diastereomeric series of new dodecameric and tetradecameric hybrid γ/γ-peptides, γ-CC and γ-CT, were synthesized and evaluated as CPP, focusing on their toxicity and cell-penetrating activity. These series were formed by repetition of a dipeptide unit constituted by a chiral cyclobutane amino acid and either a cis- or a trans-γ-amino-L-proline, respectively. In preliminary studies, they were shown to be innocuous to HeLa cells and to effect moderate cellular uptake. In contrast, the toxicity of both peptide series on Leishmania pifanoi amastigotes and Leishmania donovani promastigotes was concentration dependent; full viability was preserved only below 10 µM. Therefore, these peptides behave as mild leishmanicidal agents at higher concentrations.

There is a thin red line separating CPP from membrane-active antimicrobial peptides. Only their affinity towards the targeted membrane separates the two concepts: either they cause a stable disruption of the membrane, with an irreversible and lethal outcome, or they provoke a mild and reversible distortion that ensures peptide translocation. In other words, the polarization towards one of these two stages is based on quantitative factors; as such, a continuous spectrum of intermediate stages must be expected. Among these factors, the nature of the peptide and the phospholipid composition of the membrane are crucial to tip the balance towards one phenotype or the other. These subtleties are further highlighted with cationic cell-penetrating peptides acting on prokaryotes or lower eukaryotes with a strongly anionic membrane, as happens with Leishmania, and especially with the promastigote. The latter presents a strongly anionic glycocalyx and a high percentage of anionic phospholipids in its plasma membrane, which accounts for a higher γ/γ-peptide internalization than in the amastigote. The relevance of this interaction is endorsed by the higher activity of the tetradecamer over the dodecamer for both peptide series.

This specific toxicity of the γ/γ-peptides towards Leishmania is a "must" for their use as CPP against the parasite; the deleterious effect of the doxorubicin on the parasite will be synergic with the leishmanicidal activity of the peptidic vehicle on its own. The tetradecameric Dox-γ-CC conjugate delivered a significant and faster leishmanicidal blow than the free drug. In addition, its Leishmania/mammalian cell selectivity index was higher than that of TAT, opening new avenues for the implementation of CPP-based therapies against the parasite.

The molecular modeling stresses that the adoption of a well-defined conformation by the tetradecamers is highly influenced by the choice of the diastereomeric unit, even though the impact of the conformation on the leishmanicidal activity is not dramatic. It must be kept in mind that the importance of the peptide conformation relies on that adopted when in contact with the membrane. This is a well-documented effect for natural α-helical antimicrobial peptides, which are mostly devoid of a defined structure in solution but are capable of forming a stable α-helix when in contact with biological membranes or in membrane-mimicking solvents. Nevertheless, it must be highlighted that these γ/γ-peptides are much more conformationally restrained than those made of α-amino acids; hence, simulations with membranes will be performed in the future. In our case, the CD results point to a defined conformational preference of the free peptides in solution.
Moreover, an important fact from MD is that the doxorubicin moiety becomes wrapped by the peptide chain, regardless of the diastereomerism of the peptides employed, hidden from the aqueous environment while the polar guanidinium groups are exposed at the periphery of the peptide, improving the solubility of the drug.
"year": 2020,
"sha1": "5d7847d45eeb14af02e363e1b54b499c2ac12b0f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/20/7502/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5a35904fefd2634888eda682963da7f687ff356d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
A Generalised Solution to Distributed Consensus
Distributed consensus, the ability to reach agreement in the face of failures and asynchrony, is a fundamental primitive for constructing reliable distributed systems from unreliable components. The Paxos algorithm is synonymous with distributed consensus, yet it performs poorly in practice and is famously difficult to understand. In this paper, we re-examine the foundations of distributed consensus. We derive an abstract solution to consensus, which utilises immutable state for intuitive reasoning about safety. We prove that our abstract solution generalises over Paxos as well as the Fast Paxos and Flexible Paxos algorithms. The surprising result of this analysis is a substantial weakening to the quorum requirements of these widely studied algorithms.
Introduction
We depend upon distributed systems, yet the computers and networks that make up these systems are asynchronous and unreliable. The longstanding problem of distributed consensus formalises how to reliably reach agreement in such systems. When solved, we become able to construct strongly consistent distributed systems from unreliable components [13,21,4,17]. Lamport's Paxos algorithm [14] is widely deployed in production to solve distributed consensus [5,6], and experience with it has led to extensive research to improve its performance and our understanding but, despite its popularity, both remain problematic.
Paxos performs poorly in practice because its use of majorities means that each decision requires a round trip to many participants, thus placing substantial load on each participant and the network connecting them. As a result, systems are typically limited in practice to just three or five participants. Furthermore, Paxos is usually implemented in the form of Multi-Paxos, which establishes one participant as the master, introducing a performance bottleneck and increasing latency as all decisions are forwarded via the master. Given these limitations, many production systems often opt to sacrifice strong consistency guarantees in favour of performance and high availability [7,3,18]. Whilst compromise is inevitable in practical distributed systems [10], Paxos offers just one point in the space of possible trade-offs. In response, this paper aims to improve performance by offering a generalised solution allowing engineers the flexibility to choose their own trade-offs according to the needs of their particular application and deployment environment.
Paxos is also notoriously difficult to understand, leading to much follow-up work explaining the algorithm in simpler terms [20,15,19,23] and filling the gaps in the original description necessary for constructing practical systems [6,2]. In recent years, immutability has been increasingly widely utilised in distributed systems to tame complexity [11]. Examples such as append-only log stores [1,8] and CRDTs [22] have inspired us to apply immutability to the problem of consensus.
This paper re-examines the problem of distributed consensus with the aim of improving performance and understanding. We proceed as follows. Once we have defined the problem of consensus ( §2), we propose a generalised solution to consensus that uses only immutable state to enable more intuitive reasoning about correctness ( §3). We subsequently prove that both Paxos and Fast Paxos [16] are instances of our generalised consensus algorithm and thus show that both algorithms are conservative in their approach, particularly in their rules for quorum intersection and quorum agreement ( §4 & §5). Finally, we conclude by illustrating the power of our abstraction by outlining three different instances of our generalised consensus algorithm which provide alternative performance trade-offs compared to Paxos ( §6).
Problem definition
The classic formulation of consensus considers how to decide upon a single value in a distributed system. This seemingly simple problem is made non-trivial by the weak assumptions made about the underlying system: we assume only that the algorithm is correctly executed (i.e., the non-Byzantine model). We do not assume that participants are either reliable or synchronous. Participants may operate at arbitrary speeds and messages may be arbitrarily delayed.
We consider systems comprised of two types of participant: servers, which store the value, and clients, which read/write the value. Clients take as input a value to be written and produce as output the value decided by the system. Messages may only be exchanged between clients and servers and we assume that the set of participants, servers and clients, is fixed and known to the clients.
An algorithm solves consensus if it satisfies the following three requirements: • Non-triviality. All output values must have been the input value of a client.
• Agreement. All clients that output a value must output the same value.
• Progress. All clients must eventually output a value if the system is reliable and synchronous for a sufficient period. The progress requirement rules out algorithms that could reach deadlock. As termination cannot be guaranteed in an asynchronous system where failures may occur [9], algorithms need only guarantee termination assuming liveness.
If we have only one server, the solution is straightforward. The server has a single persistent write-once register, R0, to store the decided value. Clients send requests to the server with their input value. If R0 is unwritten, the value received is written to R0 and is returned to the client. If R0 is already written, then the value in R0 is read and returned to the client. The client then outputs the returned value. This algorithm achieves consensus but requires the server to be available for clients to terminate. To overcome this limitation requires deployment of more than one server, so we now consider how to generalise to multiple servers.
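To make the single-server case concrete, the following is a minimal Python sketch of this algorithm, with the network replaced by a direct method call; all names are illustrative.

```python
class SingleServer:
    """One persistent, write-once register R0 (held in memory here)."""

    def __init__(self):
        self.r0 = None  # None stands for unwritten

    def propose(self, value):
        # Write R0 only if it is still unwritten; always return its content.
        if self.r0 is None:
            self.r0 = value
        return self.r0

def client(server, input_value):
    # Output whatever the server returns: the client's own input if it
    # won the write, or the previously decided value otherwise.
    return server.propose(input_value)

server = SingleServer()
assert client(server, "A") == "A"   # first client decides A
assert client(server, "B") == "A"   # later clients learn the decision
```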
Generalised solution
Consider a set of servers, {S0, S1, . . . , Sn}, where each has an infinite series of write-once, persistent registers, {R0, R1, . . . }. Clients read and write registers on servers and, at any time, each register is in one of three states:
• unwritten, the starting state for all registers; or
• contains a value, e.g., A, B, C; or
• contains nil, a special value denoted as ⊥.
[Figure 1: Sample configurations for systems of three or four servers.]
[Figure 2: Sample state tables for a system using the configuration in Figure 1a; initially, no decisions have been reached.]
A quorum, Q, is a (non-empty) subset of servers, such that if all servers in Q have the same (non-nil) value v in the same register then v is said to be decided. A register set, i, is the set comprised of the register Ri from each server. Each register set i is configured with a set of quorums, Qi, and four example configurations are given in Figure 1. The state of all registers can be represented in a table, known as a state table, where each column represents the state of one server and each row represents a register set. By combining a configuration with a state table, we can determine whether any decision(s) have been reached, as shown in Figure 2. Figure 3 describes a generalised solution to consensus by giving four rules governing how clients interact with registers to ensure that the non-triviality and agreement requirements for consensus (§2) are satisfied.
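As a concrete illustration of these definitions, here is a small Python sketch (not from the paper) that represents a state table and checks which register sets have reached a decision under a given configuration; the sentinel objects and names are our own.

```python
UNWRITTEN, NIL = object(), object()  # sentinels for the two special states

def decided_values(state_table, configs):
    """Yield (register_set, value) pairs that have been decided.

    state_table[i][s] holds register Ri on server s; configs[i] is the
    set of quorums (frozensets of server ids) for register set i.
    """
    for i, row in enumerate(state_table):
        for quorum in configs[i]:
            vals = {row[s] for s in quorum}
            if len(vals) == 1:
                (v,) = vals
                if v is not UNWRITTEN and v is not NIL:
                    yield i, v

# Three servers; register set 0 has a single quorum {S0, S1}.
configs = [{frozenset({0, 1})}]
state_table = [["A", "A", UNWRITTEN]]
print(list(decided_values(state_table, configs)))  # -> [(0, 'A')]
```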
Correctness
[Figure 3: The four rules for correctness.]
Rule 1: Quorum agreement. A client may only output a (non-nil) value v if it has read v from a quorum of servers in the same register set.
Rule 2: New value. A client may only write a (non-nil) value v provided that either v is the client's input value or that the client has read v from a register.
Rule 3: Current decision. A client may only write a (non-nil) value v to register r on server s provided that if v is decided in register set r by a quorum Q ∈ Q_r where s ∈ Q then no value v′ where v ≠ v′ can also be decided in register set r.
Rule 4: Previous decisions. A client may only write a (non-nil) value v to register r provided no value v′ where v ≠ v′ can be decided by the quorums in register sets 0 to r − 1.
Rule 1 (quorum agreement) ensures that clients only output values that have been decided. Rule 2 (new value) ensures that only client input values can be written to registers, thus only client input values can be decided and output by clients. Rules 3 and 4 ensure that no two quorums can decide upon different values: Rule 3 (current decision) ensures that all decisions made by a register set are for the same value, whilst Rule 4 (previous decisions) ensures that all decisions made by different register sets are for the same value.
Implementing the correctness rules
Rules 1 and 2 are easy to implement, but Rules 3 and 4 require more careful treatment.
Rule 3 (current decision). The simplest implementation of Rule 3 is to permit only configurations with one quorum per register set, as in Figure 1b. We generalise this to intersecting quorum configurations by permitting multiple quorums per register set, provided that all quorums for a given register set intersect, as in Figure 1d. The requirement for intersection ensures that if multiple quorums in a register set decide a value then they must decide the same value, as they must share a common register. Alternatively, we can support disjoint quorums if we require that all values written to a given register set are the same. This can be achieved by assigning register sets to clients and requiring that clients write only to their own register sets, with at most one value. In practice, this could be implemented by using an allocation such as that in Figure 4 and by requiring clients to keep a persistent record of which register sets they have written to. We refer to these as client restricted configurations.
Both techniques, intersecting quorum configurations and client restricted configurations, can be combined on a per-register-set basis.
Rule 4 (previous decisions). Rule 4 requires clients to ensure that, before writing a (non-nil) value, previous register sets cannot decide a different value. This is trivially satisfied for register set 0; however, more work is required of clients to satisfy this rule for subsequent register sets.
Assume each client maintains its own local copy of the state table. Initially, each client's copy is empty, as the client has not yet learned anything regarding the state of the servers. A client can populate its copy by reading registers and storing the results. Since the registers are persistent and write-once, if a register contains a value (nil or otherwise) then any read of it will always remain valid. Each client's copy will therefore always contain a subset of the values in the global state table.
From its local state table, each client can track whether decisions have been reached or could be reached by previous quorums. We refer to this as the decision table. At any given time, each quorum is in one of four decision states: • Any: Any value could be decided by this quorum.
• Maybe v: If this quorum reaches a decision, then value v will be decided.
• Decided v: The value v has been decided by this quorum; a final state.
• None: This quorum will not decide a value; a final state.
The rules for updating the decision table are as follows. Initially, the decision state of all quorums is Any. If there is a quorum where all registers contain the same value v then its decision state is Decided v. When a client reads nil from register r on server s then, for all quorums Q ∈ Q_r where s ∈ Q, the decision state Any/Maybe v becomes None. When a client reads a non-nil value v from a client restricted register set r then, for all quorums over register sets 0 to r, the decision state Any becomes Maybe v and Maybe v′ where v ≠ v′ becomes None. When a client reads a non-nil value v from a quorum intersecting register set r on server s then, for all quorums Q ∈ Q_r where s ∈ Q and for all quorums over register sets 0 to r − 1, the decision state Any becomes Maybe v and Maybe v′ where v ≠ v′ becomes None.
These rules utilise the knowledge that if a client reads a (non-nil) value v from register r on server s, it learns that:
• If r is client restricted then all quorums in r must decide v if they reach a decision (Rule 3).
• If any quorum of register sets 0 to r − 1 reaches a decision then value v is decided (Rule 4).
Figure 5 describes how clients can use decision tables to implement the four rules for correctness:
[Figure 5: Implementing the rules for correctness with decision tables.]
A client may output value v provided at least one quorum state is Decided v (Rule 1). A client c may write a non-nil value v to register set r provided:
i. v is c's input value or has been read from a register (Rule 2), and
ii. r is either quorum intersecting, or client restricted and allocated to c but not yet used (Rule 3), and
iii. the decision state of each quorum from register sets 0 to r − 1 is None, Maybe v or Decided v (Rule 4).
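The following Python sketch (our own illustration, not from the paper) implements the decision-table updates for the nil and client-restricted-value cases; the Decided transition and the quorum-intersecting case are omitted for brevity, and all names are illustrative.

```python
ANY, MAYBE, DECIDED, NONE = "Any", "Maybe", "Decided", "None"

def update_on_nil(table, r, server, configs):
    """Read nil from register r on `server`: every quorum of register
    set r containing that server moves from Any/Maybe to None."""
    for q in configs[r]:
        if server in q and table[(r, q)][0] in (ANY, MAYBE):
            table[(r, q)] = (NONE, None)

def update_on_value_client_restricted(table, r, v, configs):
    """Read a non-nil v from a client-restricted register set r: every
    quorum over register sets 0..r must decide v if it decides at all."""
    for i in range(r + 1):
        for q in configs[i]:
            state, u = table[(i, q)]
            if state == ANY:
                table[(i, q)] = (MAYBE, v)
            elif state == MAYBE and u != v:
                table[(i, q)] = (NONE, None)
```

Here `table` is a dict keyed by (register set, quorum) pairs, with quorums represented as frozensets of server identifiers so they can serve as dictionary keys.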
Examples
This process is illustrated by Figures 6 and 7, which demonstrate how a client's state is updated as it reads registers. Figure 6 shows the state of a client C0 in a system of 4 servers using the intersecting quorum configuration from Figure 1b. Figure 6a shows the client's initial state: the client's state table is empty, thus the status of all quorums in the decision table is Any. At this time, the client may only write non-nil values to R0 due to condition (iii) in Figure 5. Next, in Figure 6b, the status of quorum {S2, S3} over register set 1 is updated to Maybe B since, depending on the state of register R1 on S2, either this quorum will not reach a decision or it decides value B. Since the client that wrote B into R1 on S3 must have followed Rule 4, the quorum in R0 must decide B if it reaches a decision, thus its status is updated to Maybe B. The client C0 can now write value B to R1 or R2. Subsequently, in Figure 6c, the client could now safely write its input value to R1 but there would be no use in doing so. Finally, in Figure 6d, the client learns that B is decided and thus can output B. Figure 7 shows the state of a client C0 in a system using the configuration in Figure 1c and the client allocation from Figure 4. Figure 7a shows the initial state of the client C0. At this time, the client C0 can only write non-nil values to R0. Later, in Figure 7c, the client has updated the status of both quorums in R1 to Maybe B after reading B from R1. This is because register set 1 is client restricted to value B.
Generalisation of Paxos
The (unoptimised) Paxos algorithm is described in Figure 8 using only write-once registers. Figure 9 gives an example of the message exchange as two clients execute Paxos with three servers.
[Figure 8: The Paxos algorithm, expressed using write-once registers.]
Phase 1
• A client c chooses a register set r that it has been assigned but not yet used and sends P1a(r) to all servers.
• Upon receiving P1a(r), each server checks if register r is unwritten. If so, any unwritten registers up to r − 1 (inclusive) are set to nil. The server replies with P1b(r, S) where S is the set of all written non-nil registers.
• If c receives P1b messages from a majority of servers then c chooses the value from the greatest (non-nil) register. If no values were returned with the P1b messages then c chooses its input value. c then proceeds to phase two. Otherwise, c times out and restarts phase one.
Phase 2
• c sends P2a(r, v) to all servers where v is the value chosen at the end of phase one.
• Upon receiving P2a(r, v), each server checks if register r is unwritten. If so, any unwritten registers up to r − 1 (inclusive) are set to nil and register r is set to v. The server replies with P2b(r, v).
• If c receives P2b messages from a majority of servers then c learns that the value v has been decided and can output v. Otherwise, c times out and restarts phase one.
We observe that Paxos is a conservative instance of our generalised solution to consensus. The configuration used by Paxos is majorities for all register sets; such a configuration is given in Figure 1d. Paxos also uses client restricted configurations for all register sets, and a suitable client assignment is given in Figure 4. The purpose of phase one is to implement Rule 4 and the purpose of phase two is to implement Rule 1. Earlier (§3), we proposed client state and decision tables as a mechanism for clients to implement the rules for correctness. Upon receiving P1b(r, R), where R is the set of registers from a server, the client learns the contents of registers 0 to r − 1. This is because registers are always written in-order on each server and register r must be unwritten. Therefore the client's state table, and thus its decision table, can be updated accordingly. This is demonstrated in Figure 10 for client C0 and in Figure 11 for client C1.
[Figure 11: Sample client state tables (left) and decision tables (right) for client C1 during the execution in Figure 9.]
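To illustrate the in-order, write-once register discipline of Figure 8, here is a Python sketch of the server side only; message handling is reduced to method calls, and the representation (None for nil, absence for unwritten) is our own choice.

```python
class PaxosServer:
    """Server-side register handling for Figure 8: registers are
    write-once, and all registers below r are nil-filled before r is
    touched, so each server writes its registers strictly in order."""

    def __init__(self):
        self.regs = {}  # index -> value; missing = unwritten, None = nil

    def _nil_fill_below(self, r):
        for i in range(r):
            self.regs.setdefault(i, None)

    def on_p1a(self, r):
        if r in self.regs:
            return None  # register r already written: stay silent
        self._nil_fill_below(r)
        written = {i: v for i, v in self.regs.items() if v is not None}
        return ("P1b", r, written)  # all written non-nil registers

    def on_p2a(self, r, v):
        if r in self.regs:
            return None
        self._nil_fill_below(r)
        self.regs[r] = v
        return ("P2b", r, v)
```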
Weakened quorum intersection requirements
The boolean function I tests whether two or more sets of quorums intersect; it can be defined as I(Q^1, . . . , Q^n) ⟺ ∀Q_1 ∈ Q^1, . . . , ∀Q_n ∈ Q^n : Q_1 ∩ · · · ∩ Q_n ≠ ∅. Paxos utilises majorities as it requires all quorums, Q ∈ Q, to intersect, regardless of the register set or phase of the algorithm. That is, in terms of I, I(Q, Q).
Instead, we differentiate between the quorums used for each register set and the phase of Paxos for which the quorum is used. Let Q^k_r be the set of quorums for phase k of register set r. We observe that quorum intersection is required only between the phase one quorum for register set r and the phase two quorums of register sets 0 to r − 1. This is the case because a client can always proceed to phase two after intersecting with all previous phase two quorums, since condition (iii) in Figure 5 will then be satisfied. More formally, ∀r ∈ ℕ_0, ∀r′ ∈ ℕ_{<r} : I(Q^1_r, Q^2_{r′}). (*) This result confirms the findings of Flexible Paxos [12]. This is illustrated in Figure 10a, where the client was safe to proceed to phase two from startup since there is no intersection requirement for register set 0.
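A small Python sketch of the intersection test I and a check of the weakened requirement (*), using frozensets of server identifiers; the example quorum systems are made up for illustration.

```python
from itertools import product

def I(*quorum_sets):
    """True iff every way of picking one quorum from each given set of
    quorums yields a non-empty common intersection."""
    return all(
        frozenset.intersection(*choice)
        for choice in product(*quorum_sets)
    )

# Flexible-Paxos-style check: phase one quorums of a register set need
# only intersect the phase two quorums of earlier register sets.
q1 = {frozenset({0, 1}), frozenset({0, 2})}    # phase 1 quorums
q2 = {frozenset({1, 2}), frozenset({0, 1, 2})} # phase 2 quorums
print(I(q1, q2))  # True: every pairing shares at least one server
print(I(q1, q1))  # also True here, but not required by Eq. (*)
```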
Progress without quorums
Each of the two phases of Paxos waits for agreement from a quorum of servers. However, we observe that it may be possible to proceed prior to reaching quorum agreement.
A client can safely terminate once it learns that a value has been decided (Rule 1). This need not be the result of completing both phases of the algorithm. This is illustrated in Figure 11b where the client learns that value A has been decided prior to starting phase two.
Similarly, if a client learns that a register r contains a (non-nil) value v then it also learns that if any quorums from register sets 0 to r reach a decision then v must be chosen. By updating its decision table accordingly, the client in phase one no longer needs to intersect with the phase two quorums of register sets up to r (inclusive). This is illustrated in Figure 11a, where the client could safely proceed to phase two after one P1b message as the client read a non-nil value from the predecessor register set.
Generalisation of Fast Paxos
Paxos requires client restricted configurations for all register sets. Fast Paxos [16] generalises Paxos by permitting intersecting quorum configurations for some register sets, known as fast sets, whilst still utilising client restricted configurations for the remaining sets, known as classic sets. Quorums for classic sets must include more than 1/2 of servers whereas quorums for fast sets must include at least 3/4 of servers. Figure 12a is an example of such a configuration. Fast Paxos modifies our original Paxos algorithm (Figure 8) as follows: • If a register set is fast then a client does not need to be assigned the register set, nor does it need to ensure that it writes to it with only one value. Any client can use any fast register set.
Weakened quorum intersection requirements
Fast Paxos uses quorums of at least 3/4 of servers for fast sets and more than 1/2 of servers for classic sets, since it requires the following intersections between the quorums for fast sets, Q^f, and the quorums for classic sets, Q^c: I(Q^c, Q^c) and I(Q^c, Q^f, Q^f). As with Paxos, these intersection requirements are conservative. We differentiate between the quorums used for each register set and the phase of the algorithm for which the quorum is used; Q^k_r is the set of quorums for phase k of register set r. In addition to Paxos's weakened intersection requirement (Eq. (*)), we observe that two additional quorum intersections are required: between the quorums for each fast register set, and between the phase one quorum for register set r and any pair of phase two quorums of fast register sets from 0 to r − 1. Denoting the set of fast register sets as F, we express these requirements as follows: ∀r ∈ F : I(Q^2_r, Q^2_r) and ∀r ∈ ℕ_0, ∀r′ ∈ F_{<r} : I(Q^1_r, Q^2_{r′}, Q^2_{r′}). (**)
Progress without quorums
Utilising decision tables, we observe that quorum agreement is sufficient but not necessary for a client to complete a phase of the algorithm, in particular in the following three cases.
(i) As with Paxos, once a client learns that a quorum of registers contain a value then the client can terminate and return that value. (ii) If a client learns that a register r contains a (non-nil) value v then it also learns that if any quorums from register sets 0 to r − 1 reach a decision then v must be chosen. If r is a classic register set then it also learns that if any quorums from register sets r reach a decision then v must be chosen. The client therefore no longer needs to intersect with quorums 0 to r − 1 if r is fast or quorums 0 to r if r is classic.
(iii) Furthermore, a client in phase one will only need to intersect with any previous two fast quorums if it is unable to determine which value to propose. Figures 13 & 14 give an example of this with the configuration from Figure 12a. According to Equation (**), the client C0 needs to read three registers from register set 0 before it can safely write to register set 1. However, in Figure 13, the client can safely write to register set 1 after reading just two registers. This is not the case in Figure 14, however. Figure 15 summarises how these generalisations can be combined into a revised Fast Paxos algorithm. Note that a client can complete a phase once the completion criteria (underlined in the figure) have been satisfied, even if it has not executed every step.
Example consensus algorithms
In this section, we outline three uses of our generalisation of Paxos and Fast Paxos by utilising different configurations.
Co-located consensus. Consider a configuration which uses a quorum containing all servers for the first k register sets and majority quorums afterwards, as shown in Figure 12b. All register sets are client restricted. Participants in a system may be deciding a value between themselves, and so a server and client are co-located on each participant. A client can therefore either achieve consensus in one round trip to all servers (if all are available) or two round trips to any majority (in case a server has failed).
Fixed-majority consensus. Consider a configuration with one majority quorum for register set 0 and majority quorums for register sets 1 onwards, as shown in Figure 12c. Register set 0 is quorum intersecting and register sets 1 onwards are client restricted. A client can either achieve consensus in one round trip to a specific majority or two round trips to any majority.
Reconfigurable consensus. Consider a set of servers partitioned into a primary set and a backup set. Consider a configuration which uses only primary servers for register sets 0 to k − 1 and only backup servers from register set k, as shown in Figure 12d. A client can move the system from the primary servers to the backup servers by executing Paxos for register set k or greater. No subsequent client will need a reply from a primary server to make progress whilst the backup set is available.
[Figure 15: The revised Fast Paxos algorithm.]
Phase 1
• A client c chooses a register set r that is either quorum intersecting, or client restricted and assigned to c but not yet used. c sends P1a(r) to all servers.
• Upon receiving P1a(r), each server checks if register r is unwritten. If so, any unwritten registers up to r − 1 (inclusive) are set to nil. The server replies with P1b(r, S) where S is the set of all written registers.
• Each time c receives a P1b, it updates its state and decision tables accordingly. If the decision state of all quorums from register sets 0 to r − 1 is None or Maybe v then c chooses v (or, if all states are None, its input value) and proceeds to phase two. If c times out before completing phase one, it restarts phase one.
Phase 2
• c sends P2a(r, v) to all servers where v is the value chosen at the end of phase one.
• Upon receiving P2a(r, v), each server checks if register r is unwritten. If so, any unwritten registers up to r − 1 (inclusive) are set to nil and register r is set to v. The server replies with P2b(r, v).
• Each time c receives a P2b, it updates its state and decision tables accordingly. If the decision state of a quorum is Decided v then c outputs v. If c times out before completing phase two, it restarts phase one.
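The three example configurations of this section can be written down concretely as quorum systems. The Python sketch below is our own illustration, truncating the infinite series of register sets to four and fixing k = 1 (k = 2 for the reconfiguration point); each configuration is a list mapping register sets to sets of quorums.

```python
from itertools import combinations

def majorities(servers):
    """All subsets containing a strict majority of the servers."""
    need = len(servers) // 2 + 1
    return {frozenset(c)
            for k in range(need, len(servers) + 1)
            for c in combinations(servers, k)}

servers = [0, 1, 2, 3]

# Co-located consensus: all-server quorum for the first register set,
# majorities afterwards (every register set client restricted).
co_located = [{frozenset(servers)}] + [majorities(servers)] * 3

# Fixed-majority consensus: one fixed majority for register set 0
# (quorum intersecting), any majority from register set 1 onwards.
fixed_majority = [{frozenset({0, 1, 2})}] + [majorities(servers)] * 3

# Reconfigurable consensus: primary servers for register sets 0..1,
# backup servers from register set 2 onwards.
primary, backup = [0, 1], [2, 3]
reconfigurable = [{frozenset(primary)}] * 2 + [{frozenset(backup)}] * 2
```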
Conclusion
Paxos has long been the de facto approach to reaching consensus; however, this "one size fits all" solution performs poorly in practice and is famously difficult to understand. In this paper, we have reframed the problem of distributed consensus in terms of write-once registers and thus proposed a generalised solution to distributed consensus. We have demonstrated that this solution not only unifies existing algorithms, including Paxos and Fast Paxos, but also shows that such algorithms are conservative, as their quorum intersection requirements and quorum agreement rules can be substantially weakened. We have illustrated the power of our generalised consensus algorithm by proposing three novel algorithms for consensus, demonstrating a few interesting points in the diverse array of algorithms made possible by our abstraction. Our aim is to make reasoning about correctness sufficiently intuitive that proofs are not necessary to make a convincing case for safety; nonetheless, we include proofs in Appendix A for completeness.
Appendix A: Proofs of safety
In this appendix, we provide proofs for the safety properties (non-triviality, agreement) of our proposed algorithms for solving consensus. Figure 3 proposed four rules which we claim are sufficient to satisfy the non-triviality and agreement requirements of distributed consensus ( §2). We now consider each requirement in turn. We will use s[r] = v to denote that the value v is in register r on server s. Theorem A.1 (Satisfying non-triviality). If a value v is the output of a client c then v was the input of some client c ′ .
Proof.
Assume v was the output of client c. According to Rule 1, ∃r ∈ ℕ_0, ∃Q ∈ Q_r, ∀s ∈ Q : s[r] = v; therefore at least one register contains v.
Consider the invariant that all (non-nil) registers contain client input values. Initially, all registers are unwritten, thus the invariant holds. According to Rule 2, each client will only write either their input value or a value copied from another register, thus the invariant is preserved. Since at least one register contains v, v must therefore be the input value of some client c′.
Theorem A.2 (Satisfying agreement). If a value v is the output of a client c and a value v′ is the output of a client c′ then v = v′.
Proof.
Assume that value v was the output of client c. Assume that value v ′ was the output of client c ′ .
According to Rule 1, the following must be true: ∃r ∈ ℕ_0, ∃Q ∈ Q_r, ∀s ∈ Q : s[r] = v and ∃r′ ∈ ℕ_0, ∃Q′ ∈ Q_{r′}, ∀s ∈ Q′ : s[r′] = v′. Since register sets are totally ordered, it must be the case that either r = r′, r < r′ or r > r′.
Case r = r′: Both decisions are in the same register set. Either both clients have read from the same quorum or they have read from different quorums.
Case Q = Q′: Each quorum can decide at most one value, thus v = v′.
Case Q ≠ Q′: According to Rule 3, since Q has decided v, each client who wrote a register in Q must have ensured that no other quorum in register set r can reach a different decision. Thus v = v′.
Case r < r′: According to Rule 4, a client will only write v′ to register set r′ after ensuring that no quorum in register set r will reach a different decision. Thus v = v′.
Case r > r ′ : This is the same as r < r ′ with r and r ′ swapped. Thus v = v ′ .
A.2 Client decision table rules
We have shown that the four rules for correctness are sufficient to satisfy the non-triviality and agreement requirements of consensus. We will now show that the client decision table rules (Figure 5) implement the four rules for correctness (Figure 3) and thus satisfy the non-triviality and agreement requirements of consensus.
Theorem A.3 (Satisfying Rule 1). If the value v is the output of client c then c has read v from a quorum Q ∈ Q r in register set r.
Proof.
Assume the value v is the output of client c. There must exist a register set r and quorum Q ∈ Q r in the decision table of c with the status Decided v ( Figure 5). A quorum Q can only reach decision state Decided v if ∀s ∈ Q : s[r] = v.
Theorem A.4 (Satisfying Rule 2). If the value v is written by a client c then either v is c's input value or v has been read from some register.
Proof.
Assume the value v has been written by client c. According to Figure 5, v must be either the input value of c or read from some register.
Theorem A.5 (Satisfying Rule 3). If the values v and v′ are decided in register set r then v = v′.
Proof.
Assume the value v is decided in register set r by quorum Q ∈ Q r , thus ∀s ∈ Q : s[r] = v. Assume the value v ′ is decided in register set r by quorum Q ′ ∈ Q r , thus ∀s ∈ Q ′ : s[r] = v ′ .
The register set r uses either intersecting quorums or a client restricted configuration.
Case r is client restricted: Each client is assigned a disjoint subset of register sets, thus at most one client is assigned r. A client will only write a (non-nil) value to r if it has been assigned it and has not yet written to it (Figure 5). The register set r will therefore only contain one (non-nil) value, thus v = v′.
Case r has intersecting quorums: This requires that there exists a server s such that s ∈ Q and s ∈ Q′. Since registers are write-once, s[r] = v and s[r] = v′ imply v = v′.
Theorem A.6 (Satisfying Rule 4). If the value v is decided in register set r and the (non-nil) value v′ is written to register set r′ where r < r′ then v = v′. We will prove this by induction over the writes to register sets > r.
Theorem A.7 (Satisfying Rule 4 -Base case). If the value v is decided in register set r then the first (non-nil) value to be written to a register set r ′ where r < r ′ is v.
Proof.
A.3 (Fast) Paxos
Figure 8 describes the Paxos algorithm using write-once registers, and Section 5 describes how to generalise Figure 8 to Fast Paxos. In this section, we prove that Fast Paxos (and therefore Paxos) implements the four rules for correctness (Figure 3) and thus satisfies the non-triviality and agreement requirements of consensus.
Theorem A.9 (Satisfying Rule 1). If the value v is the output of client c then c has read v from a quorum Q ∈ Q r in register set r.
Proof.
Assume the value v is the output of client c. This must be the result of c completing phase two of Fast Paxos for some register set r. c must have received the message P2b(r, v) from more than 1/2 of servers (if r is classic) or at least 3/4 of servers (if r is fast). Prior to sending P2b(r, v), each such server s has written register r to v. Q_r is the set of all subsets of servers containing more than 1/2 (classic) or at least 3/4 (fast) of servers, thus c has read v from a quorum Q ∈ Q_r in register set r.
Theorem A.10 (Satisfying Rule 2). If the value v is written by a client c then either v is c's input value or v has been read from some register.
Proof.
Assume a value v is written by a client c. This must be the result of completing phase one of Fast Paxos for some register set r and choosing the value v. The value v must have been chosen in one of the following ways:
Case 0 (non-nil) registers were returned with P1b messages: In this case, v is c's input value.
Case 1 or more (non-nil) registers were returned with P1b messages: In this case, v is the most common value from the greatest register set, thus v has been read from some register.
Theorem A.11 (Satisfying Rule 3). If the values v and v′ are decided in register set r then v = v′.
Proof.
Assume the values v and v′ are decided in register set r. It is therefore the case that there exist two quorums Q, Q′ ∈ Q_r such that ∀s ∈ Q : s[r] = v and ∀s ∈ Q′ : s[r] = v′. The register set r is either fast (quorum intersecting) or classic (client restricted).
Case r is fast: There exists a server s where s ∈ Q and s ∈ Q′. Since registers are write-once, s[r] = v and s[r] = v′ imply v = v′.
Case r is classic: At most one client is assigned register set r. Each client only writes (non-nil) values to assigned register sets, and each does so with only one value. Therefore v = v′.
Theorem A.12 (Satisfying Rule 4). If the value v is decided in register set r and the (non-nil) value v ′ is written to register set r ′ where r < r ′ then v = v ′ We will prove this by induction over the writes to register sets > r.
Theorem A.13 (Satisfying Rule 4 -Base case). If the value v is decided in register set r then the first (non-nil) value to be written to a register set r ′ where r < r ′ is v.
Proof.
Assume the value v is decided in register set r. If r is fast (quorum intersecting), v must have been written to register r on at least 3/4 of the servers. Otherwise, if r is classic (client restricted), v must have been written to register r on more than 1/2 of the servers. The writing of v to r must be the result of receiving P2a(r, v).
Assume the (non-nil) value v′ is written to register set r′ by client c. This must be the result of completing phase one of Fast Paxos for register set r′ and choosing the value v′. The value v′ could be chosen in one of two ways:
Case v′ is c's input value: This requires that no (non-nil) registers were returned to c with the P1b messages for r′. At least one server s must both write s[r] = v and send a P1b message to c, since both require more than 1/2 of servers.
Case s sends P1b for register set r′ first: Prior to sending P1b, the server s must write nil to all unwritten registers 0 to r′ − 1, including register r since r < r′. Server s will not be able to later write s[r] = v, so this case cannot occur.
"year": 2019,
"sha1": "ff43aa5dc0e2a37c8521b3b3fd3a19c8fefe80a0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ff43aa5dc0e2a37c8521b3b3fd3a19c8fefe80a0",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
BVI Photometric Study of the Old Open Cluster Ruprecht 6
We present a BVI optical photometric study of the old open cluster Ruprecht 6 using data obtained with the SMARTS 1.0 m telescope at the CTIO, Chile. Its color-magnitude diagrams show the clear existence of main-sequence stars, of which the turn-off point is located around V ~ 18.45 mag and B-V ~ 0.85 mag. Three red clump (RC) stars are identified at V = 16.00 mag, I = 14.41 mag and B-V = 1.35 mag. From the mean Ks-band magnitude of the RC stars in Ruprecht 6 from 2MASS photometry (Ks = 12.39 ± 0.21 mag) and the known absolute magnitude of RC stars (M_Ks = -1.595 ± 0.025 mag), we obtain the distance modulus to Ruprecht 6 (m-M)_0 = 13.84 ± 0.21 mag (d = 5.86 ± 0.60 kpc). From the (J-Ks) and (B-V) colors of the RC stars, the comparison of the (B-V) and (V-I) colors of the bright stars in Ruprecht 6 with the intrinsic colors of dwarf and giant stars, and the PARSEC isochrone fittings, we derive the reddening values of E(B-V) = 0.42 mag and E(V-I) = 0.60 mag. Using the PARSEC isochrone fittings onto the color-magnitude diagrams, we estimate the age and metallicity to be log(t) = 9.50 ± 0.10 (t = 3.16 ± 0.82 Gyr) and [Fe/H] = -0.42 ± 0.04 dex. We present the Galactocentric radial metallicity gradient analysis for old (age > 1 Gyr) open clusters of the Dias et al. catalog, which likely follow either a single relation of [Fe/H] = (-0.034 ± 0.007) R_GC + (0.190 ± 0.080) (rms = 0.201) for the whole radial range, or a dual relation of [Fe/H] = (-0.077 ± 0.017) R_GC + (0.609 ± 0.161) (rms = 0.152) inside R_GC ~ 12 kpc and a constant value ([Fe/H] ~ -0.3 dex) outside. The metallicity and Galactocentric radius (13.28 ± 0.54 kpc) of Ruprecht 6 obtained in this study are consistent with both relations.
OCs, especially the old ones, are a good laboratory for verifying stellar evolution theories, and they are also good tracers of the star formation and evolutionary history of the Galactic disk. While Lyngå (1987) published data for over 1200 OCs, the observed number of OCs including candidates has increased to 2167 in the latest version (3.5; 2016 January 28) of Dias et al. (2002). Kharchenko et al. (2013) presented a MW object catalog of 3006 objects, including 2808 OCs, 147 globular clusters and 51 associations, almost complete up to 1.8 kpc from the Sun, where only 386 of the 3006 have metallicity values. Considering the estimated total number of MW OCs (~100,000; Piskunov et al. 2006; Tadross 2011), more surveys and detailed photometric and spectroscopic studies of individual clusters are awaited.
Ruprecht 6 (Ruprecht 1966) is an old OC, located in the constellation of Canis Major and at a very large distance (~13.28 ± 0.54 kpc; see Table 1 below) from the Galactic center. It is one of the small-size and poor OCs, which means it does not contain many member stars. This could be the reason why there have been few studies of the cluster. Table 1 of Hasegawa et al. (2008) presents the Trumpler class of Ruprecht 6 as III 1 p, which means, respectively, that (i) Ruprecht 6 is detached and shows no noticeable concentration, (ii) most stars in the cluster are of nearly the same apparent brightness, and (iii) it is poor, containing fewer than 50 stars (Trumpler 1930; Dias et al. 2002). In this study, using the BVI optical imaging data obtained with the CTIO 1.0 m telescope, we present the photometric analysis and physical parameters of the OC Ruprecht 6. Section 2 describes the observations and data reduction processes. Section 3 presents the results of this study: cluster center position, color-magnitude diagrams, reddening and distance estimations, and PARSEC isochrone fitting results. In Sections 4 and 5, we discuss the Galactocentric metallicity distribution and summarize our results, respectively. For the OC Ruprecht 6, we have obtained three 1200 sec, three 900 sec, and two 800 sec images in the B, V, and I filters, respectively, under ~1.5 arcsec median seeing conditions. The final images were average-combined for each filter. Figure 1 displays the combined grey-scale image of the three V-band frames for the OC Ruprecht 6, where red circles show the region of Ruprecht 6 with a radius of 2 arcmin.
OBSERVATIONS AND DATA REDUCTION
The raw observation data have been processed with the IRAF/CCDRED package following the standard procedure.
That is, the overscan correction, bias correction, and the twilight sky flattening have been made. Since interference patterns are seen in the I-band images, we made I-band supersky image using all the I-band images obtained on the same night and performed second flattening. In addition, because of the large format of the detector and the slow speed of the shutter, the shutter shading correction is applied. The IRAF script (y4kshut.cl) given at the CTIO homepage is used for the correction.
Photometry was performed using the DAOPHOT II/ALLSTAR stand-alone package after separating the CCD quadrants to treat them as four separate CCD chips (Stetson 1990). This is also because each chip has a different gain and readout noise. Due to the large field of view, we adopt the quadratic variable point spread function. Aperture corrections were obtained with 20-30 isolated, bright, and unsaturated stars in each CCD quadrant. Figure 2 shows the error distribution of the BVI photometry results. The photometric errors typically attain 0.1 mag at B ≈ 22.7 mag, V ≈ 22.4 mag and I ≈ 21.2 mag (Kim et al. 2009), corresponding to an object signal about ten times the background.
Four Landolt (1992, 2007, 2009) standard star fields were observed for the photometric calibration, and transformation equations relating instrumental to standard magnitudes were fitted, where small and capital letters represent instrumental and standard magnitudes, respectively, and X is the airmass for each filter. We also obtained the secondary extinction coefficient (k2B = -0.052 ± 0.016) as a free parameter. Including the secondary extinction coefficient, however, caused the errors of the other parameters to increase while the overall residual remained at the same value (0.037), so we decided not to adopt k2B but to follow the simple solution. The result of the transformation is shown in Figure 3. Astrometry was done using the routines provided in astrometry.net (Lang et al. 2010). The total number of stars with photometry is 5570, while that for the region of Ruprecht 6 with radius < 2 arcmin is 296. Table 2 lists the photometry of the 92 stars of Ruprecht 6 with V < 19 mag.
Using the bright stars (V ≤ 20 mag) with the cluster center fixed as above, we plotted the radial number density profile with a bin size of 0.5 arcmin in Figure 4. We determined the radius of Ruprecht 6 to be 2.0 arcmin, within which most of the member stars are located.
Color-Magnitude Diagrams
Figure 5 shows the color-magnitude diagrams (CMDs) of the OC Ruprecht 6 for the radial range < 2 arcmin obtained in this study, while Figure 6 shows the comparison region (3 arcmin < R < 3.6 arcmin) with the same area. Some characteristic features that can be found for Ruprecht 6 in these CMDs are: (i) main-sequence stars are clearly seen and red dotted lines show the blue turn-off point at B ≈ 19.30 mag, V ≈ 18.45 mag, I ≈ 17.30 mag, B − V ≈ 0.85 mag, and V − I ≈ 1.15 mag (see Figure 2 of Kaluzny (1994) for the definition of the blue turn-off), (ii) some red giant stars evolved past the turn-off are found, and (iii) three RC stars are seen, denoted as boxes in Figure 5, with mean magnitudes and colors of B = 17.36 ± 0.09 mag, V = 16.00 ± 0.10 mag, I = 14.41 ± 0.13 mag, B − V = 1.35 ± 0.05 mag, and V − I = 1.59 ± 0.05 mag.
To further check the membership probability of the three RC stars statistically, we extracted all stars with magnitudes and colors similar to the RC stars in the whole observed area of 19.57 arcmin × 19.57 arcmin. Figure 7 shows the spatial distribution of the resulting 10 stars located in the RC box area of Figure 5 and Figure 6, including the three RC stars in Ruprecht 6. The stellar spatial density of the three stars in the cluster area is 3/(π × 2²) = 0.239 arcmin⁻², while that of the seven stars outside of the cluster area is 7/(19.57² − π × 2²) = 0.019 arcmin⁻². The stellar density of the cluster area is 0.239/0.019 = 12.58 times higher than that of the background area, and thus we conclude that the three RC stars in Ruprecht 6 are members of the cluster with high probability.
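The membership argument is simple arithmetic; the following Python snippet reproduces the surface densities and their ratio from the numbers quoted above.

```python
import math

r_cluster = 2.0   # adopted cluster radius (arcmin)
side = 19.57      # field-of-view side length (arcmin)

area_cluster = math.pi * r_cluster**2       # ~12.57 arcmin^2
area_background = side**2 - area_cluster    # rest of the field

density_cluster = 3 / area_cluster          # ~0.239 stars arcmin^-2
density_background = 7 / area_background    # ~0.019 stars arcmin^-2

# Overdensity of RC-like stars inside the cluster area: ~12.6
print(density_cluster / density_background)
```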
Distance Estimation
RC giant stars are low-mass stars at the core-helium-burning evolutionary stage, which define a sharp feature (almost constant absolute magnitude) in the CMDs of stellar systems like nearby galaxies and star clusters (e.g., Paczyński & Stanek 1998). In the near-infrared (NIR), especially in the Ks-band, the RC is known to have a small dependence on metallicity and age, with M_Ks = -1.595 ± 0.025 mag and an intrinsic color of (J − Ks)_0 = 0.612 ± 0.003 mag (Yaz Gökçe et al. 2013; Özdönmez et al. 2016). Recently, Girardi (2016) published a detailed review paper on RC stars, and summarized the mean absolute magnitudes of these stars, which can be used to estimate the distances and extinctions of stellar systems with ages of 1 to 10 Gyr, though with some caveats such as population effects. Table 4 shows the mean I-band absolute magnitudes of RC stars from Girardi (2016); the mean value of the five data is M_I = -0.236 ± 0.024 mag.
Since it is known that the near-infrared Ks-band magnitude of RC stars is less sensitive to the age and metallicity of a star cluster than the optical I-band magnitude (Sarajedini 1999; Grocholski & Sarajedini 2002; Kyeong et al. 2011), we mainly used the mean Ks(RC) magnitude of the RC stars in Ruprecht 6. Table 5 shows the JHKs magnitudes of the three RC stars in Ruprecht 6 extracted from the 2MASS (Two Micron All Sky Survey; Skrutskie et al. 1997, 2006) data archive. Using the mean Ks(RC) magnitude of Ks = 12.39 ± 0.21 mag, the extinction value of A_V = 1.30 (see Section 3.4), and the extinction ratio of A_K = 0.118 A_V = 0.15 (Dutra et al. 2002), we obtain the distance modulus (m − M)_0 = Ks − A_K − M_Ks = 12.39 − 0.15 + 1.595 ≈ 13.84 ± 0.21 mag, corresponding to a distance of d = 5.86 ± 0.60 kpc.
[Figure 1. Upper panel: the field-of-view of the image is 19.57 arcmin × 19.57 arcmin (4064 × 4064 pixels). The thick red circle shows the region of Ruprecht 6 with a radius of 2 arcmin, centered on the coordinates αJ2000 = 06h 56m 06s and δJ2000 = −13° 15′ 00″ from Hasegawa et al. (2008), which are adopted in this study. Blue and white circles show regions of radius 2 arcmin centered on the coordinates of Dias et al. (2002) and SIMBAD, respectively. Lower panel: zoomed-in image of the region of Ruprecht 6. The red circle shows the region of Ruprecht 6 with a radius of 2 arcmin (the same as in the upper panel) and the field-of-view of the image is ~9 arcmin × 9 arcmin.]
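The distance determination can be reproduced with a few lines of Python using the values quoted above; the small offset from the quoted (m − M)_0 = 13.84 mag and d = 5.86 kpc comes from rounding A_K to 0.15.

```python
Ks_RC = 12.39          # mean 2MASS Ks of the three RC stars (mag)
M_Ks = -1.595          # RC absolute Ks magnitude (mag)
A_V = 1.30             # adopted V-band extinction (mag, Section 3.4)
A_K = 0.118 * A_V      # extinction ratio of Dutra et al. (2002), ~0.15

mu0 = Ks_RC - A_K - M_Ks             # true distance modulus (mag)
d_kpc = 10 ** (mu0 / 5 + 1) / 1000   # distance in kpc
print(round(mu0, 2), round(d_kpc, 2))  # 13.83 5.84 (13.84 / 5.86 with A_K = 0.15)
```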
Reddening Estimation
For the estimation of the interstellar reddening toward the OC Ruprecht 6, we adopted four methods. First, we used the mean NIR color of the RC stars (⟨J − Ks⟩ = 0.81 ± 0.06 mag) from Table 5; combined with the intrinsic RC color (J − Ks)_0 = 0.612 ± 0.003 mag quoted above, this corresponds to E(J − Ks) ≈ 0.20 mag.
PARSEC Isochrone Fittings
In Figure 9, we have plotted the V versus (B − V) (a), V versus (V − I) (b), B versus (B − V) (c), and I versus (V − I) (d) CMDs for Ruprecht 6 together with the theoretical PAdova and TRieste Stellar Evolution Code (PARSEC) isochrones (Bertelli et al. 1994; Girardi et al. 2000, 2002; Bressan et al. 2012). The best matched isochrones are for the parameters of log(age) = 9.50 ± 0.10 (t = 3.16 ± 0.82 Gyr) and [Fe/H] = -0.42 ± 0.04 dex.
[Figure 6. Same as Figure 5, but for the comparison field of 3 arcmin < radius < 3.6 arcmin around Ruprecht 6. This radius range gives the same area as in the case of Ruprecht 6. The red dotted lines and black boxes are the turn-off point and the region of the RC stars for Ruprecht 6, shown here just as guidelines.]
With the limited data of BVI photometry only, it is not an easy task to determine the reddening, distance, age, and metallicity simultaneously from the isochrone fitting, due to the degeneracy of parameters. We therefore used the predetermined values of distance (Section 3.3) and reddening (Section 3.4) for the isochrone fittings just to find the optimum values of age and metallicity, after which slight fine-tunings were made. The good matches of the theoretical isochrones to the observed photometry data lend support to the distance and reddening values, and to the newly obtained values of age and metallicity.
Galactocentric Metallicity Distribution
OCs have been used as one of the tools to probe the Galactocentric radial metallicity distribution in our own Galaxy (Kim & Sung 2003;Wu et al. 2009;Ryu & Lee 2011). This radial variation of the metallicity in the disk of the Galaxy is a powerful tool for the understanding of the star formation and chemical evolution of the system (Fernández-Martín et al. 2017). Kim et al. (2005) have compiled the slope (= ∆[Fe/H]/∆RGC) values of the Galactocentric radial metallicity gradient published up to then. Using the nine published slope values, they obtained the mean value of the slope ∆[Fe/H]/∆RGC = −0.066 ± 0.019. In Table 6, we have compiled again the slopes and intercept values incorporating recently published results for OCs.
For the OCs in the DAML02 catalog, we have calculated the Galactocentric distances using the distance estimates and Galactic coordinates in the catalog and the equation
R_GC = [R_0² + (d cos b)² − 2 R_0 d cos b cos l]^(1/2),
where d is the heliocentric distance to the cluster, l and b are the Galactic longitude and Galactic latitude of the cluster, respectively, and R_0 is the distance of the Sun from the Galactic center (8.5 kpc is used in this study). Out of the 2167 OCs listed in the DAML02 catalog, only 298 clusters have both metallicity and distance estimates; these are shown in Figure 10 as a function of the Galactocentric distance, R_GC. We performed a least square fitting for all 298 OCs with both metallicity and distance estimates, and obtained [Fe/H] = (-0.034 ± 0.005) R_GC + (0.204 ± 0.053) with rms = 0.229 for the whole radial range of the Galaxy (shown as a blue dashed line in Figure 10 (a)). Assuming the outer (R_GC > 12 kpc) clusters might follow a constant ([Fe/H] ~ -0.3 dex) metallicity trend, we also made a least square fitting only for the inner (R_GC < 12 kpc) OCs with 2σ clipping, which returns [Fe/H] = (-0.061 ± 0.008) R_GC + (0.479 ± 0.071) with rms = 0.155 (N = 231) (shown as a red solid line in Figure 10 (a)).
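A short Python sketch of this calculation; the Galactic coordinates used for Ruprecht 6 below are approximate values we supply for illustration, chosen to reproduce the quoted R_GC ≈ 13.28 kpc.

```python
import math

def r_gc(d_kpc, l_deg, b_deg, r0=8.5):
    """Galactocentric distance from heliocentric distance d and Galactic
    coordinates (l, b), projecting the cluster onto the Galactic plane."""
    l, b = math.radians(l_deg), math.radians(b_deg)
    d_proj = d_kpc * math.cos(b)
    return math.sqrt(r0**2 + d_proj**2 - 2 * r0 * d_proj * math.cos(l))

# Ruprecht 6: d = 5.86 kpc with (l, b) ~ (225.4, -3.3) -> R_GC ~ 13.28 kpc
print(round(r_gc(5.86, 225.4, -3.3), 2))
```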
Taking into account only the old OCs with age > 1 Gyr, we plotted the same distribution in Figure 10 (b). For these old OCs we obtain [Fe/H] = (-0.034 ± 0.007) R_GC + (0.190 ± 0.080) with rms = 0.201 for the whole radial range (blue dashed line), and [Fe/H] = (-0.077 ± 0.017) R_GC + (0.609 ± 0.161) with rms = 0.152 for the inner (R_GC < 12 kpc) OCs after 2σ clipping (shown as the red solid line in Figure 10 (b)). This result is in very good agreement with those obtained recently by Ryu & Lee (2011) and Andreuzzi et al. (2011), while the number of OCs used in this study is smaller than in those previous studies, which results from our extracting only the old and good (by 2σ clipping) clusters for the analysis.
Whether we adopt the single relation for the whole radial range or the dual relation broken at R_GC ~ 12 kpc, Ruprecht 6 seems to conform to either of the radial metallicity trends at its location of R_GC = 13.28 ± 0.54 kpc.
The metallicity estimates from photometric indices are less reliable than those from spectroscopic observations. (U − B) colors for late-F to early-K-type stars, and the Washington and DDO photometric systems for G- and K-type giants, can give reasonably reliable values. However, while the DAML02 catalog gives detailed references for proper motions and radial velocities, the sources of the metallicity estimates are not listed. Among the 81 objects in Figure 10 (b), there are 18 objects for which N ≥ 10 stars were used to determine the metallicity, 40 objects for which 1 ≤ N ≤ 9 stars were used, and no information is given for the other 23 objects. Investigating the sources of the metallicity estimates is beyond the scope of this paper, but compiling and classifying these estimates using the criteria of the abundance methods and the number of stars used would be a good approach to obtaining a more reliable Galactocentric radial metallicity distribution of the OCs in the Milky Way.
SUMMARY
We derived the physical parameters of the small-size and poorly studied old OC Ruprecht 6 using the BV I optical photometry data. The main results obtained in this study are : • The color-magnitude diagrams of Ruprecht 6 show clear MS stars. The MS turn-off point is at V ≈ 18.45 mag and B − V ≈ 0.85 mag.
[Figure 10. Galactocentric radial metallicity distributions of OCs: panel (a) is for all 298 OCs with both metallicity and distance estimates, panel (b) is only for the 81 old (age > 1 Gyr) OCs, and panel (c) is for the young (age ≤ 1 Gyr) OCs. Blue dashed lines are the least square fittings of all the data for the whole radial range. Panels (a) and (b) show that OCs in the outer region of R_GC > 12 kpc might show an almost constant metallicity value ([Fe/H] ~ -0.3 dex) regardless of the radius, while the red solid lines for the inner region of R_GC < 12 kpc are the least square fittings only for the OCs remaining after 2σ clipping (N = 231 for panel (a) and N = 49 for panel (b)). Small open circles at R_GC < 12 kpc in (a) and (b) denote the OCs excluded during the 2σ clipping process, and the large red dot with error bars represents the position of Ruprecht 6 with the parameters obtained in this study.]
• Three RC stars are identified; from their mean Ks-band magnitude from 2MASS photometry and the known absolute magnitude of RC stars, we obtain the distance modulus (m − M)_0 = 13.84 ± 0.21 mag (d = 5.86 ± 0.60 kpc). From the (J − Ks) and (B − V) colors of the RC stars, the comparison of the (B − V) and (V − I) colors of the bright stars with the intrinsic colors of dwarf and giant stars, and the PARSEC isochrone fittings,
we derive the reddening values of E(B − V ) = 0.42 mag and E(V − I) = 0.60 mag.
• For the old (age > 1 Gyr) OCs of DAML02 catalog, we obtain the Galactocentric radial metallicity relations of either (
ACKNOWLEDGMENTS
We thank the anonymous referee and the Scientific Editor for the thorough review and helpful comments that helped to improve the manuscript. The participation of I. H. and S. K. in this project was made possible by a UST Research Internship for Undergraduates grant in 2016 July. Based on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO Prop. ID 2010B-0178, PI Sang Chul KIM), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
"year": 2017,
"sha1": "57cb1c1ab73b6d29e95f6a4290b9415820595ec6",
"oa_license": null,
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201720636501127&method=download",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "57cb1c1ab73b6d29e95f6a4290b9415820595ec6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
252846223 | pes2o/s2orc | v3-fos-license | Context Generation Improves Open Domain Question Answering
Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage the stored knowledge. However, they do not fully exploit the parameterized knowledge. To address this inefficiency, we propose a two-stage, closed-book QA framework which employs a coarse-to-fine approach to extract the relevant knowledge and answer a question. We first generate a related context for a given question by prompting a pretrained LM. We then prompt the same LM to generate an answer using the generated context and the question. Additionally, we marginalize over the generated contexts to improve the accuracies and reduce context uncertainty. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods. For example on TriviaQA, our method improves exact match accuracy from 55.3% to 68.6%, and is on par with open-book QA methods (68.6% vs. 68.0%). Our results show that our new methodology is able to better exploit the stored knowledge in pretrained LMs without adding extra learnable parameters or needing finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge.
Introduction
Open-domain question answering (ODQA) produces an answer to a given question in the form of natural language, and the task has been extensively studied in recent years. Significant progress on ODQA has been made by developing open-book QA methods (Chen et al., 2017; Lewis et al., 2020b; Guu et al., 2020; Izacard and Grave, 2021; Lazaridou et al., 2022) that explicitly exploit an external knowledge corpus via dense retrieval techniques like DPR (Karpukhin et al., 2020). However, learning a good retriever requires substantial resources, such as a large number of domain-specific pairs of questions and contexts from the knowledge corpus (Karpukhin et al., 2020), or intensive compute resources (Lee et al., 2019). In addition, as the size of the knowledge corpus increases, it becomes harder to retrieve accurate contexts due to the high dimensionality of the search space (Reimers and Gurevych, 2021).
[Figure 1: An example illustrating our two-stage, CGAP framework. CGAP generates a more accurate answer (e.g. Richard Marx) compared to standard few-shot prompting (e.g. Ross Bagdasarian).]
Another class of models, known as closed-book question answering (CBQA), was recently proposed (Roberts et al., 2020). CBQA tries to directly answer open-domain questions without accessing any external knowledge sources, and instead leverages the parametric knowledge stored in pretrained language models (LMs) (Raffel et al., 2020; Brown et al., 2020; Ye et al., 2020).
However, even with larger LMs, the closed-book methods are not competitive with the open-book methods in terms of accuracy (Lewis et al., 2021).
While it has been shown that large pretrained LMs store an abundant amount of knowledge (Petroni et al., 2019; Roberts et al., 2020), we hypothesize that the accuracy gaps are largely because the ways of exploiting the parameterized knowledge are not sophisticated enough. Prior works on CBQA either finetune a pretrained LM on the entire QA dataset (Roberts et al., 2020; Ye et al., 2020), or directly prompt the model using several few-shot QA pairs (Brown et al., 2020; Radford et al., 2019). On the contrary, open-book models use a two-stage pipeline: they first retrieve relevant contexts from an external corpus, and then extract the answer based on the retrieved contexts.
Therefore, to better exploit the parameterized knowledge in pretrained LMs and bridge the large accuracy gaps between the closed-book and open-book methods, we propose a coarse-to-fine, two-stage method for the CBQA task. The main idea is to leverage generated contexts as an intermediate bridge between the huge amount of parameterized knowledge stored in the LM and the answer that lies within this knowledge. To the best of our knowledge, no previous work has been conducted on generating context from large pretrained LMs for CBQA and leveraging it to predict the answer.
Our proposed framework CGAP consists of two stages. It first performs Context Generation relevant to a given question by prompting a pretrained LM. It then prompts the same LM for Answer Prediction using the generated context and the question. In order to improve the accuracies and to reduce context uncertainties, we generate multiple contexts for each question and predict the final answer by majority voting. This step does not increase the inference cost, as we generate the contexts in parallel by batching in a single inference call. Figure 1 illustrates how our two-stage prompting and majority voting works. For the input question, CGAP generates 3 contexts and 3 predicted answers at the two stages respectively, and chooses the most voted answer as the final answer. Note that we do not finetune the large pretrained LMs for context generation or answer prediction. This enables our approach to take advantage of large LMs such as GPT-3 (Brown et al., 2020), PALM (Chowdhery et al., 2022) or Megatron-Turing NLG 530B (Smith et al., 2022), which are only available through APIs.
We conduct in-depth experimental studies on three open-domain QA benchmarks, Natural Questions (Kwiatkowski et al., 2019), WebQuestions (Berant et al., 2013), and TriviaQA (Joshi et al., 2017), and demonstrate significant improvements from our two-stage prompting method. Our contributions are summarized as follows: • We propose a simple yet effective few-shot prompting approach for ODQA that does not rely on any external knowledge sources or fine-tuning, but performs significantly better than existing closed-book approaches (e.g. exact match 68.6% vs. 55.3%), and is on par with open-book methods (e.g. 68.6% vs. 68.0%).
• We show that the generated context can improve standard few-shot prompting based closed-book QA accuracy at various model scales (e.g. from 11.7% to 28.5%), and demonstrate that scaling up the context generation model further enlarges their accuracy gaps (e.g. 357M 28.5% vs. 530B 68.6%).
To the best of our knowledge, we are the first to leverage generated context from large pretrained LMs for open-domain question answering.
• We show that generating multiple contexts without increasing the inference cost by batching can mitigate errors in answer prediction caused by variability in the unknown context (e.g. from 36.3% to 45.7%).
Methodology
Our proposed Context Generation and Answer Prediction (CGAP) framework is illustrated in Figure 2. CGAP consists of two stages. First, it generates context relevant to a given question by prompting a large pretrained LM. In the second stage, it predicts an answer using the generated context and the question by prompting the same LM. To accurately predict the answer, we generate multiple contexts: we run the two stages multiple times in parallel in a batch for the same question, generating a different context each time, and use majority voting to select the final answer. Formally, we have a question Q to be answered, and a support repository D = {(c_1, q_1, a_1), . . . , (c_n, q_n, a_n)} that consists of tuples of question q_i and answer a_i pairs with a mapping to the context c_i. In our experiments, we use the training sets of the corresponding datasets as D.
Context Generation
As shown in Figure 2, in the first stage, given question Q, we select m context generation prompt samples S = {(q_1, c_1), . . . , (q_m, c_m)} from the support repository D. We then use S with Q to prompt the pretrained LM to generate k contexts, denoted C_gen = {c¹_gen, c²_gen, . . . , cᵏ_gen}.
Sample Selection. Selecting appropriate samples for the prompts is the key to generating high-quality context relevant to a given question. Previous work has shown that leveraging relevant samples helps the LM to generate contextually relevant and factually correct context (Liu et al., 2021, 2022). We therefore use a similarity-based retriever to search for relevant samples S in the corresponding support repository D. We use DPR (Karpukhin et al., 2020) in our framework. In our DPR setup, we represent the question and the samples in D as 768-dimensional dense vector representations, computed via the BERT-based bi-encoder networks. We rank the samples according to their similarity score, calculated as
sim(Q, (q_j, c_j)) = E_Q(Q)ᵀ E_P(q_j ; c_j),    (1)
where E_Q and E_P are the question and passage encoders, and ; denotes concatenation of the tokens of the question q_j and the context c_j. Finally, we obtain S = {(q_1, c_1), . . . , (q_m, c_m)}, the top-m retrieved samples for question Q.
We would like to emphasize that the selected samples from D are used as examples in the few-shot prompt that instructs the pretrained LM to generate context, not as a source of external knowledge containing the answer.
Prompts Construction Given the question Q and the set of question-context pair samples S selected, we use few-shot prompting to condition the pretrained LM on the samples. We use a few-shot prompting technique similar to the one used for closed-book QA in (Brown et al., 2020), which considers multiple <question, answer> pairs. The template we use to construct prompts is: 'Q: ... A: ...'. Thus the constructed prompt Prompt(Q) for a given question Q becomes:

Prompt(Q) = Q:q_m\nA:c_m\n ... Q:q_1\nA:c_1\nQ:Q\n

We use '\n' to separate the question, the context and the samples. We investigated the order of samples to optimize the prompt and found that using the retrieved samples in reversed order of similarity yields better accuracy across all datasets1. We then pass Prompt(Q) through the pretrained LM to generate a context:

$$c_{gen}^{i} = \mathrm{LM}\big(\mathrm{Prompt}(Q)\big).$$

To generate a set of k contexts, {c_gen^1, ..., c_gen^k}, we increase the inference batch size to k and generate all k contexts in parallel in one inference call to the LM. Thus, the overall latency remains the same as when using a single context.
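A literal rendering of this template might look as follows; the assumption that `shots` arrives ordered most- to least-similar is carried over from the retrieval sketch above.

```python
def build_context_prompt(question, shots):
    """Builds Prompt(Q) for context generation. `shots` holds the top-m
    (q, c, a) samples, most similar first; reversing them puts the most
    similar sample closest to the test question, the ordering the paper
    reports works best."""
    demos = "".join(f"Q:{q}\nA:{c}\n" for q, c, _ in reversed(shots))
    return demos + f"Q:{question}\n"
```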
Answer Prediction
In the second stage, we select m answer prediction prompt samples S′ = {(q_1, a_1, c_1), ..., (q_m, a_m, c_m)} from D and then prompt the same LM using the generated contexts C_gen from the first stage, along with the question Q and S′. The LM predicts a set of k answers A_p = {a_p^1, a_p^2, ..., a_p^k}, each corresponding to one of the k contexts in C_gen. The final answer A is selected by majority voting on A_p.
Sample Selection Constrained by the maximum sequence length of the LM, we can feed the LM only a few (c, q, a) samples. Thus, it could be difficult for the LM to learn how to predict the answer for the given question conditioned on the context, unless similar examples have been provided. For example, if we were asking the question 'who is the current director of the us mint?', an example answering 'who is the fbi director of the united states?' from its provided context would be more helpful than an example answering 'how many episodes are there in Dragon Ball Z?' from its given context. We therefore use the same selection criteria for answer prediction as for context generation: we use the same set of samples as selected in the first stage, as described in Equation 1, and denote it as S′.
Prompt Construction
We prompt LMs with few-shot examples to predict the answer for the question conditioned on the generated context. To equip the LM with this capability, we construct intuitive prompts from the selected examples and feed them into the LM. Specifically, the template extends the context generation format with an answer field: each sample contributes its context, question and answer, and the generated context c_gen^i together with the test question Q is appended at the end; we denote the constructed prompt as Prompt(c_gen^i, Q) (Equation 2). We then feed Prompt(c_gen^i, Q) into the pretrained LM to predict the answer:

$$a_p^i = \mathrm{LM}\big(\mathrm{Prompt}(c_{gen}^i, Q)\big), \qquad (3)$$

where a_p^i denotes the i-th answer predicted by the LM. The k generated contexts in C_gen thus yield a set of answers A_p = {a_p^1, ..., a_p^k}.
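In code, the answer-stage builder might look like the following. The extracted copy of the paper does not preserve the exact field labels of this template, so the `C:`/`Q:`/`A:` layout here is an assumption rather than the paper's verbatim format.

```python
def build_answer_prompt(c_gen, question, shots):
    """Builds Prompt(c_gen, Q) for answer prediction. Each demonstration
    contributes its context, question and answer; the query ends at 'A:'
    so the LM completes the answer conditioned on the generated context."""
    demos = "".join(f"C:{c}\nQ:{q}\nA:{a}\n" for q, c, a in reversed(shots))
    return demos + f"C:{c_gen}\nQ:{question}\nA:"
```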
Context Marginalization
While the large pretrained LM can generate impressively fluent and relevant context for a given input, it also has a tendency to generate factually incorrect statements, ranging from subtle inaccuracies to wild hallucinations (Shuster et al., 2021; Krishna et al., 2021; Su et al., 2022). Answers conditioned solely on hallucinated or erroneous statements are likely to be incorrect (Equation 3). Thus, we would like to remove the variability in the answer due to any particular generated context.
Ideally, we could marginalize over this unknown context by producing an answer for every possible context, weighting each answer by the probability of the context. Here we approximate this by generating a set of contexts and selecting the final answer by majority voting. Suppose there are T unique answers {A_p^1, ..., A_p^T} among the k predicted answers from Equation 3, where T <= k; we then select the J-th unique answer, the one that receives the highest number of votes,

$$J = \arg\max_{t \in \{1, \ldots, T\}} \sum_{i=1}^{k} \mathbb{1}\big[a_p^i = A_p^t\big], \qquad (4)$$

as the final answer A. As k gets larger, the final answer A converges to the answer that would be produced by marginalizing over all possible contexts. We refer to this majority vote over multiple generated contexts as context marginalization.

Datasets NQ contains questions from Google search queries; TQA contains a collection of questions from trivia and quiz-league websites, of which we use the unfiltered set; the questions of WQ come from the Google Suggest API. For NQ and TQA, we use the processed data provided by Izacard and Grave (2021), in which each question-answer pair is accompanied by a 100-word Wikipedia passage.
Baselines
We compare our CGAP framework with the following baseline methods for closed-book QA.
Standard Few-shot Prompting We use the standard few-shot prompting technique similar to GPT-3 (Brown et al., 2020) in our evaluation on the closed-book QA datasets described in Section 3.1. We consider this technique the few-shot baseline in all our experiments. The baseline implemented with the 530-billion-parameter (530B) LM is referred to as LM-530B.
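For reference, the baseline prompt contains only question-answer demonstrations and no context stage; the exact spacing below is an assumption in the style of Brown et al. (2020).

```python
def build_closed_book_prompt(question, qa_pairs):
    """Standard few-shot closed-book prompt: Q/A demonstrations followed
    by the test question, with no generated or retrieved context."""
    demos = "".join(f"Q: {q}\nA: {a}\n" for q, a in qa_pairs)
    return demos + f"Q: {question}\nA:"
```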
Implementation Details
To test how different model scales affect the performance of our approach, we train and experiment with a collection of decoder-only LMs using the Megatron-LM framework (Shoeybi et al., 2019), with 357 million (357M), 1.3 billion (1.3B), and 530 billion (530B) (Smith et al., 2022) parameters, at both the context generation and the answer prediction stage. We use top-p sampling with a value of 0.9 to generate diversified contexts. However, because answer generation is largely deterministic (e.g. a short answer), we use greedy decoding at the answer prediction stage, similar to (Chowdhery et al., 2022; Wang et al., 2022).
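The two decoding regimes can be reproduced with any decoder-only checkpoint; the snippet below uses `gpt2` purely as a stand-in, since the Megatron-LM models themselves are not distributed through the Hugging Face hub.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
tok.padding_side = "left"  # left-pad so new tokens line up at the end
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_batch(prompts, sample, max_new_tokens):
    inputs = tok(prompts, return_tensors="pt", padding=True)
    out = model.generate(
        **inputs,
        do_sample=sample,              # True: top-p sampling (stage 1)
        top_p=0.9 if sample else 1.0,  # False: greedy decoding (stage 2)
        max_new_tokens=max_new_tokens,
        pad_token_id=tok.eos_token_id,
    )
    new = out[:, inputs["input_ids"].shape[1]:]  # strip the prompt tokens
    return tok.batch_decode(new, skip_special_tokens=True)

# Stage 1: k = 8 copies of one context-generation prompt, top-p sampled.
contexts = generate_batch(["Q:who wrote Hamlet?\nA:"] * 8,
                          sample=True, max_new_tokens=100)
```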
For the prompt configuration at both stages, we choose 10 samples, constrained by the maximum sequence length of the LMs. We use a DPR checkpoint from Hugging Face2 to select samples from the support repository.
Evaluation
For evaluating the open-domain QA task, we follow recent works (Rajpurkar et al., 2016; Lee et al.) and report the exact match (EM) score.
Results and Ablation Studies
We now show our main results as well as ablations that further analyze the effectiveness of our approach. Table 1 shows the EM score comparison between our CGAP-based method and existing closed-book baseline approaches3. We also compare with state-of-the-art open-book models in the upper section of the table.
Main Results
As we can see, our CGAP-based method outperforms other existing closed-book methods by a large margin, especially on the NQ and TQA datasets. CGAP also outperforms the standard few-shot prompting baseline LM-530B on all three datasets (by at least 13.3 EM points).
Furthermore, CGAP obtains the highest score on TriviaQA. Its scores are also very close to those of the state-of-the-art open-book method RAG on NQ and WebQuestions, and it loses only a few points on NQ to FiD. While FiD uses 100 retrieved passages for answer prediction, CGAP uses only 8 generated contexts for approximate context marginalization.
Ablation Studies
We conducted a systematic ablation study to further investigate the contribution of the context generation model and the effect of context marginalization.
3 The GPT-3 API shows different results from those reported in the paper (Brown et al., 2020); we therefore did not compare to it. Details are shown in Appendix A.
Context Generation
While previous work (Roberts et al., 2020; Brown et al., 2020) demonstrated that scaling up model size improves the answer accuracy of closed-book QA, other findings show that simply increasing the model size does not lead to substantive accuracy gains (Rae et al., 2021). Thus, we investigate how the context generation LM affects the answer accuracy.
We experiment by varying the LM size for context generation while fixing the answer prediction LM. We use context generation LMs of 357M, 1.3B and 530B parameters, and answer prediction LMs with 357M and 1.3B parameters. We also compare with standard few-shot prompting, which has no context generation.
We plot the results in Figure 3. As we can see, there are large accuracy gains from standard prompting to the CGAP method with context generation. The accuracy increases by an absolute 19.00% for NQ, 16.87% for TQA and 15.26% for WQ when using the 357M model for both standard prompting and the CGAP approach. The answer accuracy continues to increase as we increase the LM size for context generation. Furthermore, we notice that the slope of the accuracy-gain curve is steeper with the larger answer prediction model than with the smaller one on all three datasets. This suggests using a larger answer prediction LM to fully exploit the knowledge in the generated context.
Context Marginalization
Since the generated context will contain some hallucinated or erroneous statements, we approximate context marginalization by sampling multiple contexts and selecting the final answer by majority voting, as introduced in Section 2.3. Here, we investigate the performance gains brought by context marginalization, as well as the accuracy curves as the number of sampled contexts k used in the approximate marginalization varies.
In Table 2, we show the accuracy comparison with and without marginalization (k = 8) for different LM sizes. As we can see, context marginalization improves the answer accuracy consistently on the three datasets4, under all settings. Notably, the performance gains from marginalization are much larger when we scale the model up to 530 billion parameters (i.e. the EM score increases by 12.8%, averaged over the three datasets).
The larger the number of context samples k, the more accurately the majority vote reflects the true marginalization over all possible contexts. Therefore, we perform a further ablation by varying k for the 357M LM used for both context generation and answer prediction. We plot the accuracy curves in Figure 4. We see accuracy improvements as more context samples are used. As expected, the curves plateau for larger values of k, as the approximation approaches the true marginalization over all possible contexts.

4 We show a concrete example in Appendix B, Table 11.

Analysis

Considering that this is the first time generated context from large pretrained LMs is leveraged for ODQA, we also conducted further analysis.
We compare generated context with retrieved context in the two-stage, few-shot prompting CBQA framework. Using context retrieved from an external corpus together with the question for answer prediction is the dominant paradigm in open-book QA (Chen et al., 2017; Lewis et al., 2020c; Izacard and Grave, 2021; Lazaridou et al., 2022).
Retrieved vs. Generated Context
In the CBQA setting, we are not allowed to retrieve context from external knowledge sources. However, we can retrieve contexts from the support repository based on their relevance to the given question. We use c_r = {c_r^1, c_r^2, ..., c_r^m} to represent the top-m relevant contexts for question Q; they can be obtained via Equation 1.
Let c_r^top-1 be the top-1 retrieved context for question Q. We compare c_r^top-1 with the generated context, c_gen. We use the same top-m prompt samples S′ for answer prediction as introduced in Section 2.2. The answer a_p^r for c_r^top-1 is then:

$$a_p^r = \mathrm{LM}\big(\mathrm{Prompt}(c_r^{\text{top-1}}, Q)\big), \qquad (5)$$

where Prompt(c_r^top-1, Q) can be obtained via Equation 2.
The comparison between c_r^top-1 and c_gen is shown in Table 4. From the upper part of the table, we see that using c_r^top-1 gives a slightly higher EM score than using c_gen generated by the 357M and 1.3B LMs. However, c_gen gives higher EM scores than c_r^top-1 on all three datasets when we scale the context generation LM up to 530B. This suggests using a large pretrained LM to obtain better generated context.

Table 4: Comparison of using the top-1 retrieved context c_r^top-1 with few-shot generated context c_gen on the closed-book QA task.
Multiple Retrievals vs. Context Marginalization
We notice in Table 4 that c_r^top-1 performs slightly better than c_gen when using the 530B LM for answer prediction. We argue that this might be caused by hallucination in c_gen. While we showed in Section 4.2.2 that context marginalization can mitigate this problem and improve answer accuracy, here we further equip c_gen (530B) with context marginalization and compare it with the retrieved context.
For a fair comparison, we also perform majority voting over the top-k retrieved contexts c_r, since Karpukhin et al. (2020) showed that the quality of the retrieved documents also affects the final answer accuracy. Specifically, we replace c_r^top-1 with each retrieved context c_r^i in Equation 5 to predict an answer a_p^r(i) (i = 1, ..., k), and use Equation 4 to select the most frequent answer as the final answer.
Furthermore, we replace c_r^top-1 with the golden context c_golden in Equation 5. This serves as the upper bound of using retrieved/generated context in the two-stage, few-shot prompting CBQA task.
We show the results in Table 5. As we can see, using marginalization over c_gen consistently outperforms c_r^top-1, and is also better than majority voting over multiple retrieved contexts c_r for answer prediction, on all three datasets. Notably, marginalization over c_gen yields a higher EM score than using c_golden when the 530B LM is used for answer prediction. We observed similar trends in experiments with the 357M and 1.3B parameter models. In Table 3, we show a concrete example that compares the use of different contexts for answer generation5.

Table 5: Comparison of using context marginalization (c_gen^1, ..., c_gen^k), multiple retrievals (c_r^1, ..., c_r^k), and the golden context c_golden on the closed-book QA task.
Related Works
Open-domain QA is the task of answering general-domain questions (Chen et al., 2017), in which the evidence is usually not given. Models that explicitly exploit an external corpus are referred to as open-book models (Roberts et al., 2020). They typically index the corpus and then retrieve-and-read to extract the answer span from documents.
Conclusion
We propose a simple yet effective framework named CGAP for open-domain QA. CGAP performs Context Generation followed by Answer Prediction via two-stage prompting of large pretrained LMs. It does not rely on external knowledge sources, and does not need finetuning or extra learnable parameters. To the best of our knowledge, we are the first to leverage generated context from large pretrained LMs for open-domain QA. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods and is on par with open-book methods. We demonstrate our method with models of up to 530B parameters and show that larger models boost accuracy by large margins.
Limitations
As we show in the paper, CGAP obtains satisfactory results on the open-domain QA task. However, the method has limitations. The accuracy of CGAP is affected by the size of the LMs it uses, as shown in Figure 3. In Section 4.1, our highest accuracy results reported in Table 1 used a large 530B pretrained LM, which is only accessible via API. Also, the generated context may contain hallucinated content.

Appendix A compares with GPT-3 (Brown et al., 2020) on the closed-book QA task. In order to compare with their reported results, we re-implement their method using the same few-shot configuration as described in the paper and query the OpenAI API.
Experimental Setups As OpenAI has not officially released information about their API model sizes, we deduce the sizes of the OpenAI API models based on their performance from EleutherAI's blog6. Specifically, we query the Ada and Babbage models' APIs, trying to reproduce the reported results for the GPT-3 Medium (350M) and GPT-3 XL (1.3B) models, respectively. We use two prompt formats to query the OpenAI API. The first prompt format is the one described in the paper (Brown et al., 2020) (referred to as GPT-3 format): randomly draw 64 question-answer pairs from the corresponding support repository, and use 'Q: ' and 'A: ' respectively as prefixes before each question and answer to build the conditioning prompts. We also use the prompt format from EleutherAI's language model evaluation harness GitHub repository7 (referred to as EleutherAI). Furthermore, we experiment with the same prompting format as used in our standard prompting baseline (LM-530B) in Section 3.2 (referred to as Our format), prompting the LMs of size 357M and 1.3B for comparison.

6 https://blog.eleuther.ai/gpt3-model-sizes/
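A minimal query against the legacy OpenAI completions endpoint might look like this; the decoding settings and stop sequence are assumptions for the sketch, not the paper's released evaluation script.

```python
import openai  # legacy (pre-1.0) OpenAI Python client

openai.api_key = "sk-..."  # read from the environment in practice

def query_gpt3(engine, qa_pairs, question):
    """Builds a GPT-3-style closed-book prompt from sampled Q/A pairs and
    asks the given completion engine ('ada' or 'babbage') to answer."""
    prompt = "".join(f"Q: {q}\nA: {a}\n" for q, a in qa_pairs)
    prompt += f"Q: {question}\nA:"
    resp = openai.Completion.create(
        engine=engine,
        prompt=prompt,
        max_tokens=16,    # short-answer budget
        temperature=0.0,  # deterministic decoding
        stop=["\n"],      # stop at the end of the answer line
    )
    return resp["choices"][0]["text"].strip()
```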
Results
We show the results of prompting GPT-3 under the zero-shot, one-shot and few-shot settings in Table 6, Table 7 and Table 8, respectively. As we can see, no matter which prompting format we use, the results reported in the GPT-3 paper (Brown et al., 2020) are almost always higher than our reproduced ones on all three datasets, for both LM sizes. The gaps become even larger in the few-shot setting. Thus we conjecture that we would not be able to reproduce the results reported by Brown et al. (2020) using GPT-3 (175B) on the three QA datasets, so we did not include their reported results in the comparison with our CGAP method in Table 1.
Furthermore, we notice that the results based on our baseline's prompting configuration are always on par with the results from querying the OpenAI API. Thus we believe that LM-530B is a reliable and fair standard few-shot prompting baseline to compare with.
B Examples
We show three examples from the NQ, TQA and WQ test sets in Table 9, Table ?? and Table 10, respectively. In each table, we show the predicted answers from (1) standard prompting, (2) two-stage prompting using the top-1 retrieved context c_r^top-1, (3) CGAP without marginalization, and (4) CGAP. All these predicted answers are based on LMs of size 530B.
We also show an example illustrating CGAP with 8 generated contexts and their corresponding predicted answers in Table 11. As we can see, some contexts contain a lot of factually inaccurate or irrelevant content (e.g. generated contexts 1, 2, 4, 5 and 8), and the corresponding answers are therefore wrong or inaccurate. However, the context generation LM also generates contexts that are more relevant and factual (e.g. generated contexts 3, 6 and 7), and they help the answer prediction LM generate a correct answer. Therefore, CGAP can predict the final answer correctly based on marginalization over the generated contexts.

Table 12: Prompt(Q) example. For the question "Who was President when the first Peanuts cartoon was published?" from TQA (Joshi et al., 2017), we selected 8 <q_i, c_i> samples from the supporting repository D and constructed Prompt(Q) as above to prompt LMs for c_gen generation.
"year": 2023,
"sha1": "0ef6fc10b3283e2730843e8a7cbaf342d46237e0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "7892087651845ef02b0e6e3e583a8ba18626e5f9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Pre- and postpartum effects of starch and fat in dairy cows: A review
This review discusses the effects of starch and fat before and after calving on metabolism, energy balance (EB), milk production, and reproduction in dairy cows. The shift in dairy cows from a pregnant non-lactating state to a non-pregnant lactating state induces physiological changes, which affect the metabolic and endocrine axes to redirect body energy stores towards the mammary gland for milk production. Overfeeding starch and fat during the dry period may result in cows failing to adapt to the negative energy balance (NEB) after calving because of major liver and rumen dysfunction. Alternatively, keeping dry cows on high-forage/low-energy diets adjusts dry matter intake (DMI) to optimize rumen function and decrease the severity of the NEB during transition. These periparturient biological improvements in dairy cows have shown real benefits, such as fewer postpartum health complications (e.g. milk fever, ketosis, mastitis, metritis), decreased body condition loss and an improved reproductive axis in the subsequent lactation. Adding dietary starch and/or fat to the diets of dairy cows following parturition increases milk yield. In addition, the milk protein of dairy cows increases with glucogenic diets, but decreases with lipogenic diets. Inversely, milk fat usually increases after feeding lipogenic diets, but decreases when feeding glucogenic diets to dairy cows. Glucogenic and lipogenic nutrients can affect the cow's metabolism and its EB status positively, as is evidenced by plasma non-esterified fatty acid (NEFA), β-hydroxybutyrate (BHB), glucose, amino acid, insulin, insulin-like growth factor-I (IGF-I), growth hormone (GH), gonadotropin hormone, and progesterone (P4) levels. These metabolites (NEFA, BHB, glucose, amino acids) and hormones (insulin, IGF-I, GH, P4) have been shown to affect folliculogenesis, ovulation, conception, and pregnancy success. Feeding a starch-based diet to dairy cows can lead to acidosis and increase glucose and insulin levels, while decreasing NEFA and BHB levels. Furthermore, an insulinogenic diet favours an early resumption of ovarian activity, but has adverse effects on the quality of oocytes. In contrast, keeping dairy cows on a fat-based diet elevates NEFA and BHB levels and decreases glucose and insulin levels. Additionally, a lipogenic diet increases plasma P4 levels and improves the quality of oocytes. This evidence suggests that reproductive performance in dairy cows can be enhanced by feeding an insulinogenic diet until the resumption of the ovarian cycle and then switching to a lipogenic diet from the mating period onwards. Since long-term field studies on fertility are limited and the reproduction process in dairy cows is multi-factorial, further research is needed on the pre- and postpartum effects of starch and/or fat, as well as their combinations, on the reproductive axis before conclusions on reproductive performance can be drawn.
Introduction
Carbohydrate- and fat-based feedstuffs are major energy components that are commonly used in the diets of dairy cows (Schroeder et al., 2004; Carmo et al., 2015). Glucogenic feedstuffs provide fermentable energy and improve the protein/energy balance, potentially enhancing the supply of rumen microbial protein synthesis (MPS) to the small intestine (Rearte & Pieroni, 2001; Bargo et al., 2003).
Glucogenic-based ingredients consist of cereal grains and their milling by-products, molasses, beet and citrus pulps, roots and tubers such as cassava and potatoes and their by-products, and dried reclaimed bakery products (McDonald et al., 2002; Mosavi et al., 2012; Steyn et al., 2017). Fat increases the energy density of the diet (Schroeder et al., 2004), and in particular enhances plasma cholesterol, the major precursor for steroidogenesis, including luteal P4 synthesis in postpartum cows. Fat can be included in the form of ruminally inert sources such as hydrogenated fish fat, high melting point fatty acids, and calcium (Ca) salts of long-chain fatty acids, or non-ruminally inert sources such as soybean oil and full fat rapeseed (Bargo et al., 2003). Manipulating the energy level and source in the diet of pre- and postpartum cows showed significant improvements in terms of decreased incidence of health problems (Beever, 2006), optimized rumen microbial activity (Jouany, 2006), increased amount of digested nutrients from the gastro-intestinal tract (GIT) (Bauman & Currie, 1980), decreased body condition loss (Drehmann, 2000; Butler, 2003), and increased milk responses (Grum et al., 1996; Ingvartsen & Andersen, 2000; Cavestany et al., 2009a; Reis et al., 2012; Damgaard et al., 2013; Roche et al., 2013; Hills et al., 2015). However, studies on the pre- and postpartum effects of starch- or fat-based diets on dairy cow reproduction parameters are contradictory, with some studies reporting negative or no effects (Beam & Butler, 1997; Oldick et al., 1997; Oldick et al., 1998; McNamara et al., 2003; Van Knegsel et al., 2007c; Dyck et al., 2011; Gilmore et al., 2011), compared with other studies that reported a positive effect (Gong et al., 2002; Cavestany et al., 2009b; Garnsworthy et al., 2009; Reis et al., 2012; Little et al., 2016; Thatcher, 2017). The objectives of this review are therefore to discuss the effects of energy-based diets containing starch and/or fat during the dry and postpartum periods on the subsequent metabolism, milk production and fertility of dairy cows.
Energy sources during the dry period
A lactation period of 305 days, followed by a dry period of 56 to 60 days, has been regarded as a strategic management system for most dairy farms since the 1950s (Bachman & Schairer, 2003). The dry period is defined as a period of preparation, allowing body and conceptus growth in heifers and body restoration and conceptus growth in dry cows, in anticipation of the next lactation (NRC, 2001; Beever, 2006). Through homeorhetic controls, the imposition of pregnancy during this period favours the partition of specific nutrients (i.e. glucose and amino acids) not only for foetal growth (i.e. about 60% relative to the calf live weight at birth), but also for the growth of the foetal membranes, the gravid uterus, and the mammary gland (Bauman & Currie, 1980). However, the feed intake of heifers and dry cows usually declines in the late dry period relative to their energy requirements (Grummer, 1995; Huzzey et al., 2007), triggering the beginning of the NEB (Butler, 2003). The decrease in prepartum DMI can be attributed to digestive, hormonal, physiological and immunological factors related to this period, and to the rapid growth of the foetus, which takes up the abdominal space, thereby decreasing the rumen volume (Jouany, 2006; Wankhade et al., 2017). Everitt (1964) reported that the foetus of a ruminant is more vulnerable than that of many other species to maternal undernutrition stresses, which impede normal foetal growth. Thus, maternal adaptations during late pregnancy divert nutrients that heifers and dry cows require for their own growth and for the replenishment of protein and energy reserves towards meeting the foetal requirements (Bauman & Currie, 1980). Several studies have investigated and reviewed the possible benefits of starch- and fat-based ingredients in prepartum diets of dairy cows (Grum et al., 1996; Drackley, 1999; Agenäs et al., 2003; McNamara et al., 2003; Dann et al., 2005; Janovick & Drackley, 2010; Janovick et al., 2011; Damgaard et al., 2013). However, results are limited, and in some instances conflicting, with certain studies showing a positive prepartum effect on EB status (Grum et al., 1996; Janovick & Drackley, 2010; Janovick et al., 2011; Damgaard et al., 2013), milk yield (Ingvartsen & Andersen, 2000; Cavestany et al., 2009a), milk composition (Cavestany et al., 2009b; Grum et al., 1996; Damgaard et al., 2013), and reproduction performance (Cavestany et al., 2009b), and others reporting a lack of effect on these traits (McNamara et al., 2003; Agenäs et al., 2003; Burke et al., 2010; Mann et al., 2015). Unrestricted feeding of a diet containing higher energy levels (starch or fat) to prepartum cows enhanced DMI, allowing them to consume too much energy relative to their nutritional requirements, compared with those being fed a lower energy density diet (Janovick & Drackley, 2010). Overconsuming starch increased the osmolality of the rumen contents and inflamed the rumen epithelium, resulting further in hepatic fat deposition (Beever, 2006) and a greater decline in DMI (Minor et al., 1998; Olsson et al., 1998). Furthermore, feeding prepartum high starch diets increased the production of volatile fatty acids (VFAs) and resulted in a decrease in pH below 5.5 and an accumulation of lactic acid in the rumen. This decline in DMI and ruminal pH exacerbates the EB deficit in cows (Jouany, 2006).
Alternatively, the overconsumption of fats results in inhibitory effects on microbial fermentation and sensitivity to nutrient imbalances in the rumen, causing reduced DMI (Palmquist, 1994). Additionally, feeding high prepartum levels of dietary fat negatively affected the EB status, as is evidenced by increased plasma NEFA and BHB levels and decreased insulin levels (Leroy et al., 2008b;Damgaard et al., 2013), which result in longer anoestrous periods in the subsequent lactation (Giuliodori et al., 2011). In both energy feeding contexts, dry cows can possibly fail to adapt to the NEB stress when fed high prepartum levels of starch and fat due to the associated metabolic and rumen dysfunctions (Jouany, 2006;Janovick et al., 2011;Mann et al., 2015). Furthermore, a severe NEB during the prepartum period has been associated in the subsequent lactation with increased health problems (e.g. retained placenta, ketosis, abomasum displacement, lameness, mastitis, and endometritis) (Duffield et al., 2009;Ospina et al., 2010a;McArt et al., 2012), reduced reproductive success, and decreased milk production (Duffield et al., 2009;Ospina et al., 2010b;2010c).
The optimal prepartum dietary management strategy with reference to the types and levels of energy intake and control of DMI is still to be developed (Janovick & Drackley, 2010). Some studies indicated that an overconsumption of energy prepartum is detrimental to postpartum cow health and liver function (Grum et al., 1996; Rukkwamsuk et al., 1999), whereas others demonstrated that supplementing extra energy during the dry period is beneficial to transition success (Dann et al., 1999; Rabelo et al., 2005). Consequently, recent studies investigated the potential benefits of feeding fibre-based diets containing > 400 g/kg of neutral detergent fibre (NDF) and low digestible energy levels on a DM basis during the dry period (Janovick et al., 2011; Mann et al., 2015). Such prepartum diets were reported to adjust the DMI, which optimized rumen digestion and fermentation (Jouany, 2006) and decreased the mobilization of body reserves, as well as the deposition of lipid and tri-acyl glycerol (TAG) in the liver (Mann et al., 2015). Controlling the energy content in prepartum diets is usually achieved by adding bulky lower-quality forages such as chopped wheat straw or oat hay, which increase fibre content and limit the voluntary DMI (NRC, 2001), thereby regulating total nutrient consumption. Maintaining pregnant dry cows on high-forage/low-energy diets has shown significant improvements in their subsequent lactations in terms of fewer health problems (e.g. ketosis, abomasum displacement, and fatty liver syndrome), reduced body condition loss, and an improved reproductive axis (Drehmann, 2000; Beever, 2006; Jouany, 2006). Thus, the evidence revealed that feeding a prepartum forage-based diet containing a low digestible energy level optimized the GIT and rumen microbial activity (Jouany, 2006), improved metabolic status, and reduced the risks of ketosis and fatty liver syndrome in periparturient dairy cows (Janovick et al., 2011; Vickers et al., 2013; Mann et al., 2015). However, long-term feeding trials that investigated the prepartum effect of energy levels and sources and their combinations on the postpartum milk responses and reproductive performances of dairy cows are limited, making it difficult to draw final conclusions.
Negative energy balance and postpartum-related disorders in dairy cows
Over the past few decades, a significant increase in milk yield has been observed in dairy herds (Leroy et al., 2008b; Roche et al., 2011) as a result of intense genetic selection, improved nutrition, and better cow management (Lucy, 2001; Thatcher et al., 2011). However, several studies have shown that the improvement in milk yield is associated with some negative consequences, such as an increased occurrence of metabolic and infectious diseases and a decline in reproductive performance (Lucy, 2001; 2007; Butler, 2003; Walsh et al., 2011; Wathes, 2012; Thatcher et al., 2017). As indicators of reproduction management efficiency, the calving interval and the number of artificial inseminations (AIs) per conception have increased substantially worldwide (Butler, 1998). In South African Holsteins, for instance, the intercalving period increased from 386 days in 1986 to 412 days in 2004 (Makgahlela, 2008).
During the transition period from a pregnant non-lactating state to a non-pregnant lactating state, dairy cows are confronted with numerous physiological challenges and stressors related to parturition and the onset of lactation (Evans & Walsh, 2012; McArt et al., 2013; Esposito et al., 2014). One of the main challenges is a rapid rise in nutrient requirements (Ingvartsen, 2006), essentially doubling overnight once milk production begins. In the week preceding calving, the cow's appetite decreases (Walsh et al., 2011), and the DMI has been reported to decline by approximately 30% within 24 hours before calving (Huzzey et al., 2007). Thus, cows enter a NEB status and mobilize stored triglycerides from adipose tissues in an attempt to meet the energy requirements (Rukkwamsuk et al., 1999). The NEB lasts from a few days before calving until 70 to 84 days post partum, coinciding with the beginning of the breeding season (Butler, 2003; Roche et al., 2009). The NEB impairs the general metabolic system in dairy cows and has been identified by a number of researchers (Butler & Smith, 1989; Garnsworthy & Webb, 1999; Butler, 2003; Jorritsma et al., 2003) as an underlying causal factor of poor lifetime milk production and reproductive performance. Several reviews have been published regarding the effect of the EB status on reproductive efficiency of dairy cattle (Beam & Butler, 1999; Butler, 2000; Jorritsma et al., 2003; Van Knegsel et al., 2005; Wathes et al., 2007; Santos et al., 2008; Roche et al., 2011; Evans & Walsh, 2012; Leroy et al., 2014). The status of NEB alters the insulin level and the GH-IGF-I axis to decrease the bioavailability of circulating IGF-I (Wathes et al., 2007). Furthermore, it decreases the luteinizing hormone (LH) pulse frequency, the diameter and growth rate of the dominant follicle, the activity of the corpus luteum, and perioestrous hormone levels such as oestradiol and P4 (Beam & Butler, 1997; 1999; Butler, 2000). The effects of these EB-induced alterations on fertility have resulted in an increased number of days from calving to the resumption of oestrus and days open, and in decreased conception rates following fertilization and reduced pregnancy survival afterwards (Giuliodori et al., 2011; Roche et al., 2011).
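To make the EB bookkeeping behind these statements explicit, the daily balance in such studies is essentially computed as energy intake minus energy requirements; the generic form below is a sketch, as the exact energy system (e.g. net versus metabolizable energy), units and correction terms vary between the studies cited here.

```latex
% Generic daily energy-balance bookkeeping (sketch; the energy system,
% units and correction terms vary between studies):
\mathrm{EB} \;=\; E_{\text{intake}} \;-\;
  \bigl( E_{\text{maintenance}} + E_{\text{milk}} + E_{\text{pregnancy/growth}} \bigr)
% EB < 0 defines a negative energy balance (NEB): the shortfall must be
% covered by mobilizing body reserves.
```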
When dairy cows experience a NEB, their immune system is likely to be compromised (Mallard et al., 1998; Wankhade et al., 2017). The level of impairment and the degree of reclamation of postpartum immune competence are influenced strongly by the extent and duration of the NEB around calving (Pyörälä, 2008; Wathes et al., 2009), making cows in a severe NEB more vulnerable to infections caused by pathogenic organisms (Goff, 2006; Wathes, 2012). Gröhn et al. (1995) studied the prevalence of postpartum diseases in multiparous cows in 25 Holstein herds in North America and found 7.4% incidence of retained placenta, 7.6% incidence of metritis and 4.9% incidence of ketosis. Jordan & Fourdraine (1993) surveyed 61 top milk-producing herds in North America and reported 3.7% incidence of ketosis, 9.0% incidence of retained placenta and 12.8% incidence of metritis. Other reports found that the effects of metabolic biomarkers (i.e. high NEFA and BHB) due to poor adaptation of lactating cows to the energy stress were associated with the occurrence of abomasum displacement, clinical ketosis, lameness, mastitis, and endometritis, which can all contribute to an increased risk of culling of affected animals (Seifi et al., 2011; Walsh et al., 2011; Evans & Walsh, 2012; Esposito et al., 2014). Metabolic and infectious diseases can lead to lower milk yields (Rajala-Schultz et al., 1999a; 1999b), lower conception rates (LeBlanc et al., 2002; Hansen et al., 2004; Bisinotto et al., 2012), and increased incidences of involuntary culling (Gröhn et al., 1998; Esposito et al., 2014).
Energy partitioning in dairy cows
Feed constituents such as dietary fibre, starch and protein provide substrates for rumen microbial fermentation, which yields gases (the main ones being methane or CH4, carbon dioxide or CO2, and hydrogen or H2), MPS, and VFAs. Rumen VFAs provide energy in dairy cows; the main ones are acetate, butyrate, and propionate. Fat is hydrolysed into fatty acids and hydrogenated in the rumen. Ruminal bypass nutrients and microbial matter can be digested and absorbed in the small intestine, providing additional glucogenic and lipogenic as well as amino acid compounds to the animal (McDonald et al., 2002). These absorbed nutrients are processed in the liver through a succession of Krebs cycle pathway reactions involving oxygen (respiratory chain reactions) to produce the body's energy fuel, adenosine triphosphate (Van Knegsel et al., 2005).
As parturition occurs and dairy cows shift into producing milk, the requirement for nutrients increases because of the onset of lactation and also because of the initial depression of DMI around parturition (Walsh et al., 2011; Evans & Walsh, 2012). Requirements for glucose and metabolizable energy (ME) increase by two- to threefold after the onset of lactation (Drackley et al., 2001). Also, an increase occurs in postpartum plasma GH levels, thus prioritizing high milk synthesis in the mammary gland (Chagas et al., 2007). In the liver, the improvement in plasma GH levels directly stimulates gluconeogenesis and indirectly antagonises the production of insulin, necessary for meeting the glucose demands for milk production (Lucy, 2004). As a result of low plasma glucose and insulin levels, body fat and, to a lesser degree, body protein stored as body reserves are mobilized (Van Knegsel et al., 2005), usually through homeostatic regulation (Roche et al., 2009; Thatcher et al., 2011). This mobilization results in a loss of body condition score and live weight (Jorritsma et al., 2003; Van Straten et al., 2008) as a physiological mechanism to overcome the energy deficit. Non-esterified fatty acids are consequently released from body fat reserves, with increasing NEFA levels in the bloodstream suggesting an EB shortfall (Duffield, 2000; Wathes et al., 2007). The NEFA metabolites are directed into the mammary gland to supply milk triglycerides (Drackley, 2000) or utilized in the liver (Drackley et al., 2001; Vernon, 2002; Schulz et al., 2014). Following their uptake by the liver, NEFA can be utilized in three pathways. First, NEFA can be oxidized to carbon dioxide to supply energy as an alternative energy fuel for other tissues, while most of the glucose is diverted for lactose synthesis in the mammary gland (Vernon, 2002). Second, it can be partially oxidized to produce ketone bodies (acetone, acetoacetate and BHB), which may result in ketosis (Schulz et al., 2014; Esposito et al., 2014). Third, it may be esterified to triglycerides or phospholipids and stored in the liver as TAG, with the possibility of causing fatty liver syndrome (Drackley et al., 2001). This mobilization highlights that the metabolic effects of a NEB status in early lactation induce an imbalance in the ratio of plasma glucogenic and lipogenic compounds derived from feed nutrients and body reserves (Schulz et al., 2014). Hence, the physiological consequences of the postpartum EB deficit cause low plasma glucose and insulin levels associated with high levels of plasma NEFA, BHB, acetone, acetoacetate, and liver TAG (Van Knegsel et al., 2005; Evans & Walsh, 2012). As lactating cows enter a state of NEB, physiologically they direct the limited available nutrients in their system to milk synthesis for the survival of living offspring. This prioritization occurs at the expense of the reproductive axis, thus limiting the ability of the dominant follicle to ovulate, be fertilized and be supported through an entire gestation (Leroy et al., 2008a). From this brief review, it appears that the brain, GIT, body reserves, foetus (before calving), and the udder, as well as the reproductive organs (after calving), are all components in the adaptation to EB status in dairy cows. In addition, the liver obviously plays a key role in coordinating metabolic responses in dairy cows in order to adapt and recover from NEB.
Several studies have indicated that dietary energy sources can be manipulated through inclusion of feedstuffs in the diet to prevent and/or treat NEB-related disorders (Gong et al., 2002; Jorritsma et al., 2003; Van Knegsel et al., 2005; 2007a; 2007b; 2007c; 2007d; Gilmore et al., 2011; Thatcher et al., 2011). McGuire et al. (2004) reported that the improvement in DMI is the critical factor enabling dairy cows to meet the energy needs of the greater amounts of milk produced in early lactation without a more prolonged period of NEB. In addition, increasing levels of glucogenic or lipogenic dietary components in a diet of dairy cows change plasma energy biomarkers to reduce adverse metabolic and infectious disorders and improve milk synthesis and reproductive function. Lipogenic ingredients that stimulate the production of butyrate and acetate in the rumen are expected to increase the ratio of plasma lipogenic/glucogenic compounds (Van Knegsel et al., 2005). In addition, feeding dietary fat results in increased energy partitioning into milk and consequently limits the energy partitioning into body reserves (Van Knegsel et al., 2007a). In contrast, glucogenic nutrients (grains, cassava and potatoes and their by-products) are either fermented in the rumen to stimulate the production of propionate, or bypass the rumen and are absorbed in the small intestine as glucose. Consequently, glucogenic nutrients can increase insulin and glucose levels, thus decreasing the ratio of plasma lipogenic/glucogenic compounds (Van Knegsel et al., 2005). As a result of improved insulin and glucose levels, dietary starch stimulates body fat deposition and energy partitioning into body tissue (Van Knegsel et al., 2007a). When dairy cows are fed a starch- or fat-based diet in excess of their daily nutritional requirements, they regain a positive EB as milk production begins to decline in the final third of the lactation period. At this time, the EB recovery, as evidenced by an increase in plasma insulin and glucose levels, allows the stimulation of the enzyme acetyl-CoA carboxylase in the liver (Drackley, 2000). This hepatic enzymatic activation promotes the restoration of body fat through lipogenesis (Bauman & Currie, 1980) in anticipation of the next lactation (Friggens, 2003).
Effect of energy sources on metabolism of dairy cows
Cereal grains, such as maize, are fed primarily to provide energy to dairy cows, and most of the digestible energy in cereal grains comes from starch (Ali et al., 2012). Levels of starch can range up to 30% on a DM basis of the diet in lactating dairy cows (Akins et al., 2014). Most of the starch is hydrolysed by various routes to pyruvic acid, which is then fermented in the rumen. The ruminal fermentation process increases the production of VFAs and greenhouse gases (CH4, CO2 and H2). The VFAs and greenhouse gases are absorbed through the rumen wall and lost by eructation, respectively. Starch also affects the protein/energy balance and rumen MPS in ruminants (Rearte & Pieroni, 2001; Bargo et al., 2003). The rest of the starch, bypassing rumen fermentation, is digested by pancreatic enzymes and absorbed in the small intestine as glucose (Norberg et al., 2007). In dairy cows, the addition of starch to the diet decreases energy loss through energy sparing from gluconeogenesis and results in a decrease in CH4 production per unit of product through an increase in the efficiency of animal production (McDonald et al., 2002). Furthermore, dietary starch is efficient in alleviating the NEB, suggesting a reduced postpartum risk of metabolic disorders (Van Knegsel et al., 2007c). However, feeding high levels of starch can increase the risk of ruminal acidosis, diminish ruminal fibre digestibility, reduce the ruminal acetate/propionate ratio, and decrease the synthesis of milk fat in the udder (Bargo et al., 2003).
Dietary fat improves the energy density of the diet and increases the synthesis of milk fat in the mammary glands of dairy cows (Schroeder et al., 2004). It is almost entirely hydrolysed into fatty acids and hydrogenated in the rumen and subsequently absorbed from the small intestine (Doreau & Ferlay, 1994). Adding more than 8-9% of fat to the diet may result in milk fat and milk protein depression in the udder owing to its negative effect on DMI and, in particular, on rumen fermentation of fibre (Schroeder et al., 2004). To overcome these complications and to improve the energy intake, interest has increased in feeding ruminally inert fats, such as Ca-salts of long-chain fatty acids, to lactating dairy cows (Schneider et al., 1988). The Ca-salts of long-chain fatty acids are energetically dense and consist of about 51.6% palmitic acid, 5.9% stearic acid, 35.4% oleic acid, and 6.2% linoleic acid (Schneider et al., 1988). In the rumen, these fats are insoluble and inert across ruminal pH variations (Chalupa et al., 1986) and decrease CH4 production per unit of DMI without any decrease in digestibility (Holter & Young, 1992). In the abomasum, the fats are broken down by hydrochloric acid into free fatty acids and Ca-ions. The rumen bypass of these fats consequently increases their absorption from the small intestine, potentially enhancing the supply of polyunsaturated fatty acids to the mammary gland (Purushothaman et al., 2008). Such synthesis of milk with modified fat composition has been associated with a decreased risk of chronic diseases, including heart disease, in humans (Lock & Bauman, 2004).
The inclusion of dietary starch and fat in the diets of dairy cows has been demonstrated to be effective in reducing the extent and duration of the NEB during early lactation (Williams & Stanko, 2000; Van Knegsel et al., 2007c; Garnsworthy et al., 2009). As nutrients are digested and absorbed through the GIT, a number of metabolic and hormonal signals released from the liver, pancreas, muscle and adipose tissues act on brain centres, regulating the DMI, EB and metabolism of dairy cows (Chagas et al., 2007). The signals, which can include glucose, fatty acids, insulin, IGF-I, glucagon, GH, ghrelin, leptin, and perhaps myostatin, trigger their receptors by means of positive and negative endocrine feedback mechanisms to regulate DMI, body growth and reserves, milk synthesis and the reproductive axis (Chagas et al., 2007; Lucy, 2007; Garnsworthy et al., 2008a; Roche et al., 2009; Wathes, 2012; Esposito et al., 2014; Wankhade et al., 2017). At the ovarian level, the reproductive axis is regulated by the hormones of the hypothalamus (gonadotropin-releasing hormone (GnRH)), anterior pituitary (follicle-stimulating hormone (FSH) and LH), ovaries (P4, oestradiol and inhibins), and the uterus (prostaglandin-F2α (PGF2α)) through a system of positive and negative feedback signals governing the oestrous cycle in dairy cows (Forde et al., 2011). Ovarian follicular growth and development are characterized by consecutive follicular waves, that is, three in dairy cows and two in heifers per oestrous cycle. Each wave begins with the recruitment of a cohort of follicles from the fixed number of primordial follicles established during foetal development and finishes with the selection of a dominant follicle (Webb et al., 2004). While the other recruited follicles undergo atresia, the dominant follicle continues to grow and mature to the preovulatory stage and eventually ovulates. When cows are in a NEB condition, several changes occur. First, NEFA and BHB are released from body reserves and used as an alternative energy fuel for other tissues (Vernon, 2002; Esposito et al., 2014; Schulz et al., 2014). Second, the somatotropic axis (consisting of GH, the GH receptor and IGF-I) becomes uncoupled in the liver (Thatcher et al., 2010). Third, less ghrelin is released from the abomasum and more GH from the anterior pituitary gland (Chagas et al., 2007). Furthermore, less insulin, IGF-I and leptin are released from the pancreas, liver and adipose tissue, respectively (Leroy et al., 2008b). Lastly, these altered endocrine signals further attenuate the LH pulse frequency and decrease the production of GnRH (Butler, 2003), and therefore suppress the reproductive axis altogether (Chagas et al., 2007). Such metabolic and hormonal depressions, as dictated by the degree and duration of the NEB, influence the ovarian function negatively in terms of the number of follicles, the rate of follicular growth and development, the size of the ovulatory follicle, and the quality and viability of the oocyte (Lucy et al., 1991; Boland et al., 2001; Butler, 2003; Diskin et al., 2003; Lucy, 2003; Webb et al., 2004; Garnsworthy et al., 2008a). In contrast, improvements in these feedback-regulated metabolites (e.g. glucose, amino acids, fatty acids) and hormones (e.g. insulin, IGF-I and leptin) regulate the hypothalamic-pituitary-ovarian-uterine axis positively to enhance the fertility outcomes of dairy cows (Leroy et al., 2008a).
Feeding diets that are designed to increase insulin levels during early lactation may increase the proportion of cows ovulating before 50 days post partum (Gong et al., 2002; Van Knegsel et al., 2005). Adding dietary starch to the diets of dairy cows can improve insulin and glucose levels (Lammoglia et al., 1997) and reduce NEFA and BHB levels during the NEB period (Van Knegsel et al., 2007b), eventually promoting the resumption of the oestrous cycle (Garnsworthy et al., 2008b). However, high starch diets may suppress the appetite and thus DMI by inducing satiety and shorter meals (Thatcher et al., 2011). Furthermore, excessive insulin and IGF-I levels from high starch diets may overstimulate the ovary and negatively affect the developmental competence of oocytes (Leroy et al., 2008c). This overstimulation results in the production of inferior oocytes owing to uncoupled transcriptional factors (i.e. maternal messenger RNA and protein molecules) in the dominant follicle that are needed to acquire full competence before ovulation (Armstrong et al., 2001). Poor transcription of these factors significantly reduces the quality and viability of the oocyte and, after fertilization, decreases the survival of the embryo prior to embryonic genome activation, which occurs at the 8-16 cell stage (Leroy et al., 2008b). In contrast, the inclusion of dietary fat in a diet of dairy cows enhances the dietary energy density, stimulating milk synthesis, and yields higher NEFA and BHB levels associated with lower glucose and insulin levels (McNamara et al., 2003; Van Knegsel et al., 2005; 2007b; Moallem et al., 2007). Furthermore, feeding dietary fat increases the number and size of follicles and the oestradiol production of the preovulatory follicle (Lucy et al., 1991; Beam & Butler, 1997; Moallem et al., 2007), most likely via the induction of high cholesterol and IGF-I levels in follicular fluid and plasma (Van Knegsel et al., 2007a; Esposito et al., 2012). Vasconcelos et al. (2001) reported that an increased follicle size can have advantageous effects on both oocyte quality and corpus luteum function. The resulting high plasma cholesterol concentration improves PGF2α and P4 secretion (Staples & Thatcher, 2005; Leroy et al., 2014), thus supporting embryo development and pregnancy survival (Ryan et al., 1992; Lammoglia et al., 1996; McNamara et al., 2003).
Obviously, manipulating the levels and types of energy feedstuffs containing dietary starch and fat can be a key tool in decreasing energy metabolic loss and optimizing the EB status of dairy cattle, while enhancing metabolic efficiency. This indicates that feeding starch- and fat-based diets to dairy cows can increase productivity and thus reduce CH4 emissions per unit of production. However, a number of hormonal and metabolic signals are involved in successful reproduction of heifers and lactating cows, making the physiological pathways, with their many inter-related factors, complex (Chagas et al., 2007; Garnsworthy et al., 2008a).
Effect of energy sources on milk yield and milk composition
Increasing fat- and starch-based ingredients in the daily diet raised the milk production of dairy cows (Van Knegsel et al., 2005; Reis et al., 2012; Higgs et al., 2013; Roche et al., 2013). The improved milk production can be attributed to the amount of energy consumed, with both starch and fat ingredients increasing the ME intake (Bargo et al., 2003; Hills et al., 2015). Such an enhancement in ME intake was reported to affect lactation persistence positively (Hermansen, 1990; Reis et al., 2012). Supporting this response, previous studies reported enhanced milk production as a result of increased energy intake (Erickson et al., 1992; Chouinard et al., 1997; Moallem et al., 2000). However, other studies reported no effect on milk yield when feeding enriched starch- or fat-based diets or their combinations (Garnsworthy et al., 2008b; Gilmore et al., 2011; Little et al., 2016). These researchers suggested that the lack of a significant effect on milk production could be attributed to the use of isocaloric diets in the studies.
Milk lactose percentage of dairy cows increased with the inclusion of dietary starch, but decreased with the addition of dietary fat (Van Knegsel et al., 2007c). However, other studies reported no effect on milk lactose percentage when starch or fat was added to the diets of dairy cows (Van Knegsel et al., 2007a; Garnsworthy et al., 2008b). The reason for these differences may be related to a limited capacity of the mammary gland to absorb increased glucose from the blood, or to low plasma glucose available for lactose synthesis during early lactation (Piccioli-Cappelli et al., 2014). Milk protein percentage of dairy cows decreased with lipogenic diets (Erickson et al., 1992; Harrison et al., 1995; Chouinard et al., 1997). This inverse effect may be explained by the limitation in rumen microbial synthesis and gluconeogenesis with fats, leading to poor protein synthesis in the udder (Palmquist, 1988). However, glucogenic diets increased the milk protein percentage of dairy cows (Voigt et al., 2003), which may be attributed to greater plasma insulin levels (McGuire et al., 1995; Van Knegsel et al., 2007b), an enhanced MPS in the rumen (Carmo et al., 2015) and greater mammary protein synthesis (Hills et al., 2015).
Milk fat percentage was usually enhanced after feeding lipogenic diets, but decreased when feeding glucogenic diets to dairy cows (Van Knegsel et al., 2007a;2007b;Garnsworthy et al., 2008b;Reis et al., 2012). However, overfeeding dietary starch or fat to lactating cows could lead to a depression in milk fat yield. Van Knegsel et al. (2007b) reported that an increase in insulin levels, induced by increased propionate from rumen digestion of starch, can promote glucogenesis over lipogenesis owing to low availability of fat precursors, to subsequently reduce the fat synthesis in the udder and milk energy output. Another report argued that the depression in milk fat content is possibly caused by an accumulation of trans fatty acids in the rumen because of the low pH with high starch diets (Kalscheur et al., 1997). Bauman & Griinari (2001) found that the decrease in milk fat content when overfeeding fat is generally attributed to altered rumen function, fat biohydrogenation and ruminal formation of trans-10 C18:1 fatty acids. Gama et al. (2008) pointed out that an increased supply of trans-10 cis-12 conjugated linoleic acid over other fatty acids to the udder was responsible for milk fat depression in dairy cows. This fatty acid has been recognized as a possible inhibitor of milk fat synthesis, decreasing the activity of lipogenic enzymes in the mammary gland (Baumgard et al., 2002).
Effect of energy sources on reproductive efficiency of dairy cows
Successful reproduction in dairy heifers and cows is the consequence of a chain of events, which consists of the establishment of oestrus in heifers and resumption of postpartum oestrous function in cows, the development and ovulation of a viable oocyte, conception, embryo development, implantation in the uterus, maintenance of pregnancy, and eventually calving (Garnsworthy et al., 2008a). A disturbance at any of these steps results in the failure of a successful conception and embryonic/pregnancy survival (Leroy et al., 2008a). Because of this, the fertility of dairy cows is defined as a multi-factorial trait (Butler, 2003). The general decline in fertility has been attributed to a network of genetic, environmental, and managerial factors and their interactions, making it difficult to determine the exact reason for the deterioration in cow fertility (Walsh et al., 2011). So, for example, the decline in the fertility of dairy cows has been traced to a reduced ability of the uterus to recover after calving, longer anovulatory periods and behavioural anoestrus, poor oestrous signs, irregular oestrous cyclicity, poor oocyte quality, poor fertilization, abnormal embryonic implantation and foetus development, uterine/placental incompetence, and pregnancy loss (Mwaanga & Janowski, 2000; Lucy, 2007; Wathes et al., 2007; Leroy et al., 2008c; Evans & Walsh, 2012).
Endocrine status, the interval from calving to first oestrus, conception rate, and pregnancy maintenance are all altered when reduced DMI and longer periods of NEB occur in cows (Mwaanga & Janowski, 2000). Increasing the amount of dietary starch and fat reduced the interval from calving to first ovulation and therefore initiated earlier postpartum cyclicity in cattle (Lammoglia et al., 1996; Gong et al., 2002; Santos et al., 2008; Burke et al., 2010). The early resumption of oestrous activity can be attributed to the improved EB status, as the somatotropic axis synergises with the gonadotropins on ovarian cells, allowing the dominant follicle to ovulate and oestrous cycles to resume. However, other studies reported no or negative effects of the level of energy intake on the number of days from calving to first oestrus (Beam & Butler, 1997; 1998; Garcia-Bojalil et al., 1998; Oldick et al., 1997; Garnsworthy et al., 2009). Gong et al. (2002) reported increased conception rates following first insemination when feeding dietary starch. In contrast, other investigations found no or negative effects of the level of energy intake on the conception rate following first insemination (McNamara et al., 2003; Garnsworthy et al., 2009; Gilmore et al., 2011). Furthermore, some studies found improved pregnancy rates when feeding dietary starch or fat to dairy cows (Burke et al., 2010; Reis et al., 2012), while others reported no or negative effects (McNamara et al., 2003; Dyck et al., 2011; Gilmore et al., 2011). Nevertheless, marked improvements in conception rates were observed when a diet that increased glucose and insulin levels was fed in the early postpartum period, followed by a switch to a diet that reduced insulin levels during the mating period, compared with other treatments (Garnsworthy et al., 2009). Furthermore, pregnancy rates to first and second services were enhanced when grass silage was supplemented with concentrate fed to cows individually, based on the milk yield of the previous week, compared with a mixed diet containing grass silage and concentrate in a 50/50 ratio on a DM basis (Little et al., 2016). In contrast, Gilmore et al. (2011) found no improvement in pregnancy rates when a glucogenic diet was fed in early lactation to encourage the resumption of oestrus, followed by a lipogenic diet to promote embryonic development, compared with other treatments. These researchers suggested that the lack of significance was due to the small number of animals used in the study.
Several causes could contribute to the inconsistent effects of dietary starch and fat on the reproductive performance of dairy cattle in previous studies. First, the levels and types of dietary fat (chain length and degree of saturation of long-chain fatty acids) and starch (rate of fermentation in the rumen and proportion of rumen-bypass starch) directly affect the profile of nutrients absorbed through the GIT and indirectly act on the EB status, both of which probably influence the ability to conceive and remain pregnant (Van Knegsel et al., 2007a; Leroy et al., 2008c; Roche et al., 2011). Second, it is critical to distinguish between non-isocaloric and isocaloric diets in such studies, since the energy density, defined by the nutrient content (starch versus fat), has been described as having significant effects on reproductive efficiency (Van Knegsel et al., 2005). Another source of variation could be differences in the numbers of animals, protocols and interpretations of experimental results among studies (McNamara et al., 2003; Gilmore et al., 2011).
Usually, feeding dietary starch, which promotes glucose and insulin levels (Garnsworthy et al., 2008b), favours early resumption of the first postpartum ovulation (Gong et al., 2002) while decreasing the quality of oocytes (Armstrong et al., 2001) and the conception rate (Leroy et al., 2008b). Plasma NEFA and BHB levels are increased and insulin levels are decreased with dietary fat inclusion (Leroy et al., 2008c), resulting in a longer anoestrous period (Giuliodori et al., 2011). However, dietary fat improves the quality of oocytes and the corpus luteum (Beam & Butler, 1997; Vasconcelos et al., 2001), while increasing P4 levels to enhance pregnancy success. These results support the possible existence of nutritional signals associated with dietary energy levels and sources that, dependently or independently of EB, influence the reproductive axis through signals to the hypothalamus, pituitary, ovary, oviduct and uterus (Wathes et al., 2007). These observations suggest that the nutrient requirements for early resumption of ovarian cycles, follicle development and embryo development may be quite different in dairy cows, reflecting a potential advantage of diet alteration to ensure successful reproduction. Such an alteration consists of feeding a glucogenic diet in early lactation to raise insulin levels for the resumption of oestrous activity, followed by a lipogenic diet before the breeding period to raise cholesterol levels for oocyte quality and conceptus development; feeding insulinogenic and lipogenic diets at these different stages of the reproductive cycle has improved reproductive performance (Garnsworthy et al., 2009). Despite all the progress made in this field, the physiological pathways linking EB indicators, hormonal and metabolic signals and their receptors, and pregnancy success remain, to a certain extent, unclear (Chagas et al., 2007). Additionally, feeding trials investigating the interactions of energy levels and sources, and their combinations, from calving to mid or late lactation on reproductive performance are limited, making it difficult to draw final conclusions.
Conclusion
Inclusion levels and types of dietary energy sources, such as starch and fat, affect the plasma metabolite profiles, milk production and fertility of dairy cows. Nutritional management before and after calving must facilitate successful metabolic adaptations in the liver and rapid increases in postpartum DMI, which are indispensable for improved milk production and efficient reproductive performance. This review demonstrated that there are definite physiological and metabolic links between the amounts and types of dietary energy nutrients absorbed through the GIT of dairy cows and their biological responses, such as milk secretion and reproductive outcomes. In particular, the relationships between metabolic (e.g. glucose, amino acids, fatty acids) and endocrine (e.g. GH, insulin, IGF-I and leptin) signals and the reproductive system vary according to the stage of the reproductive cycle. This suggests that the pregnancy rate could be optimized without compromising milk production with a two-diet strategy, consisting of a glucogenic diet until the resumption of the oestrous cycle and a lipogenic diet from the breeding period onwards. However, fertility from the establishment of oestrus in heifers, or the resumption of oestrus in postpartum cows, to the next calving is not only complex and multifactorial, but is in decline worldwide. In addition, bovine data on the pre- and postpartum effects of energy sources and levels, and their combinations, on milk production and reproduction under long-term field conditions are limited. This is an area of research that requires detailed investigation. | 2018-12-19T19:16:15.353Z | 2018-01-30T00:00:00.000 | {
"year": 2018,
"sha1": "e7e779dbb1a1a4d89d8a69e589b34e7bacbc81be",
"oa_license": null,
"oa_url": "https://www.ajol.info/index.php/sajas/article/download/166214/155647",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "cf464b18757043dcdbdd37d7e0de47eab237ad56",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
201758805 | pes2o/s2orc | v3-fos-license | Effects of antibacterial peptides on rumen fermentation function and rumen microorganisms in goats
Although many studies have confirmed that antimicrobial peptides (AMPs: PBD-mI and LUC-n) can be used as feed additives, there are few reports of their use in ruminants. The present study aimed to investigate the impact of AMPs on rumen fermentation function and rumen microorganisms in goats. Eighteen 4-month-old Chuanzhong black goats were used in a 60-day experiment (6 goats per group). Group I served as the control and was fed a basal diet; group II was fed the basal diet supplemented with 2 g of AMPs per goat per day; and group III was fed the basal diet supplemented with 3 g of AMPs per goat per day. Rumen fluid samples were collected at 0, 20 and 60 days. Bacterial 16S rRNA genes and ciliate protozoal 18S rRNA genes were amplified by PCR from DNA extracted from the rumen samples, and the amplicons were sequenced by Illumina MiSeq. Rumen fermentation parameters and digestive enzyme activities were also examined. Our results showed that dietary supplementation with AMPs increased the levels of the bacterial genera Fibrobacter, Anaerovibrio and Succiniclasticum and the ciliate genus Ophryoscolex, but reduced the levels of the bacterial genera Selenomonas, Succinivibrio and Treponema and the ciliate genera Polyplastron, Entodinium, Enoploplastron and Isotricha. Supplementation with AMPs increased the activities of xylanase, pectinase and lipase in the rumen, and also increased the concentrations of acetic acid, propionic acid and total volatile fatty acids. These changes were associated with improved growth performance in the goats. The results revealed that goats fed AMPs showed improved rumen microbiota structure, altered ruminal fermentation and improved efficiency of feed utilization, thereby indicating that AMPs can improve growth performance. AMPs are therefore suitable as feed additives for juvenile goats.
Introduction
The microbiota colonizing the rumen is an essential component of the ruminant gastrointestinal tract (GIT) [1]. The microbial community in the rumen consists of bacteria (10^10-10^11 cells/mL), methanogenic archaea (10^7-10^9 cells/mL), ciliate protozoa (10^4-10^6 cells/mL), anaerobic fungi (10^3-10^6 cells/mL) and bacteriophages (10^9-10^10 particles/mL) [2]. A major function of the rumen microbiome is the fermentation of plant materials ingested by ruminant animals [3][4][5]. Rumen modulation is one of the most important methods for improving feed efficiency, ruminant health and performance in ruminant livestock production. Several antibiotic compounds, such as monensin, hainanmycin and virginiamycin, have been used to improve ruminal fermentation and the efficiency of nutrient utilization [6][7][8]. However, the overuse of antibiotics has raised concerns about product safety and environmental health, and the use of antibiotics as additives in animal feed is banned in the European Union (European Union, 2003).
Antimicrobial peptides (AMPs) are widespread in all living cells. They are endowed with antimicrobial [9], antifungal [10], antiviral [11], anti-parasitic [12] and antitumor activities [13]. Furthermore, the immunoregulatory and antioxidant activities induced by AMPs have been shown to be mediated by the cationic charge, amphipathicity, amino acid composition and structure of these peptides [14]. AMPs act against target organisms through membrane depolarization, micelle formation or diffusion onto intracellular targets [15][16][17][18]. Until now, few studies have reported on the use of AMPs as alternatives to feed antibiotics and growth promoters in ruminant nutrition. Nonetheless, AMPs have been associated with improved performance, nutrient retention and intestinal morphology, and with a reduced incidence of diarrhea, in weanling piglets [19][20][21][22]. Peng et al. [23] demonstrated that dietary supplementation with crude recombinant porcine β-defensin 2 (rpBD2) has beneficial effects on the growth and intestinal morphology of weaned piglets, reducing the incidence of post-weaning diarrhea and the number of potential pathogens in the caecum. Therefore, AMPs are likely to serve as potential alternatives to antibiotics in livestock production [19]. Previous studies in our laboratory showed that adding AMPs (recombinant swine defensin and a fly antibacterial peptide mixed at a 1:1 ratio) to feed could improve the growth performance and immunity of weaned piglets [14,24]. Based on our previous findings and the reported bactericidal effects of AMPs, we hypothesized that dietary AMP supplementation could affect the rumen microbiota and therefore ruminal fermentation. In the present study, we investigated the effects of AMPs on rumen fermentative function and rumen microbial community structure in Chuanzhong black goats.
Materials
The antimicrobial peptides (AMPs) used in the present study were provided by Rota BioEngineering Co., Ltd. (Sichuan, China). The AMPs comprised, at a blending ratio of 1:1 [14], the recombinant swine defensin PBD-mI (DHYICAKKGGTCNFSPCPLFNRIEGTCYSGKAKCCIR) and the fly antibacterial peptide LUC-n (ATCDLLSGTGVKHSACAAHCLLRGNRGGYCNGRAICVCRN). For PBD-mI, the net charge was calculated using Protein Calculator v3.4 (estimated charge at pH 7.00 = 4.0), and the peptide, with a molecular mass of about 5.4 kDa, was obtained from a codon-optimized construct corresponding to the mature defensin cDNA, expressed and purified in Pichia pastoris yeast [25]. For LUC-n, the net charge was likewise calculated using Protein Calculator v3.4 (estimated charge at pH 7.00 = 4.2); this peptide, with a molecular mass of approximately 21.18 kDa, was obtained from complementary DNA (cDNA) libraries constructed from micro-dissected salivary glands in whole maggots, which underwent transposon-assisted signal trapping, a technique selected for the identification of secreted proteins [26]. The purity of both components, determined by RP-HPLC and SDS-PAGE (provided by Rota BioEngineering Co., Ltd.), was estimated to be over 93%. Each preparation was stored in a dry, ventilated and lightproof place.
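For readers who wish to reproduce this kind of charge estimate, the sketch below applies the Henderson-Hasselbalch equation to the two sequences quoted above. The pKa set is one common choice and is an assumption on our part (Protein Calculator v3.4 may use slightly different values), and all cysteines are treated as free, ignoring the disulfide bonds typical of defensins.

# Minimal sketch of a peptide net-charge estimate at a given pH.
# pKa values are an assumed, commonly used set, not necessarily those
# of Protein Calculator v3.4.
POSITIVE_PKA = {"Nterm": 8.6, "K": 10.8, "R": 12.5, "H": 6.5}
NEGATIVE_PKA = {"Cterm": 3.6, "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}

def net_charge(seq, ph=7.0):
    """Estimate peptide net charge at the given pH (Henderson-Hasselbalch)."""
    # Termini: one protonatable amine and one carboxylate per chain.
    charge = 1.0 / (1.0 + 10 ** (ph - POSITIVE_PKA["Nterm"]))
    charge -= 1.0 / (1.0 + 10 ** (NEGATIVE_PKA["Cterm"] - ph))
    for aa in seq:
        if aa in POSITIVE_PKA:
            charge += 1.0 / (1.0 + 10 ** (ph - POSITIVE_PKA[aa]))
        elif aa in NEGATIVE_PKA:
            charge -= 1.0 / (1.0 + 10 ** (NEGATIVE_PKA[aa] - ph))
    return charge

pbd_mi = "DHYICAKKGGTCNFSPCPLFNRIEGTCYSGKAKCCIR"    # sequence from the text
luc_n = "ATCDLLSGTGVKHSACAAHCLLRGNRGGYCNGRAICVCRN"  # sequence from the text
print(f"PBD-mI: {net_charge(pbd_mi):+.2f}  LUC-n: {net_charge(luc_n):+.2f}")

With this pKa set the printed charges come out near +4.0 for PBD-mI and +4.2 for LUC-n, in line with the values quoted above; other pKa tables would shift the estimates by a few tenths of a unit.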
Animal handling
The experimental procedures performed on the goats and the care of the animals were approved by the Sichuan Agricultural University Animal Care and Use Committee, Sichuan Agricultural University, Sichuan, China, under permit no. DKY-B20100805. The young goats used in this study were healthy.
The AMPs used in this study were added to the basal diets, using a portion of the basal diet as a carrier. Based on the previous study by our research group (1 g/kg of AMPs) [14], we intended to examine the effects of larger amounts of AMPs on the rumen. The AMPs were mixed with the carrier (basal diet) such that additions of 2 and 3 g/kg of the AMP-carrier mixture equated to 30 and 45 mg/kg of dietary AMPs, respectively.
Eighteen uncastrated 4-month-old Chuanzhong black goats (Capra hircus; average weight 15.52 ± 0.35 kg) were acclimated for 7 days before the experiment and received the basal diet (NRC, 2007) only, to ensure that the daily diet was fully consumed. All goats were caged, randomly organized into three groups, and maintained at 25 ± 2 ˚C with a 10 h light-14 h dark cycle. Group I was the control group; group II received the basal diet + 2 g of AMPs per head per day; and group III received the basal diet + 3 g of AMPs per head per day. The diet included concentrate (300 g per head per day) (Table 1) and forage (fresh grass, Zoysia japonica, 300 g per head per day), the forage being offered after the concentrate was finished. Refused forage was collected and weighed every second morning (at 8:00) to record the intake of forage per group per day. Animals were housed with free access to water and fed individually twice daily (at 09:00 and 18:00); the animals maintained their normal herd behavior. All of the goats fed diets containing AMPs consumed the complete daily concentrate diet under our daily supervision.
Sampling and DNA extraction
Rumen fluid samples were collected using a stomach tube on days 0, 20 and 60, prior to morning feeding; the first part of the rumen fluid was discarded to prevent interference from saliva. Three goats were selected from each treatment group for sampling (50 mL/goat). The rumen pH was measured immediately after collection using a portable pH meter [27] (PHB-4, Shanghai Leica Scientific Instrument Co., Ltd., Shanghai, China). Solid feed particles were removed from the rumen fluid by filtration through four layers of cheesecloth. Then, 5 mL of the ruminal fluid was mixed with 1 mL of 25% (w/v) meta-phosphoric acid and the mixture was stored at −80˚C for later analysis, including the analysis of volatile fatty acids (VFAs). Microbial genomic DNA was extracted from the rumen samples using a stool DNA kit (OMEGA Bio-Tek, Norcross, GA, USA), in accordance with the manufacturer's instructions [5].
Ruminal fermentation function analysis
The frozen samples were thawed at 4˚C and centrifuged at 3000×g for 10 min. The supernatant was mixed with an equal volume of 20 mM 4-methyl-n-valeric acid as an internal standard in preparation for total VFA (T-VFA) analysis by chromatography, according to Luo et al. [28]. The concentration of NH3-N was analyzed using visible-light spectrophotometry (BioMate 3s, Thermo Scientific); NH4Cl standards were prepared according to Broderick and Kang [29]. The microbial protein (MCP) in the rumen was analyzed by means of TCA protein precipitation [30]. The activities of carboxymethyl cellulase (CMCase), xylanase, pectinase and β-glucosidase were measured using the corresponding commercially available ELISA kits (R&D Systems). Protease activity was measured as follows: a reaction mixture containing 1 mL of casein and 4 mL of protease enzyme was incubated for 4 h at 38˚C. The reaction was then stopped by the addition of 10% trichloroacetic acid and the sample was centrifuged at 3500×g for 15 min. Next, 1 mL of supernatant was removed, mixed with 5 mL of 0.4 mol/L Na2CO3 and 1 mL of Folin-Ciocalteu's phenol solution, and incubated on the laboratory bench for 15 min. The hydrolyzed protein was measured using visible-light spectrophotometry at 680 nm. The concentration and activity of lipase and amylase were measured using commercially available reagent kits (NanJing JianCheng Bioengineering Institute, Nanjing, China).
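The spectrophotometric steps above all convert an absorbance reading into a concentration via a standard curve. The sketch below shows that generic step with hypothetical standards (the values are not data from this study), assuming the usual linear response over the working range.

# Generic absorbance-to-concentration step via a linear standard curve.
import numpy as np

# Hypothetical standards: concentration (mg/mL) vs. absorbance at 680 nm
std_conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
std_abs = np.array([0.02, 0.15, 0.29, 0.42, 0.55])

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # fit A = m*c + b

def conc_from_abs(absorbance):
    """Interpolate a sample concentration from its absorbance reading."""
    return (absorbance - intercept) / slope

print(f"A680 = 0.33 -> {conc_from_abs(0.33):.2f} mg/mL")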
Data analysis
Sequences obtained in this study were deposited in the NCBI Sequence Read Archive under BioProject numbers PRJNA398687, PRJNA398591 and PRJNA398697. Sequence reads were processed and analyzed with the QIIME pipeline software (version 1.8.0) [31]. Chimeric sequences were removed (UCHIME, through the QIIME software) to generate high-quality sequences. Using the UCLUST sequence alignment tool within the QIIME pipeline, the high-quality sequences were clustered into operational taxonomic units (OTUs) at 97% sequence similarity. The most abundant sequences were compared with template regions in the Greengenes database (Release 13.8, http://greengenes.secondgenome.com/) (bacteria) and the NCBI database (http://www.ncbi.nlm.nih.gov) (ciliate protozoa) to acquire taxonomic and species composition information for each OTU. Alpha diversity indexes (including the Simpson index and Shannon index) were obtained using the QIIME pipeline software. R software was used to analyze microbial population structures. The results of these analyses are expressed as means and standard errors of the means (SEM). Statistical comparisons were made by one-way analysis of variance (ANOVA) using a statistical software package (SPSS 19.0, IBM Corporation, Armonk, NY, USA). Differences among treatments were regarded as significant at P < 0.05.
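For readers unfamiliar with the two alpha-diversity indices named above, the sketch below computes them from a single sample's OTU counts. The counts are hypothetical, not data from this study, and note that QIIME may report Shannon in log base 2 rather than the natural-log form used here.

# Minimal sketch of the Shannon and Simpson alpha-diversity indices.
import math

def shannon(counts):
    """Shannon index H = -sum(p_i * ln(p_i)) over nonzero OTU proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p_i^2); values closer to 1 mean more diverse."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

otu_counts = [520, 130, 88, 40, 12, 5]  # hypothetical row of an OTU table
print(f"Shannon = {shannon(otu_counts):.3f}, Simpson = {simpson(otu_counts):.3f}")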
Growth performance
The growth performance for all groups of juvenile goats tested is listed in Table 2. Throughout the experimental period, the body weights were higher in the AMP-treated groups than in the control group. The average daily gain (g) was significantly higher (P < 0.05) in group II than in group III or the control group. No significant difference in average daily feed intake of forage was found between the AMP-treated groups and the control (P > 0.05).
Ruminal fermentation parameters.
The mean ruminal pH of samples from AMP-treated goats ranged from 6.81 to 6.92, which is within the normal physiological range. No significant difference in ruminal pH between the AMP-treated groups and the control group was observed (P > 0.05) (Table 3).
Enzyme activity.
Xylanase, pectinase and lipase activities were higher in the AMP-supplemented goats than in the control group (Table 4; P < 0.05). No differences in CMCase or protease activities were observed between AMP-treated goats and the control group (P > 0.05). The activities of β-glucosidase and amylase in group II were not significantly different from those in the controls (P > 0.05); however, the activities of these enzymes were significantly lower in group III than in group II or the control (P < 0.05).

Bacterial community structure.

The relative abundances of some bacterial phyla, including Proteobacteria, decreased (Table 5) and were lower in the AMP-supplemented goats than in the control group (P < 0.05). By contrast, the bacterial phyla Firmicutes, Verrucomicrobia, Tenericutes, Cyanobacteria and Fibrobacteres increased in the AMP-supplemented groups compared with the control (Table 5; P < 0.05), although Firmicutes and Verrucomicrobia increased significantly only at 60 days, and in group III only Tenericutes did not differ significantly from the controls (P > 0.05). No differences in the proportion of Bacteroides were observed between the AMP-treated goats and the control group (P > 0.05).
At the genus level, Prevotella dominated the assignable sequences, on average accounting for 29.21% of the total bacteria. The next most common genera were Butyrivibrio (6.38%), [Paraprevotellaceae]CF231 (5.82%), Fibrobacter (3.96%), Succinivibrio (3.04%) and Anaerovibrio (2.63%). The bacterial genera Fibrobacter and Succiniclasticum were more highly represented in AMP-supplemented goats than in the control group (Table 6; P < 0.05). The genera [Paraprevotellaceae]CF231, Succinivibrio, Selenomonas and Treponema were decreased in the AMP-supplemented goats compared with the control group (Table 6; P < 0.05); however, [Paraprevotellaceae]CF231 decreased significantly only at 60 days. No differences in the proportions of Prevotella, Butyrivibrio or Anaerovibrio were evident between AMP-treated goats and the control group (P > 0.05). The Simpson and Shannon diversity indexes showed no significant differences between AMP-treated goats and the control group (Tables 7 and 8).
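The genus-level percentages quoted in this section are relative abundances, i.e., each genus's reads divided by the sample total; a minimal sketch with hypothetical counts (not data from this study) follows.

# Relative abundance: per-genus read count over the sample's total reads.
genus_reads = {"Prevotella": 5842, "Butyrivibrio": 1276,
               "Fibrobacter": 792, "Succinivibrio": 608, "other": 11482}

total = sum(genus_reads.values())
for genus, reads in genus_reads.items():
    print(f"{genus:>13}: {100.0 * reads / total:.2f}%")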
Ciliate community structure.
A total of 325,008 ciliate reads were retained after filtering to exclude low-quality reads, an average of 18,056 reads per rumen sample. Although all animal groups were fed the same diet, a high level of variation was observed between individuals in ciliate community composition at the genus level. The only common characteristic was that Polyplastron and Ophryoscolex were the dominant ciliates in every sample (Table 9). The ciliate genera Polyplastron, Entodinium, Enoploplastron and Isotricha were decreased in the AMP-supplemented goats compared with the control group (Table 9; P < 0.05), although in group III only Isotricha did not differ significantly from the control (P > 0.05). No differences in Diploplastron or Dasytricha were observed between AMP-treated goats and the control group (P > 0.05).
Discussion
Recently, a large body of research has focused on developing alternatives to antibiotic feed additives. Among these alternatives, AMPs have gained increasing attention because of their broad-spectrum activity, speed of action and low propensity for the development of bacterial resistance [20,[32][33][34]. In general, the development of AMPs into feed additives has been hampered by their potential for toxic side effects, suboptimal efficacy and, most notably, the lack of cost-effective production systems. The present study demonstrates the effects of AMPs on different rumen bacteria and ciliates in juvenile goats, which can provide a theoretical basis for their future use as alternatives to antibiotics. In this study, we report that dietary supplementation with AMPs improved the growth of juvenile goats. This is consistent with the finding of Yoon et al. [35], who observed an improvement in average daily gain and feed use efficiency in weanling pigs fed diets supplemented with AMP-A3. Similarly, Jin et al. [33][34] observed an improvement in the average daily gain of weanling pigs fed diets supplemented with AMPs from Solanum tuberosum. Moreover, 2 g/head/day of AMPs improved growth performance more effectively than the higher dose (3 g/head/day), although the reason for this remains unclear.
Microbial community composition in ruminants has previously been linked with animal production traits [36]. In the present experiment, we found that Bacteroidetes, Firmicutes and Proteobacteria were the main phyla in all samples. At the genus level, Prevotella was the most abundant genus detected, followed by Butyrivibrio, [Paraprevotellaceae]CF231, Fibrobacter, Succinivibrio and Anaerovibrio. Many of these genera include organisms that are important cellulose and hemicellulose degraders, indicating that the rumen bacterial community may be highly oriented towards fiber degradation. This community structure is similar to the inferred rumen bacterial community structure of sheep [37]. We also found that Polyplastron and Ophryoscolex were the dominant ciliate genera in all samples. The protozoal community composition was similar to the A type (dominated by Polyplastron, Ostracodinium, Dasytricha and Entodinium) [38]. However, other studies have identified Entodinium as the most dominant protozoal genus in ruminants [39][40][41][42]. This discrepancy between studies may be due to differences in diets. In this study, forage grass was the main fodder supplied and, as a result, the proportions of Polyplastron and Ophryoscolex were greater than those of Entodinium. Dehority and Odenyo [43] reported that the levels of Entodinium were considerably higher in animals fed concentrates and intermediate mixed feeds than in those eating roughage. In addition, high-throughput sequencing might not reflect the true composition of rumen ciliates. Kittelmann et al. [44] reported that smaller-celled genera, such as Entodinium, Charonina and Diplodinium, tended to be underrepresented, while larger-celled genera, such as Metadinium, Epidinium, Eudiplodinium, Ostracodinium and Polyplastron, tended to be overrepresented with the pyrosequencing approach, indicating that this may not be an appropriate methodology in this case. In goats, growth is accompanied by a decrease in the number of OTUs, which indicates a decline in the diversity of rumen bacteria to some degree. On day 60, the number of OTUs in all AMP-treated groups was higher than in the control group, despite a decrease in the abundance of Proteobacteria in the two AMP-supplemented groups, which may be explained by the selective effects of AMPs on different bacteria. AMPs provide beneficial effects in host animals by improving their intestinal balance and optimizing the gut microecological conditions, suppressing harmful microorganisms, such as Clostridium spp. and coliforms, and favoring beneficial microorganisms, such as Lactobacillus and Bifidobacterium [20,[45][46][47]. A number of recent studies have suggested that dietary supplementation with an AMP, such as the lactoferricin-lactoferrampin fusion peptide, potato protein, antimicrobial peptide P5 or cecropin AD, reduced the total number of aerobes while simultaneously enhancing the total number of anaerobes and beneficial lactobacilli, thus improving growth performance in weaning pigs [19][20][47][48]. In this study, we report significantly fewer Proteobacteria and significantly more Fibrobacteres in the AMP-supplemented groups. This finding may be explained by Fibrobacteres comprising anaerobic bacteria [49], whereas Proteobacteria comprises aerobic bacteria. Specifically, Proteobacteria includes a number of genera with pathogenic strains [50], and the antibacterial peptides may therefore have inhibited the pathogenic bacteria while enhancing the total number of anaerobes [20].
Dietary supplementation with AMPs increased some bacterial genera and a ciliate genus, while reducing other bacterial and ciliate genera. Of these, Fibrobacter [51][52], Treponema [53], Ophryoscolex [54], Enoploplastron [55] and Polyplastron [38] are cellulose-degrading microbes, and Succiniclasticum [56], Entodinium and Isotricha [38] are starch-degrading microbes. Selenomonas and Succinivibrio degrade both starch and cellulose, and Anaerovibrio [57] are fat-degrading bacteria. The function of [Paraprevotellaceae]CF231 is unclear, and the levels of Succiniclasticum, Selenomonas, Treponema, Enoploplastron and Entodinium were low in this study. Therefore, we speculate that the increase in relative abundance of Fibrobacter and Ophryoscolex was responsible for the increased activities of xylanase and pectinase and the decreased activity of β-glucosidase in group III; that the decrease in relative abundance of Isotricha was responsible for the decreased amylase activity in group III; and that the increase in relative abundance of Anaerovibrio was the cause of the increased lipase activity. The fermentation products of Fibrobacter, Anaerovibrio, Ophryoscolex, Polyplastron and Isotricha are acetate, propionate and succinate; those of Succinivibrio are succinate; and those of Butyrivibrio are acetate and butyrate. Therefore, the increase in relative abundance of Fibrobacter, Anaerovibrio and Ophryoscolex may be the reason for the increases in acetate and propionate, and the lack of change in Butyrivibrio may be associated with the lack of change in butyrate. Acetate, propionate and butyrate are the main components of the VFAs, accounting for 95% of the total volatile matter content [58]. The cause of the increase in T-VFA may therefore be the same as described above. However, a decline in T-VFA was observed over days 20-60 in both AMP-treated groups compared with the control, which may be related to the changing trend of Fibrobacter. VFAs, as end products of fermentation by rumen microorganisms, provide 70%-80% of the calorific requirements of ruminants [59]. The improved growth performance of the juvenile goats in the AMP groups might therefore be due to the increase in T-VFAs. This conclusion is consistent with the results reported by Wang et al. [60], who showed that ruminal infusion of soybean small peptides (100, 200, 300 g/day) increased ammonia, propionate and T-VFA concentrations and improved nutrient digestion and ruminal fermentation in Luxi Yellow cattle. Similarly, Hino et al. [61] observed that 12.5-25 mg/L of aibellin enhanced propionate production without significantly affecting the production of T-VFAs, protozoal survival or cellulose digestion in vitro. By contrast, Patra et al. [62] reported that essential oils (garlic oil, clove oil, eucalyptus oil, oregano oil and peppermint oil) significantly decreased ammonia production, altered the abundance and diversity of archaea, and exerted adverse effects on ruminal feed digestion and fermentation in vitro. The differences in results among studies might be due to variations in the types of additive used, the level of dietary supplementation or the mode of action of the additives.
In summary, this study showed that AMP supplementation maintained the rumen microecological balance while increasing the relative abundance of Fibrobacter, Anaerovibrio and Ophryoscolex and reducing the relative abundance of [Paraprevotellaceae]CF231, Succinivibrio, Polyplastron and Isotricha. The supplements also improved the rumen microbiota structure, altered ruminal fermentation, and increased the utilization efficiency of feed, thereby improving potential growth performance. These results indicate that AMPs can be used as a feed additive in juvenile goats. Aranha et al. [63] reported that nisin can inhibit sperm activity in humans, monkeys and mice; in future research we will therefore explore whether longer-term use of the AMPs employed in the present study can influence the fertility of goats. The cytotoxic effects of AMPs on host cells and the detailed mechanism(s) by which AMPs improve the rumen microbiota structure of juvenile goats require further clarification. | 2019-09-01T14:58:53.818Z | 2019-08-30T00:00:00.000 | {
"year": 2019,
"sha1": "2feae623306a594ebf7b76a3ac83aba87aab8303",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0221815&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "09f898d7348e1e840f0d6212db986b326f0cc060",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
17303450 | pes2o/s2orc | v3-fos-license | Effectiveness and Safety of Newer Antidiabetic Medications for Ramadan Fasting Diabetic Patients
Hypoglycemia is the most common side effect of most glucose-lowering therapies. It constitutes a serious risk for diabetic patients who fast during Ramadan (the 9th month in the Islamic calendar). Newer glucose-lowering classes, such as dipeptidyl peptidase-4 (DPP-4) inhibitors, glucagon-like peptide-1 receptor agonists (GLP-1 RAs) and sodium-glucose cotransporter-2 (SGLT-2) inhibitors, are efficacious in controlling blood glucose levels with less tendency to induce hypoglycemia and thus may constitute a good choice for diabetic patients during Ramadan. This study reviews the safety and efficacy of newer glucose-lowering therapies during Ramadan. It was accomplished through a careful literature search, performed in September 2015, for studies assessing the benefits and side effects of these new glucose-lowering therapies during Ramadan. Vildagliptin, sitagliptin, liraglutide, exenatide and dapagliflozin were the only glucose-lowering therapies studied. All of the studied newer glucose-lowering therapies except dapagliflozin were associated with a reduced risk of hypoglycemia. Gastrointestinal upset was common with the use of liraglutide, while increased thirst sensation was common with dapagliflozin. In conclusion, DPP-4 inhibitors such as vildagliptin and sitagliptin may form a suitable glucose-lowering therapy option for Ramadan-fasting patients.
Introduction
Fasting during Ramadan, the 9th month in the Islamic calendar, is not mandatory for patients with diabetes mellitus (DM), but many insist on fasting. This can create many health problems, especially if the fast is prolonged [1]. Glucose-lowering therapies are the cornerstone of treatment for all type 2 DM patients, ensuring tight glycemic control to prevent acute complications, such as hyperosmolar nonketotic coma, and chronic complications, such as the micro- and macrovascular complications. Hypoglycemia is the most serious, potentially fatal complication of fasting and of many treatment options for diabetes, such as insulin and some of the oral glucose-lowering therapies, including sulfonylureas (SUs) and meglitinides [2,3]. In the last decade, new classes of glucose-lowering therapies associated with a reduced risk of inducing hypoglycemia have been introduced. These include incretin-based therapies, such as dipeptidyl peptidase-4 (DPP-4) inhibitors and glucagon-like peptide-1 receptor agonists (GLP-1 RAs), and the sodium-glucose cotransporter-2 (SGLT-2) inhibitors [4,5]. There have been few reviews of the use of these new glucose-lowering therapies during Ramadan, and most focus on only one class [6,7]. One review discussed the benefits and drawbacks of many classes of newer glucose-lowering therapies but did not include information about SGLT-2 inhibitors; furthermore, it did not conclude which medication is best used during Ramadan by patients with type 2 DM [8]. This study reviews the safety and efficacy of newer glucose-lowering therapies in order to identify those that are most suitable for patients with DM during the fasting month of Ramadan.
The search combined the generic names of agents in each class, including the SGLT-2 inhibitors (canagliflozin, dapagliflozin, ipragliflozin, and empagliflozin), with the essential keyword (Ramadan). EMBASE was not searched because of funding limitations. All study types (prospective observational studies, randomized blinded clinical trials and randomized open-label trials) that examined the efficacy and side effects of these classes of glucose-lowering therapy in patients with type 2 DM during the fasting month of Ramadan were included. Reviews were excluded. Information from these studies was summarized in relation to study design, duration, number of participating patients, medications used, assessment criteria for medication safety and effectiveness, and final conclusions.
Results
A total of 16 studies were included, as shown in Table 1. Full text was obtained for nine studies, abstracts for four studies, and posters for three studies. Eight studies were randomized clinical trials (RCTs) and eight were prospective observational studies. Information about each class of glucose-lowering therapy was summarized according to the medication used in each class and whether this medication was studied as monotherapy or as add-on therapy to other glucose-lowering therapies.
Dipeptidyl Peptidase-4 Inhibitors.
The dipeptidyl peptidase-4 (DPP-4) inhibitors are a new class of oral glucose-lowering therapies for type 2 DM. They act by inhibiting the breakdown of GLP-1, increasing its systemic concentration, which leads to a significant increase in endogenous insulin secretion and a decrease in glucagon secretion. They have a glucose-dependent mechanism of action, leading to a lower incidence of hypoglycemia. In clinical practice, DPP-4 inhibitors are associated with 0.6-0.7% reductions in glycosylated hemoglobin (HbA1c) without causing weight gain [9,10]. The first DPP-4 inhibitor, sitagliptin, was approved 10 years ago by the United States Food and Drug Administration (FDA). Since then, many DPP-4 inhibitors, such as vildagliptin, saxagliptin, linagliptin, and alogliptin, have been approved and are available on the pharmaceutical market [11]. Vildagliptin and sitagliptin are the most frequently studied DPP-4 inhibitors for the control of type 2 DM during Ramadan. Unfortunately, vildagliptin is not available in the USA.
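To put HbA1c changes of this size into average-glucose terms, the sketch below applies the widely cited ADAG regression (eAG in mg/dL = 28.7 × HbA1c − 46.7). This is illustrative context added here, not a calculation performed in any of the reviewed trials.

# Estimated average glucose from HbA1c via the ADAG regression.
def eag_mg_dl(hba1c_percent):
    """Estimated average glucose (mg/dL) from HbA1c (%) via the ADAG equation."""
    return 28.7 * hba1c_percent - 46.7

# e.g., a 0.7% reduction from 7.7% to 7.0% corresponds to ~20 mg/dL lower eAG
for a1c in (7.7, 7.0):
    print(f"HbA1c {a1c:.1f}% -> eAG {eag_mg_dl(a1c):.0f} mg/dL")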
Vildagliptin
(1) Vildagliptin as Monotherapy. There are only a few studies that assess the benefits of vildagliptin as monotherapy for type 2 DM patients, perhaps because it is not licensed for use as monotherapy. In nonfasting patients, vildagliptin has nearly similar effectiveness to SU in lowering HbA1c but with less risk of inducing hypoglycemia [12]. In Ramadan-fasting type 2 DM patients, vildagliptin use was assessed in only three studies. One of these was a small-scale, multicenter, open-label, 4-week, observational study [13] of 97 Indian patients with type 2 DM who were fasting during Ramadan. Patients were divided into two groups: in one group, 55 patients were given vildagliptin, and in the other, 42 patients were given SU. The incidence of hypoglycemia (defined as a blood glucose level less than 70 mg/dL; 3.9 mmol/L), which was assessed depending on patient symptoms and confirmed by measuring blood glucose level, was lower in the vildagliptin group than in the SU group (0% versus 4.8%; P = 0.104). HbA1c decreased in the vildagliptin group while there was a slight increase in the SU group (−0.43% versus 0.01%; P < 0.05). More patients in the vildagliptin group achieved HbA1c < 7.0% than in the SU-treated group (16.4% versus 4.8%; P = 0.055). Additionally, there was a significant difference in weight loss: patients in the vildagliptin group lost an average of 1.2 kg while those in the SU group lost an average of 0.03 kg (P < 0.001). Although vildagliptin was shown to be safer than SU in this study, this superior safety lacked statistical significance, perhaps owing to the small sample size. In another large, multiregional, observational study [14] conducted in Asia and the Middle East, 1315 Muslim patients with type 2 diabetes were divided into two groups: 684 patients received treatment with vildagliptin and 631 patients received SU (glibenclamide, glimepiride, gliclazide, or glipizide), as monotherapy or as add-on to metformin. Vildagliptin was significantly more effective in reducing HbA1c than SU (−0.24% versus 0.02%; P < 0.05). Vildagliptin was also associated with significantly fewer hypoglycemic events (defined as patient-reported symptoms and/or a blood glucose level less than 70 mg/dL; 3.9 mmol/L) than SU therapy (5.4% versus 19.8%; P < 0.05). This large study confirmed that vildagliptin had significantly higher effectiveness and safety when compared with SU. Together, these two studies showed that the risk of hypoglycemia in patients using vildagliptin is around one-third of that in patients using SU.
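As an illustration of how the borderline P value above can be checked, the sketch below re-derives it from the reported proportions. The event counts (0 of 55 versus about 2 of 42) are inferred from the percentages, and the authors' exact test is not stated, so this is a hedged reconstruction: a two-proportion z-test gives P ≈ 0.10, close to the reported 0.104, while Fisher's exact test is more conservative at these small counts.

# Reconstructing the 0% vs 4.8% comparison from inferred event counts.
from scipy.stats import fisher_exact
from statsmodels.stats.proportion import proportions_ztest

events = [0, 2]        # hypoglycemic events: vildagliptin vs. SU (inferred)
group_sizes = [55, 42]

z_stat, p_z = proportions_ztest(events, group_sizes)
_, p_fisher = fisher_exact([[0, 55], [2, 40]])  # rows: group; cols: event / no event
print(f"two-proportion z-test P = {p_z:.3f}; Fisher exact P = {p_fisher:.3f}")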
In summary, the use of vildagliptin 50 mg twice daily as monotherapy for Ramadan-fasting patients is more effective than SU in controlling blood glucose level (through HbA1c reduction) and body weight. It is also safer than SU, with a lower risk of inducing hypoglycemia.
(2) Vildagliptin as Add-On Therapy. There are many studies examining vildagliptin as add-on therapy for both fasting and nonfasting patients. In nonfasting patients, vildagliptin used as add-on therapy to metformin was found to have comparable efficacy to different SUs (glimepiride and gliclazide) but with less risk of inducing hypoglycemia [15,16]. Three studies assessed vildagliptin in Ramadan-fasting type 2 DM patients. The first was a small-scale study [17] conducted in London that included 52 patients with type 2 DM who were already using metformin (2 g/day); these patients were randomized equally into two groups, where half were given vildagliptin 50 mg daily and the other half gliclazide 160 mg twice daily in addition to their primary therapy. Hypoglycemic events (defined as blood glucose <63 mg/dL; 3.5 mmol/L, with or without symptoms) occurred significantly less frequently for patients in the vildagliptin group than in the gliclazide group (7.7% versus 61.5%; P ≤ 0.001). The effects of gliclazide and vildagliptin on HbA1c and body weight were similar. The lack of a significant difference in effectiveness between vildagliptin and gliclazide in this study may be due to the short period (24 days) of follow-up, besides the small sample size, which further compromises statistical significance.

Table 1: Studies of newer glucose-lowering therapies during Ramadan (fragment).
DPP-4 inhibitors, vildagliptin as add-on therapy:
- Small randomized study: vildagliptin caused fewer hypoglycemic events, with effects on glycemic control similar to SU [17]
- Large, prospective observational study: vildagliptin was associated with fewer episodes of severe hypoglycemia but with similar glycemic control to SU/glinide [18]
- Large, multicenter, prospective observational study: vildagliptin-treated patients suffered from hypoglycemia less frequently than those using SU; the reduction in HbA1c was greater, but not significantly different, with vildagliptin than with SU [19]
- Multiregional, large-scale, randomized double-blind study: vildagliptin had similar effectiveness in lowering HbA1c but less hypoglycemic risk than gliclazide [20]
DPP-4 inhibitors, sitagliptin as add-on therapy:
- Pilot prospective observational study: sitagliptin usage was not associated with hypoglycemic attacks [22]
- Large, multinational randomized study: sitagliptin was associated with significantly less risk of hypoglycemia than glibenclamide and glimepiride but similar risk to gliclazide [23]
- Large, multicenter, randomized study: sitagliptin was associated with significantly less risk of hypoglycemia than glibenclamide and glimepiride but higher hypoglycemic risk than gliclazide [24]
GLP-1 RA, exenatide as add-on therapy:
- Pilot observational study: no risk of hypoglycemia even if the pre-Ramadan dose of exenatide was not adjusted during Ramadan [27]
- Observational study: exenatide was associated with less risk of hypoglycemia than gliclazide [28]
GLP-1 RA, liraglutide as add-on therapy:
- Randomized controlled trial: liraglutide was associated with a significant reduction in body weight and a nonsignificant but greater reduction in HbA1c than SU; liraglutide was significantly less likely to induce hypoglycemia than SU [29]
- Large, open-label, multinational randomized trial: liraglutide was less likely to produce confirmed hypoglycemic attacks compared with SU; moreover, patients on liraglutide experienced significantly greater weight loss and significantly greater improvements in HbA1c than those on SU [30]
SGLT-2 inhibitors: …

Another small prospective, 16-week, multicenter study [18] was conducted in the UK in patients who were using either vildagliptin 50 mg twice a day (30 patients) or gliclazide (41 patients) as add-on therapy to metformin during Ramadan fasting. There were no hypoglycemic events (defined as a blood glucose level less than 70 mg/dL; 3.9 mmol/L) among patients in the vildagliptin group, while 44.4% of the patients in the SU group suffered hypoglycemic events. There was a significantly greater HbA1c reduction for those taking vildagliptin; however, this higher effectiveness may be attributed to a higher adherence rate among patients using vildagliptin than among those using gliclazide [19].
On the other hand, the high rate of missed doses among patients using gliclazide reinforces this study's finding that vildagliptin, despite regular usage, resulted in fewer hypoglycemic events.
In 2014, a randomized, open-label clinical trial [20] was conducted in 69 patients who were on a combination of metformin and SU (glimepiride or gliclazide). Patients were divided into two groups: a control group, in which patients were maintained on their usual treatment regimen with dose adjustment for the fasting period, and a study group, in which patients were switched from SU to vildagliptin 50 mg twice daily in combination with metformin. There was no difference between vildagliptin and SU in the calculated change in HbA1c. The incidence of hypoglycemia during Ramadan, confirmed by measuring blood glucose level, was higher in the SU group (26 episodes versus 19 episodes; P = 0.334). The number of patients who were noncompliant with medication because of fasting discomfort was higher in the SU group than in the vildagliptin group.
All of the previously discussed small-scale studies [17,18,20] concluded that vildagliptin as add-on therapy to metformin was at least as effective as SU, with a lower risk of inducing hypoglycemia, for type 2 DM patients during Ramadan.
Another, larger study had a prospective, observational, 14-week design [21] and included 198 patients stable on dual oral therapy for ≥2 months with HbA1c ≤ 8.0%; 83 patients were in the metformin-sulfonylurea/glinide (IS) cohort and 115 patients were in the metformin-vildagliptin cohort. Hypoglycemic episodes (defined as blood glucose ≤ 70 mg/dL; 3.9 mmol/L) were confirmed in 30.8% of the IS cohort and 23.5% of the vildagliptin cohort (P > 0.05), while severe hypoglycemia and/or an unscheduled medical visit due to hypoglycemia occurred in 10.4% of the IS cohort and 2.6% of the vildagliptin cohort (P = 0.0029). Glycemic control remained stable in both cohorts. Compliance with fasting was higher, as was adherence to drug therapy, in the vildagliptin cohort, with ≥5 missed doses for 15.4% of IS patients compared with only 8.5% of patients using vildagliptin. In this study, the nonsignificant difference in total hypoglycemic episodes between vildagliptin and SU/glinide, which differed from the findings of previous studies in which vildagliptin was less likely to induce hypoglycemia than SU, may be due to the use of glinides in some patients; unfortunately, their number was not reported, leaving the hypoglycemic risk of glinides relative to vildagliptin unknown.
Another study, the VIRTUE study [22], was a multicenter, prospective, 16-week observational study that enrolled 244 Pakistani patients with type 2 DM. All included patients were already treated with vildagliptin (n = 121) or SU (n = 121; 67% on glimepiride, 14% on gliclazide, 18% on glibenclamide, and 1% on glipizide) as add-on to metformin, or as monotherapy, for at least 4 weeks. Patients in the vildagliptin group experienced at least one episode of hypoglycemia (defined as a blood glucose measurement ≤ 70.2 mg/dL; 3.9 mmol/L) less frequently than patients in the SU group (5.8% versus 14.2%; P < 0.033). The reduction in HbA1c was greater with vildagliptin than with SU (−0.3% versus −0.1%; P < 0.054). A reduction of 0.3 kg in body weight was seen with vildagliptin treatment, versus a 0.2 kg weight gain in the SU group. Overall adverse events (hypoglycemia, nausea, vomiting, abdominal pain, and abdominal discomfort) were less frequently reported in the vildagliptin cohort than in the SU group (15.7% versus 17.4%; P = 0.729). Hypoglycemic events were significantly less common in the vildagliptin than in the SU group (5% versus 13.2%; P = 0.024). GIT side effects, including abdominal pain, nausea, and vomiting, were more common in the vildagliptin than in the SU group (10.8% versus 2.5%; P < 0.05); these side effects may be symptoms of acute pancreatitis, a rare but serious side effect of vildagliptin therapy [23].
However, large, multiregional, randomized studies are required to confirm the safety of vildagliptin with respect to the GIT and/or pancreatitis when used as add-on therapy to metformin for Ramadan-fasting patients, since this was an observational study in a limited number of Pakistani patients only.
More recently, the STEADFAST study [24], a multiregional, randomized, double-blinded study, randomized 557 patients with type 2 DM who were previously treated with metformin and any SU to receive either vildagliptin 50 mg twice daily or gliclazide, plus metformin. The percentage of patients reporting confirmed hypoglycemia (blood glucose < 70.2 mg/dL; 3.9 mmol/L) was lower with vildagliptin than with gliclazide (3% versus 7.0%; P = 0.039).
There was a nonsignificant difference in effectiveness between vildagliptin and gliclazide according to the adjusted mean change in HbA1c (P = 0.165), and a nonsignificant difference between vildagliptin and gliclazide in body weight change (P = 0.987). Overall safety (measured by adverse effects on all body organs) was similar between the treatments. The randomized design of this study and its large sample size lead to less biased conclusions and thus provide strong evidence for the lower risk of hypoglycemia, with comparable safety, of vildagliptin when compared with gliclazide. This finding contrasts with the VIRTUE study, which suggested that GIT upset was more frequent with vildagliptin than with SU (the majority of patients were on glimepiride), whereas in the STEADFAST study vildagliptin had a similar incidence of GIT side effects to gliclazide; since the incidence of GIT side effects is higher with glimepiride than with gliclazide [25], it can be concluded that vildagliptin has at least comparable safety to SU.
In summary, the use of vildagliptin in fasting patients as add-on therapy to metformin has comparable safety and effectiveness to SU, with a lower tendency to induce hypoglycemia.
Sitagliptin
(1) Sitagliptin as Add-On Therapy. There are no studies evaluating the effect of sitagliptin as monotherapy for patients with type 2 DM during Ramadan, but there are several on the use of sitagliptin as add-on therapy to other glucose-lowering therapies, specifically metformin. In a pilot prospective study [26] involving 15 patients, a combination of sitagliptin with metformin was safe (not associated with hypoglycemic events) for Ramadan-fasting patients with type 2 DM; however, the small sample size, and the funding of this study by a pharmaceutical company that manufactures sitagliptin, make it difficult to draw an accurate conclusion regarding the benefits of using sitagliptin during Ramadan.
Sitagliptin has also been studied in a large, randomized, open-label multinational study [27] of 1021 type 2 DM patients who intended to fast during Ramadan and were already treated with stable doses of SU (35% glibenclamide, 35% glimepiride, or 30% gliclazide) and metformin for at least 3 months before screening. Patients were randomized either to receive sitagliptin 100 mg/day plus metformin (n = 507) or to remain on their prestudy treatment (n = 514). There was a nonsignificant difference in the incidence of symptomatic hypoglycemia (based on patient-reported symptoms and confirmed by a blood glucose level less than 70 mg/dL; 3.9 mmol/L) between patients in the gliclazide group and those in the sitagliptin group (6.6% versus 6.7%; P > 0.05), while a significant difference in the incidence of hypoglycemia occurred between patients using sitagliptin and those using glibenclamide or glimepiride. Furthermore, sitagliptin appeared to induce side effects to a lesser extent than SU (3 versus 9 patients, respectively); none of the side effects in the sitagliptin group was serious (they included constipation, vomiting, and hyperglycemia), whereas three patients in the SU group developed serious problems, namely ischemic stroke, acute pancreatitis, and urinary tract infection. In this study, although the difference in the incidence of hypoglycemia between gliclazide and sitagliptin was nonsignificant, gliclazide was associated with a lower risk of hypoglycemia than sitagliptin; unfortunately, the authors did not classify patients using gliclazide according to dosage form (sustained versus immediate release), so it is difficult to conclude whether this lower incidence of hypoglycemia with gliclazide relative to sitagliptin is related to a specific dosage form of gliclazide or to gliclazide itself.
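Since these trials quote hypoglycemia thresholds in both conventional and SI units (70 mg/dL alongside 3.9 mmol/L, and 63 mg/dL alongside 3.5 mmol/L elsewhere in this review), the small sketch below shows the conversion, which simply divides by the molar mass of glucose scaled to mg per mmol; it is included only to make the dual units in the text easy to verify.

# Blood glucose unit conversion: mg/dL -> mmol/L.
GLUCOSE_MG_PER_MMOL = 18.016  # glucose molar mass ~180.16 g/mol, with dL/L scaling

def mg_dl_to_mmol_l(mg_dl):
    """Convert a blood glucose value from mg/dL to mmol/L."""
    return mg_dl / GLUCOSE_MG_PER_MMOL

for threshold in (70, 63):
    print(f"{threshold} mg/dL = {mg_dl_to_mmol_l(threshold):.1f} mmol/L")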
Another multicenter, randomized study [28] involved 848 Ramadan-fasting patients with type 2 DM who had been treated with a stable dose of SU (65% glimepiride, 22% glibenclamide, and 13% gliclazide), with or without metformin (86% and 14%, respectively), for ≥3 months and had HbA1c ≤ 10%. Patients were divided into two groups: 421 patients were switched from SU to sitagliptin 100 mg once daily and 427 patients remained on SU. The proportion of patients who recorded ≥1 symptomatic hypoglycemic event during Ramadan was lower for patients on sitagliptin than on SU (3.8% versus 7.3%; P = 0.028). The incidence of symptomatic hypoglycemia was lowest in patients using gliclazide (1.8%), followed by sitagliptin (3.8%), glibenclamide (5.2%), and finally glimepiride (9.1%). The proportion of patients experiencing adverse effects other than hypoglycemia was 10.0% versus 7% in the sitagliptin and SU groups, respectively. The major limitation of this study was the absence of an efficacy assessment through measurement of glycemic control and body weight. The suggestion of this study that sitagliptin is safer than SU with regard to hypoglycemia is not entirely accurate, since the incidence of hypoglycemia was lower with gliclazide than with sitagliptin. This finding may provide confirmatory evidence for the finding of Al Sifri et al. [27] of a lower risk of hypoglycemia with gliclazide than with sitagliptin. Furthermore, the failure to assess the effects of sitagliptin versus SU on glycemic control and weight in fasting patients can be considered a main limitation of these studies in drawing reliable conclusions regarding the use of sitagliptin during Ramadan [26][27][28].
In summary, the studies that evaluated the use of sitagliptin for patients with diabetes during Ramadan found that, when used as add-on therapy to metformin, it was associated with a reduced risk of hypoglycemia compared with two SUs (glimepiride and glibenclamide) and a slightly higher risk of hypoglycemia than gliclazide; this difference may be attributed to gliclazide's higher degree of selectivity in pancreatic receptor stimulation [29]. In nonfasting patients, by contrast, the risk of hypoglycemia was lower with sitagliptin than with SU [30].
Other, rarer side effects, including gastrointestinal (GIT) effects (vomiting, constipation, and abdominal pain) and central nervous system (CNS) effects (headache, dizziness, and decreased concentration), appeared to occur at a higher rate in sitagliptin-treated than in SU-treated patients. A major limitation of all of these studies of sitagliptin use during Ramadan is that they did not focus on sitagliptin's effect on blood glucose control, so further studies are needed to determine whether the benefit of sitagliptin is limited to a lower risk of hypoglycemia or extends to better glycemic control than SU for patients with type 2 DM during Ramadan.
Other DPP-4 Inhibitors.
Up to the time of data collection for this review (2005-2015), no study had evaluated the effects and side effects of DPP-4 inhibitors other than vildagliptin and sitagliptin, such as linagliptin, saxagliptin, and alogliptin, during Ramadan. The differences among the DPP-4 inhibitors regarding efficacy and incidence of hypoglycemia appear to be negligible [11], so it may be reasonable for researchers to investigate the benefits of other new DPP-4 inhibitors in countries in which vildagliptin and sitagliptin are not available.

Glucagon-Like Peptide-1 Receptor Agonists. GLP-1 RAs act by binding to and activating the GLP-1 receptor, resulting in a glucose-dependent increase in insulin secretion and a decrease in glucagon secretion, so they are effective in decreasing blood glucose levels with a reduced risk of hypoglycemia. They also act to delay gastric emptying and increase satiety, so they are effective in reducing body weight in diabetic patients. The approved agents in this class include exenatide, liraglutide, albiglutide, and lixisenatide. All of these agents are administered by subcutaneous injection, with mainly GIT side effects, such as nausea, vomiting, and diarrhea, in addition to injection-site reactions [31,32]. Exenatide and liraglutide are the most frequently studied GLP-1 RAs for patients with diabetes during Ramadan.
Exenatide as Add-On Therapy.
Although many studies have examined the effect of exenatide as add-on therapy in nonfasting patients [33,34], showing that exenatide effectively lowers HbA1c with a low hypoglycemic risk, only a few studies have examined exenatide use in Ramadan-fasting patients. One pilot study [35] observed 34 patients with type 2 DM on various pharmacological treatments (insulin and oral glucose-lowering therapies) who wished to fast during Ramadan. Two of the patients were using exenatide prior to Ramadan. At the end of Ramadan, neither of these two patients had experienced a hypoglycemic event (defined as blood glucose less than 70 mg/dL; 3.9 mmol/L), even without exenatide dose adjustment. However, this result is difficult to generalize because of the very small sample size. In another study, exenatide used as add-on therapy to metformin [36] was associated with a reduced risk of hypoglycemia compared with a combination of metformin and gliclazide in patients with type 2 DM fasting during Ramadan. A limitation in appraising this study is that the full article could not be retrieved.
In the above two studies, regular exenatide was assessed; the use of sustained-release exenatide during Ramadan has not been assessed.
In summary, regular exenatide does not appear to be associated with hypoglycemia when used in patients with type 2 DM during Ramadan; however, it is difficult to recommend exenatide for fasting patients until further studies are performed, because the available studies are small-scale and focus mainly on hypoglycemic side effects without assessing the efficacy of exenatide in controlling blood glucose levels during Ramadan.
Liraglutide as Add-On Therapy.
Liraglutide has been shown to have efficacy comparable to SU in lowering HbA1c, but with a lower risk of hypoglycemia, when used in nonfasting patients [37,38].
Several studies have assessed the benefits and drawbacks of liraglutide use during Ramadan. One of the earliest was the Treat 4 Ramadan trial, a randomized, controlled clinical trial [39] comparing liraglutide to SU (gliclazide 88%, glimepiride 10%, or glibenclamide 2%) as add-on therapy to metformin in 99 adult patients with type 2 DM in the UK. After 12 weeks, patients in the liraglutide group, but not those in the SU group, had a reduction in HbA1c (−0.3% versus 0.02%; p = 0.06). Liraglutide resulted in significantly greater reductions in both weight and diastolic blood pressure (BP) than SU. Self-recorded episodes of hypoglycemia (blood glucose ≤ 70.2 mg/dL; 3.9 mmol/L) were significantly less frequent with liraglutide (p < 0.0001). The major limitation of this study was the reliability of the method used to capture hypoglycemia (patient self-recording).
The LIRA-Ramadan study was longer and larger than the Treat 4 Ramadan trial: an open-label, multinational, randomized clinical trial [40] involving 343 people (172 on liraglutide and 171 on SU) over a 33-week duration. This study included type 2 DM patients with an intent to fast during Ramadan, with HbA1c 7-10%, being treated with a combination of metformin and SU (at the maximum tolerated dose). Participants were randomized either to switch from SU to liraglutide 1.8 mg once daily or to continue pretrial SU. Patients in the liraglutide group were more likely to achieve the HbA1c target of <7% with no confirmed hypoglycemic events (defined as blood glucose less than 70 mg/dL; 3.9 mmol/L) than those on SU (53.9% versus 23.5%; p < 0.0001). Moreover, people treated with liraglutide experienced significantly greater weight loss (p < 0.0001) and greater improvements in HbA1c (−1.24% versus −0.65%; p < 0.0001) than those treated with SU. The incidence of patients experiencing adverse events (AE) during Ramadan was similar in the liraglutide and SU groups (23.7% versus 20.9%; p > 0.05), although gastrointestinal side effects (nausea, diarrhea, vomiting, abdominal pain, and abdominal distension) occurred more commonly with liraglutide treatment (10.5% versus 3.7%). Because of its randomized design and larger sample size, this study's results are more reliable and confirm the findings of the Treat 4 Ramadan trial; in both studies, liraglutide showed significantly better efficacy than SU with a lower incidence of hypoglycemia.
In summary, liraglutide use during Ramadan in patients with type 2 DM may be reasonable, because it is associated with better glycemic control, improved body weight, and fewer hypoglycemic episodes compared with SU; however, it should be used with caution because of its GIT side effects, which may negatively affect fasting patients, since GIT problems may occur more frequently during Ramadan [41].
Other GLP-1 RA.
Studies of other GLP-1 RAs, such as albiglutide and lixisenatide, during Ramadan are lacking, which may be attributed to their recent FDA approval; such trials may become available in the near future.
Sodium-Glucose Cotransporter-2 Inhibitors.
Sodium-glucose cotransporter-2 (SGLT-2) inhibitors are the most recent class of oral glucose-lowering therapies used for treating patients with type 2 DM. Medications in this class include dapagliflozin, canagliflozin, ipragliflozin, and empagliflozin. These drugs lower blood glucose by lowering the renal glucose threshold: they competitively inhibit SGLT-2 in the kidney, the transporter responsible for about 90% of renal glucose reabsorption, thereby blocking the reabsorption of filtered glucose. The risk of inducing hypoglycemia is low with SGLT-2 inhibitors because of their insulin-independent action, which makes them an attractive class for managing patients with type 2 DM during Ramadan. However, caution is recommended when using these medications because of their potential to cause dehydration, especially in the absence of fluid intake, as occurs during the fasting hours of Ramadan [42,43]. Recently, the FDA added several warnings to this class of medications because of the risks of ketoacidosis, foot and leg amputation, serious urinary tract infections, acute renal failure, and osteoporosis [44].
Dapagliflozin as Add-On Therapy.
Some studies have assessed dapagliflozin as add-on therapy in nonfasting patients; in all of them, dapagliflozin showed efficacy comparable to SU with a lower hypoglycemic risk [45].
To date, there is only one study evaluating dapagliflozin during Ramadan: a 12-week, randomized, open-label study [46] of 110 patients with type 2 DM who were already using metformin and SU. Patients were divided into two groups: in the first group, 58 patients were switched from SU to dapagliflozin 10 mg once daily, while in the second group, 52 patients remained on their pretrial treatment. Dehydration was defined as a loss of 1.8% of body weight per 13 hours of daily fasting. Dehydration was further assessed using urine and blood tests, physical examination, and a specific set of questions about the patients' medical history while using the medication. There was no significant difference in the incidence of dehydration between dapagliflozin and SU (73.1% versus 81.6%; p = 0.258); this may be because most fasting persons, regardless of disease or medication status, experience some dehydration due to the long period (12-22 hours) of abstinence from food and water during Ramadan [47].
Significantly more patients in the dapagliflozin group than in the SU group reported a sensation of thirst (43.1% versus 23.1%; p = 0.026). Additionally, mean haematocrit (p = 0.009), urine osmolarity (p = 0.001), and blood ketones (p = 0.002) were significantly higher in the dapagliflozin group; however, no information was reported on the development of ketoacidosis in the study participants. Furthermore, mean urinary sodium was significantly lower (p < 0.005) in the dapagliflozin group than in the SU group; nevertheless, the authors of that study concluded that dapagliflozin does not pose a higher risk of dehydration during Ramadan. The lack of assessment of dapagliflozin's effectiveness in reducing HbA1c and of its risk of inducing hypoglycemia was the major limitation of this study, probably because its main aim was to assess the safety of dapagliflozin with respect to excessive water excretion.
In summary, there are a limited number of trials of dapagliflozin and other SGLT-2 inhibitors in type 2 DM patients during Ramadan. The available data show that although dapagliflozin is not associated with an increased risk of dehydration, it increases the sensation of thirst in patients with type 2 DM during Ramadan, which may negatively affect patient compliance with continued dapagliflozin use. It is therefore advisable not to use SGLT-2 inhibitors in fasting patients with type 2 DM until further studies are performed comparing dapagliflozin or other SGLT-2 inhibitors with other oral antidiabetic medications in terms of blood glucose control, risk of inducing hypoglycemia, and patient compliance.
Limitations of the Current Study. This review has several limitations, including the use of free search engines for the literature search and the inability to fully retrieve some articles.
Conclusion
Although many glucose-lowering therapies with noninsulin-dependent mechanisms of action have been approved recently, only a few of them (vildagliptin, sitagliptin, exenatide, liraglutide, and dapagliflozin) have been studied during Ramadan. Hypoglycemic risk was assessed for all of the above medications except dapagliflozin, and nearly all of the assessed medications were associated with a reduced risk of hypoglycemia compared with SU when used during Ramadan. DPP-4 inhibitors such as vildagliptin and sitagliptin may be a suitable glucose-lowering option for Ramadan-fasting patients, since, unlike liraglutide, they are less likely to cause GIT upset and, unlike SGLT-2 inhibitors, they are not associated with an increased sensation of thirst. The effectiveness of sitagliptin in controlling blood glucose in Ramadan-fasting patients has not been assessed in any study, in contrast to vildagliptin, which was shown to be effective in controlling blood glucose and body weight when used as monotherapy or as add-on therapy to metformin; accordingly, vildagliptin appears to be the most suitable glucose-lowering choice for diabetic patients who wish to fast during Ramadan. Further studies on the use of other new glucose-lowering therapies during Ramadan are recommended, as are studies that directly compare the advantages and disadvantages of these new therapies. | 2018-04-03T01:54:16.679Z | 2016-08-24T00:00:00.000 | {
"year": 2016,
"sha1": "b02788a49a43dff9da4c9fbe51ebd72847548e82",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jdr/2016/6962574.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f0097b0b3af329a22067eee811dd96297aae8b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246606770 | pes2o/s2orc | v3-fos-license | The Influence of Social Determinants of Health on the Provision of Postpartum Contraceptives in Medicaid
Disparities continue to exist in the timely provision of postpartum contraception. This study aimed to identify the prevalence of, and factors associated with, postpartum contraception provision among women enrolled in Medicaid. A retrospective cohort study was conducted using the 2014 National Medicaid data, linked to county-level social vulnerability index (SVI) data. Women aged 15–44 with a live birth in 2014 were included. Multivariable logistic regression was used to predict 3-day provision of long-acting reversible contraception (LARC) and 60-day provision of most effective or moderately effective contraceptives (MMEC). Overall, 3-day LARC provision was 0.2% while 60-day MMEC provision was 36.3%. Significantly lower odds of receiving MMEC were found among women aged 15–20 (adjusted odds ratio [aOR] = 0.87; 95% CI:0.86–0.89) compared to women aged 21–44 years, as well as among Asian women (aOR = 0.69; 95% CI:0.66–0.72) and Hispanic women (aOR = 0.73; 95% CI:0.72–0.75) compared to White women. The provision of postpartum contraception remains low, generally, and needs attention in communities experiencing poor maternal outcomes.
Introduction
The United States (US) is one of the few developed countries with an increase in maternal mortality rate (MMR) since 1990, with MMR increasing from 8.0 deaths per 100,000 in 1990 to 20.1 deaths per 100,000 live births in 2019 [1,2]. Pregnancy-related deaths occur most frequently during the postpartum period, and many are preventable [3,4]. Maternal mortality is often considered the 'tip of the iceberg,' with many additional women experiencing serious complications that have lifelong impact [5][6][7]. Severe maternal morbidity has a variety of negative outcomes, from increased rates of receiving a blood transfusion, hysterectomy, or ventilation to increased lengths of hospital stays and higher use of health services [8,9].
Within this context, postpartum contraception use can play a critical role in increasing the interpregnancy interval (IPI), which is essential for avoiding adverse maternal outcomes. Previous research has shown that short IPIs are associated with an increased risk of adverse outcomes such as preeclampsia, uterine rupture, preterm birth, low birth weight, and maternal death [10-14]. The American College of Obstetricians and Gynecologists (ACOG) recommends that women wait at least 6 months and up to 18 months between a live birth and the next conception in order to decrease the risk of poor maternal outcomes [15].
The provision of postpartum contraception immediately after delivery, and prior to discharge, has been shown to be most effective in reducing unintended pregnancies, and the ACOG recommends using contraceptives immediately after birth to increase the duration of the IPI [15]. Compared to individuals who receive postpartum contraception six to eight weeks after delivery, immediate intrauterine device (IUD) insertion can prevent an additional 88 unintended pregnancies over 2 years per 1000 women [16]. In addition, immediate provision of postpartum intrauterine contraception has been shown to be cost effective from the state's perspective, with an estimated cost savings of $2.94 for every $1 spent on a state-financed IUD program [17].
The use of effective postpartum contraception methods remains low in the US, with a recent study showing that only about 53% of women reported using more effective methods (such as implants, IUDs, pills, rings and patches) and only 1 in 5 women report using most effective contraception such as implants and IUDs [18,19]. This is surprising as most women report an intention to use postpartum contraception [20]. In addition to barriers such as low income, low health literacy, and poor education, the Medicaid-eligible population needs special attention for the use of postpartum contraception because rates of unintended pregnancy in this population are significantly higher. Factors such as income, education, health literacy, and other socioeconomic factors are often referred to as social determinants of health (SDoH) and have been shown in previous research to be closely related to key preventive health behaviors [21].
It is important to understand the uptake of postpartum contraception, particularly in a Medicaid population that is vulnerable to several SDoH. Investigation of the factors associated with uptake can also help develop effective interventions to encourage use of postpartum contraception, taking into account maternal needs and wishes. The objectives of this study were (1) to evaluate the overall prevalence and patterns of state-level provision of postpartum contraception in Medicaid and (2) to assess the impact of SDoH on the rate of provision of postpartum contraceptives among women enrolled in Medicaid.
Materials and Methods
A retrospective cohort study design was used to evaluate the patterns of state-level provision of contraceptives and the impact of SDoH on the provision of postpartum contraceptives among women in Medicaid in 2014. The study protocol was approved by the Institutional Review Board (IRB) at the University of Mississippi, and the use of Medicaid data were covered under a data use agreement with the Centers for Medicare and Medicaid Services (DUA# RSCH-2017-51606).
Data Sources
This study was conducted using the Medicaid administrative claims data from 17 states (CA, GA, ID, IA, LA, MI, MN, MS, MO, NJ, PA, SD, TN, UT, VT, WV, WY) for the year 2014. Medicaid is a public entitlement program jointly funded by the federal government and states to provide health insurance to individuals with low income. Each state operates its own Medicaid program under broad guidance from the federal government, which gives states considerable flexibility in how they design and administer the program. However, federal law requires states to provide certain mandatory benefits, including inpatient and outpatient hospital services, physician services, home health services, nursing facility services, etc. As of June 2021, 83.2 million Americans were enrolled in Medicaid, including eligible low-income adults, pregnant women, children, elderly adults and individuals with disabilities [22]. Medicaid administrative claims data contain de-identified information pertaining to over 25 million Medicaid beneficiaries. The data contain an inpatient claims database, an outpatient claims database, a pharmacy claims database, and a beneficiary master file that provides information about demographics, eligibility, and zip code of residence. To identify SDoH factors, each beneficiary's zip code was used to determine the county of residence and linked to county-level Social Vulnerability Index (SVI) data from the Centers for Disease Control and Prevention (CDC) for the year 2014 [23]. Previous studies have demonstrated that the SVI is a robust predictor of several health indicators such as poor outcomes after surgical episodes [24,25], preventative health behaviors such as physical activity [26,27] and vaccination rates [28], and even outcomes from the COVID-19 pandemic [29,30].
Study Population
The study population was identified based on the Office of Population Affairs (OPA) Contraceptive Care-Postpartum Women Ages 15-44 quality measure [31]. Per the OPA Contraceptive Care-Postpartum (CCP) measure, individuals were included if they were aged 15-44 years as of 31 December 2014, with a live birth in 2014 and were continuously enrolled in Medicaid for at least 60 days from the date of delivery. The date of live delivery was defined as the index date and identified using ICD-9 diagnoses and procedure codes provided by OPA as part of the CCP measure. Deliveries that did not end in a live birth and deliveries that occurred during the last two months of 2014 were excluded. For the second objective, additional exclusion criteria were applied. Individuals who had multiple deliveries, as well as deliveries occurring after 30 September 2014, were excluded so as to capture continued eligibility beyond 60 days postpartum.
Postpartum Contraception Provision
The provision of contraceptives was identified using the OPA CCP measure [31]. This included provision of most effective or moderately effective (MMEC) FDA-approved methods of contraception and of long-acting reversible contraception (LARC) within 60 days of delivery. The most effective methods included female sterilization, contraceptive implants, and intrauterine devices or systems (IUD/IUS). Moderately effective methods included injectables, oral pills, the patch, the ring, and the diaphragm; LARC included contraceptive implants and IUD/IUS. Contraceptive use was identified using the national drug codes (NDCs) provided by OPA as part of the CCP measure.
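For illustration only, the sketch below shows the general shape of such a claims-based measure: a delivery is flagged when any pharmacy claim with a qualifying NDC falls within 60 days of the index (delivery) date. The NDC values, column names, and dates are hypothetical placeholders, not the actual OPA CCP code list.

```python
# Hypothetical sketch of a 60-day MMEC provision flag from claims data.
# NDC values and column names are placeholders, not the OPA CCP code list.
import pandas as pd

deliveries = pd.DataFrame({
    "bene_id": ["A", "B"],
    "index_date": pd.to_datetime(["2014-03-01", "2014-05-10"]),
})
rx_claims = pd.DataFrame({
    "bene_id": ["A", "A", "B"],
    "ndc": ["00000-0001", "99999-9999", "00000-0001"],
    "fill_date": pd.to_datetime(["2014-03-20", "2014-09-01", "2014-08-15"]),
})
mmec_ndcs = {"00000-0001"}  # placeholder for the qualifying NDC list

merged = deliveries.merge(
    rx_claims[rx_claims["ndc"].isin(mmec_ndcs)], on="bene_id", how="left"
)
days_from_delivery = (merged["fill_date"] - merged["index_date"]).dt.days
merged["mmec_60d"] = days_from_delivery.between(0, 60)
print(merged.groupby("bene_id")["mmec_60d"].any())  # A: True (day 19), B: False (day 97)
```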
Theoretical Framework
In order to estimate the impact of SDoH on CCP, the Healthy People 2020 Social Determinants of Health framework was utilized, as applied by the CDC SVI [23]. The Healthy People 2020 framework emphasizes the collective impact and influence of determinants such as the physical and social environment, individual behavior, health services, and biology and genetics on the health outcomes of a population [32]. The CDC SVI employs this framework to assess the relative vulnerability of every US Census tract or county to external stresses such as natural or human-caused disasters or disease outbreaks. The higher the ranking, the more vulnerable the geographic region. The SVI ranking is derived from 15 social factors, based on variables captured by the American Community Survey (ACS) data, arranged into four themes: socioeconomic status, household composition and disability, minority status and language, and housing type and transportation. Additional information on the variables used and the four themes is provided in Figure 1. Data from the SVI were classified according to whether a county was ranked in the top quartile, the bottom quartile, or the middle for each of the four themes before they were linked to patient records.
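To make the classification step concrete, the sketch below (with illustrative, not actual, CDC SVI field names and values) cuts county percentile ranks for one theme into the three groups used in the study.

```python
# Hypothetical sketch of the SVI theme classification: counties are grouped
# into bottom-quartile, middle, and top-quartile vulnerability before linkage.
import pandas as pd

svi = pd.DataFrame({
    "county_fips": ["28001", "28003", "28005", "28007"],
    "theme1_rank": [0.92, 0.15, 0.55, 0.73],  # percentile ranks in [0, 1]
})

def classify(rank: float) -> str:
    """Map an SVI percentile rank to the three study groups."""
    if rank >= 0.75:
        return "top quartile (most vulnerable)"
    if rank <= 0.25:
        return "bottom quartile (least vulnerable)"
    return "middle"

svi["theme1_group"] = svi["theme1_rank"].apply(classify)
print(svi)
```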
In addition to the SVI, in an effort to assess the impact of state Medicaid policies around the continued provision of Medicaid eligibility during the postpartum period, beneficiaries were flagged if they had more than 60 days of continuous enrollment beyond the delivery date. Additionally, the models predicting the use of postpartum contraceptives adjusted for demographic factors such as the individual's age and race. Age and race were obtained from the Medicaid beneficiary master file. Consistent with the CCP measure, beneficiary age was classified as 15-20 or 21-44 years. Race/ethnicity categories included White, Black, Hispanic, Asian, and Other/Unknown race.
Statistical Analysis
Baseline descriptive statistics were estimated, and the prevalence of postpartum contraceptive use (MMEC/LARC) was also estimated in each available state to facilitate comparisons to a national rate. Multivariable logistic regression models were used to assess the relationship between SDoH factors and the provision of LARC and MMEC during the 3- and 60-day postpartum periods, respectively, and to estimate adjusted odds ratios (aOR) and 95% confidence intervals (CI). All analyses were conducted using SAS 9.4 (SAS Institute, Cary, NC, USA), and an a priori significance level of p < 0.05 was used for all analyses.
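As a rough sketch of the modelling step (the study used SAS; the snippet below uses Python with simulated data, and the variable names are illustrative rather than the study's actual covariate set), a logistic model for 60-day MMEC provision can be fitted and its coefficients exponentiated to obtain adjusted odds ratios with 95% CIs.

```python
# Minimal sketch of a multivariable logistic model reporting aORs and 95% CIs.
# Data are simulated; the baseline rate is set near the observed 36.3%.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
age_15_20 = rng.binomial(1, 0.14, n)
svi_top = rng.binomial(1, 0.25, n)
log_odds = -0.55 - 0.15 * age_15_20 + 0.10 * svi_top  # assumed true effects
mmec_60d = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))
df = pd.DataFrame({"mmec_60d": mmec_60d, "age_15_20": age_15_20, "svi_top": svi_top})

fit = smf.logit("mmec_60d ~ age_15_20 + svi_top", data=df).fit(disp=False)
table = pd.concat(
    [np.exp(fit.params).rename("aOR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(table)
```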
Results
A total of 438,936 women were included in the study after applying all inclusion and exclusion criteria (Figure 2), with a majority of study participants aged 21-44 years (85.9%). The composition of the study participants by race was as follows: 44.0% White women, 24.9% Black women, 2.7% Asian women, 19.0% Hispanic women, and 9.3% other/unknown race (Table 1). Table 2 shows the provision of postpartum contraception. The overall provision of LARC during the 3-day postpartum period was 0.2%, while the overall rate for MMEC during the 60-day postpartum period was 36.3%. In general, MMEC rates during the 60-day postpartum period were slightly higher in women aged 21-44 years (36.6%) than in women aged 15-20 years (34.3%, p < 0.001), and higher in White women (40.5%) than in Black (38.4%), Asian (27.0%), or Hispanic women (28.3%) (p < 0.001). There was significant variation in MMEC across the 17 states included in the study (Table 2). The three states with the highest MMEC rates during the 60-day postpartum period were Mississippi (MS) (48.5%), Louisiana (LA) (48.2%), and Tennessee (TN) (48.1%). A majority of the other states had rates between 37% and 45%. While rates for California (CA) (24.8%) and New Jersey (NJ) (30.5%) were lower than the national rate of 36.3%, the measure rate for West Virginia (WV) was much lower, at 7.1%. For LARC provision during the 3-day postpartum period, the states with the highest rates were Iowa (IA) (0.7%), Pennsylvania (PA) (0.5%), and Vermont (VT) (0.5%).
Discussion
The results of this study add to the existing evidence around the provision of timely postpartum contraception in the US. LARC provision during the 3-day postpartum period was 0.2%, and only about one in three women received MMEC during the 60-day postpartum period. In addition, the provision of postpartum contraception was closely tied to sociodemographic and SDoH factors, and significant variation in postpartum contraception provision was observed across states.
The MMEC rates from this study using data from 2014 are slightly lower than more recent estimates from the Medicaid Adult Core Set, which reported an overall rate of 40.4% in 2020 [33]. In addition, while LARC rates have improved from 2014, at 2.2% in 2020, rates are still low [33]. These rates are surprising for several reasons. First, there have been various initiatives at the federal level aimed at improving access to postpartum contraception. In 2014, the Center for Medicaid and CHIP Services (CMCS) launched the Maternal and Infant Health Initiative in partnership with states and Medicaid providers, with the goal of increasing the use of MMEC among women in Medicaid and CHIP, especially during the postpartum period [34]. In addition, one of the Healthy People 2020 goals was to increase the proportion of women delivering a live birth who used an MMEC by 10% [35]. There have also been several state-level initiatives, with most states publishing guidance around reimbursement for immediate LARC insertion [36].
However, the 2020 rates from the Medicaid Adult Core Set show that there has been little improvement in the use of postpartum contraception over the rates found in the current study. This is corroborated by a recent study which found a very small increase, of only about two percentage points, in LARC uptake after Ohio's Medicaid expansion [37]. The lack of improvement in postpartum contraception uptake may be due to a lack of knowledge about the availability of these services and about recent policy changes that have significantly reduced the cost of postpartum contraception. Low rates of contraceptive use may also be due to hesitancy on the part of providers, who may wish to avoid appearing paternalistic or coercive when recommending postpartum contraception. While it is important that the use of postpartum contraception be the woman's choice, it is equally important that a conversation about the topic takes place. Pregnant women should be made aware of these options during prenatal counseling; although contraception should not be forced on them, they should be able to understand the importance of postpartum contraception use and make an informed decision based on the options available to them and their own reproductive needs.
This study found interesting variations in sociodemographic, SDoH, and geographic rates of postpartum contraception provision. In general, women aged 15-20 years had higher odds of receiving LARC. These results have been corroborated by other studies, which have found a positive association between younger age and use of more effective postpartum contraception [38]. Conversely, the finding from this study that Black women had significantly higher odds of receiving LARC during the 3-day postpartum period compared to White women is different from a study by Thiel de Bocanegra et al., which found that Black women had lower odds of receiving LARC compared with White women [39]. The differences observed may be due to several reasons. First, the Thiel de Bocanegra et al. study focused on California Medicaid, while this study included data from 17 states. In addition, LARC provision for this study was measured over a 3-day period, while the Thiel de Bocanegra et al. study examined LARC over a 99-day period.
Of great interest is the finding from this study that a woman's residence in the most vulnerable counties was significantly associated with the provision of both LARC and MMEC. However, variable associations were observed between the CDC's themes of social vulnerability, LARC provision, and MMEC uptake. For example, women residing in the most vulnerable counties in terms of housing and transportation had greater odds of LARC provision, whereas those living in counties with vulnerable socioeconomic status (which includes living below poverty, unemployment, low income, and no high school diploma) had lower odds of LARC provision. Similarly, women living in the most vulnerable counties in terms of socioeconomic status or household composition and disability status (age 65 years or older or 17 years or younger, older than age 5 with a disability, or single-parent households) had higher odds of MMEC uptake in the 60-day postpartum period, whereas those residing in the most vulnerable counties in terms of minority status and language, or housing and transportation, had lower odds of MMEC uptake. To our knowledge, this is the first study to examine postpartum contraception provision in the context of social vulnerability. The variable findings in this study are likely explained by a complex interplay of the various social determinants that drive healthcare decisions and outcomes. For example, it is possible that residing in locations with transportation issues may lead individuals to choose immediate LARC insertion after delivery so as to avoid traveling to a clinic for follow-up appointments. Vulnerability in terms of household composition, which may be related to greater caregiving needs or a lack of available childcare, may in turn motivate uptake of postpartum contraceptives. In addition, among Medicaid beneficiaries, the ability to pay for contraceptive medications continues to be a significant challenge. This is particularly relevant for 2014, this study's time period, because it predates the point at which most states provided reimbursement for immediate LARC insertion. While further research is needed to explore the interplay of these indicators of vulnerability, individual race or ethnicity, and postpartum contraceptive use, it is clear that women residing in counties with greater vulnerability need immediate attention from healthcare providers and policymakers alike in order to improve the quality of maternal care and outcomes from pregnancy.
Implications for Policy and/or Practice
While this study used data from 2014 to estimate the use of postpartum contraception, the relationships found here may still be valuable. For instance, the American College of Obstetricians and Gynecologists (ACOG) recommends immediate postpartum LARC insertion (within 10 min after delivery) [40], and the low rates of postpartum contraceptive use both in this study and in the recent literature [33] suggest that providers and patients may not be adhering to the ACOG recommendations. This study highlights the importance of counseling in addition to access: it is not sufficient that reimbursement for services is available; patients must also be made aware of these services. While this study did not evaluate receipt of such services, clinicians should endeavor to incorporate prenatal counseling around postpartum contraception use as part of routine prenatal care. In addition, given the significant variation observed across states in postpartum contraception provision, states with lower rates should consider targeted interventions and policies to improve access to postpartum services, to ensure that women residing in vulnerable regions are aware of postpartum services and able to access them as needed.
Limitations
There are several limitations inherent in the use of administrative claims databases. Administrative claims data do not capture services paid for in cash, so any use of postpartum contraception obtained outside the hospital and paid for in cash, particularly moderately effective contraception, will not be captured, meaning actual use may be underestimated. In addition, because administrative claims data do not collect information about SDoH factors, individual-level factors could not be assessed, and the analysis was therefore conducted at the county level. As such, the results from this study are susceptible to ecological fallacy, and inferences about individual behavior should be made with caution. Further, the data used in this study are from 2014, prior to policy changes that many states may have implemented with respect to reimbursement for maternal care in general or contraceptive use specifically. Therefore, the rates presented in this study may not be representative of the rates of contraceptive use today. While several states had chosen to expand Medicaid by 2014, this study elected not to account for Medicaid expansion status because Medicaid enrollment was directly captured at the individual level. Nevertheless, there may be other differences between individuals living in expansion vs. non-expansion states that may not be fully captured by whether or not a state chose to expand Medicaid. Future research should examine the impact of Medicaid expansion on postpartum contraception over and above Medicaid enrollment. Finally, the results of this study are only representative of Medicaid beneficiaries from the 17 states used in the analysis and may not be generalizable to all states.
Conclusions
Immediate and timely provision of effective postpartum contraception has been shown to be very effective in reducing closely spaced pregnancies. However, the provision of effective postpartum contraception remains low in the US, with significant sociodemographic, SDoH, and geographic variations. Targeted interventions by federal and state healthcare providers and payers are needed to ensure that women are aware of the availability of postpartum contraception and are able to access it as needed. | 2022-02-06T16:12:21.832Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "5ea23d82356aa05657b21cda52a15238bd8eeabe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9032/10/2/298/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dc18a18724ce051df6a4230109e9978bbef5e43f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
796779 | pes2o/s2orc | v3-fos-license | An assessment of the effect of hepatitis B vaccine in decreasing the amount of hepatitis B disease in Italy
Background Hepatitis B (HBV) infection is an important cause of morbidity and mortality and is associated with a higher risk of chronic evolution in infected children. In Italy, anti-HBV vaccination was introduced in 1991 for newborns and twelve-year-old children. Our study aims to evaluate time trends of HBV incidence rates in order to provide an assessment of the health impact of compulsory vaccination. Method Data concerning HBV incidence rates from the Acute Viral Hepatitis Integrated Epidemiological System (SEIEVA) were collected from 1985 to 2006. SEIEVA is the Italian national surveillance system that registers acute hepatitis cases. Time trends were analysed by joinpoint regression using Joinpoint Regression Program 3.3.1 according to Kim's method. A joinpoint represents the time point at which a significant trend change is detected. Time changes are expressed in terms of the Expected Annual Percent Change (EAPC) with 95% confidence interval (95% CI). Results The joinpoint analysis showed statistically significant decreasing trends in all age groups. For the age group 0–14, the EAPC was -39.0 (95% CI: -59.3; -8.4) in the period up to 1987 and -12.6 (95% CI: -16.0; -9.2) thereafter. EAPCs were -17.9 (95% CI: -18.7; -17.1) and -6.7 (95% CI: -8.0; -5.4) for the 15–24 and ≥25 age groups, respectively. No joinpoints were found for the 15–24 and ≥25 age groups, whereas a joinpoint at year 1987, before compulsory vaccination, was highlighted in the 0–14 age group. No joinpoint was observed after 1991. Discussion Our results suggest that the introduction of compulsory vaccination may have contributed only partly to the decrease in HBV incidence rates. The health impact of compulsory vaccination should be investigated further in future studies to evaluate the need for changes in the current vaccination strategy.
Background
HBV infection is an important cause of morbidity and mortality. The World Health Organisation (WHO) estimates that two billion people worldwide have serological evidence of past or present HBV infection [1].
The prevalence of chronic HBV infection is low (<2%) in the general population in Northern and Western Europe, North America, Australia, New Zealand, Mexico, and Southern South America. The prevalence of chronic HBV infection is intermediate (2%-7%) in South Central and Southwest Asia, Israel, Japan, Eastern and Southern Europe, Russia, most areas surrounding the Amazon River basin, Honduras, and Guatemala. The prevalence of chronic HBV infection is high (>8%) in all Countries in Africa, Southeast Asia, the Middle East (except Israel), Southern and Western Pacific islands, the interior Amazon River basin and certain parts of the Caribbean (Haiti and the Dominican Republic) [2].
In Italy, the prevalence of HBV infection has been under 2% since the beginning of the twentieth century. The most important routes of transmission are sexual intercourse, intrafamilial contacts, and i.v. drug use [3]. The trend of HBV infection has changed over the years. There were two important downward shifts in the seroprevalence of infection: one at the beginning of the eighties, related to improved socio-economic conditions and to the reduction in family size [4], and one at the end of the eighties, after the spread of HIV infection and before compulsory vaccination.
In 1985, the Acute Viral Hepatitis Integrated Epidemiological System (SEIEVA) was established [5]. The national surveillance system documented an impressive reduction in the incidence of HBV infection, from 12/100,000 to 5.1/100,000, over the 1985-1991 period, reporting the highest number of cases among individuals 15-24 years old and among males [6]. From the start of the compulsory vaccination campaign in 1991, there was a further fall in HBV incidence, with a reduction of 40% from 1988-91 to 1991-99. The incidence reduction was 66% among individuals aged 0-14 years and 59% among those aged 15-24 [6].
Compulsory vaccination was introduced mainly because of the high risk of chronic evolution of the infection in children. SEIEVA data demonstrated a stabilisation of the epidemiological trend of infection, with a mean incidence of 1.65 cases per 100,000 in the last 6 years available [7]; this trend was also demonstrated in other European nations [8].
Some aspects of the anti-hepatitis B vaccine, such as the duration of immune memory and the failure rate, still need to be investigated in depth [9,10].
Our study aims to evaluate the epidemiology of HBV infection in Italy and to provide an assessment of the health impact of compulsory vaccination by studying time trends through the use of joinpoint regression. This statistical technique highlights the time points that divide periods characterised by different time trends. It therefore represents an innovative approach for investigating in depth the decreasing trends in HBV incidence rates described by other authors [6,8,11-14] and for providing additional insight into the vaccine's impact.
Data and setting
HBV incidence rates, reported by SEIEVA, were collected from 1985 to 2006 [7]. SEIEVA is a surveillance system that covers 57% of the Italian population and aims to investigate the epidemiology of acute viral hepatitis. The system is coordinated by the National Institute of Health, and each Local Health Unit (LHU) can join the system voluntarily.
Data comprised incidence rates per 100,000 and were stratified by age (0-14; 15-24; ≥25). Rates were computed by dividing the number of cases by the total population of each participating LHU. In the surveillance system, a diagnosis of acute hepatitis B was made when serologically confirmed positivity for IgM anti-HBcAg was found.
Since SEIEVA data were available for the period 1985-1991, before the introduction of mass vaccination, the study of time trend changes was possible.
In Italy, another national database on HBV infections (SIMI) is maintained by the Italian Public Health Ministry [15]. However, this database is not exclusively devoted to this type of infection but covers all notifiable infectious diseases. We could not perform the same evaluation done with the SEIEVA surveillance on SIMI, since SIMI data were available only from 1996.
Statistical analysis
The analysis of SEIEVA data was carried out for three different age groups (0-14; 15-24; over 25 years) and for all ages together. Incidence rate time trends were analysed by joinpoint regression according to Kim's method [16].
The following formula was used for the logarithmic transformation of incidence rates: ln(y) = a + bx, where x represents the calendar year, a is the intercept, b is the regression coefficient, and y is the incidence rate.
A joinpoint represents the time point at which a significant trend change is detected. Time changes are expressed in terms of the Expected Annual Percent Change (EAPC), computed from the regression coefficient as EAPC = 100 × (e^b − 1), with the respective 95% confidence interval; the significance level of each time trend is also reported. The null hypothesis was tested using a maximum of three changes in slope, with an overall significance level of 0.05 divided by the number of joinpoints in the final model.
For the analysis we used the Joinpoint Regression Program, Version 3.3.1 [17].
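To make the method concrete, the sketch below (in Python, not the Joinpoint Regression Program itself) fits the log-linear model on each side of a fixed candidate joinpoint and converts each slope to an EAPC via 100 × (e^b − 1); the incidence series is fabricated for illustration and is not the SEIEVA data.

```python
# Illustrative piecewise log-linear fit around a fixed candidate joinpoint.
# The real Joinpoint program also searches over joinpoint locations and
# tests their significance; this sketch only shows the EAPC arithmetic.
import numpy as np

years = np.arange(1985, 2007)
rates = 6.0 * np.exp(-0.45 * (years - 1985))  # fabricated steep early decline
rates[years > 1987] = rates[years == 1987] * np.exp(-0.13 * (years[years > 1987] - 1987))

def eapc(x, y):
    """Slope of ln(y) on x, expressed as an Expected Annual Percent Change."""
    b = np.polyfit(x, np.log(y), 1)[0]
    return 100 * (np.exp(b) - 1)

joinpoint = 1987
before, after = years <= joinpoint, years >= joinpoint
print(f"EAPC before {joinpoint}: {eapc(years[before], rates[before]):.1f}%")
print(f"EAPC after  {joinpoint}: {eapc(years[after], rates[after]):.1f}%")
```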
Results
In the 1985-2006 period, a strong reduction of hepatitis B incidence rates was observed in all age groups (Figure 1). SEIEVA data showed the highest incidence rates of hepatitis B in individuals belonging to the 15-24 and ≥25 age groups. Over the period 1985-2006, the incidence rate fell from 6.00 to 0.02 per 100,000 in the 0-14 age class, from 41.00 to 0.50 in the 15-24 group, and from 7.00 to 2.30 in individuals aged 25 years or more. Considering all age groups together, the incidence rate decreased from 12 per 100,000 to 1.6 per 100,000.
The joinpoint analysis likewise showed a statistically significant decrease in HBV infection incidence rates, in particular in the 0-14 and 15-24 age groups.
Time trend changes are illustrated in Figures 2, 3, 4 and 5.
Discussion
Hepatitis B incidence rates decreased in each age group throughout the period considered.
According to the joinpoint analysis of SEIEVA data, a statistically significant change in the HBV incidence rate time trend was found for the 0-14 age group before the introduction of compulsory vaccination (up to 1987). In particular, a smaller decrease of HBV incidence rates was observed in the later period (1987-2005) than in the first one (1985-1987). Nevertheless, in this age group, prevalence rates of HBV serological markers are estimated to be low in areas of low/intermediate endemicity [19]. Moreover, the decrease of HBV incidence rates before compulsory vaccination could be related to the strong recommendation, in place since 1984, of HBsAg screening for pregnant women during the last trimester of pregnancy [20]. In low/intermediate endemic areas, such as Italy as a whole and the other Southern Mediterranean European regions, horizontal transmission is the main route of acquiring infection, which results in the highest HBV incidence rates occurring among adults [21]. Improved sanitation, achieved through universal precautions in medical settings and blood screening, together with social, behavioural and demographic changes and sexual education campaigns, seems to have been effective in reducing horizontal transmission in these countries, and there is some evidence that the highest HBV incidence rates are to be expected in adults older than 50 [19,22]. These same changes could be positively associated with the decrease of HBV incidence rates observed among people aged 15 to 24 years and those aged 25 years or older. HBV incidence rates have progressively decreased over the years in all age groups, even if the EAPC was smaller in the over-25 group than in the other groups. The incidence rate reduction in people over 25 could also be partly attributed to herd immunity induced by the high coverage rate of childhood immunisation [23].
The introduction of compulsory vaccination led to a reduction in HBV incidence rates; according to our analysis, however, this decrease may not have been driven by the primary prevention sustained by vaccination strategies alone. This is also supported by the evidence of a joinpoint at the year 1992: after this year, the decrease in HBV incidence rates was smaller than before. Moreover, the vaccination of high-risk adults, such as injection drug users and persons at risk of sexual transmission, should be promoted. In fact, there is evidence that, despite recommendations, these groups of adults often remain unvaccinated [11,12,24]. This is also consistent with the EAPC value.
Our study has both strengths and limitations. Regarding the former, this is the first time that the joinpoint regression model has been used to assess the time trends of a particular infectious disease in Italy, thus providing some insight into the effectiveness of a specific vaccination campaign. The principal limitation of our study concerns internal validity: unfortunately, there is a lack of data before 1985, which could have helped us to better estimate time trend changes in HBV incidence rates. Another issue relates to external validity: the SEIEVA data system may not completely represent the national epidemiological setting. Nevertheless, it should be considered that the wide distribution of participating LHUs allows standard approaches and procedures for anti-HBV vaccination [13].
This study also underlines the importance of correct and exhaustive data collection in a surveillance system for conducting surveys on the efficacy of public health interventions.
Since the results of this study should be considered preliminary, we suggest gathering further evidence on the health impact of the anti-HBV vaccine in order to evaluate the possible need to modify the current vaccination strategy. | 2016-05-04T20:20:58.661Z | 2008-07-24T00:00:00.000 | {
"year": 2008,
"sha1": "13965555c42ac097951917a1085e99d848481e69",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-5-84",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "843eca5a89862cad7c6e4df06d7b07de65e9d6d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
25726844 | pes2o/s2orc | v3-fos-license | Phaseolus acutifolius Lectin Fractions Exhibit Apoptotic Effects on Colon Cancer: Preclinical Studies Using Dimethilhydrazine or Azoxi-Methane as Cancer Induction Agents
Phaseolus acutifolius (Tepary bean) lectins have been studied as cytotoxic molecules on colon cancer cells. The toxicological profile of a Tepary bean lectin fraction (TBLF) has shown low toxicity in experimental animals, exhibiting anti-nutritional effects such as a reduction in body weight gain and a decrease in food intake when using a dose of 50 mg/kg on alternate days for six weeks. Taking this information into account, the focus of this work was to evaluate the effect of the TBLF on colon cancer using 1,2-dimethylhydrazine (DMH) or azoxymethane/dextran sodium sulfate (AOM/DSS) as colon cancer inducers. Rats were treated with DMH or AOM/DSS and then administered TBLF (50 mg/kg) for six weeks. TBLF significantly decreased early tumorigenesis triggered by DMH by 70%, but without any evidence of an apoptotic effect. In an independent experiment, AOM/DSS was used to generate aberrant crypt foci, which decreased by 50% after TBLF treatment. TBLF exhibited antiproliferative and proapoptotic effects related to a decrease of the signal transduction pathway protein Akt in its activated form and an increase of caspase 3 activity, but not to p53 activation. Further studies will deepen our knowledge of specific apoptosis pathways and cellular stress processes such as oxidative damage.
Introduction
Plant lectins are proteins that are able to bind to free or cell membrane carbohydrates, showing a high specificity for the recognition of surface glycosylations. Given these properties, they have been used for several disease diagnoses and as potential therapeutic agents, including against cancer [1][2][3]. Several studies have shown that traditional-use plant lectins exhibit biological effects such as cell agglutination, reactive oxygen species generation, and mitogenic or cytotoxic effects on different cells
Results and Discussion
In previous work, we showed that protein inhibitors from Tepary beans did not exhibit cytotoxic effects, but that the TBLF is able to induce cell death in a differential manner in different cancer cells [11]. Prior to testing the effect of TBLF against colon cancer in rats, we confirmed that the non-lectin proteins (NLP) contained in the TBLF were not responsible for the cytotoxic effect. The ion exchange chromatogram obtained from the TBLF is shown in Figure 1. All fractions with no agglutination activity were labelled as non-lectin protein (NLP); fractions with agglutination activity were pooled and labelled as lectin. The electrophoretic profile shows that the characteristic lectin band was separated from the rest of the protein, and the separation method showed good reproducibility. The lectin fraction was insufficient to perform the cytotoxic analysis, but it was possible to test the effect of the NLP on cell proliferation. As cell harvesting was not achieved using trypsin, we used an image analyzer to measure the cytotoxicity of the NLP. Image variables showed good correlation with cell proliferation by direct counting: cell circularity (−0.972, p < 0.001), Feret's diameter (0.854, p = 0.015), and cell perimeter (0.899, p = 0.006), but cell area did not show correlation (0.578, p = 0.174). Based on this, we used only the significant image parameters and determined that lectins are the molecules mainly responsible for the cytotoxic effect, since the NLP exhibited low cytotoxicity compared with the TBLF (Figure 1C).
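For illustration, the correlation step described above can be reproduced with a simple Pearson analysis; the values below are invented, since the raw well-level measurements were not reported, and only mimic the direction of the published coefficients.

```python
# Hypothetical example of correlating an image-derived descriptor with direct
# cell counts; the numbers are invented (the study reports r = -0.972 for
# circularity, 0.854 for Feret's diameter, and 0.899 for perimeter).
from scipy.stats import pearsonr

cell_counts = [120, 95, 80, 60, 45, 30, 20]
circularity = [0.35, 0.42, 0.48, 0.55, 0.63, 0.70, 0.78]  # rises as counts fall

r, p = pearsonr(cell_counts, circularity)
print(f"r = {r:.3f}, p = {p:.4f}")  # strongly negative, as reported
```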
Two in vivo experiments were performed to study the effects of TBLF treatment on induced colon cancer. The first set of experiments consisted of inducing tumors in colonic tissue using DMH. Previous work showed that TBLF did not exhibit toxic effects in Sprague Dawley rats when orally administered at a dose of 50 mg/kg body weight; the only adverse effect observed until then was a 10% reduction in body weight gain [25]. Our results showed that TBLF caused a statistically significant decrease in body weight gain of about 10%, and DMH administration caused a decrease of 5% (p < 0.0001, F = 21.68, df = 71). The DMH/TBLF-treated group showed a body weight gain reduction of up to 20%, with a recovery of 10% at the end of the treatment (Figure 2A). This indicated a slight detrimental effect of DMH on body weight gain; but, as TBLF naturally provokes a lessening in this parameter, it appeared that the co-administration of DMH/TBLF had a potentiated effect through the treatment, which concluded with a partial recovery of body weight. In terms of food intake, there were variations in food consumption throughout the experiment; the decrease of 25% for the TBLF and DMH/TBLF treatments even showed a complete recovery at the end of the experiment (Figure 2B), but there was no statistically significant difference (p = 0.4765, F = 0.8383, df = 71). These results suggested that the detrimental effects on body weight gain and food intake were mainly related to the TBLF effect as an anti-nutritional factor. Besides anti-nutritional effects, TBLF could also exhibit effects on the immune system, mainly by increasing granulocytes and decreasing lymphocytes, without effects on hepatic, renal or pancreatic markers [25], suggesting no toxic effects.
Tumor induction with DMH is shown in Figure 3. Histopathological findings showed different stages of tumor progression that were classified as inflammation, premalignant damage, low-grade damage, and high-grade damage [27]. TBLF increased inflammation in colon tissues. Lectins are considered anti-nutritional factors [2,28] because they can resist gut digestion for several hours, or even days [25]. They bind to intestinal membrane glycoproteins or glycolipids, affecting nutrient absorption [7,29], and also activate the immune response [25,30], thereby provoking inflammation. Previous studies have shown that TBLF resists gut digestion, since agglutination activity was detected in faeces for at least 72 h, with immune response activation [25]; this suggests that the lectins can interact with intestinal and colonic epithelia, leading to an inflammation process.
DMH provokes different degrees of precancerous or cancerous injuries. Our results showed that DMH/TBLF treatment decreased premalignant lesions by 70%. Other lectins tested in different models have also shown negative effects on tumorigenesis [31,32]. When proliferation and apoptotic markers were studied by immunohistochemical analyses, no differences were observed between the control rats and any of the treated groups (Figure 4). These results may be related to the tumor progression stage, as the main activity of TBLF was observed in early tumorigenesis. Changes in the glycocalyx are essential for lectin recognition, and it is well known that tumor progression is related to modifications in membrane glycosylation. Cancer cells display different aberrant membrane glycosylation patterns depending on the type of cancer and the tumor stage [21]; therefore, TBLF was most likely unable to recognize low- and high-grade tumor cells.
As our results showed effects of TBLF on early tumorigenesis, a second set of experiments was performed using AOM/DSS in order to induce aberrant crypt foci (ACF). AOM/DSS, alone or in combination with TBLF, had a negative effect on body weight of up to 25%, which recovered by up to 5% at the end of the treatment (Figure 5A); the results were statistically significant (p = 0.0004, F = 7.33, df = 67). Food intake also differed significantly in the AOM/DSS- and AOM/DSS-TBLF-treated groups (Figure 5B) (p = 0.0001, F = 12.6, df = 67). In this experiment, TBLF showed an effect on body weight gain similar to that observed previously, and the decrease in food intake was consistent with the anti-nutritional effects reported for different lectins.
Colon histopathology assays showed normal colonic tissue architecture in the control rats [27], while rats treated with TBLF showed an atrophic effect on the intestinal epithelium, with decreased length and fusion of the colonic structures (Figure 6). These results were consistent with other studies; for example, Daprà et al. [23] observed that legume lectins provoked lymphocyte infiltration in the intestinal tissue as well as villi shortening and widening in trout (Oncorhynchus mykiss). Since lectin-induced damage to the villi can alter organ architecture, this was likely related to the decreases in weight gain and food consumption. AOM/DSS treatment induced inflammation in 15% and ACF in 60% of the rats' colon tissues and disrupted the entire architecture [27]. The co-administration of AOM/DSS-TBLF reduced the ACF by 50% and partially restored the tissue architecture, although atrophy was still observed. This result showed the effect of TBLF in the early stages of colon tumorigenesis and was consistent with studies reporting anticancer effects of lectins such as Concanavalin A in other animal models [4,32], and Eucheuma serra agglutinin in mice [4]. Such results agree with the observation that the glycosylation changes accompanying tumor progression affect the affinity of TBLF for cancer cells at different stages.

Different studies have focused on elucidating the signaling pathways triggered by lectins, where it has been observed that lectins from different sources share some mechanisms of action on cell proliferation, survival and apoptosis induction [33]. Immunohistochemical analysis of proliferating cell nuclear antigen (PCNA) allowed us to observe that TBLF caused a significant reduction in cell proliferation when it was administered after the carcinogen (Figure 7). These results were consistent with in vitro and in vivo studies of different lectins on cancer [7,8,32,34,35].
The Akt pathway was evaluated due to its participation in proliferation-apoptosis regulation and because it is commonly deregulated in colon cancer [36,37]. The Akt pathway increases cell proliferation by phosphorylating GSK-3, a blocking agent of Cyclin-D, and decreases apoptosis through the phosphorylation of caspase 9. No changes between the control and TBLF-treated animals were observed; however, AOM/DSS produced an increase in p-Akt, as reported in colon cancer models [38], and also a slight increment of p-caspase 9. Nevertheless, p-GSK-3 did not differ with respect to the control rats. These results showed activation of the Akt pathway by AOM/DSS in colon tissue. When rats were treated with AOM/DSS-TBLF, a decrease in p-Akt was observed with respect to the AOM/DSS-treated animals, suggesting that TBLF may block the Akt pathway. Given those results, no increase in p-GSK-3 and p-caspase 9 was expected; however, p-GSK-3 increased significantly, perhaps due to the participation of other routes such as the protein kinase A (PKA) or protein kinase C (PKC) pathways [39].
To assess whether cell death occurred by apoptosis induction, the presence of p53, Bcl-2, caspase 3 and cytochrome-c was evaluated. Neither AOM/DSS nor TBLF alone provoked an increment of caspase 3, but the co-treatment of AOM/DSS-TBLF showed a significant increase. Furthermore, the AOM/DSS-TBLF treatment showed an increase in cytochrome-c and a decrease in the anti-apoptotic protein Bcl-2. A significant increase of p53 was observed in the AOM/DSS treatment (p < 0.05) (Figure 8). The mutations induced by AOM/DSS are well characterized, especially the high-frequency mutations in KRas [40,41], which deregulate the corresponding signaling cascade and allow sustained proliferative activity [42]. However, AOM/DSS treatment in colon cancer models has not been related to mutations in the p53 gene; rather, an increase has been observed in the expression of this protein [38]. Studies in colitis and colon cancer models using AOM/DSS as the carcinogen have shown an increase in both the expression and the amount of p53; however, no mutations have been observed [43,44]. Although the p53 protein is typically involved in repairing damage or inducing apoptosis depending on the extent of DNA damage, a mechanism called apoptosis-induced proliferation (AiP) has been described [38,45], in which an exacerbated apoptotic process leads to a release of mitogens that induce cell proliferation. This process has been implicated in intestinal cell growth in mice, where it was promoted by an increase in p53 activity [45]. In the same way, dying xenograft cells in mice treated with radiotherapy showed an increase in caspase 3 activity, which promoted tumor repopulation [46]. Therefore, AOM/DSS appears to induce the AiP process as part of its mechanism of action in the development of colon cancer. The AOM/DSS-TBLF group showed no statistically significant differences in p53 when compared with the control group, suggesting an inhibitory effect of TBLF on AOM/DSS cancer induction.
In order to corroborate the immunohistochemical results, the gene expression of TP53, Bcl-2, caspase 9, GSK-3 and Akt was determined in colonic tissues (Figure 9). A significant increase in TP53, Bcl-2 and Akt expression was observed for the AOM/DSS treatment, but no differences were found for the AOM/DSS-TBLF-treated rats with respect to the control. Caspase 9 gene expression was significantly increased by the AOM/DSS-TBLF treatment, suggesting apoptosis induction by TBLF. These results fully agree with the immunohistochemical analyses described previously and, taken together, suggest a p53-independent apoptotic activity of TBLF on early precancerous colonic tissue.

It has been reported that several lectins induce apoptosis through caspase activity in different cancer cell lines [47] and in some animal models [32,44]. Mistletoe lectins (Viscum album L. var. coloratum) have shown anticancer effects by inducing caspase-mediated apoptosis in melanoma [44]. Eucheuma serra lectins have exhibited a caspase-mediated apoptotic effect in colon adenocarcinoma cells (Colon26) and in BALB/c mice with colon cancer [8]. The Akt-mediated signaling pathway has also been studied, and the results showed that lectins such as those from Canavalia brasiliensis and Viscum album coloratum could induce cell death by deregulating this pathway [8,19,48]. The effect of Con-A was found to depend on a decrease in p-Akt, an increase in cytochrome-c and an increase in caspase 9 activity [19,43]. In contrast, Korean mistletoe lectins (VAL) can induce p53-independent apoptosis, causing a decrease in Bcl-2 levels, an increase in cytochrome-c and telomerase inhibitory activity in hepatic carcinoma [49]. Here, our study showed for the first time that Tepary bean lectins were able to induce apoptosis in colon precancerous lesions in a p53-independent way, suggesting Akt pathway blockade.

Figure 9. Evaluation of the gene expression of TP53, Bcl-2, caspase 9, GSK-3, and AKT in rat colon tissues administered with AOM/DSS and treated with TBLF. Rats were intraperitoneally administered for two weeks with AOM and a further two weeks with DSS in their daily drinking water to develop ACF. Next, they were administered with TBLF (50 mg/kg) on alternate days for six weeks via intra-gastric cannula. The results are shown as relative expression with respect to the constitutive gene HPRT and as a proportion of the control ± SD. (*) indicates significant difference p ≤ 0.05 and (**) indicates significant difference p ≤ 0.01 by one-way ANOVA with Dunnett post hoc.
Tepary Bean Lectin Fraction (TBLF) Extraction
Tepary bean (Phaseolus acutifolius) seeds were obtained from a local market in Hermosillo, Sonora, México. A sample of Tepary bean was deposited and identified in the herbarium of Dr. Jerzy Rzedowski of the Natural Sciences Faculty, Autonomous University of Querétaro, Santiago de Querétaro, Mexico. The TBLF was obtained as described previously [17,50]. In brief, ground bean seeds were degreased and an aqueous extract was obtained. A selective sequential precipitation (40–70% ammonium sulfate) was performed, and the protein was dialyzed and separated by molecular weight exclusion chromatography using a Sephadex G-75 column (Sigma-Aldrich Co. LLC, St. Louis, MO, USA). Absorbance at 280 nm was measured for protein profile determination using a Beckman DU-65 spectrophotometer (Beckman Coulter Inc., Brea, CA, USA); agglutination activity was determined [51], and the samples were lyophilized and stored at −20 °C for further use.
Confirmation of the Cytotoxic Role of Lectins in the TBLF
TBLF was subjected to ion exchange chromatography using an Econo Pack High Q cartridge (ECONO, Bio-Rad Laboratories, Inc., Hercules, CA, USA) at a flow rate of 1 mL per min using a NaCl gradient (0.1 to 1.0 M). A total of 120 fractions were collected and the presence of agglutination activity was determined. Two pools were obtained, one with all the fractions with no agglutination activity (non-lectin protein, NLP) and the other containing the lectins. The two pools were dialyzed, lyophilized and stored at −20 °C. Cytotoxicity of the NLP was determined using 3T3/v-mos cells [11]. Briefly, cells were plated in 24-well plates (1 × 10⁴ cells/well) in Dulbecco's Modified Eagle's Medium (DMEM) with 10% foetal bovine serum (FBS). After 24 h, the medium was changed to DMEM with 2% FBS for cell cycle synchronization for a further 24 h. Subsequently, the cytotoxic effect of the NLP was determined and compared to a TBLF positive control, using a protein concentration of 0.4 mg/mL for both treatments in DMEM with 0.5% bovine serum albumin (BSA). Cells were incubated for 8 h and the cytotoxic effect was determined by image analysis (IMAGEJ 1.50b, National Institutes of Health, Bethesda, MD, USA) using an inverted microscope (Carl Zeiss Axiovert A1, Carl Zeiss Inc., Oberkochen, Germany) at 10× magnification to measure the following morphometric parameters: area, perimeter, Feret's diameter and circularity. As a first step, images of the treated cells were processed by converting the colored image to an 8-bit format (grey-scale image). Then, an automatic threshold using the default algorithm was applied to obtain a binary image [52]. Finally, cells were highlighted with the "fill" tool in the same software. The correlation of each morphometric parameter with cell proliferation determined by direct counting was assessed, and only parameters with a significant correlation were used for the cytotoxic evaluation. The criterion to discriminate between living and dead cells was based on the statistical evaluation of the morphometric parameters using ANOVA, Tukey and Fisher tests (p ≤ 0.05) in MINITAB 17 (Minitab Inc., Harrisburg, PA, USA) with respect to control cells, for which a confidence interval for each significant parameter was selected: perimeter (90.0 to 96.8 µm), Feret's diameter (32.0 to 34.0 µm) and circularity (0.31 to 0.35); cells whose parameter values fell outside these intervals were classified as dead cells.
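A minimal sketch of this final classification step is shown below, assuming the per-cell morphometric measurements have already been exported from ImageJ; the interval values are those reported above, while the data layout, function names and example measurements are illustrative.

```python
# Minimal sketch of the morphometric live/dead classification described above.
# Assumes per-cell measurements (e.g., exported from ImageJ) are already available;
# the confidence intervals are those reported in the text, all other names are illustrative.

# Confidence intervals for living cells (perimeter and Feret's diameter in µm)
LIVE_INTERVALS = {
    "perimeter": (90.0, 96.8),
    "feret_diameter": (32.0, 34.0),
    "circularity": (0.31, 0.35),
}

def classify_cell(cell):
    """Classify a cell as 'live' only if every significant parameter
    falls inside its control-derived confidence interval."""
    for param, (lo, hi) in LIVE_INTERVALS.items():
        if not (lo <= cell[param] <= hi):
            return "dead"
    return "live"

# Hypothetical measurements for two cells
cells = [
    {"perimeter": 93.1, "feret_diameter": 33.0, "circularity": 0.33},
    {"perimeter": 70.4, "feret_diameter": 25.2, "circularity": 0.55},
]

for i, cell in enumerate(cells, start=1):
    print(f"cell {i}: {classify_cell(cell)}")
```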
Experimental Animals and Cancer Induction
Five-week-old male Sprague Dawley rats were obtained from the Neurobiology Institute-UNAM, Juriquilla, México. The animals were maintained with food and water ad libitum, under a circadian cycle of 12 h light and 12 h dark at 25 °C. All experiments started after a week of adaptation and were conducted following the procedures of the Official Mexican Norm [29] for the use of laboratory animals and with the approval of the Bioethics Committee of the Neurobiology Institute, UNAM, México. Two independent experiments were performed (Figure 10). In the first experiment (n = 10 per group), colon tumors were induced by subcutaneous injection of 40 mg/kg body weight of 1,2-dimethylhydrazine (DMH, D161608-100G, Sigma Aldrich, St. Louis, MO, USA) in 0.5 mL of saline solution (SS, 0.9% NaCl) twice a week for eight weeks, followed by two weeks without treatment and six weeks of TBLF (50 mg/kg every third day). The second experiment (n = 7 per group) was designed to provoke aberrant crypt foci (ACF) using 10 mg/kg body weight of azoxymethane (AOM, A5486-100MG, Sigma Aldrich, St. Louis, MO, USA) in 0.5 mL of 0.9% SS by a weekly intraperitoneal injection in weeks one and three, followed by one week of 2% dextran sodium sulfate (DSS) in the daily drinking water as a cancer promoter. After five weeks without treatment, TBLF was administered at a dose of 50 mg/kg in 0.5 mL SS via intra-gastric cannula [25] on alternate days for six weeks. Weekly body weight changes and average food intake were monitored in both experiments. Euthanasia was performed by decapitation a week after the last dose of TBLF. The organs were dissected under the supervision of a veterinarian and fixed in 10% formaldehyde, and the intestines were perfused with the same solution to achieve the best preservation. Tumor classification was performed by a veterinary pathologist [27].

Figure 10. Two independent experiments were performed. Cancer induction with DMH was achieved after eight weeks of treatment to generate colon tumorigenesis, while AOM induction took two weekly doses to generate aberrant crypt foci. TBLF was administered for six weeks every third day, and the rats were sacrificed a week later (black arrow).
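For clarity, the two induction timelines can be laid out week by week; the sketch below reconstructs them from the description above (the week numbering and schedule representation are assumptions inferred from the text, not the authors' protocol code).

```python
# Illustrative reconstruction of the two dosing timelines described above.
# Agents and durations follow the text; exact week numbering is an assumption.

def build_schedule(events, total_weeks):
    """Return a week-by-week list of treatments, defaulting to 'no treatment'."""
    return [events.get(week, "no treatment") for week in range(1, total_weeks + 1)]

# Experiment 1: DMH twice weekly for 8 weeks, 2 weeks rest, then TBLF for 6 weeks
dmh_events = {w: "DMH 40 mg/kg s.c., twice weekly" for w in range(1, 9)}
dmh_events.update({w: "TBLF 50 mg/kg, every third day" for w in range(11, 17)})

# Experiment 2: AOM in weeks 1 and 3, DSS the following week, 5 weeks rest, then TBLF
aom_events = {1: "AOM 10 mg/kg i.p.", 3: "AOM 10 mg/kg i.p.",
              4: "2% DSS in drinking water"}
aom_events.update({w: "TBLF 50 mg/kg, alternate days (intragastric)" for w in range(10, 16)})

for week, event in enumerate(build_schedule(dmh_events, 16), start=1):
    print(f"DMH experiment, week {week:2d}: {event}")
```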
Histopathology Analyses
The tissues were dehydrated and embedded in paraffin using an automated tissue processor (Leica TP1020, Leica Camera AG, Wetzlar, Germany). Tissue sections of 5 µm were obtained and mounted on gelatin-coated slides in hot water. Tissues were rehydrated and stained with hematoxylin and eosin, then dehydrated again through graded alcohols and sealed with Entellan and a coverslip. The analyses were performed under a microscope (Zeiss Axio Vert-A1, Carl Zeiss Inc., Oberkochen, Germany) at 5–10× magnification, based on the morphology of normal colonic tissue. The crypts of Lieberkühn normally appear as simple tubular glands; damage was classified according to morphological changes, inflammatory processes and loss of structure as premalignant, low-grade damage, high-grade damage or neoplasia [27].
Immunohistochemical Analyses for Proliferation and Apoptosis
Dehydrated samples were embedded in paraffin blocks and cut into 5-µm sections, which were mounted on positively charged slides (Thermo Fisher Scientific, Waltham, MA, USA) using a microtome and subsequently rehydrated. To unmask the epitopes, samples were placed in a water bath at 100 °C for 20 min in 0.1 M citric acid solution, pH 6.0. Endogenous peroxidases were quenched with 2% H₂O₂ for 30 min in phosphate-buffered saline with 1% Tween 20 (PBST). Blocking was performed with 1% bovine serum albumin (BSA) in PBST for 1 h, and the samples were incubated with primary antibodies (PCNA, caspase 3, p53, Bcl-2, p-Akt, p-GSK3, p-caspase 9 and cytochrome-c) overnight (approximately 16 h) at 18 °C. Secondary antibodies were added and incubated for 2 h at 18 °C. Staining was achieved using diaminobenzidine 1:8000 w/v and H₂O₂ 1:2500 in phosphate-buffered saline (PBS). The reaction was stopped using ddH₂O. Hematoxylin was used as a contrast agent; samples were dehydrated and mounted for observation under a microscope. Photographs were obtained and an optical densitometry analysis was performed using ImageJ® software (v.1.5, National Institutes of Health, Bethesda, MD, USA). Data are reported as arbitrary units ± SD.
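The densitometry readout can be illustrated as follows, assuming mean 8-bit gray values have been measured for each stained region in ImageJ; the log10(255/I) conversion is a common convention for 8-bit images, and since the exact procedure used here is not specified, the sketch is purely illustrative.

```python
# Illustrative optical-densitometry readout: mean 8-bit intensity of a stained
# region converted to optical density (darker, more stained pixels -> higher OD).
import math

def optical_density(mean_gray_value):
    """OD for an 8-bit image using the common log10(255 / I) convention."""
    i = max(mean_gray_value, 1.0)  # clamp to avoid log of zero at full saturation
    return math.log10(255.0 / i)

def summarize(region_means):
    """Return (mean OD, SD) in arbitrary units across measured regions."""
    ods = [optical_density(m) for m in region_means]
    mean_od = sum(ods) / len(ods)
    sd = (sum((x - mean_od) ** 2 for x in ods) / (len(ods) - 1)) ** 0.5
    return mean_od, sd

# Hypothetical mean gray values from three stained colon sections
print(summarize([120.0, 101.5, 132.8]))
```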
Statistical Analyses
Data were analyzed by one-way ANOVA with Tukey or Dunnett post hoc tests (p < 0.05). The results are presented as the average ± SD. Analyses were performed using IBM SPSS Statistics for Windows (Version 22.0, IBM Corp., Armonk, NY, USA).
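The same pipeline can be reproduced outside of SPSS; the sketch below, in Python, runs a one-way ANOVA followed by Dunnett and Tukey post hoc comparisons on hypothetical group data (the group means, sample sizes and variable names are invented for illustration; scipy.stats.dunnett and tukey_hsd require SciPy 1.11 or later).

```python
# One-way ANOVA with Dunnett and Tukey post hoc tests on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control  = rng.normal(1.00, 0.10, 7)   # hypothetical relative expression per rat (n = 7)
aom_dss  = rng.normal(1.60, 0.20, 7)
aom_tblf = rng.normal(1.05, 0.15, 7)

f_stat, p_value = stats.f_oneway(control, aom_dss, aom_tblf)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Dunnett: each treated group against the control, as used for Figure 9
dunnett = stats.dunnett(aom_dss, aom_tblf, control=control)
print("Dunnett p-values (AOM/DSS, AOM/DSS-TBLF vs. control):", dunnett.pvalue)

# Tukey HSD: all pairwise comparisons, as used elsewhere in the study
print(stats.tukey_hsd(control, aom_dss, aom_tblf))
```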
Conclusions
TBLF affected DMH- and AOM/DSS-induced tumorigenesis in the colon, where only premalignant lesions or ACF were affected. The DMH experiment showed that TBLF decreases the incidence of premalignant lesions, although no evidence of apoptotic mechanisms was observed. On the other hand, our results suggested that the AOM/DSS effects were related to induction of the Akt pathway and the AiP process, since an increase in cellular proliferation was observed together with high levels of p53; however, more work is required to better understand these effects. The co-administration of AOM/DSS-TBLF returned p53 and PCNA to basal levels, suggesting an anti-proliferative effect of TBLF, and apoptosis was confirmed by the increase of caspase 9 gene expression, the decrease of Bcl-2, and the increments of caspase 3 and cytochrome-c proteins. The decrease of p-Akt after the AOM/DSS-TBLF treatment appears to be related to a blocking effect of TBLF; however, it is necessary to explore this pathway further. These results allow us to propose that TBLF induces apoptosis in the early stages of colon malignancy in a p53-independent way. Further research will focus on the signaling pathways triggered and/or altered by TBLF in colon cancer, and will include an evaluation of ROS-mediated effects, the extrinsic and intrinsic pathways of apoptosis, and other possibly triggered cell death processes to unravel the anticancer potential of TBLF as a therapeutic agent against colon cancer. | 2017-10-08T22:11:11.212Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "cdda595ac53d60965fcfb35b3ee289aa497c765f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/22/10/1670/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b89e854ea21ebf8f760f40643511971d5f563e96",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
220470645 | pes2o/s2orc | v3-fos-license | Bundle interventions including nontechnical skills for surgeons can reduce operative time and improve patient safety
Abstract Objective This study aimed to determine if introducing nontechnical skills to surgical trainees during surgical education can reduce the operation time and contribute to patient safety. Design Quality improvement initiatives using KAIZEN as a problem-solving method. Setting Department of surgery in a referral and educational hospital. Participants Surgical team and quality management team. Intervention KAIZEN was used as a problem-solving method between 2015 and 2018 to reduce the operation time. First, a baseline measurement was performed to understand the current situation in our department. To achieve continuous improvement, periodic feedback on the current status was shared with all staff. Bundles, including nontechnical skills, were established. Briefing and debriefing were performed by the surgical team. Main Outcome Measures Rates of excessively long operations for standard procedures. Results We included 1573 operations in this initiative. Excessively long operation rates were reduced in all types of surgeries: from 27.1% to 15.2% for herniorrhaphy (P = 0.005), from 58.3% to 40.0% for gastrectomy (P = 0.03), from 50.0% to 34.1% for total gastrectomy (P = 0.12), from 65.6% to 45.0% for colectomy (P = 0.004), from 67.8% to 43.2% for high anterior resection (P = 0.02) and from 69.6% to 47.9% for low anterior resection (P = 0.03). Adherence to briefing and debriefing improved, and the majority of surgeons favored the bundle elements. Conclusions The KAIZEN initiative was effective in clinical healthcare settings. When scaling up this initiative, the educational program for physicians should include project management strategies and leadership skills.
Introduction
Surgery is an important aspect of healthcare. Many life-threatening illnesses can be exclusively treated by surgical intervention, including malignancies such as gastrointestinal cancer and infectious diseases such as acute peritonitis. However, potential surgery-related adverse events pose considerable risks for patients [1]. Half of all surgery-related adverse events are preventable; additional efforts are needed to improve the healthcare system and reduce the risks associated with surgery [2].
Surgery-related adverse events are generally considered to occur due to deficiencies in surgical knowledge, surgical techniques or patient management. However, improving these factors alone is not enough to reduce patient harm, because surgery-related adverse events continue to occur [3][4][5]. Both tangible and intangible factors arising from the complexity of the healthcare system are thought to contribute.
Recently, the importance of nontechnical skills (NTSs) has become prominent [6]. NTSs are defined as the behavioral aspects of performance, derived from cognitive skills related to situational awareness and from the interpersonal skills that are necessary for making decisions. Clinicians without adequate NTSs cause more adverse events [7][8]. The World Health Organization (WHO) has proposed a curriculum for patient safety that includes NTSs [9]. In the United States, the American College of Surgeons and the Association of Program Directors in Surgery provide an education curriculum for surgeons that includes team-based skills [10]. Some studies have focused on improving communication in the operating room to improve patient safety by utilizing safety checklists during surgery [11]. However, NTS or team-based training is rarely implemented in most countries, including Japan [12]. Therefore, effective initiatives for the implementation of NTSs are necessary to improve patient outcomes.
Various studies have reported the existence of a surgical learning curve; that is, surgeons require a period of early experience before acquiring proficiency with reasonable outcomes. During this early phase, surgical outcomes are still changing, the operation time is longer and the risk of morbidity and mortality is significantly higher [13][14][15][16]. These studies have assessed the influence of these factors on patient outcomes and focused on the experience of the surgeons, differences between minimally invasive and conventional surgery, and different types of surgery. Since longer operations during the learning curve are reportedly associated with a poor prognosis and negatively affect patient safety, it is important to establish educational programs that reduce operation time [17]. However, few studies have reported initiatives for reducing the risk of poor prognosis and complications for inexperienced surgeons [18]. This study aimed to determine if the introduction of the NTSs involved in surgical procedures and surgical education would reduce the operation time, minimize complications and contribute to patient safety. This paper describes the process of our bundle implementation and its effects on the patient cohort.
Intended improvement
Quality improvement initiatives were performed following the decision to improve the surgical quality of our hospital. This initiative was based on the KAIZEN strategy and the TeamSTEPPS approach as a cultural change strategy [19]. Kaizen is a comprehensive approach to improving the quality of work in many fields and is currently applied in healthcare [20]. Though the term 'Kaizen' is used in various contexts, KAIZEN in this paper was used as a problem-solving strategy involving several sequential steps: understanding the current problem with data, setting clear targets, analyzing factors in context, developing countermeasures, implementing countermeasures and confirming their effects. Accordingly, the initiative was implemented in sequence, as follows: we shared a sense of crisis, developed a goal for our department, developed the bundles (including NTS elements), evaluated the intervention with run-chart feedback and revised it accordingly.
Setting
Our institution is an educational and referral hospital, and the WHO surgical checklist is standard for all operations [21]. In Japan, general surgical training lasts for 3 years after the completion of a 2-year fundamental postgraduate study; trainees must become
Baseline period and understanding of the current situation
Baseline measurements were taken in April 2016 and were based on the data obtained between April 2015 and March 2016. During that period, 240 herniorrhaphies, 84 partial gastrectomies, 50 total gastrectomies, 93 colectomies, 59 high anterior resections and 46 low anterior resections were performed. The mean operative times were 73.6 min for herniorrhaphies, 322.7 min for partial gastrectomies, 367.1 min for total gastrectomies, 271.5 min for colectomies, 311.3 min for high anterior resections and 385.9 min for low anterior resections. Following the baseline measurements, we decided to reduce the rate of excessively long operations and initiated interventions. Based on the cultural change strategy, we focused on building a shared mental model, including a sense of impending crisis and our goal of improving patient outcomes in the surgical department. These mental models were shared throughout the initiative.
Intervention period 1 (PDSA1)
The measurement data were disclosed at our departmental conferences and a quality indicator conference in our hospital. Staff members in our department were not satisfied with their operative times. Therefore, we facilitated a better understanding of the necessity to improve initiatives and strategies. This improvement strategy was discussed with all staff. During this period, discussions were focused on technical skills. It was difficult to achieve immediate improvements in surgical skills as the required technical education program had already been implemented. To improve surgical quality, the operation time for each type of surgery was analyzed and shared with all staff in the department for 3 months.
During plan-do-study-act (PDSA1), some improvements were detected in the operation time. Although some staff appeared to conduct preoperative briefings and improved their operative times, the measurements were not standardized. The director of the department declared the following principles for our department to confirm the shared mental model: (i) surgery is for the patient; (ii) priorities: the patient is the first priority, surgical education is the second priority; and (iii) excessive surgery may have a negative impact on patients. Consequently, presenting measurement data to the department had some positive effects on changing the performance of the trainees and instructors.
To visualize the gap between actual performance and the target, the following questions were asked for every type of surgery: (i) 'How long are your own operation times?' (assumption time) and (ii) 'What are the standard operative times in our department?' (target time) (Table 1). There was a large gap between the baseline measurement times and the surgeons' assumption and target times. This gap was immediately shared with our staff to motivate them to improve their operative times.
Intervention period 2 (PDSA2)
A run-chart of operative time for each type of surgery was made between PDSA1 and PDSA2 (Supplementary Figures 1–6). The possible reasons for the observed trends were discussed using these charts. This initiative did not focus on the individual skills of each surgeon, because prolonged surgeries were observed for all surgeons, including the instructors/experts. Instead, systems thinking was used to improve patient safety and reduce operative time. The bundle developed in this initiative is summarized in Table 2. It includes situational awareness, decision-making, communication, teamwork and leadership, with shared mental models among the surgical team [22]. Operation time was classified into four grades, based on the type of surgery. Intraoperative management criteria and guidelines were established for instructors to enhance decision-making. The principles, standard operative times and specific techniques were announced at a conference and displayed on a bulletin board in our office.
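A run chart of the kind used for this feedback is straightforward to generate; the sketch below plots a hypothetical monthly median operative time against an assumed standard time using matplotlib (all numbers are illustrative, not the study data).

```python
# Sketch of a run chart for feedback between PDSA cycles: monthly median
# operative time for one procedure compared against a departmental standard.
import matplotlib.pyplot as plt

months = list(range(1, 13))
median_op_time = [310, 322, 298, 305, 290, 284, 279, 270, 265, 258, 252, 249]  # minutes
standard_time = 270  # assumed standard time for this procedure type

plt.plot(months, median_op_time, marker="o", label="monthly median")
plt.axhline(standard_time, linestyle="--", label="standard time")
plt.xlabel("month")
plt.ylabel("operative time (min)")
plt.title("Run chart: low anterior resection (hypothetical data)")
plt.legend()
plt.savefig("run_chart.png")
```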
Preoperative briefing and postoperative debriefing
The preoperative briefings and postoperative debriefings refer to dialogues exchanged by the surgical team before and after surgery for every patient. Both briefings occurred outside the operating room. Preoperative briefings included reviewing medical charts, discussing standard procedures and crucial points of the case, as well as formulating the details of the operative procedure, including support from additional staff and dividing the roles for the trainees and instructor. The purpose is to make surgeons aware of the surgical indications, operative procedures and their order, division of roles and any nonroutine steps. The surgical team could evaluate and share the knowledge of techniques and the level of achievement of the trainees and instructor. In this process, a shared mental model and enhanced teamwork were accomplished. Shared standard operative times and grading systems led to intraoperative situational awareness and decision-making for the resource management team. This is different from the briefing that is conducted using the WHO checklist in the operating room just before surgery [21]. Briefing and debriefing are required for every surgical case, even if the surgeries are routine.
Measurements
In this study, we included herniorrhaphy, partial or total gastrectomy, colectomy, high anterior resection and low anterior resection, which are considered standard procedures in general gastroenterological surgery. In addition, small variations on standard surgeries, such as gastrectomy with cholecystectomy, were included. Extensive operations, such as total pelvic exenteration, abdominoperineal resection for rectal cancer and total gastrectomy combined with pancreatectomy or thoracotomy, were excluded from this study. The data were collected from electronic medical records. Operation time was defined as the time from the first surgical incision until the skin was completely closed. The average operation time and the incidence of excessive operation times (to illustrate the effects of NTSs) were calculated monthly. Since no guidelines or standards for operative times have been published, the standard operative times were established by consensus after consideration of both the structure and environment of our hospital. Operative time was graded from A to D according to our criteria (Supplementary Table 1). Notably, excessive operation time was defined as grade D.
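As a rough illustration of how this outcome measure can be computed, the sketch below grades each operation against procedure-specific time thresholds and reports the monthly grade-D rate; the threshold values and data are invented, since the actual cut-offs are given in Supplementary Table 1.

```python
# Grade each operation against procedure-specific thresholds and report the
# monthly rate of excessively long (grade D) operations. Thresholds are invented.
from collections import defaultdict

# Hypothetical A/B/C upper bounds (minutes); anything above grade C is grade D.
THRESHOLDS = {"herniorrhaphy": (60, 90, 120), "colectomy": (240, 300, 360)}

def grade(procedure, minutes):
    a, b, c = THRESHOLDS[procedure]
    if minutes <= a:
        return "A"
    if minutes <= b:
        return "B"
    if minutes <= c:
        return "C"
    return "D"

operations = [  # (month, procedure, incision-to-closure time in minutes)
    (1, "herniorrhaphy", 75), (1, "herniorrhaphy", 130),
    (2, "colectomy", 290), (2, "colectomy", 400),
]

d_count, totals = defaultdict(int), defaultdict(int)
for month, proc, minutes in operations:
    totals[month] += 1
    if grade(proc, minutes) == "D":
        d_count[month] += 1

for month in sorted(totals):
    print(f"month {month}: grade D rate = {d_count[month] / totals[month]:.0%}")
```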
Data collection
The data were collected by the Total Quality Management Center of our hospital. Every 3 months, the data were reported back to our department. Morbidity, mortality and postoperative hospital stay were retrospectively analyzed as balancing measures. After PDSA2, adherence to the bundle was evaluated by a questionnaire using a 5-point Likert scale.
Statistical analyses
All statistical analyses were performed using JMP®. Data are presented with the interquartile range (25–75%); P values <0.05 were considered statistically significant, and all P values were two-tailed.

Table 2. The perioperative bundle and operative time grading
Preoperative bundle
- The standard time is defined according to the type of surgery; a 'time out' announcement is made to the surgical team just before surgery (communication)
- Preoperatively decide whether a trainee will participate during the procedure; consider the experience of both the instructor and trainee (decision-making and situational awareness)
Intraoperative bundle
- Be aware of the operative time, as it reflects the situation (situational awareness)
- In cases of excessive operative time, the instructor must change the operator or ask for support from other surgeons (decision-making)
Postoperative bundle
- Perform postoperative debriefing for the entire surgical team
- Consider the causes and outcomes of excessive cases
Operative time grade
- Grade A: Surgery was performed within a fair amount of time despite trainee participation. These operative times confer minimal risk to patient safety
- Grade B: Surgery was performed within the standard time, and the instructor conducted some of the procedures during surgery. These operative times confer minimal risk to patient safety
- Grade C: Surgery was performed within an excessive amount of time, and the instructor changed the operator to prevent further risks to patient safety
- Grade D: Surgery was performed within a very excessive amount of time, and the instructor changed the operator and called for support from other surgeons. The surgical team created an occurrence report and discussed the incident
Patient characteristics
We included a total of 1573 operations and 42 surgeons (25 trainees and 17 instructors) in this initiative. Characteristics and outcomes of all surgeries are shown in Table 3. The frequencies of each type of surgery were comparable, although the frequency of herniorrhaphy gradually decreased from baseline to PDSA2. With regard to colectomy, the number of minimally invasive surgeries gradually increased. There were no between-group differences in patient sex and age.
Morbidity and mortality
Morbidity significantly improved in colectomy and low anterior resection (both P = 0.03), and mortality occurred in 10 cases. For the other types of surgery, no significant differences in morbidity or mortality were detected among the groups. In the case of low anterior resection, the morbidity rate gradually decreased. No significant differences in re-operation rates were observed among the groups.
Operation times and excessively long operation rates
Compared to baseline measurements, improved operation times were observed for all surgery types (Table 3). Statistically significant differences were observed between baseline and PDSA2 for all types of surgery, except total gastrectomy. The largest difference between baseline and PDSA2 was observed for low anterior resection (P = 0.02), where the operation time decreased by approximately 60 min. Rates of excessively long operations (defined as grade D) were similarly reduced compared to baseline, as follows: from 27.1% to 15.2% for herniorrhaphy (P = 0.005), from 58.3% to 40.0% for gastrectomy (P = 0.03), from 50.0% to 34.1% for total gastrectomy (P = 0.12), from 65.6% to 45.0% for colectomy (P = 0.004), from 67.8% to 43.2% for high anterior resection (P = 0.02) and from 69.6% to 47.9% for low anterior resection (P = 0.03) (Fig. 1). Improvements in grade D rates were statistically significant for all types of surgery, except total gastrectomy.
Process measure and estimation of the surgeon
Adherence to the bundle was evaluated by questionnaires using a Likert score, as shown in Fig. 2. Adherence to briefing and debriefing improved in PDSA2 compared to PDSA1; however, only the improvement in briefing was significant (P = 0.01 and P = 0.15, respectively). Awareness of intraoperative situations and decision-making were evaluated only during PDSA2; situational awareness was well conducted, whereas decision-making was only fairly well conducted. All surgeons assessed the inclusion of the bundles in this project. A majority (76.4%) of surgeons approved of the effect of the bundle, including the decision-making element, and none opposed continuation of the initiative.
Discussion
During this study, a perioperative bundle was proposed to improve operative times in our department. The results demonstrated stepwise improvements in operative times after PDSA1 and PDSA2. The median operative time was reduced by over 60 min in the low anterior resection group and by 50 min in the gastrectomy and high anterior resection groups. According to the Japanese Society of Gastroenterological Surgery, gastroenterological surgery is classified into three groups of difficulty [23]: herniorrhaphy has low difficulty; gastrectomy, colectomy and high anterior resection have moderate difficulty; and total gastrectomy and low anterior resection have high difficulty. Notably, improvement was observed in all surgical types, regardless of difficulty. These data suggested that the bundles and the use of NTSs could be effective for improving patient outcomes and safety for any type of surgery, regardless of its difficulty. Morbidities associated with colectomy and low anterior resection decreased significantly as the initiative continued. Although the effect observed in this study was remarkable, the exact mechanisms by which NTSs underlie this improvement remain unclear because their effects are multifactorial. Mundschenk et al. reported that trainees had insufficient time to prepare for cases and individually learned to prepare for surgery, without the help of an instructor [24]. In the surgical hierarchy, trainees did not feel comfortable admitting the extent of their preparation to their instructor; in some cases, they hesitated to be taught by the instructor, even when inadequately prepared. In these situations, there is a threat of inadequate performance, which may put patients at risk. Implementation of pre- and postoperative briefings could enhance communication and overcome the surgical hierarchy, because the briefings generated changes in climate, behavior and systems in our surgical department.
Quality improvement initiatives in healthcare are generally challenging, even when their components are based on strong evidence; previous studies have reported ineffective or unsustainable initiatives [25][26][27]. Several aspects must be addressed when implementing new practices, such as effectiveness, acceptability and penetration [28]. Our results demonstrate clinical effectiveness. Because clinical professionals have the autonomy to improve their own performance, visualization of clinical effectiveness reinforced the surgeons' and trainees' continuation of the implemented practices. Therefore, periodic feedback of clinical effectiveness data to professionals can enhance the effectiveness, acceptability and penetration of an intervention. Acceptability was an important facet of this project: most surgeons preferred the briefing and debriefing practices despite the additional workload, as shown in Fig. 2. These data likewise reflect successful penetration of the initiative among the surgeons and the director of the surgical department. Although the countermeasures to improve the current situation were based on NTSs, they were developed by our own staff. This process is part of the KAIZEN method and could enhance the acceptability of the initiative. KAIZEN thus has the potential to enhance acceptability and penetration, which can contribute to the success of such initiatives. The project leader of this study was trained by the ASUISHI program in collaboration with TOYOTA, a training program for physicians that utilizes the KAIZEN method and can lead to improvements in the quality and safety of healthcare in Japan [29]. When scaling up quality improvement initiatives, educational programs that include KAIZEN as a problem-solving method, together with implementation science, are helpful for physicians in achieving quality improvement.
This study has several limitations that should be addressed. Adherence to the bundles was reasonable in this study; however, the behavioral transformation of the surgeons while using the bundles was not directly observed. Although several perioperative NTS rating systems have been reported, these systems require observational staff and training to rate NTSs [30][31][32]. Because they were not suitable for our clinical setting, we could not directly evaluate NTSs. More practical and easily administered measurements of NTSs are necessary to clarify their potential effects in improving surgical quality.
Feasibility and sustainability are also of concern in this study. Although the effectiveness of the bundle has been confirmed, the briefings required a significant amount of time from both trainees and instructors. In many countries, physicians carry an enormous workload that is almost at its limit [24]. The time spent on briefings was not quantified, as described earlier.
Meanwhile, further study is needed to establish feasibility. However, as this initiative required no special equipment or communication technology, we believe it could be implemented equally well in all countries and regions.
Conclusion
These results demonstrated that operative times decrease following quality improvement interventions using bundles with NTSs. Quality improvement initiatives with appropriate methods are important for achieving patient safety and have the potential to initiate innovations, improve local healthcare problems and remodel the hierarchy and culture of healthcare systems.
Supplementary material
Supplementary material is available at INTQHC Journal online. | 2020-07-09T09:10:33.877Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "f54ab22c41128030d31fff60997f596f3cf0d75a",
"oa_license": "CCBYNCND",
"oa_url": "https://academic.oup.com/intqhc/article-pdf/32/8/522/34169909/mzaa074.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d83f51969c7bc0871e22af4e865d3b3abe4d5d63",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59603535 | pes2o/s2orc | v3-fos-license | The year in review: progress in brain barriers and brain fluid research in 2018
This editorial focuses on the progress made in brain barrier and brain fluid research in 2018. It highlights some recent advances in knowledge and techniques, as well as prevalent themes and controversies. Areas covered include: modeling, the brain endothelium, the neurovascular unit, the blood–CSF barrier and CSF, drug delivery, fluid movement within the brain, the impact of disease states, and heterogeneity.
Editorial
There continues to be much interest in brain barrier and brain fluid research. Many important papers (too many to be cited here) have been published in the field in 2018. The purpose of this editorial is to highlight some recent advances and themes for the readership of Fluids and Barriers of the CNS. As always, the selection of papers discussed is idiosyncratic and the discussions are necessarily brief. The journal welcomes more in-depth reviews as well as novel research articles in any of the areas discussed.
In vitro blood-brain barrier (BBB)/neurovascular unit (NVU) modeling
Engineering the BBB/NVU in vitro is challenging: models differ in complexity and their ability to replicate different in vivo parameters. The utility of such models varies with the specific questions being addressed. However, there is a question as to what basic parameters such models should exhibit. DeStefano et al. have recently proposed a series of benchmarks for in vitro BBB models [1] that might form a basis for such decisions or lead to a discussion of validation parameters.
One of the recent advances in BBB modeling is the use of human-induced pluripotent stem cells (iPSCs) to derive different cells of the neurovascular unit (e.g. endothelial cells, pericytes, astrocytes). Such models are now being pursued by multiple groups (e.g. [2][3][4][5][6][7]). The use of human iPSCs may aid in the translation of basic science to the clinic. In addition, it is possible to generate such models from patient-derived iPSCs allowing examination of the impact of patient-specific mutations [8]. Thus, Lee et al. [5] found that brain endothelial cells derived from iPSCs from patients with childhood cerebral adrenoleukodystrophy have impaired barrier properties and accumulate lipid droplets, effects that were rescued by treatment with a block copolymer.
While there are many studies that have examined the role of different cell types (particularly endothelial cells, astrocytes and pericytes) on the characteristics of in vitro NVU/BBB models, the importance of the extracellular matrix has generally received less attention. Katt et al. [4] and Al-Ahmad et al. [2] recently showed the importance of different extracellular matrix molecules in iPSC-derived BBB models. BBB modeling is a fast developing field (reviewed in [9]). One of the goals of multiple groups is the development of a 'barrier-on-a-chip'. Such chips may be very useful for drug testing, and alternative designs have been reported [10][11][12].
Computer modeling
As well as in vitro modeling, there have been a number of important studies using computer (in silico) modeling in 2018, particularly in relation to the movement of fluid through the proposed glymphatic system. Thus, Faghih et al. [13] and Rey and Sarntinoranont [14] modeled fluid movement within the perivascular space. Their findings cast doubt on the importance of perivascular bulk fluid flow, an essential component of the glymphatic system. Another example of in silico modeling is the effort to predict brain:blood unbound concentration ratios for different drugs and, thus, reduce the expense of developing brain therapies [15,16].
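For readers unfamiliar with the target quantity, the unbound brain-to-plasma partition coefficient (Kp,uu,brain) that such models aim to predict is conventionally derived from total concentrations and unbound fractions; a minimal illustration with invented numbers is shown below (this is the standard pharmacokinetic definition, not the specific models of refs [15,16]).

```python
# Kp,uu,brain = unbound brain concentration / unbound plasma concentration,
# computed from total concentrations and the unbound fractions (fu).
def kp_uu_brain(c_brain_total, c_plasma_total, fu_brain, fu_plasma):
    """Unbound brain-to-plasma partition coefficient."""
    return (c_brain_total * fu_brain) / (c_plasma_total * fu_plasma)

# Kp,uu << 1 suggests active efflux (e.g., P-glycoprotein); ~1 suggests
# passive equilibration across the BBB. All values below are invented.
print(kp_uu_brain(c_brain_total=0.8, c_plasma_total=2.0,
                  fu_brain=0.05, fu_plasma=0.10))
```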
Animal modeling
The use of non-mammalian models to examine the blood-brain, blood-retinal and blood-CSF barriers continues to gain interest [17]. The relative ease and lower cost of genetic manipulation and brain visualization have spurred the use of zebrafish for barrier studies [18,19], while others have started examining relevant barriers in Drosophila (fruit flies). Thus, Zhang et al. [20] examined the perineural glia of Drosophila that form the insect equivalent of a 'blood'-brain barrier and showed a circadian rhythm in barrier function.
Brain endothelium
Brain endothelial junctions: importance, structure and regulation

Brain endothelial tight junctions (TJs) are an essential component of the NVU/BBB, helping protect the brain from potentially harmful blood-borne compounds. TJ disruption can contribute to damage in a number of types of brain injury or disease. In 2017, Menard et al. [21] found that social stress in mice, a model of depression, reduced expression of claudin-5, that reducing claudin-5 experimentally could induce depression-like symptoms and that treatment with a known anti-depressant increased claudin-5 expression. These results suggest an important impact of changes in the NVU/BBB on brain function and animal behavior. Cheng et al. [22] recently examined a prolonged learned helplessness model of depression in rats and found that BBB disruption contributed to an inability to recover from the imposed stress. Another study examining the basis of psychiatric disease (schizophrenia) suggests a link between neural cell and cerebrovascular development [23]. Furthermore, Todd et al. [24] recently used focused ultrasound to locally 'open' the BBB in rats and found this treatment disrupted both local and inter-hemispheric functional connectivity within the brain, again indicating an important role of NVU/BBB integrity in overall brain function. While adoption of focused ultrasound is progressing for enhancing brain drug delivery, safety concerns for this technology also need to be addressed [25,26], including subtle behavioral effects. Overall, the link between the cerebrovasculature and behavior is greatly understudied.
There have been a number of important studies in 2018 on the regulation of brain endothelial TJs in health and disease, with some focusing on cell signaling pathways. For example, Isawa et al. [27] showed the importance of β1-integrin and extracellular matrix interactions in regulating endothelial tight junctions. Other recent studies documented the importance of lysophosphatidic acid, Wnt, PI3K/Akt and tumor necrosis factor-α signaling in regulating brain endothelial TJs [28,29]. Wang et al. [30] identified microRNA-130a as an important regulator of BBB function in cerebral ischemia and observed that this microRNA downregulates occludin expression by inhibiting Homeobox A5 expression. That study is one of several demonstrating the importance of microRNAs in up- or down-regulating brain endothelial permeability (e.g. microRNA-143, -146a, -149-5p, -21, -96 and -155 [31][32][33][34][35][36]). Interestingly, in addition, Ma et al. [37] found that claudin-5 affects the expression of long non-coding RNAs (lncRNAs) in brain endothelial cells, thereby regulating BBB properties in brain metastases.
Most current work on brain endothelial TJ regulation employs animals or animal-derived cells. There is a concern that species differences in regulation may exist, e.g. in humans. Wang et al. [38] examined the potential role of one protein, periaxin, which is present in brain endothelial cells in humans but not in multiple other species. They found that periaxin strengthens the barrier and attenuates the expression of inflammatory mediators.
In terms of brain endothelial junction proteins, most attention has traditionally focused on the role of TJ proteins, although the importance of adherens junction proteins in barrier formation and function is recognized [39]. A recent study highlights the importance of brain endothelial gap junctions, and particularly connexin-43, in the barrier changes that occur in cerebral cavernous malformations type III by remodeling the TJs [40].
There is much interest in developing methods to reduce or increase TJ protein expression in brain endothelial cells to either enhance drug delivery or improve barrier function (e.g. in disease). One approach has been the use of claudin-targeted peptides that induce claudin internalization/degradation. Sladojevic et al. [41] reported that cerebral ischemia induces novel expression of claudin-1 in the cerebral endothelium and that reducing that expression with a claudin-1-targeted peptide improves long-term endothelial barrier function.
Brain endothelial transcellular transport
Brain endothelial cells possess a wide range of transporters that are important in facilitating the entry of compounds into the brain (such as the glucose transporter GLUT1) or preventing their entry (such as multidrug resistance protein 1, P-glycoprotein). One difficulty in assessing the importance of different transporters in nutrient or xenobiotic disposition is that they often have overlapping substrate specificities. The use of quantitative proteomics to assess transporter expression continues to grow; an example is a recent study by Al-Feteisi et al. [42], who reported on rat brain microvessel transporters. This approach can also be used to compare transporter expression across different barrier tissues, such as the arachnoid-epithelial cell barrier [43].
The relative importance of BBB-mediated efflux of different compounds from brain parenchyma versus alternative egress routes (particularly the perivascular pathway) is the subject of a major review by Hladky and Barrand [44]. This is a very important and controversial topic impacting normal brain function, multiple disease states (e.g. β-amyloid clearance in Alzheimer's disease) and drug delivery. Because brain-to-blood efflux (e.g. via ATP-binding cassette (ABC) transporters) has an important role in regulating the brain concentrations of both endogenous compounds and xenobiotics, there is great interest in how such transport is regulated. Recent examples include reports by Hartz et al. [45], who found that preventing P-glycoprotein ubiquitination could be used to decrease β-amyloid levels in a mouse model of Alzheimer's disease; Xie et al. [46], who provide evidence that microRNA-298 regulates P-glycoprotein; and Shin et al. [47], who showed that estrogen represses breast cancer resistance protein expression and activity in brain endothelial cells after ischemia. One neglected aspect of blood-brain transport studies is whether there may be sex differences. Brzica et al. [48] recently found differences between adult male and female rats in the expression and activity of organic anion transporting polypeptide 1a4 in brain microvessels. Whether similar differences exist in humans is an important question because of the implications for drug delivery.
Another transcellular route across the cerebral endothelium is vesicular trafficking. This is particularly important for proteins, including potentially therapeutic antibodies, and a better understanding of trafficking mechanisms and vesicle fate (e.g. transcytosis vs lysosomal degradation) will aid such therapeutic studies. Haqqani et al. [49] recently delineated antibody trafficking between different endosomal compartments. Such mechanisms may differ between species, and Ribecco-Lutkiewicz et al. [6] recently found that a human iPSC-derived BBB model can be used to study receptor-mediated transcytosis triggered by antibodies. An important question is how brain endothelial transcytosis is impacted by particular disease states. Sadeghian et al. [50] recently found that cortical spreading depolarizations triggered caveolin-1-dependent endothelial transcytosis.
One important aspect of blood-brain transcellular transport mechanisms is that they may be 'hijacked' by certain viruses allowing brain penetration. The recent Brazilian Zika virus outbreak caused microcephaly and other neurological conditions. Alimonti et al. [3] used an in vitro BBB model derived from human iPSCs to examine mechanisms of virus transport. They found evidence for transcellular penetration involving a receptor tyrosine kinase, AXL, suggesting potential targets for therapeutic interventions.
Neurovascular signaling
The cells of the NVU (including endothelial cells, astrocytes, pericytes, neurons and perivascular macrophages) play an important role in regulating cerebrovascular function, including permeability [51]. Most attention has focused on the role of peptides/proteins (e.g. [52]) and, more recently, lipids such as sphingolipids [53] secreted by astrocytes and pericytes in such regulation. However, the importance of other signaling mechanisms is becoming clear. This includes a role for extracellular vesicles/exosomes (reviewed in [54]) and microRNAs (see above). Extracellular vesicles are released by a wide range of cell types and can contain a variety of signaling molecules, including proteins, microRNAs and mRNAs. Another intriguing possible signaling mechanism has been suggested by Errede et al. [55], who described pericyte-derived tunneling nanotubes that may be involved in pericyte-to-endothelial cell signaling during normal and pathological angiogenesis.
While much attention has focused on the impact of pericyte-, astrocyte- and neuronal-derived signals on the brain endothelium, there are also important endothelial-derived signals that affect parenchymal cells. An example of such a pathway was recently described by Segarra et al. [56], who found that endothelial Disabled-1 (Dab1) is an important regulator within the NVU. Deletion of endothelial Dab1 reduces laminin-α4 secretion and thereby decreases integrin-β1 signaling in astrocytes, which, in turn, regulates both neuronal migration and BBB function.
Endothelial glycocalyx and the vascular basement membrane
Two components of the NVU that receive relatively little attention are the endothelial glycocalyx and the vascular basement membrane. Kutuzov et al. [57] examined the penetration of differently sized tracers into the brain and found that the glycocalyx on the luminal membrane of the cerebral endothelium is a significant barrier to the penetration of large molecular weight dextrans. There is also recent evidence that preserving the glycocalyx after cardiac arrest in rats helps preserve barrier function and reduce brain edema formation [58].
As noted above, in vitro evidence indicates the importance of extracellular matrix components in determining vascular wall properties [4]. The neurovascular basement membrane is secreted by multiple cell types (e.g. endothelial cells, pericytes and astrocytes). The molecular complexity of the vascular basement membrane and its structural compartments within the vascular bed have now been delineated by Hannocks et al. [59] using cellular and molecular markers. These investigators propose that the capillary endothelial cell- and perivascular cell-derived proteins are physically separated, potentially providing a route for perivascular fluid flow [59,60] (see below).
Choroid plexus and CSF
The role of the choroid plexus in neuroinflammation continues to garner interest, including mechanisms for combatting infection [61,62]. Interestingly, the choroid plexus produces high levels of the 'anti-aging' protein klotho and recent evidence indicates that klotho has a major role in regulating the brain/immune interface at the choroid plexus [63]. Other results have shown that the choroid plexus forms an important niche for T cell activation within the brain [64].
Progress continues to be made on identifying ion transporters at the choroid plexus and determining their role in regulating fluid movement and CSF composition [65][66][67]. For example, Preston et al. [68] examined the role of the transient receptor potential vanilloid 4 (TRPV4) cation channel at the choroid plexus epithelium and reported that activating this channel caused an immediate increase in transepithelial ion flux. Modulating this channel may be a novel way of controlling choroid plexus ion transport and, potentially, CSF secretion. Another notable finding is that a significant portion of CSF production may result from molecular transfer of water via the action of the Na/K/2Cl cotransporter NKCC1 expressed on the CSF-facing membrane of the choroid plexus [69]. This novel mechanism, in concert with aquaporins, may be responsible for nearly half of the water flux during CSF production.
Several neurological conditions result in changes in CSF composition and, therefore, CSF is widely used in disease biomarker studies. For example, the use of CSF biomarkers in Alzheimer's disease, cerebral amyloid angiopathy and other neurodegenerative diseases is emerging, although there are still areas of technical and conceptual controversy (see reviews in [70][71][72][73]). While the use of blood-borne biomarkers may have major advantages, there are still hurdles to overcome for that approach [74].
Changes in choroid plexus function with disease may contribute to changes in CSF composition. Stopa et al. [75] examined changes in the choroid plexus transcriptome in patients with neurodegenerative disorders (Alzheimer's disease, frontotemporal dementia and Huntington's disease) and found both common and disease-specific changes. These might be involved not only in brain damage but also in brain protection and repair.
While most focus has been on CSF proteins as biomarkers, there are growing numbers of studies examining RNA and DNA in CSF. For example, Dos Santos et al. found evidence that a panel of microRNAs in CSF could be used to detect early Parkinson's disease [76]. There has also been interest in using CSF DNA to examine the presence of mutations or transcript copy number variations associated with brain tumors [77].
Arachnoid membrane
Compared to the NVU/BBB and the blood-CSF barrier at the choroid plexus, the blood-CSF barrier at the arachnoid membrane has received very little attention. Zhang et al. [43] recently used quantitative proteomics to compare transporter expression in the leptomeninges vs. choroid plexus in rat. While some transporters were enriched in the choroid plexus, others were much more highly expressed in the leptomeninges, including multidrug resistance protein 1 (P-glycoprotein), breast cancer resistance protein (Bcrp) and organic anion transporter-1. The role of the arachnoid membrane in brain homeostasis and drug disposition deserves greater attention.
One potential reason for the high expression of transporters in the leptomeninges is suggested by the recent finding that the leptomeninges are the primary source of prostaglandin D2, which is involved in sleep regulation [78]. This adds to a growing literature indicating that the leptomeninges are a source of neuroactive/neurotropic factors. Transporter expression at the leptomeninges may be involved in regulating the concentration and distribution of such factors.
Drug delivery
With the rising capabilities of immunotherapy, there is great interest in the delivery of therapeutic antibodies to the brain for a variety of neurological conditions, including Alzheimer's disease, and advances are being made in the design of antibodies to enhance CNS delivery (reviewed by Stanimirovic et al. [79]). Most preclinical work on antibody delivery is performed in rodents, and differences between species are a concern for translational applications. Wang et al. [80] examined whether the CSF/serum ratio after systemic antibody delivery differs between rats and cynomolgus monkeys and found good agreement between the two species.
Alternative approaches to delivering antibodies to the brain involve bypassing the BBB via either the intranasal route or direct CNS (usually intrathecal) administration. The utility of such approaches in accessing particular brain areas or cell types requires knowledge of how antibodies move within the CNS. Pizzo et al. [60] examined how IgG and smaller single-domain antibodies move within the rat brain after intrathecal administration, with a particular focus on perivascular transport. The perivascular route allowed penetration of these macromolecules deep within the brain, in contrast to diffusion from the brain surface. Interestingly, they found evidence of perivascular transport of these molecules in all vessels, including capillaries (see below), and reported that perivascular flow was enhanced by osmolyte co-infusion with the antibodies.
Much of the in vivo evidence on small-molecule drug penetration into the brain has been derived from rodent studies. There are concerns that there may be differences between species in drug penetration, particularly if there is a transporter component in either influx or efflux. There has, therefore, been interest in developing humanized mouse models expressing human transporters. Sano et al. [81] report the development of such a mouse expressing the organic anion transporter OATP1A2 in brain endothelial cells although, unfortunately, it had little effect on the brain penetration of the substrates examined.
There continues to be great interest in developing methods to disrupt the BBB to enhance drug delivery. One approach is the use of focused ultrasound, and Arvanitis et al. [82] report studies on a mouse model of brain metastases. Interestingly, they found that focused ultrasound in combination with microbubbles not only enhanced barrier permeability, it also increased interstitial convective transport.
A proposed alternative approach for increasing drug delivery via enhanced neurovascular permeability is to use endogenous signaling pathways that regulate barrier function. There is preclinical evidence that activation of the adenosine A2A receptor can cause a transient increase in permeability, and Jackson et al. [83] examined whether an A2A receptor agonist, regadenoson, could enhance temozolomide concentrations in the brain in glioblastoma patients. The initial results indicated no enhancement. This again shows the difficulty of translating preclinical data to patients, and further pharmacodynamic and pharmacokinetic studies are needed.
Fluid movement within the brain
Fluids and Barriers of the CNS is currently producing a thematic series on this subject, entitled 'CNS Fluid and Solute Movement: Physiology, Modeling and Imaging'.
Perivascular and parenchymal fluid flow
The concept of a glymphatic system has engendered a surge in interest in fluid (and associated solute) flow within the brain [84]. The proposed glymphatic system involves fluid entry into brain along the arterial perivascular space, fluid movement through brain parenchyma that is astrocyte and aquaporin-4 dependent, and fluid exit from brain along the venous perivascular space. There is evidence that this system is altered by variables such as exercise [85], circadian rhythm [86] and disease states [84]. However, recently the concept of a glymphatic system has been vigorously debated (for reviews see [84,87]).
Most evidence for the glymphatic system comes from studies of perivascular flow using two-photon microscopy and, more recently, magnetic resonance imaging [88][89][90]. However, it should be noted that even that component has been questioned. For example, Faghih et al. [13] used computational modeling of fluid flow within the glymphatic system and found it implausible based on current anatomical and pressure-gradient data. Another modeling paper, by Rey and Sarntinoranont [14], also predicted that fluid flow in the perivascular space would be oscillatory, with no net flow over time. Most studies of the glymphatic system are currently based on solute tracking rather than measuring fluid flow, and new techniques to examine the latter could be very informative.
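The zero-net-flow prediction of the oscillatory-flow models can be illustrated with a few lines of numerical integration. This is a minimal sketch of the qualitative point only; the amplitude and frequency below are hypothetical and are not taken from Faghih et al. [13] or Rey and Sarntinoranont [14].

```python
import numpy as np

# A purely oscillatory perivascular velocity yields no net fluid
# displacement over whole cycles, despite a non-zero peak excursion.
dt = 1e-4                                  # time step (s)
t = np.arange(0.0, 10.0, dt)               # 10 s of simulated time
v = 20e-6 * np.sin(2 * np.pi * 1.0 * t)    # velocity (m/s), ~1 Hz oscillation
displacement = np.cumsum(v) * dt           # running fluid displacement (m)
print(f"peak excursion:   {displacement.max():.2e} m")
print(f"net displacement: {displacement[-1]:.2e} m (approximately zero)")
```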
The parenchymal component (astrocyte/aquaporin-4 mediated) of the glymphatic system has been the most difficult to study. However, Huber et al. [91] did recently report that an aquaporin-4 facilitator promotes brain interstitial fluid circulation. Recently, a potential alternative link between the periarterial and perivenous spaces has been proposed for fluid and solute flow: a pericapillary space [60]. Anatomically, the vascular basement membrane is secreted by endothelial cells and by perivascular cells (pericytes/astrocytes), and a gap between these basement membrane types may form the basis of a pericapillary space [59]. Also in contrast to the original glymphatic hypothesis, a recent study by Albargothy et al. [92] concluded that tracers in the CSF pass into the brain parenchyma along the pia-glial basement membrane alongside arteries and exit the brain along intramural pericapillary and periarterial basement membranes.
One proposed role of the glymphatic system is in the clearance of potentially toxic peptides/proteins from the brain, including β-amyloid [84]. The evidence on the relative importance of perivascular drainage of β-amyloid versus BBB transport has recently been reviewed in depth by Hladky and Barrand [44]. They contend that the current evidence favors the BBB as being the most important route.
CSF flow and drainage
CSF flow and the role of the choroid plexus continue to be somewhat contentious subjects [93]. However, recent evidence shows the critical importance of CSF flow. Petrik et al. [94] found that alterations in ventricular fluid flow promote proliferation in subependymal zone neural stem cells in mice by eliciting Na and Ca signaling in those cells. This signaling only occurred in neural stem cells in contact with ventricular fluid. Thus, CSF flow has a central role in regulating adult neurogenesis, and the implications of this for conditions with altered CSF flow, e.g. hydrocephalus, need to be explored.
The classical view that CSF drainage occurs at the arachnoid villi of the sagittal and transverse sinuses has been challenged for decades, with a growing understanding of the importance of drainage across the cribriform plate to the nasal lymphatics/cervical lymph nodes and drainage via the spinal nerve roots to the lumbar lymphatics. More recently, the importance of meningeal lymphatics, which also drain to the cervical lymph nodes, has been highlighted. Da Mesquita et al. [95] examined the effects of manipulating meningeal lymphatics in mice, using a variety of techniques to inhibit drainage and vascular endothelial growth factor C to enhance those lymphatics. They found that meningeal lymphatic dysfunction aggravates age-associated cognitive decline, Alzheimer's disease pathology and β-amyloid accumulation, while enhancing lymphatic drainage improved learning and memory in mice. Similarly, Louveau et al. [96] provide evidence on the importance of meningeal lymphatics in neuroinflammation. Ma et al. recently found, using fluorescent tracers and imaging in awake and anesthetized mice, that tracers exited to the lymphatic system faster, with slower spread into the brain perivascular spaces, in awake mice than in anesthetized ones [97].
Disease states
Hydrocephalus
Congenital hydrocephalus is difficult to treat successfully and has a variety of causes. Fetal MRI can reveal anatomic features that predict aqueduct stenosis, which can help in subsequent obstetric management [98]. There continues to be interest in the role of ependymal cilia in the development of hydrocephalus. Abdelhamed et al. [99] found that a homozygous splice site mutation in the coiled-coil domain containing 39 (Ccdc39) protein causes the progressive hydrocephalus that occurs in the prh mouse. Those mice develop shorter ependymal cilia with disorganized microtubules. Similarly, mouse models lacking the ciliary proteins CFAP221, CFAP54 and SPEF2 all develop hydrocephalus, the severity of which depends on the background strain. McKenzie et al. [100] have examined the genetic basis of that variation, identifying genes involved in brain and cilia development and function. Malfunctioning genes that regulate neural tube development and neural stem cell fate also lead to abnormal neurogenesis and congenital hydrocephalus [101].
In premature infants and in adults, post-hemorrhagic hydrocephalus is a major clinical problem, and the Hydrocephalus Association hosted a conference on the topic to discuss research opportunities and encourage further efforts; a summary of that meeting has been published [102]. There has long been interest in whether choroid plexus ion transport might be targeted as a therapy for hydrocephalus. For example, in 2017 Karimy et al. [103] provided preclinical data that intraventricular hemorrhage causes CSF hypersecretion by stimulating choroid plexus Na/K/Cl cotransport. In 2018, Li et al. [104] found that germinal matrix hemorrhage in neonatal rats induces upregulation of the sodium-coupled bicarbonate exchanger (NCBE) and that targeting that transporter using small interfering RNAs reduces posthemorrhagic hydrocephalus.
Idiopathic intracranial hypertension is a syndrome of unknown cause, occurring predominantly in obese patients, that is characterized by high intracranial pressure without ventriculomegaly but with associated visual disturbances and morbidity. A multivariate analysis showed that anemia and NSAID use were risk factors [105]. There is some evidence that the condition is related to abnormal pressure in the cranial venous sinuses affecting CSF drainage [106,107].
The pathogenesis of normal pressure hydrocephalus (NPH), which occurs in the elderly, is unclear. A large Finnish study of possible NPH patients showed that confirmed NPH patients had a higher incidence of hypertension or type-2 diabetes mellitus and that cardiovascular and cerebrovascular disease was the most frequent cause of death [108]. A cohort of Finnish and Norwegian NPH patients was found to have a fourfold higher incidence of copy number loss in intron 2 of SFMBT1 than controls, although the pathogenic role was not clear [109]. Treatment is usually performed when patients show the classic symptoms of gait disturbance and cognitive and urinary problems. A small study, however, has shown that patients develop enlarged ventricles at least 3 years before symptoms are apparent, suggesting possible implications for management [110]. Diagnosis most often relies on invasive infusion techniques. For example, as part of a clinical trial, the intracranial CSF dynamic profile was studied by infusion in NPH patients and found to differ from that of healthy controls [111]. Recently, an MRI study on healthy individuals by Burman et al. [112] determined the relative distribution of compliance between the spinal and cranial CSF compartments and proposed a model that can be used to estimate both cranial compliance and intracranial pressure non-invasively. Hence, non-invasive MRI techniques continue to be a promising route to aid diagnosis and determine shunt surgery response. Computerized volumetric analysis of the CSF spaces enabled good discrimination between NPH and brain atrophy patients [113], and the apparent diffusion coefficient of water measured in different brain regions can distinguish between NPH patients and patients with vascular dementia or Alzheimer's disease [114]. On the other hand, specific features of MR images in NPH patients are not good predictors of the reversibility of symptoms after shunt surgery [115]. The same group reported that around 40% of patients show improved symptoms after surgery [116]. Many CSF biomarkers continue to be investigated as tools to predict treatment outcome for NPH; promising compounds suggested to warrant further investigation are Aβ42, tau, p-tau, neurofilament light protein (NFL) and leucine-rich α-2-glycoprotein (LRG) [117].
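As a rough illustration of how craniospinal compliance relates to non-invasive pressure estimates, the sketch below uses the classical monoexponential pressure-volume relation P = P0·exp(E·ΔV). This is not the model of Burman et al. [112]; the functional form is a generic textbook relation and every parameter value is hypothetical.

```python
import numpy as np

def icp_from_volume_load(delta_v_ml, p0_mmhg=10.0, elastance_per_ml=0.12,
                         cranial_fraction=0.65):
    """Toy estimate of peak ICP from a per-cycle craniospinal volume load.

    Uses P = P0 * exp(E * dV); `cranial_fraction` loosely apportions the
    volume load between cranial and spinal compartments, echoing the idea
    of distributed compliance. All numbers are illustrative only.
    """
    dv_cranial = cranial_fraction * delta_v_ml
    return p0_mmhg * np.exp(elastance_per_ml * dv_cranial)

# Example: a ~1 ml net cranial volume load per cardiac cycle.
print(f"estimated peak ICP ~ {icp_from_volume_load(1.0):.1f} mmHg")
```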
Role of NVU/BBB changes in neurological conditions
NVU/BBB functions are altered in many neurological conditions. There have been multiple reviews in 2018 outlining the current state of knowledge with regard to Alzheimer's disease and other neurodegenerative diseases [118][119][120], ischemic stroke [51,121], hemorrhagic stroke [122], multiple sclerosis [51] and primary and metastatic brain cancer [51]. Importantly, these reviews serve to highlight that NVU/BBB changes are not just a consequence of parenchymal injury, but may actually contribute to that injury and are a therapeutic target. Thus, for example, targeting the vascular changes in Alzheimer's disease and cerebral ischemia may reduce disease progression [120,121]. Recent results also indicate that brain endothelial cell dysfunction is the underlying cause of white matter injury in cerebral small vessel disease [123]. This is not to say that parenchymal cell dysfunction cannot cause NVU/BBB dysfunction. For example, Rempe et al. [124] recently showed that neuronal glutamate release in epilepsy causes matrix metalloproteinase-2 and -9 upregulation that, in turn, causes BBB disruption and may further impact the brain.
The mechanisms by which NVU/BBB dysfunction may cause parenchymal cell injury in different neurological conditions are under investigation. There are multiple neuroactive compounds present in plasma that may gain entry into the brain and participate in injury. One extensively studied compound is fibrinogen, which has pleiotropic roles in CNS inflammation [125].
Another interesting question is what severity of brain injury is required to cause NVU/BBB dysfunction. A recent porcine study suggests that concussion causes mechanical BBB disruption [126]. Indeed, a study on high school American football players indicates that even sub-concussive (clinically asymptomatic) high-acceleration hits during a season result in elevated levels of brain injury markers in serum [127]. The appearance of these brain proteins (tau and ubiquitin C-terminal hydrolase L1) in serum may involve some NVU/BBB disruption. Tagge et al. [128], studying mice, also found that closed-head injuries can cause neuropathology, including microvascular injury, independent of signs of concussion.
Temporal heterogeneity
Far from being static entities, the blood-brain barriers show dynamic change with time (long- and short-term). While there have been many studies examining the development of the blood-brain and blood-CSF barriers, including in 2018 (e.g. [19,129]), the impact of ageing on these barriers is still receiving relatively little attention. Such changes may participate in disease susceptibility, even in people without neurodegenerative diseases. Goodall et al. [130] examined the impact of ageing in humans and mice on BBB function. They found increased TJ breaks in mice with age and increased brain vascular permeability (protein extravasation) in humans with age. Similarly, Stamatovic et al. [131] found increased vascular permeability and altered TJ organization with ageing in mice. Interestingly, they found that these changes were associated with decreased brain endothelial sirtuin-1 (Sirt-1) expression and that Sirt-1 regulates BBB function (e.g. cell-specific knockout increases permeability). Importantly, they also found evidence that down-regulation of brain endothelial Sirt-1 occurs in the human brain with ageing.
In addition to long-term temporal changes in the function of the blood-brain barriers, there is growing evidence for short-term changes, not only in response to injury but also in the normal brain. Thus, the choroid plexus displays a circadian rhythm, with circadian changes in the expression of 'clock genes' [132]. Importantly, those changes may, via the CSF, impact the suprachiasmatic nucleus of the hypothalamus, the location of the main mammalian circadian clock [132]. One impact of the choroid plexus circadian rhythm is changes in the clearance of metabolic waste products [133]. A recent study on the Drosophila 'equivalent' of the BBB also detected a circadian rhythm, with alterations in efflux transport [20]. There has been considerable interest in the effects of sleep on the glymphatic system and metabolic clearance [134]. How these barrier and fluid flow changes are integrated merits further attention.
Spatial heterogeneity
As well as temporal heterogeneity, there is growing evidence of spatial heterogeneity in the NVU/BBB. Thus, there is evidence of differences in barrier function between different areas of the brain that might impact the course of neuropathology [135]. Interestingly, Wang et al. recently discovered that the role of the Norrin and Wnt7a/Wnt7b signaling systems in NVU/BBB and blood-retinal barrier development shows marked regional heterogeneity [129].
In an important study, Vanlandewijck et al. [136] used single cell transcriptomics to examine changes in endothelial and mural cell gene expression going from arterial, through capillary, to venous vessels (zonation). That study, and the associated searchable database, demonstrates marked effects of zonation on gene expression; for some genes there are marked changes in expression between arterial, capillary and venous endothelial cells. It should be noted that, at the protein level, differences in expression have been found even between adjacent endothelial cells (e.g. [137]).
Future directions
Brain barrier and brain fluid research continues to be a vibrant field, with many groups participating in this exciting area of CNS investigation. Insights from scientists in other areas, such as imaging, bioengineering and computer modeling, are helping to advance the field. Research has also been inspired by discoveries in other tissues. As with 2018, 2019 promises to be a year of provocative and important findings, and we look forward to further excellent contributions to Fluids and Barriers of the CNS.
Authors' contributions
RFK wrote the initial draft. HCJ and LRD added sections and edited the manuscript. All authors read and approved the final manuscript.
"year": 2019,
"sha1": "0e946d7de09fb6a640a6ecd92a48c44867952913",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12987-019-0124-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a31bf32e8cd83b84ac3055543bddeabb72c75a02",
"s2fieldsofstudy": [
"Biology",
"Psychology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Widespread retreat of coastal habitat is likely at warming levels above 1.5 °C
Several coastal ecosystems—most notably mangroves and tidal marshes—exhibit biogenic feedbacks that are facilitating adjustment to relative sea-level rise (RSLR), including the sequestration of carbon and the trapping of mineral sediment 1. The stability of reef-top habitats under RSLR is similarly linked to reef-derived sediment accumulation and the vertical accretion of protective coral reefs 2. The persistence of these ecosystems under high rates of RSLR is contested 3. Here we show that the probability of vertical adjustment to RSLR inferred from palaeo-stratigraphic observations aligns with contemporary in situ survey measurements. A deficit between tidal marsh and mangrove adjustment and RSLR is likely at 4 mm yr−1 and highly likely at 7 mm yr−1 of RSLR. As rates of RSLR exceed 7 mm yr−1, the probability that reef islands destabilize through increased shoreline erosion and wave over-topping increases. Increased global warming from 1.5 °C to 2.0 °C would double the area of mapped tidal marsh exposed to 4 mm yr−1 of RSLR by between 2080 and 2100. With 3 °C of warming, nearly all the world's mangrove forests and coral reef islands and almost 40% of mapped tidal marshes are estimated to be exposed to RSLR of at least 7 mm yr−1. Meeting the Paris Agreement targets would minimize disruption to coastal ecosystems.
If the deficit is sustained for a sufficiently long period, elevation capital is exhausted. For wetlands, retreat and a transition to open water may occur, and in reef islands, submergence of reef crests will increase wave exposure and wave over-topping frequency. Whether the areal extent of the habitat expands or contracts over time depends on the rate of loss and the rate of new habitat formation, both of which are influenced by RSLR 14.
Contemporary observations of high accretion rates in coastal ecosystems have indicated resilience under current and projected RSLR rates, prompting reassessment of their vulnerability in modelling studies 3,11,14. Conversely, studies emerging from the palaeo record show a comparatively high vulnerability of mangroves 12 and tidal marshes 15,16 to rates of RSLR that are anticipated in coming decades under moderate and high emissions scenarios 17. Palaeo records show that most coral reef islands formed during the later stages of the Holocene epoch under conditions of stable or falling relative sea level 2 (RSL). The upper limit of resilience to projected RSLR remains an important knowledge gap, with wide-ranging implications for coastal zone protection and management.
Here we analyse three independent lines of evidence to assess the vulnerability and exposure of coastal ecosystems to the higher rates of sea-level rise (4 mm yr −1 to more than 10 mm yr −1 ) projected under global warming scenarios. We focus on intertidal and supratidal ecosystems that undergo vertical adjustment from biogenic feedbacks, facilitating resilience to RSLR: mangroves, tidal marshes and coral reef island systems. We exclude beaches, rocky reefs and rock platforms, for which biogenic feedbacks with RSLR are largely absent, and subtidal vegetated ecosystems (seagrass meadows and kelp forests) for which thermal stress is likely to be the primary driver of change, rather than RSLR 18,19 . First, we review the behaviour of these ecosystems over the range of sea-level histories encountered following the Last Glacial Maximum 19 thousand years ago (ka), and particularly since 10 ka. Second, for mangroves and tidal marshes, we document elevation trends in relation to contemporary rates of RSLR using a global network of survey benchmarks, the surface elevation table-marker horizon (SET-MH) network. Third, we analyse the extent to which contemporary coastal ecosystems show conversion to open water (hereafter referred to as 'retreat') under a range of settings with varying rates of RSLR. From these three lines of evidence, we estimate the probability of coastal ecosystem retreat in relation to RSLR rates and model the response of the world's existing coastal ecosystems under the RSLR projections of the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report 17 , including potential compensation through conversion of terrestrial uplands (hereafter 'landward migration'). From these analyses a picture emerges of the narrowing boundaries of the 'safe operating space' 20 for coastal ecosystems: the climate futures expected to be of low risk to existing ecosystems.
Responses to past sea-level rise
RSL varies globally in response to both water and land vertical movement 21. Coastlines continue to adjust to the loss of ice sheets after the previous glacial period, particularly in higher latitudes, a process called glacial isostatic adjustment 21,22 (GIA). GIA modelling provides insights into sea-level trends since the last deglaciation. These observations and models have been applied to interpret rates of RSLR associated with the timing of mangrove and tidal marsh retreat and/or advance in stratigraphic successions, with results showing broad consistency among settings 12,15. Rapid global mean sea-level (GMSL) rise (over 10 mm yr −1) during several periods since the Last Glacial Maximum has drowned mangrove forests and tidal marshes (Fig. 1). Periods of rapid GMSL rise include: (1) meltwater pulse 1A (14.6-14.3 ka), which drowned mangroves, tidal marshes and coral reefs, the remains of which have been found at water depths of around 90 m; and (2) a rapid rise in GMSL 11.3-11.0 ka, leaving relict features at water depths of around 50 m. Interspersed with these phases have been periods of slower GMSL rise, allowing extended periods of coastal ecosystem expansion (Fig. 1; Methods). For example, mangrove sediments associated with preserved coastal palaeo channels on the Sahul Shelf (northwest of mainland Australia) and the Sunda Shelf (western South China Sea) dating to 16.0-14.5 ka were probably drowned during meltwater pulse 1A (Methods). Mangrove forests re-established from around 10 ka on the former delta of the Ganges-Brahmaputra River in India (Fig. 1) and on the northeast Australian continental shelf (Queensland). In both locations, RSLR declined to between 6 and 7 mm yr −1, and rates of sedimentation were high. These forests were subsequently drowned during a period of more rapid RSLR of 7 to 8 mm yr −1 between 9.0 and 8.5 ka (Methods). Widespread mangrove forest development commenced around 8.5-7.5 ka in Southeast Asia, northern and eastern Australia, South America and Africa (Fig. 1) as the rate of RSLR declined 12 below 7 mm yr −1 (Fig. 2c), and mangroves associated with large rivers were able to maintain their intertidal position by trapping sediment and accumulating root mass.
Tidal marshes in Great Britain were 9 times more likely to retreat than advance during the Holocene when RSLR exceeded 7.1 mm yr −1 , based on more than 780 reconstructions of tidal marsh evolution 15 (Fig. 2b). In the Mississippi Delta, only short-lived and rapidly retreating fringing tidal marshes existed before 8.2 ka, with retreat occurring in approximately 50 years before RSLR slowed 16 to less than 6 to 9 mm yr −1 . Tidal marsh retreat took longer (centuries), as RSLR dropped below 6 mm yr −1 after around 8.2 ka, but marshes did not stop retreating in the Mississippi Delta 16 until RSLR was less than 3 mm yr −1 (Fig. 2b).
Sea level stabilized in the mid-Holocene in those parts of the world that were distant from former centres of glaciation, and many coral reefs, especially in the Pacific, reached sea level, with diversification of reef habitats 23. Subsequent relative sea-level fall in these regions resulted in emergent reef platforms, some of which became suitable habitat for mangroves and on which it became possible for reef islands to form 24. Infilling of estuaries resulted in the development of extensive coastal plains 12, reducing intertidal areas previously covered by mangroves, including in coastal northern Australia (South Alligator River, Fitzroy River, Ord River, Cleveland Bay and Richmond River), Thailand (Great Songkhla Lakes) and Vietnam (Mekong River and Red River).
The palaeo record therefore indicates a capacity for vertical adjustment to rates of RSLR similar to those encountered in the instrumental period. If these rates of RSLR are sustained, coastal lowlands may be re-occupied by tidal wetlands where migration is permitted, and in many places this encroachment has already commenced 25 . The potential for increased extent in these regions under higher sea level is captured in global wetland adjustment models 14,26 . However, there is consistent evidence that vertical adjustment and habitat extent are greatly reduced 12,23 as RSLR approaches 7 to 8 mm yr −1 .
Elevation trends under current sea-level rise
Mangrove and tidal marsh accretion can increase with the rate of RSLR. Increased inundation depth and duration can facilitate both mineral deposition 27 and higher plant productivity and root mass accumulation 11. Rates of accretion measured against artificial marker horizons and radiometric markers often correspond to the high rates of RSLR encountered in settings where land is subsiding 3,28,29. High rates of accretion (10 to 20 mm yr −1) have been observed in contemporary mangroves and tidal marshes on active deltas 29,30, and accretion in the intertidal zone increases with increased depth and duration of inundation 3. The assumption that accretion enables vertical adjustment to RSLR is the basis of projections of possible resilience under future rates 31, but the assumption requires testing against fixed elevation benchmarks.
To assess the relationship between accretion of surface sediment, vertical adjustment and sea-level rise in contemporary coastal wetlands, we used the SET-MH method (Fig. 2d; Methods). The SET-MH uses a benchmark survey rod coupled with an introduced sediment horizon to assess the relationship between accretion and elevation gain 32. SET-MH data in tidal marshes show that shallow subsidence (the difference between sediment accretion and elevation gain) increases with accretion rate and the rate of RSLR 28,29. A previous analysis of a globally distributed network of 477 tidal marsh SET-MH stations showed that the increase in the subsidence rate with increasing accretion was non-linear 29. For this reason, elevation deficits emerged under rates of RSLR similar to those inferred from the stratigraphic record 29 (Fig. 2e and Extended Data Fig. 2). We repeated this Bayesian analysis for 190 SET-MH installations in mangrove forests (Methods), estimating the cumulative probability of vertical adjustment at or exceeding the rate of RSLR at the SET-MH stations (Fig. 2f and Extended Data Fig. 2). The results were consistent with those for tidal marshes. We found that an elevation deficit at mangrove sites is very likely (P > 0.9) at RSLR between 7 and 8 mm yr −1 (Fig. 2f), consistent with tidal marshes monitored using the same method (Fig. 2e). These observations concur with the limits of tidal marsh and mangrove stability in relation to palaeo-RSLR as inferred from the stratigraphic record 29 and described previously (Fig. 2a-c).
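The cumulative-probability estimate can be sketched with a simple logistic fit of deficit occurrence against RSLR. This frequentist stand-in is for illustration only; the paper used a Bayesian analysis, and the station data below are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical per-station data: local RSLR (mm/yr) and whether an
# elevation deficit (vertical adjustment < RSLR) was observed (1) or not (0).
rslr = np.array([2.1, 3.4, 4.0, 5.2, 6.1, 7.0, 7.8, 8.5, 9.3, 10.1])
deficit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

def neg_log_lik(theta):
    b0, b1 = theta
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * rslr)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(deficit * np.log(p) + (1 - deficit) * np.log(1 - p))

b0, b1 = minimize(neg_log_lik, x0=[0.0, 0.1]).x
p_at_7 = 1.0 / (1.0 + np.exp(-(b0 + b1 * 7.0)))
print(f"P(elevation deficit | RSLR = 7 mm/yr) ~ {p_at_7:.2f}")
```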
Habitat change under current sea-level rise
As a third line of evidence, we assessed whether changes in the extent of tidal marsh and open water were consistent with RSLR and/or the deficit between RSLR and marsh vertical adjustment (Extended Data Fig. 3; Methods). Previous surveys of contemporary North American tidal marshes in low-to-moderate tidal range settings 33 found that habitat retreat commenced at RSLR of 4 to 6 mm yr −1. For example, the Maryland Eastern Shore is retreating 34 under a long-term RSLR trend of around 6 mm yr −1. In a comprehensive analysis of tidal marshes in the contiguous USA, gains in tidal marsh were found to be inversely related to RSLR, with some marsh loss associated with short-term perturbations, notably hurricanes 34. RSLR was also associated with reduced normalized difference vegetation index (NDVI) values for vegetation adjacent to the marsh 34, possibly resulting from saline water intrusion. We used high-resolution global mapping of surface water change 35 and tidal wetland extent 36 in the immediate vicinity of tidal marsh SET-MH stations globally to determine the influence of contemporary RSLR, elevation capital and elevation deficit on marsh loss. Canopy cover obscured observations of surface water in mangroves. We found that tidal marsh sites were likely to show a trend towards increased presence of surface water (P > 0.66) once RSLR exceeded 2.3 mm yr −1 (Extended Data Fig. 3). The frequency of surface water observations at marsh sites increased with both the rate of RSLR (r 2 = 0.16, P < 0.001; Extended Data Fig. 4) and marsh elevation deficit (r 2 = 0.14, P < 0.001; Extended Data Figs. 4 and 5). The relationship between surface water change and marsh elevation deficit was evident in lower elevation marsh sites (r 2 = 0.20) rather than higher elevation sites (r 2 = 0.03; Extended Data Fig. 5), illustrating the temporary resilience conferred by elevation capital. We also found a significant relationship between the proportion of tidal marsh conversion to open water habitat and RSLR (P = 0.018). Tidal marshes were as likely as not (P = 0.5) to be retreating as RSLR increased above 5.4 mm yr −1 (Extended Data Fig. 3), with relatively few marshes advancing. This estimate of retreat may be conservative because patches of interior marsh break-up may not have been identified (Methods). The ameliorating influence of elevation capital was also evident in the extent of marsh retreat. Where marshes had higher than the median elevation capital, there was no relationship between marsh retreat and RSLR (P = 0.850). At lower than the median elevation capital, the relationship was highly significant (P = 0.002).
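The reported r² and P values correspond to an ordinary least-squares regression of surface-water frequency on RSLR (and, separately, on elevation deficit). A minimal sketch with synthetic stand-in data:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-site values: local RSLR (mm/yr) and the frequency of
# open-water observations near each tidal marsh SET-MH station.
rslr = np.array([1.2, 2.3, 3.1, 4.4, 5.0, 6.2, 7.5, 8.1])
water_freq = np.array([0.05, 0.08, 0.07, 0.15, 0.12, 0.22, 0.30, 0.28])

fit = linregress(rslr, water_freq)
print(f"r^2 = {fit.rvalue**2:.2f}, P = {fit.pvalue:.4f}, "
      f"slope = {fit.slope:.3f} per mm/yr of RSLR")
```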
There are relatively few data on changes to reef-top habitats. Surveys of reef island planiform change in the tropical western Pacific and Indian Oceans have shown a remarkable degree of stability under rates of RSLR up to the contemporary GMSL rate 37,38. Our collation of existing data on reef island morphometric changes (n = 872) from the Indian and Pacific Oceans shows a higher probability of island contraction at rates of RSLR above the rate of contemporary GMSL rise (Methods; Extended Data Fig. 3c). Island size reduction is likely (P ≥ 0.66) at RSLR above 6.2 mm yr −1. The rate of RSLR in the Solomon Islands has averaged between 7 and 10 mm yr −1 since 1994 (ref. 39), and in the exposed northern Isabel Province, five of the twenty vegetated reef islands have completely eroded, leaving dead mangrove trunks on hard coral 40. A further six islands contracted by more than 20% in the same period.
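One transparent way to express contraction probabilities of the kind reported above is to bin island records by RSLR and take the empirical contraction fraction per bin. The sketch below uses synthetic placeholders for the collated morphometric records; none of the numbers generated are results from the study.

```python
import numpy as np

# Synthetic stand-in for the collated records (n = 872): local RSLR (mm/yr)
# and a 0/1 flag for whether an island's planiform area contracted.
rng = np.random.default_rng(42)
rslr = rng.uniform(0.0, 11.0, 872)
contracted = (rng.random(872) < 1.0 / (1.0 + np.exp(-(rslr - 6.2)))).astype(int)

bins = np.arange(0.0, 12.0, 2.0)
idx = np.digitize(rslr, bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():
        print(f"{bins[b - 1]:4.1f}-{bins[b]:4.1f} mm/yr: "
              f"P(contraction) = {contracted[sel].mean():.2f} (n = {sel.sum()})")
```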
Projected response to future sea-level rise
Modelling of spatial variability in RSLR was completed in the IPCC AR6 for each warming scenario 42. We compared regional RSLR projections to 2080-2100 with the distribution of mangroves, tidal marshes and coral reefs across the globe (Fig. 3; Methods). For each of the modelled scenarios, we determined the proportion of mangrove, tidal marsh and coral reef island habitat occurring where RSLR is projected to rise to levels at which eventual retreat of mangroves and tidal marshes is likely (4 mm yr −1) or very likely (7 mm yr −1), the best estimate from our combined palaeo and instrumental observations. For reef islands (a subset of mapped reefs), contraction or increasing island instability is likely by RSLR of 7 mm yr −1 (Extended Data Fig. 2c), although we cannot yet specify a rate of RSLR at which contraction is highly likely, given the scarcity of contemporary observations at higher rates of RSLR, and because this threshold will vary with the rate of surrounding reef vertical growth, reef flat width, wave exposure, island size and height, and reef-derived sediment supply.
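The exposure calculation reduces to sampling projected RSLR at every mapped habitat location and reporting the fraction above each threshold. A minimal sketch, with synthetic projections standing in for the AR6 regional fields:

```python
import numpy as np

# Synthetic projected RSLR (mm/yr, 2080-2100) at mapped habitat locations
# for one warming scenario; real values would come from the AR6 fields.
rng = np.random.default_rng(0)
rslr_at_habitat = rng.normal(loc=5.5, scale=1.5, size=10_000)

for threshold in (4.0, 7.0):  # 'likely' and 'very likely' retreat thresholds
    frac = float(np.mean(rslr_at_habitat >= threshold))
    print(f"fraction of habitat exposed to >= {threshold:.0f} mm/yr: {frac:.2f}")
```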
In the 1.5 °C scenario, the likely (P ≥ 0.66) rate of GMSL rise at 2080-2100 is between 2.4 and 6.4 mm yr −1 . Coastlines subject to rates of RSLR of 4 to 7 mm yr −1 correspond to centres of contemporary mangrove development, notably Southeast Asia and the Caribbean. Under this rate of RSLR, elevation deficits are likely (P = 0.66-0.90; Fig. 2). The probability of reaching a rate of RSLR at which elevation deficits are very likely (7 mm yr −1 ) remains low (<11%), although coastlines subject to high rates of land subsidence-including, for example, the US Gulf Coast and Southeast Asian deltas 28,43 -are projected to exceed this rate. Median projections for the 2 °C warming scenario suggest that one third of global mangroves are subject to ≥7 mm yr −1 and nearly all exposed to ≥4 mm yr −1 of RSLR, although there is comparatively little change in the proportion of tidal marshes and reefs exposed to ≥7 mm yr −1 of RSLR (Table 1). Under 3 °C of warming, nearly all tropical and subtropical latitude coastlines are exposed to ≥7 mm yr −1 of RSLR, and these are the locations of most of the world's mangroves and coral reefs. Median RSLR projections along the world's coastlines therefore show the probability of elevation deficits in mangroves shifting from likely to very likely between 2 °C and 3 °C of global warming ( Fig. 3 and Table 1).
At high latitudes, portions of coastline have declining RSLR owing to gravitational, rotational and elastic deformational effects resulting from mass loss of glaciers and the Greenland Ice Sheet offsetting GMSL rise. For this reason, proportional loss of existing tidal marsh with RSLR is expected to be lower than for mangroves with increased warming.
[Figure caption fragment: Note that projected rates of RSLR rely to a considerable extent on tide gauge records that may capture local anomalies (for example, due to fluid extraction) that could produce locally higher rates. e-g, The proportion of global tidal marsh (e), mangrove (f) and coral reef (g) habitat subject to 7 mm yr −1 of RSLR by 2100 in the scenarios shown in a-d, as well as the 5 °C scenario. Error bands show the 17-83% likely range. These projections do not take into account the possibility that ice sheet instabilities substantially increase RSLR in warming scenarios exceeding 2 °C.]
At 2 °C warming, the high-latitude European and North American west coasts remain below 4 mm yr −1 RSLR under median estimates, and at 3 °C the Baltic Sea and Gulf of Alaska remain below 4 mm yr −1. Tidal marsh habitat is likely to expand in extent in northern Siberia under higher RSLR owing to limited topographic and human development impediments (Fig. 4 and Extended Data Fig. 6). Far northern coastlines therefore emerge as important future habitats for tidal marsh (as also projected for seagrass meadows and kelp forests 19,30,44) under warmer temperatures and reduced ice cover and ice scour, increasing their relative contribution to blue carbon capture and storage at high latitudes.
The influence of global change drivers
The behaviour of future ecosystems may not always be anticipated by palaeo and contemporary analogues. Processes influencing the vertical adjustment of coastal wetlands and reefs to sea-level rise may be modified by climate change, though often the influence is to suppress vertical adjustment. Land-use change driven by population growth may increase sediment supply by rivers, subsidizing sediment accumulation in coastal deltas 45,46. Counteracting this is the association between economic development and dam construction, an intervention that retains sediment within catchments. Sediment yields to coastal environments in the global north are nearly half those prior to such hydrological modifications 45. Major hydrological developments in Southeast Asian rivers have negative implications for the resilience of mangroves to sea-level rise 47. Elevated concentrations of atmospheric CO 2 and associated climate change may modify biotic feedbacks to sea-level rise. Long-term field mesocosm experiments in Chesapeake Bay, USA have shown that root growth and marsh vertical adjustment were enhanced by the atmospheric CO 2 fertilization effect 48 and moderate warming (approximately 1.7 °C above ambient 49). However, as observed RSLR increased above 7 mm yr −1, water stress negated the benefit of elevated CO 2 (ref. 50), and temperatures above 1.7 °C increasingly promoted organic carbon remineralization, lowering elevation gain 50.
Ocean acidification and thermal stress will suppress reef vertical growth due to impacts on coral cover, unless rapid adaptation occurs. Recent estimates identify low accretion potential (averaging 1.8 ± 2.2 mm yr −1) across many tropical western Atlantic reefs 51, compared with rates derived from palaeo-reef core records 51. Currently less than half of reefs in the western Atlantic and Indian Ocean have maximum accretion potential rates matching altimetry-derived rates of sea-level rise 51. Recent modelling of the impacts of climate change on reef accretion potential to 2100 suggests that increasingly severe and frequent bleaching events will further limit reef accretion potential 52 (even in the absence of other confounding local disturbance pressures). The potential of reef-top habitats and reef islands to accrete will therefore be influenced by increasing water depths above the surrounding fringing reefs and probable shifts in the abundance and production rates of biota from which sediment is derived. Both may negatively affect future reef-top habitats and will almost certainly impinge upon cultural use and sustainability 14.
[Table 1 caption fragment: For mangroves and tidal marshes, loss is the proportion of existing area exposed to 4 mm yr −1 (likely loss; P > 0.66) and 7 mm yr −1 (very likely loss; P > 0.9) of RSLR, based on the probability distributions presented in Fig. 2 and the RSLR modelling in Fig. 3. For coral reef islands, the proportion refers to numbers of reefs and uses the conservative estimate of likely vulnerability to RSLR at 7 mm yr −1 (the full dataset with uncertainties is presented as Extended Data …)]
Implications for management
The committed loss of coastal habitats under high warming scenarios should not discourage conservation and restoration efforts. Under small elevation deficits, centuries may elapse before the elevation capital of a wetland is exhausted, and this will provide sufficient time for the continued supply of ecosystem services, including those critical for well-being and sustenance. Over the current century, landward migration driven by sea-level rise may compensate for wetland loss, or even facilitate wetland expansion and associated carbon burial potential 53. Extensive mangrove forest development in the mid-Holocene, coupled with high rates of vertical accretion under 4 to 7 mm yr −1 RSLR, promoted blue carbon capture and storage at a scale that may have contributed to an observed decline in global atmospheric CO 2 concentrations for this period 12. In the near term, increased GMSL potentially allows for the recolonization of these coastal floodplains, expanding mangrove area while promoting higher rates of organic carbon accumulation than currently encountered 53. Although intensive coastal development in Asia has reduced coastal wetland extent in former biogeographic centres 36 and is likely to restrict landward retreat (Fig. 4), extensive coastal floodplains provide viable opportunities for mangrove landward migration and even areal expansion in northern and northwestern Australia 54, the northern Gulf of Mexico 55, Siberia and, depending on opportunities for restoration in more populated areas, Central America, Colombia and the western Mediterranean (Fig. 4). In the Gulf of Mexico and northern Australia, mangrove forest and tidal creek encroachment under higher rates of RSLR is already being observed 25,56. The implication of the gap between the Paris Agreement aspiration (2 °C with an aim of 1.5 °C) and the pathways consistent with the implementation of current policies (2.4 °C to 3.5 °C by 2080-2100, medium confidence 57) is profound for coastal ecosystems. Warming above 2 °C would restore the conditions faced by mangroves and tidal marshes during previous high-RSLR periods and would likely expose most of the world's mangroves and two thirds of the world's tidal marshes to elevation deficits (Table 1). Warming of 3 °C by 2100 would accelerate GMSL rise to rates consistent with a high probability of eventual tidal marsh and mangrove retreat and increased reef island instability across much of their geographic extent. Once reached, these rates of RSLR are projected to persist for centuries to millennia 58. The thermal inertia of ocean waters is likely to drive irreversible ice sheet grounding line retreat where bedrock slopes away from the coast 58, ensuring ongoing marine ice sheet instability 59. Projected elevation deficits therefore define committed losses upon the exhaustion of elevation capital. Our analysis suggests that the long-term contribution of blue carbon to climate mitigation is compromised under higher emissions scenarios. Although organic carbon would be preserved in situ in many settings 12,53, narrower, younger and more transitional wetlands would predominate 23. As a result, coastlines and reef islands that are currently protected will be increasingly exposed to erosion and retreat, consistent with palaeo observations 16,23.
Coastal ecosystems represent another of the numerous tipping elements for climate change impacts and rank among the most vital to human well-being and the most vulnerable to imminent warming levels 60. The non-linear response to external forcing seen in a wide range of ecosystems is closely associated with the concept of a safe operating space, which promotes maintaining planetary boundaries at a safe distance from critical thresholds of unacceptable environmental change 20. Our findings demonstrate that the boundaries of a safe operating space for coastal ecosystems are being approached and will be set by near-term emissions pathways. They also highlight the importance of mitigating local environmental stressors (such as pollution in coral reefs) and restoring cleared and degraded wetlands to enhance resilience against climate change and coastal recession. In the face of irrevocable disruption under high rates of RSLR, the most effective means of promoting the continued survival of widespread mangrove forests, tidal marshes and coral reef islands is to achieve the Paris Agreement goal of net zero emissions by 2050. To this end, a contribution will be made by the preservation, restoration and landward accommodation of coastal blue carbon ecosystems.
Online content
Any methods, additional references, Nature Portfolio reporting summaries, source data, extended data, supplementary information, acknowledgements, peer review information; details of author contributions and competing interests; and statements of data and code availability are available at https://doi.org/10.1038/s41586-023-06448-z.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Palaeo wetland response to RSLR
To estimate RSLR following the Last Glacial Maximum (Fig. 1), we use a revised numerical simulation of GIA 61 , which adopts the ICE-6G global ice reconstruction from the Last Glacial Maximum to the present 62,63 . We use an ensemble of 300 combinations of rheological parameters in the GIA model to estimate RSL at 500-year time steps on a 512 × 260 global latitude × longitude grid, resulting in predictions of RSL at >130,000 points in space for each time step.
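The rate calculation implied here, converting ensemble RSL predictions at 500-year steps into RSLR rates with ensemble uncertainty, can be sketched as follows. This is an illustrative numpy sketch, not the authors' code; the placeholder data and variable names are assumptions.

```python
import numpy as np

# Hypothetical ensemble of GIA-modelled relative sea level (RSL), in metres:
# shape = (n_ensemble, n_timesteps, n_points); time steps are 500 years apart.
rng = np.random.default_rng(0)
rsl = rng.normal(size=(300, 45, 1000)).cumsum(axis=1)  # placeholder data

dt_years = 500.0

# Rate of RSL change between successive time steps, converted to mm/yr.
rsl_rate_mm_per_yr = np.diff(rsl, axis=1) / dt_years * 1000.0

# Summarize across the 300-member rheological ensemble at each time/point.
median_rate = np.percentile(rsl_rate_mm_per_yr, 50, axis=0)
p17, p83 = np.percentile(rsl_rate_mm_per_yr, [17, 83], axis=0)
```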
Post-glacial coastal habitat development and retreat prior to the Holocene are inferred from relict features that include the following: (1) drowned mangroves, tidal marshes and coral reefs, the remains of which have been found at around 90 m water depth 64 , corresponding to meltwater pulse 1A (14.6-14.3 ka); and (2) relict features at around 50 m water depth 65,66 , corresponding to a rapid rise in GMSL dating to 11.3-11.0 ka. Mangrove vertical development in relation to Holocene RSLR is based on 78 observations of the timing of the initiation of sustained mangrove peat development 12 . Post-glacial mangrove expansion is evident on the Sahul Shelf, Western Australia 67 and the Sunda Shelf, Southeast Asia 68 at ~12-14.5 ka, a phase that ceased during meltwater pulse 1A (Fig. 1). A relatively brief (~300-year) period of mangrove expansion and vertical development prior to 9 ka is documented in the western Ganges-Brahmaputra Delta 69 and on the Queensland continental shelf 70 , locations of high sediment delivery at the time 69,70 . Our GIA modelling (Fig. 1) suggests RSLR dipped to around 6 mm yr −1 at this time. Mangroves at both sites were drowned during a period in which RSLR increased to circa 7 mm yr −1 at ~9 ka. A pan-tropical expansion in mangrove development and sustained vertical adjustment 54,71-75 commenced from ~8.5 ka as RSLR declined 12 to <6 mm yr −1 according to GIA modelling (Fig. 1). A subset of these observations, representative of the global dataset 12 , is provided in Fig. 1 81 . Reef island development commenced in the Pacific during the mid-Holocene, corresponding to RSLR stabilization and fall. The example provided in Fig. 1 is Bewick Cay, Queensland, Australia 24 .
Tidal marsh vulnerability to Holocene RSLR presented in Fig. 2 is based on data from two studies utilizing multiple proxies across the UK 15 and the Mississippi Delta 16 . Holocene RSL data were compiled for 54 regions across Great Britain, with rates of RSL change varying in relation to proximity to the centre of the Last Glacial Maximum British-Irish Ice Sheet 15 . RSLR rates estimated from GIA model predictions were compared to sea-level tendencies for 781 tidal marsh index points 15 (positive n = 403; negative n = 360; no tendency n = 19). For the Mississippi Delta, the Holocene RSL history was inferred from 72 sea-level index points, with marsh tendency assessed using 334 boreholes showing a well-defined Pleistocene-Holocene transition overlain by at least 2 m of sediment 16 .
Contemporary wetland response to RSLR
Assessment of mangrove and tidal marsh vertical adjustment was conducted using the SET-MH technique. This globally distributed network of monitoring stations 82 combines a stable benchmark rod, against which measurements of elevation change are made, with an artificial marker horizon introduced at the time of benchmark rod installation, against which sediment accretion is measured (Extended Data Fig. 1). Pins lowered from a portable arm (the surface elevation table) extend to the marsh surface, measuring surface elevation change in relation to the base of the benchmark rod. Elevation gain can then be compared against water level changes measured at nearby tide gauges 29 .
This technique was previously used 29 to test how elevation gain at 477 SET-MH monitoring stations compared to RSLR changes measured over the same period. To this analysis (presented as Fig. 2e) we have added a mangrove SET-MH network of 190 SET-MH stations (Fig. 2f), the locations of which are provided in Extended Data Table 2. These data combine published rates of elevation gain with new measurements reported here (Extended Data Table 2). RSLR for the period of SET-MH measurement was extracted from tide gauge records provided by the National Oceanic and Atmospheric Administration (https://tidesandcurrents.noaa.gov/sltrends/sltrends.html) and, for Australia, the Australian Baseline Sea Level Monitoring Project (http://www.bom.gov.au/oceanography/projects/abslmp/abslmp.shtml).
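In essence, the comparison reduces to fitting a rate to repeated pin readings and differencing it against tide-gauge RSLR over the same interval. A minimal numpy sketch, with made-up readings and a hypothetical tide-gauge rate:

```python
import numpy as np

# Hypothetical SET readings: mean pin height (mm) relative to the benchmark
# rod at each measurement date (decimal years). Names are illustrative only.
years = np.array([2005.0, 2006.1, 2007.0, 2008.2, 2009.1, 2010.0])
mean_pin_height_mm = np.array([0.0, 2.1, 3.9, 6.4, 8.2, 10.5])

# Surface elevation change rate (mm/yr) from a least-squares linear fit.
elev_rate, _ = np.polyfit(years, mean_pin_height_mm, 1)

# Compare against RSLR from the nearest tide gauge over the same period.
tide_gauge_rslr_mm_per_yr = 3.1  # hypothetical value
elevation_budget = elev_rate - tide_gauge_rslr_mm_per_yr  # surplus if > 0
```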
Contemporary habitat distribution
The contemporary distribution and extent of mangroves 83 (https://doi.org/10.34892/07vk-ws51), tidal marshes 84 (https://doi.org/10.34892/w2ew-m835) and coral reefs 85 (Figs. 1, 3 and 4) was accessed from the Ocean Data Viewer (https://data.unep-wcmc.org), hosted by the UN Environment World Conservation Monitoring Centre. An important caveat in relation to the representation of tidal marshes is the poor coverage of their possible extent at high northern latitudes. For Fig. 3, the coral reef dataset was complemented by additional data on the global distribution of atolls, which was sourced from the World Atolls database 86 (https://www.arcgis.com/home/item.html?id=1c18adf04d9e47669281061ff60167e1).
Surface water and marsh change analysis
Because mangrove canopy cover obscured surface water observations, we report changes in surface water occurrence, and conversion of wetland to open water, only for tidal marshes. We used two Earth-observation-derived global datasets to estimate tidal marsh conversion to open water across the SET-MH monitoring network. The Global Tidal Wetland Change (GTWC) dataset 36 depicts losses and gains of tidal marshes, tidal flats and mangroves (collectively termed 'tidal wetlands') at 30-m resolution over a 20-yr period (1999-2019). The data were developed through a machine learning classification of more than 1.1 million Landsat scenes acquired over the global coastal zone since 1999. GTWC data layers include tidal wetland losses, gains, and the probability of occurrence of tidal wetlands for the first (1999) and last (2019) time steps of the analysis. The Global Surface Water dataset depicts the location and temporal distribution of surface water from 1984 to 2020 at 30-m resolution 35 . The data were generated from >4.4 million Landsat scenes by individually classifying each Landsat pixel into water and non-water using an expert system. Although both datasets are developed from Landsat data, they differ in their temporal spans (2 to 4 decades), methodological approaches to mapping change dynamics, post-processing methods and minimum mapping unit. We therefore used both datasets to estimate the extent of tidal marsh conversion to open water in relation to observed RSLR (Supplementary Data 1).
To estimate net tidal wetland change and the extent of conversion to open water at each SET-MH monitoring site, we developed a buffer feature around each SET installation with an area of 5 km 2 . For the GTWC dataset, the area of losses and gains of each tidal wetland ecosystem type (tidal marshes, tidal flats and mangroves) was computed, yielding a net change estimate of tidal wetlands associated with each SET site. For the Global Surface Water dataset, we used the water occurrence change intensity layer, which is computed as the absolute difference in the per-pixel mean water occurrence between two distinct epochs 35 (1984-1999 and 2000-2020). The average surface water change in each SET buffer feature was computed (Supplementary Data 1).
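The buffer-and-summarize step can be approximated with standard geospatial tooling. A sketch under stated assumptions: the station coordinates, CRS choice, and raster file name are all hypothetical, and an equal-area projection would be preferable in practice.

```python
import numpy as np
from shapely.geometry import Point
import geopandas as gpd
from rasterstats import zonal_stats

# Hypothetical SET-MH station coordinates in a metric projection (EPSG:3857
# here purely for illustration).
stations = gpd.GeoDataFrame(
    {"site": ["A", "B"]},
    geometry=[Point(16_000_000, -2_900_000), Point(-10_100_000, 3_400_000)],
    crs="EPSG:3857",
)

# Buffer radius giving an area of 5 km^2: r = sqrt(area / pi) ~ 1262 m.
radius_m = np.sqrt(5e6 / np.pi)
buffers = stations.copy()
buffers["geometry"] = stations.geometry.buffer(radius_m)

# Mean water-occurrence change intensity inside each buffer, from a
# hypothetical raster exported from the Global Surface Water dataset.
stats = zonal_stats(buffers["geometry"], "water_change_intensity.tif",
                    stats="mean")
buffers["water_change_mean"] = [s["mean"] for s in stats]
```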
The relationships between surface water and tidal wetland change versus contemporaneous RSLR and elevation deficit were tested using multiple linear regression. Predictive variables are provided in Extended Data Table 3; they consist of climatic, hydrological and edaphic properties associated with each SET-MH station and are sourced from ref. 39. Potential collinearity of predictors was assessed using the variance inflation factor from the car package 87 . The maximum variance inflation factor (3.22) was below the level usually considered problematic. The overall relative importance of the key predictors was assessed using random forest regression analyses 88 , a machine learning approach that aggregates the results of many small regression trees (n = 20,000) while retaining a bootstrapped subset of all observations for out-of-bag (internal) error testing. Analyses were performed in R version 4.1.3 and presented as Extended Data Fig. 4b.
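The original analysis was performed in R (car, random forest); a minimal Python equivalent of the collinearity screen and importance ranking, with synthetic placeholder data and hypothetical variable names, would look like this:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.ensemble import RandomForestRegressor

# Hypothetical predictor table per SET-MH station (climatic, hydrological
# and edaphic variables) plus the response (surface-water change).
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(190, 4)),
                 columns=["rslr", "tidal_range", "temperature", "rainfall"])
y = 0.8 * X["rslr"] + rng.normal(scale=0.5, size=len(X))

# Collinearity screen: variance inflation factor for each predictor.
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns)}

# Relative importance via random forest with out-of-bag error estimation
# (20,000 trees mirrors the scale reported in the text; fewer would do
# for a quick check).
rf = RandomForestRegressor(n_estimators=20_000, oob_score=True,
                           random_state=0, n_jobs=-1)
rf.fit(X, y)
importance = dict(zip(X.columns, rf.feature_importances_))
```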
Island contraction and expansion
Data on island contraction or expansion (Extended Data Fig. 3) were sourced from recent assessments and reviews 37,40,89,90 (total island n = 872: Supplementary Data 2). We compared the proportion of islands showing areal contraction or expansion, binned at 1 mm yr −1 RSLR increments, using the rate of RSLR cited for each reef island in the source studies. Islands were considered stable if the change was less than 3% of the original area, following ref. 37.
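The binning-and-classification logic is straightforward to reproduce. A pandas sketch with synthetic placeholder records (the column names and random data are assumptions):

```python
import numpy as np
import pandas as pd

# Hypothetical per-island records: RSLR (mm/yr) and fractional area change.
rng = np.random.default_rng(2)
islands = pd.DataFrame({
    "rslr_mm_yr": rng.uniform(0, 10, size=872),
    "area_change_frac": rng.normal(0, 0.1, size=872),
})

# An island is 'stable' if its area changed by less than 3% (ref. 37);
# otherwise it is contracting or expanding.
def classify(change, threshold=0.03):
    if abs(change) < threshold:
        return "stable"
    return "expanding" if change > 0 else "contracting"

islands["status"] = islands["area_change_frac"].map(classify)

# Proportion of each status within 1 mm/yr RSLR bins.
islands["rslr_bin"] = pd.cut(islands["rslr_mm_yr"], bins=np.arange(0, 11, 1))
proportions = pd.crosstab(islands["rslr_bin"], islands["status"],
                          normalize="index")
```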
Ecosystem stability under RSLR
Contemporary marsh and mangrove resilience to RSLR was inferred from data from the globally distributed mangrove and tidal marsh SET-MH networks. The elevation surplus or deficit of each site was estimated by comparing the rate of surface elevation change recorded by the SET to rates of RSLR over the period of operation of the SET. RSLR was sourced from the National Oceanic and Atmospheric Administration's Laboratory for Satellite Altimetry (https://tidesandcurrents.noaa.gov/sltrends/). The elevation surplus/deficit of each SET site was categorized in Extended Data Fig. 2d,e as in surplus if surface elevation change exceeded the RSLR rate by more than 1 mm yr −1 , stable if surface elevation change was within ±1 mm yr −1 of the RSLR rate, or in deficit if the RSLR rate exceeded surface elevation change by more than 1 mm yr −1 . The stacked histograms in Extended Data Fig. 2d,e show the proportion of elevation budget categories in relation to RSLR rates (1 mm yr −1 bin size) at each tidal marsh (Extended Data Fig. 2d) and mangrove (Extended Data Fig. 2e) SET site.
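The three-way categorization described above reduces to a simple rule on the difference between elevation gain and RSLR; a small function makes the tolerance explicit (the ±1 mm yr −1 value comes from the text, the function name is ours):

```python
def elevation_budget_category(elev_change_mm_yr, rslr_mm_yr, tol=1.0):
    """Categorize a SET-MH site's elevation budget relative to RSLR.

    'surplus': elevation gain exceeds RSLR by more than `tol` mm/yr
    'stable' : elevation gain is within +/- `tol` mm/yr of RSLR
    'deficit': RSLR exceeds elevation gain by more than `tol` mm/yr
    """
    diff = elev_change_mm_yr - rslr_mm_yr
    if diff > tol:
        return "surplus"
    if diff < -tol:
        return "deficit"
    return "stable"
```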
The resilience of palaeo tidal marshes to RSLR is represented in Extended Data Fig. 2a,b for the UK and Mississippi Delta, respectively. A 'negative' sea-level tendency-indicating tidal marsh advance-is identified by decreasing marine influence (that is, a regressive contact), whereas a 'positive' sea-level tendency-indicating tidal marsh retreat-is identified by increasing marine influence (that is, a transgressive contact) in sediment archives. In the example core (Extended Data Fig. 2a), the contact between an intertidal mud and tidal marsh peat, which represents a negative tendency and marsh advance, was dated to ~8,439-8,956 years ago. The thin accumulation of tidal marsh peat is overlain by an intertidal mud, representing a positive tendency and marsh retreat; this event was dated to 8,501-8,959 years ago. RSLR rates were estimated for the timing of these marsh advance and retreat events recorded in the stratigraphy using a GIA model. The stacked histogram (Extended Data Fig. 2a) shows the proportion of these events from sediment archives across the UK in relation to RSLR rates (0.5 mm yr −1 bin size).

The facies successions identified in sediment cores from the Mississippi Delta (Extended Data Fig. 2b) were categorized based on the following criteria: a 'terrestrial' succession-indicating no evidence of marsh 'drowning'-is associated with the presence of terrestrial (marsh) mud or peat throughout the core and an absence of lagoonal facies; 'gradual drowning'-indicating marsh drowning that occurred over centuries-is identified by at least a 30-cm-thick unit of marsh mud or peat occurring beneath lagoonal mud; 'rapid drowning'-indicating marsh drowning that occurred over about half a century-is associated with less than a 30-cm-thick unit of marsh mud or peat occurring beneath lagoonal facies. The contact between marsh and lagoonal facies representing gradual or rapid marsh drowning was radiocarbon dated to determine the timing of the event, and RSLR rates at that time were estimated from an RSLR record obtained from compaction-free basal peats from the Mississippi Delta. The proportion of each type of facies succession is shown in comparison to estimated rates of RSLR (0.5 mm yr −1 bin size).
The 'initiation' of sustained mangrove accretion (Extended Data Fig. 2c) (at least 2 m of mangrove sediment) was radiocarbon dated and RSLR rates at that time interval were estimated from an ensemble of GIA model predictions 12 . The histogram shows the probability density (distribution) of initiation events in relation to RSLR rates (1 mm yr −1 bins).
We summarized the probability thresholds at which marsh or mangrove elevation deficit becomes likely (P ≥ 0.66) or very likely (P ≥ 0.90), adopting IPCC likelihood language 91 . To estimate the probability of a negative tendency (Fig. 2b,c) or elevation deficit (Fig. 2e,f) conditional on rates of RSLR, we follow ref. 15 by modelling the elevation budget or facies successions as binary response variables (elevation deficit or drowning, 1; elevation surplus or terrestrial, 0) in a Bayesian framework. We chose the bin widths for histograms and the number of segments in the Bayesian analysis by visual inspection for best fit. Details of the probabilistic analysis used to estimate the relationship between mangrove initiation and RSLR rates (Fig. 3f) can be found in ref. 4.
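The authors fitted a Bayesian segmented model following ref. 15; as a rough non-Bayesian stand-in, an ordinary logistic regression of the binary outcome on RSLR rate also yields conditional probabilities and the RSLR rates at which the IPCC 'likely' and 'very likely' thresholds are crossed. A sketch with synthetic data (the placeholder relationship and all values are assumptions):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical observations: RSLR rate (mm/yr) and a binary outcome
# (1 = elevation deficit / drowning, 0 = surplus / terrestrial).
rng = np.random.default_rng(3)
rslr = rng.uniform(0, 12, size=400)
p_true = 1 / (1 + np.exp(-(rslr - 6)))          # placeholder relationship
outcome = rng.binomial(1, p_true)

# Simple logistic regression as a stand-in for the Bayesian segmented model.
model = sm.Logit(outcome, sm.add_constant(rslr)).fit(disp=False)

# RSLR rates at which an elevation deficit becomes 'likely' (P >= 0.66)
# and 'very likely' (P >= 0.90) under IPCC likelihood language.
b0, b1 = model.params
for p in (0.66, 0.90):
    threshold = (np.log(p / (1 - p)) - b0) / b1
    print(f"P >= {p}: RSLR ~ {threshold:.1f} mm/yr")
```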
Sea-level rise projections
Sea-level rise projections (Figs. 3 and 4) were those used in the IPCC AR6 and were sourced from https://doi.org/10.5281/zenodo.5914710 42 . Sea-level rise scenarios to 2100 were converted to point shapefiles for the median, 17th and 83rd percentile projections for the following warming-level-based scenarios: 1.5 °C, 2.0 °C, 3.0 °C, 4.0 °C and 5.0 °C. The 17th-83rd percentile range corresponds to the assessed IPCC likely range; the IPCC assessment is that there is at least a 66% chance that the true value will fall within this range. From the AR6 sea-level rise scenarios, sea-level rise rates at 2100 were converted to raster format (cell size 1 degree) for the median, 17th and 83rd percentile projections for the above-listed temperature-limited scenarios. All land-based pixels (defined as pixels where sea-level rise rates were zero for all percentiles of one temperature scenario) were converted to NoData.
Ecosystem exposure to projected RSLR

For Fig. 3, available polygons for tidal marshes, mangroves and coral reefs were converted to point files based on each polygon's centroid coordinates. Where polygon features consisted of multiple polygons, polygon features were split into single-polygon features before converting them to centroid points. All resulting polygon centroids were merged with the available point data for mangroves, tidal marshes and reef islands into a dataset containing 1,885,466 entries. To visualize the spatial variability of wetland exposure to local sea-level rise (Fig. 3a-d), local RSLR rates (incorporating vertical land movement) 42 were extracted from the median projections of the temperature-limited scenarios 1.5 °C, 2.0 °C, 3.0 °C and 4.0 °C and classified into the following RSLR rate exposure categories: <0 mm yr −1 (blue), 0-4 mm yr −1 (yellow), 4-7 mm yr −1 (orange) and >7 mm yr −1 (red).
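The four exposure categories are fixed-breakpoint bins, which map directly onto numpy's digitize. A minimal sketch (the rate values are made up):

```python
import numpy as np

# Hypothetical local RSLR rates (mm/yr) extracted at ecosystem centroids.
rates = np.array([-0.5, 2.3, 5.1, 8.7, 3.9, 6.2])

# Exposure categories used in the text: <0, 0-4, 4-7, >7 mm/yr.
labels = np.array(["<0", "0-4", "4-7", ">7"])
category = labels[np.digitize(rates, bins=[0.0, 4.0, 7.0])]
# -> ['<0', '0-4', '4-7', '>7', '0-4', '4-7']
```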
To calculate proportional changes of exposure to local RSLR rates for all five temperature-limited scenarios (Fig. 3e-g), only the available polygons for tidal marshes, mangroves and coral reefs were utilized, as these carry accurate areal information. As above, polygon data were converted to point files based on their centroid locations, but to preserve the accurate areal information, multi-polygon features were not split up. All local RSLR rates of each scenario (five temperature scenarios, with three percentiles each) were extracted for each ecosystem category to calculate proportional exposure to RSLR rates of <0 mm yr −1 , 0-4 mm yr −1 , 4-7 mm yr −1 and >7 mm yr −1 . For each temperature scenario, the respective uncertainty range was defined by the lower (17th) and upper (83rd) percentiles.
Modelling retreat potential

AR6 RSLR data up to 2100 were utilized to model the inland retreat space of coastal wetlands available for two RSLR scenarios: the 2 °C and 3 °C warming levels (Extended Data Fig. 6). The 3 °C warming level, representing the greater potential landward retreat, is also presented in Fig. 4. This modelling relies on the global coastal wetland model, which assumes inland retreat can occur where local population densities are below a pre-defined population density threshold and the coastal topography provides sufficiently flat inland areas 14 .
AR6 RSLR data (RSLR between 2020 and 2100) were retrieved from https://doi.org/10.5281/zenodo.5914710 42 with a spatial resolution of 1 degree. Trajectories of RSLR for each coastal segment, the spatial unit on which the global coastal wetland model is based 92 , were derived from the data point located closest (Euclidean distance) to the centre of the respective coastline segment. Inland retreat space was calculated as the area additionally inundated during mean high water spring conditions under future RSLR scenarios, and expressed as a percentage of current wetland extents 14 . High water spring levels were thereby assumed to rise at the same rate as mean sea level. Local topographical profiles were calculated based on global Shuttle Radar Topography Mission data 93 and on the method first presented in ref. 94.
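The nearest-point assignment of a 1-degree RSLR value to each coastline segment is a standard KD-tree query. A scipy sketch with synthetic coordinates (all arrays and names are placeholders; the paper's own segment geometry is not reproduced here):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical AR6 grid-point coordinates (lon, lat) with RSLR (m, 2020-2100)
# and coastline-segment centres.
rng = np.random.default_rng(4)
grid_xy = rng.uniform([-180, -60], [180, 60], size=(5000, 2))
grid_rslr = rng.uniform(0.2, 1.0, size=5000)
segment_centres = rng.uniform([-180, -60], [180, 60], size=(12000, 2))

# Assign each coastal segment the RSLR of the nearest grid point
# (Euclidean distance in lon/lat, as a simple approximation).
tree = cKDTree(grid_xy)
_, nearest_idx = tree.query(segment_centres, k=1)
segment_rslr = grid_rslr[nearest_idx]
```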
Taking into account the widespread obstruction that human coastal infrastructure imposes on coastal wetland inland retreat 95 , and assuming that the extent of obstruction is a function of population density, wetland inland retreat was accounted for only where population densities within the local 1-in-100-year floodplain are below a threshold of 20 people per km 2 (best case) or 5 people per km 2 (worst case). This range has previously been estimated to represent current conditions for the existence of barriers to coastal wetland inland retreat 14 . Population densities were projected forward using estimated population growth following the 'middle-of-the-road' shared socio-economic pathway (SSP2) 96 . We also modelled the potential landward space available for ecosystem redistribution ignoring the potential impediment of population density, the no-barriers scenario (Fig. 4).
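The threshold logic amounts to masking each segment's potential retreat area by its floodplain population density. A minimal numpy sketch of the three scenarios (the example arrays are made up):

```python
import numpy as np

# Hypothetical per-segment inputs: potential retreat area (km^2) and current
# population density (people/km^2) in the local 1-in-100-year floodplain.
retreat_area = np.array([12.0, 3.5, 40.2, 0.8])
pop_density = np.array([4.0, 18.0, 55.0, 2.0])

# Retreat is only credited where density is below the scenario threshold.
best_case = np.where(pop_density < 20, retreat_area, 0.0)   # 20 people/km^2
worst_case = np.where(pop_density < 5, retreat_area, 0.0)   # 5 people/km^2
no_barriers = retreat_area                                   # unconstrained
```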
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Extended Data Table 2 | Location and timing of mangrove SET-MH measurements, with corresponding rate of RSLR and elevation gain. *Original SET data; **RSLR for the period of SET-MH measurements, from the nearest tide gauge.
Extended Data Table 3 | Identifiers and variables used in the random forest regression analysis (Data 29 ).
| 2023-09-01T06:16:12.258Z | 2023-08-30T00:00:00.000 | {
"year": 2023,
"sha1": "2e5b7d6b790f6752e6fe40b17a8be350fc37f534",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41586-023-06448-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "7ebc03f0dac1fb6d1677b279cb69a76ce2acd2e8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17486934 | pes2o/s2orc | v3-fos-license | Engineering a cyanobacterium as the catalyst for the photosynthetic conversion of CO2 to 1,2-propanediol
Background Modern society relies primarily on petroleum and natural gas for the production of fuels and chemicals. The major commodity chemical 1,2-propanediol (1,2-PDO), which has an annual production of more than 0.5 million tons in the United States, is currently produced by chemical processes from petroleum-derived propylene oxide, which are energy intensive and not sustainable. In this study, we sought to achieve photosynthetic production of 1,2-PDO from CO2 using a genetically engineered cyanobacterium, Synechococcus elongatus PCC 7942. Compared to previously reported biological 1,2-PDO production processes, which used sugar or glycerol as the substrate, direct chemical production from CO2 in photosynthetic organisms recycles atmospheric CO2 and will not compete with food crops for arable land. Results In this study, we report photosynthetic production of 1,2-PDO from CO2 using a genetically engineered cyanobacterium Synechococcus elongatus PCC 7942. Introduction of the genes encoding methylglyoxal synthase (mgsA), glycerol dehydrogenase (gldA), and aldehyde reductase (yqhD) resulted in the production of ~22mg/L 1,2-PDO from CO2. However, a comparable amount of the pathway intermediate acetol was also produced, especially during the stationary phase. The production of 1,2-PDO requires a robust input of reducing equivalents from cellular metabolism. To take advantage of cyanobacteria's NADPH pool, the synthetic pathway for 1,2-PDO was engineered to be NADPH-dependent by exploiting NADPH-specific secondary alcohol dehydrogenases that had not been reported for 1,2-PDO production previously. This optimization strategy resulted in the production of ~150mg/L 1,2-PDO and minimized the accumulation of the incomplete reduction product, acetol. Conclusion This work demonstrated that cyanobacteria can be engineered as a catalyst for the photosynthetic conversion of CO2 to 1,2-PDO. This work also characterized two NADPH-dependent sADHs for their catalytic capacity in 1,2-PDO formation, and suggested that they may be useful tools for the renewable production of reduced chemicals in photosynthetic organisms.
Background
Many natural metabolites containing bi-functional groups, such as succinate, lactate, and 3-hydroxybutyrate, can be used as monomers to make polymers, and have long been produced biologically by fermentation processes. However, production of diols from renewable sources presents unique challenges, partially because they are not typical fermentation products and because they are more reduced than the average carbon redox state in biological systems [1]. Over the past decade, progress in synthetic biology and metabolic engineering has enabled substantial achievements in diol production [1][2][3][4], mostly from sugars, glycerol, or biomass feedstocks. Direct production of chemicals from CO 2 in photosynthetic organisms [5][6][7] and lithoautotrophic organisms [8][9][10] has been proposed to be advantageous in particular situations. This work aims to produce 1,2-propanediol (1,2-PDO) directly from CO 2 in an engineered cyanobacterium, Synechococcus elongatus PCC 7942.
Racemic 1,2-PDO finds applications in antifreeze and heat transfer fluids, plasticizers and thermoset plastics, and cosmetics [11]. 1,2-PDO is naturally produced by anaerobic microorganisms such as Thermoanaerobacterium thermosaccharolyticum [12]. It has also been produced by engineered E. coli [3,11,13], Corynebacterium glutamicum [14], Saccharomyces cerevisiae [15,16], and Pichia pastoris [17] from glycerol or sugar. Consistent with the fermentative nature of the pathway in its native host, 1,2-PDO production in heterologous hosts has only achieved relatively high titers (4.5-6.5 g/L) in anaerobic fermentation. However, the cyanobacterium S. elongatus produces O 2 in the light reactions of photosynthesis ( Figure 1A) and does not perform fermentation in light. Moreover, CO 2 is a more oxidized substrate than sugar or glycerol and therefore requires more energy and reducing equivalents to be converted to the product. Here we report the introduction of a heterologous 1,2-PDO production pathway into S. elongatus and subsequent tailoring of the pathway to fit the metabolism of this photosynthetic CO 2 -fixing host.
Results and discussion
Designing of the 1,2-PDO production pathway

In light conditions, cyanobacteria fix CO 2 via the Calvin-Benson-Bassham (CBB) cycle, which is powered by ATP and NADPH generated by the photosystems ( Figure 1A).
Two CBB cycle intermediates, fructose-6-phosphate (F6P) and glyceraldehyde-3-phosphate (GAP), serve as the branch points for carbon leaving the CBB cycle to the central metabolism for glycogen synthesis and glycolysis, respectively. While glycogen synthesis is the major carbon and energy storage pathway, glycolysis and the TCA cycle produce building blocks for cell growth. The synthesis of 1,2-PDO, on the other hand, starts from another CBB cycle intermediate, dihydroxyacetone phosphate (DHAP). The introduction of one extra branch point can potentially increase the flux of output carbon from the CBB cycle, which has been suggested to be beneficial for increasing photosynthesis efficiency in higher plants [18,19], but may also disrupt the normal flux distribution in the cell.
To synthesize 1,2-PDO, DHAP is first converted to methylglyoxal ( Figure 1A) by methylglyoxal synthase (encoded by mgsA in E. coli). Methylglyoxal is very toxic to cells [11] and needs to be efficiently utilized by downstream enzymes. Two different metabolic routes have been shown to synthesize 1,2-PDO from methylglyoxal ( Figure 1B) [11]. The first involves reduction of methylglyoxal by glycerol dehydrogenase (encoded by gldA in E. coli) to lactaldehyde, which is further reduced by 1,2-propanediol reductase (encoded by fucO in E. coli) to yield the final product. The second route includes an alcohol dehydrogenase (such as the broad-substrate-range aldehyde reductase encoded by yqhD in E. coli) to produce acetol as the intermediate, which is then converted to 1,2-PDO by gldA. The latter route was chosen for introduction into S. elongatus because the yqhD gene has been previously overexpressed in this organism for biofuel production and showed relatively good performance, possibly due to its NADPH-specific cofactor preference.

Figure 1. The pathway for 1,2-propanediol (1,2-PDO) production from CO 2 in Synechococcus elongatus PCC 7942. A) The light reactions of photosynthesis generate ATP and the reducing equivalent NADPH, which power CO 2 fixation through the Calvin cycle and the synthetic 1,2-PDO production pathway. gldA, yqhD and mgsA are from E. coli. Secondary alcohol dehydrogenases (sADHs) are from C. beijerinckii and T. brockii. DHAP, dihydroxyacetone phosphate. GAP, glyceraldehyde-3-phosphate. F6P, fructose-6-phosphate. B) Two possible pathways for 1,2-propanediol synthesis from methylglyoxal.
Introduction of the 1,2-PDO biosynthesis genes

As described above, the genes mgsA, yqhD, and gldA from E. coli are needed to construct the 1,2-PDO biosynthesis pathway in S. elongatus from the CBB cycle intermediate DHAP ( Figure 1A, B). These genes were cloned into an artificial operon driven by the P trc promoter under the control of lacO (Figure 2A). The operon was inserted into the S. elongatus chromosome by homologous recombination at Neutral Site I (NSI). A lacI gene and a spectinomycin resistance gene were also inserted together with the operon to achieve inducible gene expression (Additional file 1: Figure S2) and facilitate antibiotic selection, respectively. The resulting strain was named LH21.
To check whether all the genes were successfully introduced and transcribed, reverse transcription polymerase chain reaction (RT-PCR) was performed. After induction with isopropyl β-D-1-thiogalactopyranoside (IPTG), total RNA was extracted from LH21 and wildtype S. elongatus PCC 7942. RT-PCR of the housekeeping gene rnpB, whose transcription product is the RNA component of RNase P, was performed to verify the RT-PCR system. Using the rnpB-specific primers, PCR products were obtained using cDNA synthesized from both wildtype and LH21 total RNA ( Figure 2B). On the other hand, the no-reverse-transcriptase (NRT) controls did not yield any products, suggesting that genomic DNA contamination during the total RNA extraction was minimal and that the positive RT-PCR signals are representative of the transcription of the target genes. Using the verified system, the mgsA, yqhD, and gldA genes from E. coli were tested and shown to be expressed in LH21 under inducing conditions ( Figure 2C). Activity assays using cell lysate further suggested that these heterologous enzymes were functional in cyanobacterial cells ( Figure 3A, B, Additional file 1: Figure S1).
Production of 1,2-PDO
1,2-PDO production by LH21 was performed under high light conditions (100 μE/s/m 2 ) with 50mM bicarbonate supplementation in the medium. After induction, LH21 produced around 16mg/L 1,2-PDO in 4 days, with a peak production rate of 7mg/L/day. However, although no defect in cell growth was seen, the production rate decreased rapidly and the total titer was only around 22mg/L after 10 days ( Figure 4A, B). The drastically decreased and eventually ceased production by LH21 led to one hypothesis: key substrate(s) might become limiting at a certain stage of cell growth, decreasing the flux through the synthetic 1,2-PDO production pathway. Three substrates are needed to produce 1,2-PDO in LH21: NADPH, NADH, and DHAP ( Figure 1A, B). Among these substrates, NADPH and DHAP are made through the light and dark reactions of photosynthesis ( Figure 1A), respectively, and could be continuously generated under light conditions. However, NADH may mainly be generated by the NADH-dependent glyceraldehyde-3-phosphate dehydrogenase in glycolysis [20]. The reaction catalyzed by the putative NADH-dependent malate dehydrogenase in the TCA cycle may also contribute to the cellular NADH level. It has been suggested that the main function of glycolysis and the TCA cycle in cyanobacteria is to generate essential metabolites for biomass synthesis under light conditions, rather than to produce reducing equivalents and energy [21]. As such, when the growth rate of LH21 cells slowed down in the stationary phase, the activities of these NADH-generating pathways may also have decreased. If NADH were really the limiting factor in our production scenario, the partially reduced intermediate acetol would be expected to accumulate. In fact, at the end of the production, around 16mg/L acetol had accumulated, which was comparable to the level of 1,2-PDO (~22mg/L) ( Figure 3D). In addition, acetol was only detected after 4 days and kept accumulating during the late stage of production (data not shown). These results are consistent with the abovementioned hypothesis and suggest that the NADH-dependent reduction of acetol catalyzed by gldA might be the limiting step in the 1,2-PDO production pathway.
Improving 1,2-PDO production using NADPH-dependent secondary alcohol dehydrogenases
To overcome the bottleneck of 1,2-PDO production in LH21, one possible strategy is to overexpress a soluble transhydrogenase (STH), which produces NADH at the expense of NADPH. However, genes encoding this enzyme have not been found in the S. elongatus genome. Heterologous overexpression of the Pseudomonas aeruginosa sth gene in cyanobacteria has been shown to be unstable and caused growth defects [22].
Alternatively, it could be beneficial to find an NADPH-dependent counterpart of gldA. To convert acetol to 1,2-PDO, a hydroxyl group on the secondary carbon has to be made, which can be catalyzed by the secondary alcohol dehydrogenase (sADH) family of enzymes. Several sADHs characterized previously are NADPH-dependent [4], including the sADHs encoded by the adh genes of Thermoanaerobacter brockii [23] and Clostridium beijerinckii [24]. To test their catalytic activity towards the substrate acetol, these two sADHs as well as the E. coli gldA were purified and their kinetic parameters were measured ( Table 1). The kinetics studies showed that the C. beijerinckii sADH has the highest k cat among all three enzymes tested. However, the large K m value of the C. beijerinckii sADH suggested that the enzyme has relatively low affinity for the substrate acetol. On the other hand, the T. brockii sADH has the highest acetol affinity and a two-fold higher k cat than that of the gldA enzyme, which is the most commonly used enzyme for this reaction step in previous studies. In summary, these two sADHs have distinct kinetic features and both showed activity towards the substrate acetol, suggesting that they may be used in cyanobacterial cells for 1,2-propanediol production in vivo. The C. beijerinckii and T. brockii adh genes were cloned and introduced into the cyanobacterial genome to replace gldA. The resulting strains were named LH22 and LH23, respectively ( Figure 2A). RT-PCR was also performed to verify the expression of these genes ( Figure 2B, C). Enzyme assays with crude cell extracts of LH22 and LH23 further verified that both the C. beijerinckii and T. brockii sADHs were functionally overexpressed and showed higher activities of NAD(P)H-dependent acetol reduction compared to that in LH21. In particular, the C. beijerinckii sADH overexpressed in LH22 delivered the highest activity ( Figure 3B, C).
Production using strains LH22 and LH23 yielded significantly higher 1,2-PDO titers (~150 and 80mg/L, respectively) compared to that of LH21 ( Figure 4B). Notably, the high production rate was maintained through the 10 days of production. Consistent with the hypothesis mentioned in the previous section, the high level of NADPH-dependent acetol reduction activity in LH22 and LH23 also significantly reduced the accumulation of the intermediate acetol ( Figure 3D).
Despite its great significance to metabolic engineers, information on intracellular NAD(P)H levels during different growth phases and growth conditions in cyanobacteria is very limited. Although it is believed that NADPH is more abundant than NADH in cyanobacteria [25], only a few studies have discussed its role in biofuel/biochemical synthesis from CO 2 [6,22]. The science behind efficient conversion of CO 2 to chemicals and fuels is still in its infancy and the NADPH driving force theory still needs to be extensively tested, which requires the accumulation of empirical evidence in more production scenarios, as well as fundamental studies on NAD(P)H levels and their regulation. In our case, other factors may also contribute to the difference between the production levels of the NADH- and NADPH-dependent pathways. For example, the NADPH-dependent enzymes may be better folded and more active when expressed in cyanobacteria. Different levels of physiological fitness may also be caused by overexpression of different enzymes, although all production strains showed the same growth phenotype as the wildtype.
Conclusion
In this work, we demonstrated 1,2-PDO production from CO 2 for the first time in the engineered cyanobacterium S. elongatus PCC 7942. By exploiting sADHs that had not been reported for 1,2-PDO production previously, a completely NADPH-dependent pathway was built to channel the CBB cycle intermediate DHAP towards 1,2-PDO production without accumulating the pathway intermediate, acetol. The best strain, LH22, which harbors mgsA and yqhD (both from E. coli) and the adh from C. beijerinckii, produced ~150mg/L 1,2-PDO.
This work revealed the great potential of the vast NADPH pool in photosynthetic cyanobacteria as a robust driving force for the production of chemicals. Among the chemicals that have been produced biologically at industrial scale, a significant number are synthesized by NADPH-consuming pathways. For example, in amino acid production, studies have shown that increasing the NADPH pool can improve production performance [26,27]. However, in most heterotrophic microorganisms, NADPH is mainly generated through the pentose phosphate pathway and TCA cycle, and its pool size is relatively small compared to that of NADH. On the other hand, photosynthetic organisms maintain high intracellular NADPH levels. This unique metabolic feature of photosynthetic organisms provides great opportunities for the production of chemicals through NADPH-dependent pathways.
Chemicals and reagents
All chemicals were purchased from Sigma-Aldrich (St. Louis, MO) or Fisher Scientifics (Pittsburgh, PA). Restriction enzymes were purchased from New England BioLabs (Ipswich, MA). The Rapid DNA ligation kit was from Roche (Mannheim, Germany). KOD DNA polymerase was from EMD Chemicals (San Diego, CA). Oligonucleotides were purchased from IDT (San Diego, CA).
Medium and culture condition
All S. elongatus PCC 7942 strains were grown in BG-11 medium (Sigma-Aldrich) containing 50mM NaHCO 3 in shake flasks. Plates contained 1.5% (w/v) agar. For 1,2-PDO production, 50mL cultures were grown in 250mL shake flasks under 100 μE/s/m 2 light supplied by four Lumichrome F30W-1XX 6500K 98CRI light tubes, at 30°C. Cell growth was monitored by measuring OD 730 . 1mM IPTG was added to induce gene expression at an OD 730 of around 1. Samples were taken daily for analysis and 50mM NaHCO 3 was added. The IPTG concentration in the culture was maintained at 1mM by adding an appropriate amount of fresh IPTG to compensate for IPTG lost through sampling. Spectinomycin was added for LH21, LH22, and LH23 at a final concentration of 20mg/L. E. coli strains were grown in LB medium, with a spectinomycin concentration of 50mg/L where appropriate.
Plasmid construction
All cloning and plasmid preparation were done using E. coli XL1-Blue cells (Stratagene, La Jolla, CA). Detailed information about the plasmids and primers used in this study can be found in Table 2 and Table 3. Briefly, to construct plasmid GYM, gldA, mgsA, and yqhD were amplified from E. coli genomic DNA using primer pairs gldA SpeI fwd/gldA_YqhD rev, gldA_YqhD fwd/YqhD_mgsA rev, and YqhD_mgsA fwd/mgsA NotI rev, respectively. The PCR products were purified and linked into an artificial operon by splicing by overlap extension (SOE) PCR using primers gldA SpeI fwd/mgsA NotI rev. The PCR product was digested with restriction enzymes SpeI and NotI and then inserted into the NSI targeting vector pAM2991. The CYM and TYM plasmids were constructed similarly. The C. beijerinckii adh and T. brockii adh genes were amplified from plasmids pZE12-alsS-alsD-CBADH and pZE12-alsS-alsD-TBADH [4], respectively, using primer pairs CBSADH SpeI fwd/CBSADH_yqhD rev and TBSADH SpeI fwd/TBSADH_yqhD rev, respectively. In order to amplify the yqhD gene with overlapping regions with the C. beijerinckii adh and T. brockii adh, the forward primer for yqhD amplification was CBSADH_yqhD fwd and TBSADH_yqhD fwd, respectively.
To purify the 6xHis-tagged gldA and secondary alcohol dehydrogenases, plasmids his-gldA, his-CB, and his-TB were constructed. Briefly, the T5 promoter/lacO and 6xHis tag fragment of pQE-9 (Qiagen) was amplified using primers pQE XhoI fwd and pQE Acc65I rev and then digested and inserted at the XhoI/Acc65I sites of the plasmid pZElac [29]. The resulting plasmid was named pZElac-his. To insert the E. coli gldA, C. beijerinckii adh and T. brockii adh genes into pZElac-his, the isothermal DNA assembly method [30] was used. The primers his ad up rev and his ad down fwd were used to amplify the vector backbone using pZElac-his as template. The primer pairs his ad up_gldA fwd/his ad down_gldA rev, his ad up_CB fwd/his ad down_CB rev, and his ad up_TB fwd/his ad down_TB rev were used to amplify the corresponding genes. The gene amplification products were assembled with the backbone.
Protein purification and enzyme kinetics study
The plasmids his-gldA, his-CB, and his-TB were transformed into BL21 cells. The transformants were cultured in 40 mL LB medium containing 100mg/L ampicillin. After the cells reached mid-log phase, 1mM IPTG was added to induce protein expression, followed by incubation at 30°C overnight. The cells were collected by centrifugation and the recombinant proteins were purified using the His-Spin Protein Miniprep kit (Zymo Research Corporation, CA) according to the manufacturer's instructions. The purified proteins were checked by SDS-PAGE for homogeneity and quantified by Bradford assay (Bio-Rad, Hercules, CA).
Dehydrogenase activity was measured by monitoring the decrease in absorbance of NADH or NADPH at a wavelength of 340 nm. To determine the kinetic parameters, the assay reaction was prepared with Tris-HCl buffer.

Transformation and selection

S. elongatus PCC 7942 cells were transformed as described [31]. The transformed cells were spread on BG-11 plates with 20mg/L spectinomycin and incubated in light to select for recombinants. Colonies were verified by PCR and inoculated into liquid BG-11 medium with 20mg/L spectinomycin for further tests.
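Returning to the dehydrogenase kinetics described above: parameters such as those in Table 1 are typically obtained by fitting initial rates at varying acetol concentrations to the Michaelis-Menten equation. A minimal scipy sketch with made-up data (rates, concentrations, and the enzyme concentration are all hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Hypothetical initial rates (uM/min) at varying acetol concentrations (mM).
s = np.array([0.5, 1, 2, 5, 10, 20, 50], dtype=float)
v = np.array([2.1, 3.8, 6.4, 10.9, 14.2, 16.8, 18.9])

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[20.0, 5.0])

# kcat follows from Vmax and the total enzyme concentration (hypothetical).
enzyme_conc_uM = 0.1
kcat_per_min = vmax / enzyme_conc_uM
```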
Enzyme assays
For sADH enzyme assays, the different cyanobacterial strains were grown in 30mL BG-11 medium in 125mL shake flasks under light conditions and induced with 1mM IPTG at an OD 730 of around 1. After overnight induction, cells were harvested and resuspended in 1mL Buffer A (100mM Tris-HCl, pH = 8.0). Cell lysates were prepared by bead beating followed by centrifugation at 10,000 × g for 20min at 4°C. 10μL of cell lysate was used in a 200μL reaction system which also contained 100mM Tris-HCl, pH = 8.0, 200μM NAD(P)H, and 20mM acetol (or no substrate for the negative control). The reaction was started by adding the substrate, and the OD 340 was monitored. The soluble protein concentrations in cell lysates were quantified using the Quick Start Bradford Protein Assay (Bio-Rad, CA) according to the manufacturer's instructions. A similar method was used to determine the activity of methylglyoxal reduction. The activity of methylglyoxal synthase was determined by a previously reported method [32]. Briefly, the reaction mixture at 30°C contained, in 0.5 ml: Tris-HCl buffer, pH 7.5 (50mM), dihydroxyacetone phosphate (20mM) and cell lysate. The reaction was allowed to proceed for 10min. The methylglyoxal formed was measured colorimetrically by taking 0.1 ml samples into 0.33 ml of 2,4-dinitrophenylhydrazine reagent (0.1% 2,4-dinitrophenylhydrazine in 2 M HCl) plus 0.9ml of water. After incubation at 30°C for 15min, 1.67ml of 2.5M NaOH was added and the OD 555 measured after a further 15min. A molar extinction coefficient of 4.48 × 10 4 was used to convert the readings into nmol of methylglyoxal.

Table 3. Primers used in this study
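The final colorimetric conversion in the methylglyoxal assay above is a Beer-Lambert calculation (A = epsilon * c * l). A short worked sketch; the 1 cm path length is an assumption, and the final volume follows from the stated additions (0.1 + 0.33 + 0.9 + 1.67 mL = 3.0 mL):

```python
# Beer-Lambert conversion used in the methylglyoxal assay.
EPSILON = 4.48e4      # M^-1 cm^-1, molar extinction coefficient from the text
PATH_CM = 1.0         # cuvette path length (assumed)
FINAL_VOL_L = 3.0e-3  # final reaction volume in litres (sum of additions)

def methylglyoxal_nmol(od_555):
    conc_molar = od_555 / (EPSILON * PATH_CM)
    return conc_molar * FINAL_VOL_L * 1e9  # mol -> nmol

print(methylglyoxal_nmol(0.448))  # ~30 nmol at OD555 = 0.448
```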
RT-PCR
For RT-PCR, the different cyanobacterial strains were grown in 30mL BG-11 medium in 125mL shake flasks under light conditions and induced with 1mM IPTG at an OD 730 of around 1. After overnight induction, total RNA was extracted using the RiboPure-Bacteria Kit (Life Technologies, NY). RNA was quantified using a Nanodrop. After treatment with the TURBO DNA-free kit (Life Technologies, NY), cDNA was synthesized using the iScript cDNA Synthesis kit (Bio-Rad, CA). PCR was performed using the specific primers listed in Table 3. The PCR products were then checked by electrophoresis on a 2% agarose gel and stained with ethidium bromide.
"year": 2013,
"sha1": "7420fd69b2bfc11d00a99651d4f3937e715bdd8a",
"oa_license": "CCBY",
"oa_url": "https://microbialcellfactories.biomedcentral.com/track/pdf/10.1186/1475-2859-12-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "775877fe90f8b7e53f4041dbba293861026df5bb",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
251110277 | pes2o/s2orc | v3-fos-license | Diatoms: A Review on its Forensic Significance
Diatoms, also called the 'jewels of the sea', are microorganisms found extensively in aquatic systems. These unicellular organisms make up nearly half of the biological material in a water body. They are also among the most significant biological evidence obtained in cases of drowning. The diatoms that infiltrate the body of the deceased may serve as corroborative or even conclusive evidence to support the diagnosis of death. Diatoms also help in ascertaining whether drowning was ante-mortem or post-mortem. This review discusses the current extraction procedures and microscopic examination techniques used in forensic science for the diagnosis of death by drowning.
Introduction
Diatoms are unicellular, non-motile micro-algae. They are the most common type of phytoplankton, belong to the kingdom Protista, and are classified under Bacillariophyceae. Diatoms are highly successful organisms that can thrive in large numbers in almost every aquatic environment. They are also responsible for carbon fixation in the environment. Typical diatoms fall in the size range of 10-20 µm 1 . They have a cell wall made of silica, which makes the outer lining of the cell harder. The siliceous covering of the cell is inert and indestructible 2 . Diatoms are broadly classified into two categories based on their structure: Centrales and Pennales. Centric diatoms are radially symmetrical, wheel shaped, and found drifting near the surface of the oceans, whereas pennate diatoms are laterally symmetrical and live in fresh water streams, swamps, or the bottoms of shallow waters 3 .
More than 10,000 species of diatoms have been discovered; they therefore come in various shapes and sizes. Owing to these variants, these microorganisms are advantageous for the diagnosis of death by drowning 4 . Perpetrators often dispose of a corpse in water to disguise the cause of death and raise a suspicion of suicide during the investigation. During drowning there is a struggle to breathe, due to which water enters the respiratory tract. Only a living body with active circulation can transport the diatoms from the lungs, once they rupture the alveolar wall, through the lymph nodes, pulmonary veins, heart (left side) and systemic circulation, reaching the bone marrow, kidney and brain 5 . It is a challenging situation for a forensic pathologist to examine these specimens and identify the cause of death. But due to the presence of diatoms, it can be determined whether death occurred before or after entry into the water 6 .
Organs for the Examination
Successful, conclusive tissue analysis supports the qualitative and quantitative diagnosis of the presence of diatoms in suspected cases of drowning. The distribution of diatoms in each organ varies; it is therefore essential for the forensic scientist to identify the appropriate specimen for examination. For many decades the conventional practice has been to perform the analysis using the lungs, bone marrow or long bones, heart, liver and blood. In a controlled experimental study on laboratory rats to determine the number of diatom cells in each organ, the highest numbers of diatom cells were identified in the stomach and lungs, due to the direct entry of water through the mouth and nostrils 7 . During the postmortem examination, samples are collected and preserved using suitable preservatives. Formalin used to fix lung tissue at autopsy was found to be effective, and formalin-preserved samples can be utilized for long-term examination purposes as these microorganisms resist putrefaction 8 . However, formalin is not recommended for preserving control samples as it destroys the fine structure of the cells; control samples collected for the examination process should be preserved using Lugol's iodine solution or ethanol 9 .
Extraction Methods
A great number of techniques have been proposed by various studies to compare the efficiency of different extraction methods for forensic practice. The primary goal of each method is to extract diatoms from the postmortem tissue. These include acid digestion, the enzymatic method, Soluene-350 and microwave digestion 10 .

Table 1. Procedure for different methods of extraction for diatoms
Extraction Method Procedure
Acid digestion It is one of the oldest accepted techniques for the extraction of diatoms. In this method the tissue is digested in nitric acid. The residue obtained is then centrifuged to form a pellet. The pellet consists of nitric-acid-resistant material, which is smeared on a microscope slide for further examination.
To overcome practical complications, an instrument called a 'can' was developed. The procedure involves liquefying the tissue with a strong acid and then subjecting it to a high temperature. The special features of this instrument are that it is simple to operate, less time consuming and practically more efficient [11][12][13] .
Enzymatic digestion This method involves the application of proteinase K. The tissue sample is minced and rinsed in proteinase K and Tris-HCl buffer solution. The suspension is incubated overnight and then centrifuged. The solid residue is removed, mounted using Naphrax on a microscope slide, and observed under the microscope 14 .
Soluene-350 The Soluene-350 method of extraction is effective for freshwater diatoms. The tissue sample is washed thrice with distilled water and centrifuged. The residue obtained is suspended in Soluene-350 solution and incubated at room temperature overnight. This solution is then centrifuged and the resultant pellet is smeared on a microscope slide and observed under the light microscope 15 .
Microwave digestion This is a highly sensitive novel extraction method for diatoms. The sample is digested using a microwave digestion apparatus containing a mixture of the tissue sample and acid solution. The technique is highly efficient and involves less contamination. The digested solution is then subjected to SEM analysis 16,17 .
Microscopic Techniques Description
Light microscopy Light microscopy is not widely used because of its limited resolution and magnification for microorganisms of this small size; distinguishing smaller species is difficult with this technique. It is therefore not recommended for forensic purposes 18 .
Scanning electron microscopy SEM gives a 3D image of the diatom at high resolution. In recent years, Zhao et al. (2017) developed a novel method called microwave digestion-vacuum filtration-automated SEM analysis, which is highly sensitive and specific. One disadvantage of this technique is that it is time consuming; to overcome this, a deep-learning-based algorithm was developed that can automatically detect diatoms in SEM images 19 .
Transmission electron microscopy TEM is a technique of choice for the analysis and evaluation of microstructures, generating a 2D image of the sample 20 .
Atomic force microscopy The atomic force microscope, also called a scanning probe microscope, is a newer technique developed to observe objects or materials at the nanoscale. Images produced by AFM have high resolution and can be scanned in both the vertical and horizontal axes 21,22 .
Microscopy Techniques
Diatoms are made up of tiny cells which could be observed using a microscope for detection and identification.
Microscopic examination is the oldest and conventional method of diatom test. The taxonomical features can be observed to identify the species which is necessary for medico legal purposes. Presence of diatoms helps to differentiate a death by submersion from an immersion of a body. During the extraction process, the foreign materials are removed from the tissue sample to avoid any interference during the microscopic examination. The most commonly used microscopes are Light microscope, Scanning electron microscope, Transmission electron microscope and Atomic force microscope.
DNA Barcoding of Diatoms
In cases of drowning, many small living microorganisms present in the water source get deposited in the victim's internal organs because of the continuous flow of water in and out of the body. DNA barcoding is a method of characterizing a particular target region of a DNA strand. Advances in DNA barcoding have found many applications in forensics and other bio-assessments. A specific sequence is amplified in order to assign an organism to its definitive taxon, and the approach characterises diatom species more precisely than conventional microscopic or culturing techniques [23]. Examination of diatoms under a microscope is a time-consuming process that requires taxonomic expertise, and there are now many molecular technologies for detecting diatoms efficiently; for example, a novel microarray based on specific 18S rRNA sequences, using Sanger sequencing of single-cell rDNA, has been developed [24].
Conclusion
More broadly, diatoms can be studied efficiently using the various techniques that are available. In forensic science specifically, diatoms can be used in the identification of death due to drowning, and their extraction and examination can be carried out with the techniques discussed above. Comparison with existing databases could also help in the region-wise identification of diatoms. Diatoms are thus a boon to the identification of death due to drowning.
Conflict of Interest
The authors have no conflict of interest. | 2022-07-28T15:08:44.259Z | 2022-07-25T00:00:00.000 | {
"year": 2022,
"sha1": "5eb37271570022a837fc053eeedebd803e9f4808",
"oa_license": "CCBY",
"oa_url": "https://jfds.org/index.php/jfds/article/download/566/466",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "12945cd365a755af79399b8644f942d7d18cde99",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
9925415 | pes2o/s2orc | v3-fos-license | Enumerating the gene sets in breast cancer, a "direct" alternative to hierarchical clustering
Background Two-way hierarchical clustering, with results visualized as heatmaps, has served as the method of choice for exploring structure in large matrices of expression data since the advent of microarrays. While it has delivered important insights, including a typology of breast cancer subtypes, it suffers from instability in the face of gene or sample selection, and an inability to detect small sets that may be dominated by larger sets such as the estrogen-related genes in breast cancer. The rank-based partitioning algorithm introduced in this paper addresses several of these limitations. It delivers results comparable to two-way hierarchical clustering, and much more. Applied systematically across a range of parameter settings, it enumerates all the partition-inducing gene sets in a matrix of expression values. Results Applied to four large breast cancer datasets, this alternative exploratory method detects more than thirty sets of co-regulated genes, many of which are conserved across experiments and across platforms. Many of these sets are readily identified in biological terms, e.g., "estrogen", "erbb2", and 8p11-12, and several are clinically significant as prognostic of either increased survival ("adipose", "stromal"...) or diminished survival ("proliferation", "immune/interferon", "histone",...). Of special interest are the sets that effectively factor "immune response" and "stromal signalling". Conclusion The gene sets induced by the enumeration include many of the sets reported in the literature. In this regard these inventories confirm and consolidate findings from microarray-based work on breast cancer over the last decade. But, the enumerations also identify gene sets that have not been studied as of yet, some of which are prognostic of survival. The sets induced are robust, biologically meaningful, and serve to reveal a finer structure in existing breast cancer microarrays.
Background
Detecting genes-by-samples patterns in expression data After removing genes that exhibit little variance, the standard script for exploring microarray data applies two-way hierarchical clustering (HC), followed by a visual search for patterns displayed in a red-green heatmap [1,2]. For breast cancer in particular, this procedure has proven immensely productive. It can be credited with the discovery [3] (or rediscovery) [4] of the basal subtype, and, more broadly the identification of subtypes of breast cancer that hold out the potential to inform clinical practice [3,[5][6][7][8] Despite its utility, the standard script suffers from several limitations, in particular the instability of the binary tree of the clusters found [9]. Perturbation and re-sampling techniques are available to gauge the robustness of the clusters defined by subtrees [10,11]. But, small changes in the selection of genes or choice of samples can result in disconcertingly large changes in the overall configuration of the tree, which calls into question any typology defined on such tree-based partitions [12].
Detecting genes-by-samples patterns in expression data

After removing genes that exhibit little variance, the standard script for exploring microarray data applies two-way hierarchical clustering (HC), followed by a visual search for patterns displayed in a red-green heatmap [1,2]. For breast cancer in particular, this procedure has proven immensely productive. It can be credited with the discovery [3] (or rediscovery [4]) of the basal subtype and, more broadly, the identification of subtypes of breast cancer that hold out the potential to inform clinical practice [3,5-8]. Despite its utility, the standard script suffers from several limitations, in particular the instability of the binary tree of the clusters found [9]. Perturbation and re-sampling techniques are available to gauge the robustness of the clusters defined by subtrees [10,11]. But small changes in the selection of genes or choice of samples can result in disconcertingly large changes in the overall configuration of the tree, which calls into question any typology defined on such tree-based partitions [12].

A different problem stems from the disproportionate impact of large, tightly coordinated clusters on the overall arrangement of the tree [13]. Because the largest gene sets, for example estrogen or immune response, will dominate the branching of the tree, smaller sets may be broken up and redistributed. The problem of large dense sets of genes occluding smaller sets arises in the simple, or one-way, application of HC; it is compounded when two trees are visually crossed in the two-way HC used in the standard script. The question then becomes one of what can be faithfully represented in the two-dimensional arrangement. The brief answer, as spelled out by Hartigan in the context of "direct" clustering, is that clusters jointly defined by two trees can be rendered as contiguous regions only if they are either disjoint or nested [14,15].
An alternative to two-way hierarchical clustering, biclustering, seeks to find submatrices in the array of expression values that satisfy some defining criteria. Essential to biclustering is the notion that, given a matrix of expression values, the pattern of coordinated expression for a group of genes may be confined to only a subset of samples. That is, the pattern is "local", and may not be detectable using a "global" measure of the similarity between pairs of genes, e.g., the Pearson correlation coefficient computed on vectors of expression values [16-18]. Motivated by this concern with detecting "local" structure, several algorithms for detecting biclusters have been advanced. For surveys, see [19,20]. Because of the computational complexity inherent in the biclustering problem, which is provably NP-hard under several formal descriptions [16,18,21], these programs abandon exhaustive search and either resort to heuristics or impose bounds on the size of the genes-by-samples submatrices that can be recovered. The method applied in this paper reframes the biclustering problem such that an exhaustive enumeration of all of the submatrices that meet a certain definition can be achieved. Included in the resulting inventory are not only such familiar sets as "estrogen" and "erbb2/17q12", but additional sets highly prognostic of survival. Some of these newly defined sets may serve alongside "estrogen" etc., as building blocks for future models and classifiers of breast cancer.
Defining gene patterns by partitions on samples
HC is an agglomerative method that recursively joins sets of genes or, separately, sets of samples [22]. The starting point is the notion that similar genes are assigned to the same set (subtree) using a measure of similarity such as correlation [23]. A wholly different approach can be predicated on the notion that two genes belong to the same set provided that they induce the same partition on the set of samples. As an example, the erbb2/17q12 gene set in Figure 1 appears to satisfy such a requirement. Here the samples have been ordered by column sum. It is apparent that the largest (reddest) values of each of these six genes align to a considerable degree. That is, they indicate, or select for, much the same subset of columns (samples). While the notion of genes inducing partitions like this may be intuitive, rendering the notion such that it can be implemented in a search algorithm requires further specifying what constitutes an admissible partition. Crucially important are the partition's size and the degree to which the samples induced by any two genes must align for the partition to be considered the "same". These two parameters (partition "size" and set difference or "tolerance") control a matching rule that can serve to explicitly define what is meant by "gene set".
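To make the matching rule concrete, the following Python sketch (our illustration, not the authors' code) tests whether two genes induce the "same" partition: each gene selects the s samples in which it is most highly expressed, and the two genes match if those sample sets differ by no more than t members. The exact set-difference convention used by the authors is not spelled out here, so this should be read as one plausible rendering.

```python
import numpy as np

def top_s_samples(expr, s):
    """Indices of the s samples with the highest expression of one gene."""
    return set(np.argsort(expr)[-s:])

def same_partition(expr_a, expr_b, s, t):
    """True if two genes induce the 'same' partition: their top-s sample
    sets differ by no more than t members (one plausible convention)."""
    a = top_s_samples(expr_a, s)
    b = top_s_samples(expr_b, s)
    return len(a.symmetric_difference(b)) <= t
```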
Setting up the enumeration
The two parameters that define what counts as the "same partition" suggest a partition-based strategy for finding all sets of genes (all red submatrices) in a matrix of expression values, namely, try all combinations of size and tolerance. Because this collects all the genes-by-samples patterns, it constitutes an enumeration.

Since the size and quality of the partitions to be discovered are not known in advance (the procedure is wholly unsupervised), to enumerate all of the partitions in a dataset, and the genes that support them, requires trying all combinations of size and tolerance. This sets up a two-dimensional grid search with partition size (s) and tolerance (t) as dimensions. Given the number of samples (n), and the number of draws, that is genes, bounds can be set for the parameters (s) and (t) by computing p-values with a counting rule (equivalent to the tail of the hypergeometric distribution).
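As an illustration of this counting rule (a sketch under our reading of the text, not the authors' code), the chance that two random size-s subsets of n samples share at least k members is the upper tail of a hypergeometric distribution; under a set-difference matching rule, an overlap of k corresponds to a tolerance of t = s - k:

```python
from scipy.stats import hypergeom

def overlap_pvalue(n, s, k):
    """P(two random size-s subsets of n samples share >= k members).

    Upper tail of the hypergeometric distribution: population of n,
    s 'success' states, s draws.
    """
    return hypergeom.sf(k - 1, n, s, s)

# e.g. 251 samples (Uppsala), two partitions of size 25 agreeing on 20:
print(overlap_pvalue(251, 25, 20))  # vanishingly small, as the text notes
```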
Though the rule for bounding t works well enough with smaller datasets, we find that with data sets of the size of the NKI295 [24] these p-values are so small that they provide little guidance in controlling t. An alternative strategy relies on a heuristic suggested by the results of an initial spiral search in which s and t are stepped in large increments until gene sets are found, at which point s and t are incremented, separately and in small steps. This preliminary scan of the four datasets finds: 1) For most partition sizes s, small (stringent) values of t fail to detect any gene sets at all, while large t settings yield a single set composed of most, if not all, of the genes. Between these two extremes, for any given s, there is a relatively narrow range of t that returns multiple distinct gene sets. 2) In size and composition, these sets vary smoothly as s and t are stepped.
3) The number of unique components (gene sets) is surprisingly small, for example, between thirty and forty in the NKI295 data.
In light of these observations from the spiral searches, rather than bounding t by a combinatorial counting rule, we choose to set t such that specific gene sets remain distinct. In the enumerations reported in this paper, the set we wish to preserve consists of interferon genes, a clinically significant set that the initial spiral search encountered repeatedly. The resulting "stop rule" says, in effect, that for each partition size, which ranges from a small constant (6 or 10) to n minus that same constant, increment t, starting at 0 (perfect match), until the interferon set disappears as a distinct object as it merges with other immune sets. The motivation for devising this stop rule will be developed further in the context of the gene sets that decompose "immune response".
As the algorithm steps through its search it typically finds multiple versions of the same set, which may differ only marginally in the number and composition of the genes. This means that rather than a single list of genes, a "gene set" in fact refers to a set of sets, like those listed for the ERBB2 set in Table 1. Each column in that table catalogues the genes found for the erbb2/17q12 gene set in the Uppsala data as detected at ten combinations of the "size" and "tolerance" parameters. The list of genes in each column is like a snapshot of the same object in gene space, viewed at a slightly different resolution.
It is important to note that the end product of the enumeration is both the list of genes in each set and the partition induced on the samples by that set. While the number of genes in the sets can vary from six (the lower bound) to as many as 600 (as in the case of the proliferation gene set), regardless of the number of genes in the set there can be only one partitioning pattern per gene set. It is in this context that the issue of genes with multiple probe sets can be addressed. As would be expected, multiple probe sets spotted for the same gene will, to the extent that they induce the same partition on the samples, be assigned to the same gene set by the algorithm. A case in point is the small stromal gene set in Figure 2, which consists principally of decorin (DCN) and fibulin 1 (FBLN1). Decorin is spotted four times on the Affymetrix HG-U133a, and the algorithm finds that all four decorin probe sets induce the same partition pattern on the 251 samples in the Miller dataset [25]. That same pattern is also induced by two copies of FBLN1, as well as by single instances of GLT8D2, CTSK, PRRX1, and SPON1, and several ESTs. All in all, in the Miller data, this partitioning pattern is found at five combinations of s and t settings. For the purpose of making the survival analysis manageable, we summarize (squash) these five versions of this gene set into one composed of those probe sets that appear in at least half of the original versions. The result is the set pictured in the heatmap in the figure, to which metastasis events have been attached. As described more fully in the Results section, the χ 2 for a partition at the median value of a vector of column sums is 7.62 (p = 0.005). The χ 2 for first versus last quartile is 15.62 (p = 0.00007). Since each of the genes in a gene set realizes the same partitioning pattern, the end result, in terms of partitioning and survival analysis, would be virtually the same if the four decorin probe sets were replaced by any one (or if the four were averaged). This issue of multiple probe sets for the same gene does become a problem for the detection of the smallest gene sets in the case in which the partitioning pattern is realized exclusively in probe sets all of which spot the same gene. This occurs occasionally in the Affymetrix data, and we consider these instances to be artefacts of the chip design.
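The "squashing" step just described, collapsing the multiple detected versions of a set into the probe sets that appear in at least half of them, is simple enough to state in code; the sketch below is our illustration of that rule, not the authors' implementation:

```python
from collections import Counter

def core_set(versions):
    """Probe sets appearing in at least half of the detected versions.

    `versions` is a list of probe-set lists, one per (s, t) combination
    at which the gene set was detected.
    """
    counts = Counter(p for version in versions for p in set(version))
    return {p for p, c in counts.items() if c >= len(versions) / 2}
```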
Detecting patterns that were heretofore undetectable
The virtues of HC and the standard script include the fact that it is intuitive, simple to implement, and effective (at least in finding the largest and most prominent genes-by-samples patterns). Moreover, the software that implements this method of discovery is free and well maintained [26,27], and the output is not only eminently useful, but a thing of beauty. Replacing this standard script requires both a conceptual shift and considerable computational apparatus. For these reasons, such a switch can only be justified if the alternative not only reproduces the standard results, but significantly extends those results. That is, it must find additional patterns that are biologically meaningful and clinically significant. To show that this is in fact the case, we enumerate the gene sets in four well-studied breast cancer microarray data sets listed in Table 2. We refer to these as the Uppsala, Stockholm, TRANSBIG, and NKI data sets. Three of these use Affymetrix U133a chips, a platform that has attained the status of an industry standard, and has been used perhaps in more studies than any other microarray platform. To identify gene sets conserved across platforms, we add the fourth data set, generated by the Nederlands Kanker Instituut (NKI), which used a custom Agilent chip. The Uppsala and Stockholm cohorts [25,28] are population studies (consecutively presented primary breast cancers). The TRANSBIG dataset is a validation series for a gene-based classifier [29] and, accordingly, consists of younger (median age 47) node-negative patients [30]. The NKI data was collected in conjunction with a second classifier [24], and consists of younger patients, both node-positive and node-negative. For the purposes of survival analysis in this paper, the end point for the Uppsala data is Breast Cancer Specific Survival (BCSS); the end points in the other three data sets are Distant Metastasis Free Survival (DMFS). The datasets were selected, in the first instance, because of the quality of the data, both expression and accompanying clinical data. But they were also chosen to set up explicit comparisons between the gene sets detected by the enumerations in this paper and gene clusters and signatures induced on the same data by other methods and programs.
Data sets were downloaded from the GEO website [31], or, in the case of the NKI data, from the Stanford website [32] that accompanies [33,34]. Missing values were estimated using k nearest neighbours (k = 10) [35]. Because the algorithm takes the full expression matrix as input, e.g., in the case of the Uppsala data a matrix of size 22,283 × 251, there is no prior filtering/selection of genes.
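For readers who wish to reproduce this preprocessing step, the sketch below shows k-nearest-neighbour imputation on a toy genes-by-samples matrix using scikit-learn; the original analysis used the KNNimpute method of [35], which this call approximates rather than reproduces exactly.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy genes-by-samples matrix with one missing entry (np.nan).
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [1.1, np.nan, 3.1, 4.2],
              [0.9, 2.1, 2.9, 3.8]])

# k = 2 here only because the toy matrix has three rows (genes);
# the study used k = 10 on the full 22,283-gene matrix.
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
print(X_filled)
```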
Results
In each of the four breast cancer datasets in Table 2, the rank-based bi-clustering algorithm finds between thirty and forty sets of genes. Many of these sets are conserved across the four studies despite differences in platform: Affymetrix U133a (Uppsala, Stockholm, TRANSBIG), and custom Agilent (NKI). Several of the largest and most prominent of the sets closely resemble gene clusters reported many times in microarray studies of breast cancer. These include "estrogen", "proliferation", ERBB2, and "adipose" [1,2,9,36-38]. Other sets, a number of which are relatively small (e.g., 6 to 12 genes), appear to be novel. These include "histone", "hemoglobin", and "amplicon 8p11-12". Overall, nearly a third of the gene sets detected are clinically significant as measured by association with survival, and some are very significant, with log-rank χ 2 values exceeding 20 (on one degree of freedom). For purposes of presentation, in the course of enumerating the gene sets in each of the four data sets we proceed in three steps. The gene sets detected in each of the three Affymetrix data sets are first tabulated separately. This is followed by a comparison of these sets to identify gene sets conserved on the Affymetrix U133a platform. This, then, is followed by an enumeration of the gene sets in the NKI295 data set, which uses a custom Agilent chip. These sets are then compared to the Affymetrix sets for the purpose of identifying gene sets conserved across the two platforms.

Table 1. Instances of the erbb2/17q12 gene set detected in the Uppsala data. The partition size (s) and tolerance (t) are listed at the head of each column. The first instance of this gene set is detected at s = 21, t = 4. As s and t are stepped, the number of erbb2/17q12 genes increases in size to a maximum of 15, at s = 29, t = 6. From that point the number of genes decreases.

Figure 2. The survival results for the stromal(5)/decorin gene set in the Uppsala cohort. Columns are ordered by column sum, and recurrence events have been added.
Regarding the clinical significance of the gene sets, since a gene set, in general, is detected multiple times (resulting in multiple gene lists), the survival analysis of a gene set would yield multiple log-rank values. To simplify the analysis we have replaced these multiple versions of each set with a "core" set composed of those genes that appear in at least half of the instances detected. Five log-rank χ 2 values are reported for each core gene set. The first reflects all the samples in a data set partitioned at the median value of the column sums of the genes in the set. The second partitions all samples by lowest and highest quartiles. The third restricts the samples to those which are ER positive, and partitions at the median. The fourth also restricts the samples to ER positive, partitioning by quartiles. The fifth restricts the samples to ER negative, partitioned at the median. Because of small class size, the ER negative samples are not partitioned by first and last quartile. Table 3 reports the survival analysis for the 35 gene sets induced in the Uppsala data. The rows have been arranged so that gene sets associated with increased breast-cancer-specific survival are at the top, and the sets prognostic of diminished survival are at the bottom. A list of the genes in the top ten sets and bottom seven sets is provided in Additional File 1. A complete list of genes for all 35 sets is available in Additional File 2. The most striking result is the large, positive significance of the four stromal sets, which are discussed in a later section. Also predictive of increased survival are two immune sets, and estrogen and adipose, as might be expected.
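The first of these five tests, samples split at the median of the gene set's column sums, can be sketched as follows with the lifelines package (one possible tool; the paper does not state which software was used):

```python
import numpy as np
from lifelines.statistics import logrank_test

def median_split_logrank(expr_subset, time, event):
    """Log-rank chi^2 (1 d.o.f.) and p-value for samples split at the
    median of the column sums of a gene set's expression submatrix.

    expr_subset: genes-by-samples numpy array restricted to the set's genes;
    time, event: numpy arrays of survival times and event indicators (1 = event).
    """
    score = expr_subset.sum(axis=0)          # one summary value per sample
    high = score > np.median(score)
    res = logrank_test(time[high], time[~high],
                       event_observed_A=event[high],
                       event_observed_B=event[~high])
    return res.test_statistic, res.p_value
```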
The proliferation set is strongly prognostic of recurrence in the Uppsala data, as it is in the other three data sets, which accords with a number of microarray-based studies of breast cancer [39,40]. Other sets strongly prognostic of recurrence are the histones, the metallothioneins/16q13, as well as GAPDH, CD24, AFFX-M27830_5, and GNAS. The histone set includes HIST1H2BF, HIST1H2BE, HIST1H2BH, H2BFS, HIST1H2BK, and HIST1H2BD, while the metallothioneins gene set consists of seven, or perhaps eight, isoforms and two ESTs: MT1E, MT1F, MT1G, MT1H, MT1M, MT1X, MT2A, LOC645745. Up-regulation of MT2A, the most abundant of these, is associated with more aggressive breast cancer and poor prognosis [41], and it is reported that the over-expression of metallothioneins predicts resistance to doxorubicin [42].
GNAS is located on the long arm of chromosome 20 at 20q13.3, lying just outside the interval of the 20q13 amplicon investigated by Ginestier et al. [43], but falling within one of the two 20q13 amplicons analyzed by Yao et al. [44]. It is reported that increased copy numbers for 20q13 amplicon genes occur in 12% of primary breast tumors and are associated with more aggressive disease [45].
GAPDH is often used as a housekeeping gene [46]. For example, it is one of the five reference genes used to normalize the recurrence score in Oncotype DX [47,48]. But GAPDH is reported to be up-regulated in some cancers, e.g., by a factor of 3 to 6 in non-small-cell lung cancer compared to normal lung tissue [49]. In a study of GAPDH expression in breast cancer, Revillion et al. [50] conclude with the warning that it should not be used as a control RNA. Valenti et al. reach a similar conclusion [51]. The fact that GAPDH partitions the Uppsala samples into good and poor prognostic groups is added evidence that it is not only unsuitable for scaling expression values, but may in fact be a candidate oncogene.
Data set 2: Stockholm cohort

Similar to the results for the Uppsala data, for the Stockholm cohort three stromal gene sets are strongly associated with increased survival, as reported in Table 4. Also, again, the adipose set proves significant. In addition, among the sets associated with increased survival is a hemoglobin set consisting of: HBH1, HBB, HBA1, HBB, HBA1, HBA2, HBA2, HBB, HBA2, HBG1 (where the order of the genes reflects the row order of the probe sets on the Affymetrix U133a). A complete list of the genes in all 31 gene sets for the Stockholm cohort is available in Additional File 3. Though ER and PR status are reported in the aggregate in the original article [52], ER status was not available for this data set.
Consequently the log-rank values tabulated in Table 4 represent all 159 samples partitioned at the median, and by first and last quartiles. Among the sets significantly associated with decreased survival, as in the Uppsala data, proliferation is most prominent. Also, again, the histone set, CD24, and GAPDH figure among the sets associated with recurrence. In addition, the gene sets that have a negative impact on survival in this data set include ACTG1, a ribosomal set, and a set we have labelled ezrin because it includes the gene VIL2. Ezrin expression is associated with metastasis in a number of cancers [56], and Li et al. demonstrated that ezrin silencing reverses cell migration and invasion in a metastatic breast cancer cell line [57].
With regard to survival in breast cancer, Bruce et al. report that ezrin expression is associated with poor outcome [58]. The ezrin gene set also contains ARF1, which is reported to modulate migration and proliferation in breast cancer cell lines via the regulation of the PI3K pathway [59].
Data set 3: TRANSBIG (Desmedt 2007)
While the TRANSBIG data employs the same Affymetrix platform as the Uppsala and Stockholm datasets, it differs in terms of sample selection. The first two data sets are population-based, while this data set, as a validation series, reflects the sampling criteria used in the development of Wang et al.'s 76-gene classifier [29]. As a consequence, patients were younger (less than 61 years of age, with a median age of 47) and node-negative, with smaller tumours (T1-T2, less than 5 cm). Despite these differences, as is apparent in Table 5, the gene sets detected are largely the same as those found in the Uppsala and Stockholm cohorts. A complete list of the genes in each of the 37 sets discovered in the TRANSBIG data is available in Additional File 4. As in the previous two data sets, stromal(2) is associated with increased survival. Distinctive of this third data set are the three ribosomal sets associated with survival, a relationship which for two of the sets (ribosomal(0) and ribosomal(3)) is particularly strong for ER positive samples.
With regard to gene sets prognostic of decreased survival, again, as with the Uppsala and Stockholm cohorts, histone and proliferation figure prominently, as do the metallothioneins. Also among the sets associated with decreased survival are the basal set and NKTR.
Comparing gene sets across the three Affymetrix U133a data sets

Factors that impact the ability of the algorithm to detect sets that induce a common partition include the design of the microarray platform. Some genes, for example CD24 or LST1, are rendered by multiple probe sets. To the extent that they take similar expression values for the same samples, the algorithm will identify them as a (small) gene set in their own right. That is, all gene sets that satisfy the matching criteria are reported in the enumeration, including several as small as six genes in size. While some of these will prove to be artefacts of chip design, others may prove to be significant, for example the eight-gene 8p11-12 gene set detected in the NKI data. A virtue of the method over HC is this capacity to find even the smallest sets that induce important patterns on the samples. Table 6 juxtaposes the sets found in each of the three Affymetrix data sets (Uppsala, Stockholm, and TRANSBIG). As apparent from the table, there is a remarkable concordance: to a considerable extent the same thirty, or so, sets are extracted from independently assembled matrices of expression values, each of which comprises as many as five million real numbers.
Data set 4: NKI 2002
Of the 35 gene sets discovered in the enumeration of the NKI295, available in Additional File 5, eight are significantly associated with increased survival. Of these, the two largest (in terms of number of genes) are the estrogen set, well-documented in microarray-based studies of breast cancer [60], and a second set that we have labelled "FOXA1". These sets are disjoint except for the pivotal FOXA1 (HNF3A) gene, which appears in both, and which is represented by two probes on the custom Agilent chip used in the NKI data. While the estrogen and FOXA1 gene sets are both significantly associated with increased survival, the two sets induce substantially different partitions on the 295 tumor samples, possibly suggesting different positive mechanisms at work. The gene FOXA1 is known to correlate strongly with estrogen receptor alpha [61-63], and the mechanism that accounts for this association has been established, namely, ER binding requires FOXA1 binding in close proximity [64,65]. But it has also been shown that FOXA1 expression is largely independent of estrogen, while, in one study, more than 40% of ER-positive tumors were down-regulated for FOXA1 [67]. Additional support for the notion that FOXA1 is independent of ER is the fact that most of the known FOXA1-response genes are not ERα-response genes [67,68]. So, paradoxically, it would appear that FOXA1 is both correlated and uncorrelated with ER. The estrogen and FOXA1 sets detected in the enumeration support both propositions. It should be noted that at least two other genes in the FOXA1 set, TLOC1 and SDCCAG1, are reported to act as tumor suppressors in their own right [69,70]. Two stromal sets, described in the next section, are shown in Table 7 to be associated with increased survival.
The genes in the 8p11-12 gene set essentially tile a region of the 8p11-12 amplicon between 37 Mb and 38.5 Mb. The set largely agrees with the amplicon as refined by Haverty et al. [71]. Seven of the eight genes map to one of the four 8p11-12 sub-amplicons that Gelsi-Boyer et al. identified using array-CGH and Reyal et al.'s method for correlating genes within 20-gene windows [72,73]. Studies have found that the expression of each of these genes is significantly correlated with copy number [74,75].
It is interesting to observe that immune sets are associated with survival both positively (immune(0)T-cell and immune(1)IgG) and negatively (immune(2)MHC-I and immune(3)interferon). As with the three Affymetrix data sets, the histone and proliferation gene sets are strongly prognostic of poor outcome in the NKI data. Two histone sets, H2B_histone and H3_histone, are detected in this data set, but it is only the H2B, which corresponds to the histone set in the Affymetrix data, that is significant. Figure 3 attempts to present an overview of all the sets induced across all four data sets. The inner box contains the sets found in two or more of the data sets; the outer box lists sets unique to individual data sets. Sets significantly associated with increased survival are marked in green; sets associated with decreased survival are marked in red. It is apparent that the proliferation and histone sets are robust indicators of poor prognosis, while several of the stromal and ribosomal sets are associated with better outcomes, possibly conferring some form of protection against recurrence. The immune sets are a mixed bag, pointing in both directions.
Gene sets conserved across the Agilent and Affymetrix platforms
Table 6. Gene sets detected in each of the three Affymetrix data sets; the juxtaposed columns include the immune(1)-immune(10), stromal(0)-stromal(5), and ribosomal(0)-ribosomal(4) sets.

Discussion

While the tables in the Results Section report the gene sets found and their clinical significance, here the purpose is to compare gene sets, in particular those related to immune response and stromal signalling, to gene clusters and signatures identified across a number of microarray studies. For many of the sets detected the correspondence is immediate. This is the case in particular for "estrogen", "erbb2", "basal", and "proliferation", all of which have become virtual fixtures in genomic studies of breast cancer [2,39,76]. Other sets, including several small sets found only in the Affymetrix data, may be trivial in the sense that they consist of housekeeping genes, or reflect the design decision to spot single genes multiple times. In these cases, the resulting patterns, as detected by the algorithm, are real, but may be of only technical interest in so far as they concern issues of normalization and quality control. Of the remaining sets detected by the enumerations, several are easily described and identified. Of special interest are the ten sets labelled "immune", and the six or more identified as "stromal".
A natural factoring of Immune Response
Eight immune sets are detected in the Uppsala data, five in the Stockholm, seven in TRANSBIG, and six in NKI. All told, across the four data sets, ten unique immune sets are induced. Of these, two are questionable: one consisting essentially of five copies of CASP1, and the other, a small set in the NKI containing TNFRSF17, which is most likely a subset of the gene set labelled "immune(1)IgG". Setting these aside, the broad category of "immune-related genes" appears to factor naturally into eight distinct subcategories. To marshal evidence for, and against, this division, and to aid in their biological interpretation, these eight sets can be matched against clusters and lists of immune genes identified by others using standard methods, generally hierarchical clustering. The correspondence is one-to-one between the immune gene sets and eight of Loi et al.'s "pclusts" [77] and seven of Rody et al.'s immune-related metagenes [78]. Examining this correspondence in greater detail, seven of the eight "immune" sets correspond almost exactly to the gene clusters used by Rody et al. to compute immune "metagenes" in their study of lymphocytic infiltrates in breast cancer [78]. In that study of twelve Affymetrix breast cancer data sets, hierarchical clustering repeatedly yielded a cluster of approximately 600 immune-related genes. Applying hierarchical clustering a second time to this cluster revealed seven distinct subclusters. The concordance between the genes in these seven clusters and seven of the immune sets as detected in the enumeration of the Uppsala data is available in Additional File 6. The one minor discrepancy between the enumerated sets and these seven clusters involves the five LST1 probe sets in the monocyte cluster (last column). In the enumerations of the Uppsala and Stockholm cohorts, LST1 forms its own small gene set.
The survival analysis on the two largest datasets, Uppsala and NKI, confirms Rody et al.'s principal finding [78], namely that T-cell genes are prognostic of increased survival time for estrogen-negative patients, with χ 2 = 4.26, p = 0.03 for the Miller data, and χ 2 = 8.21, p = 0.004 for the Van de Vijver data. But on these same datasets, contrary to Rody et al., who find no association between B-cell/immunoglobulin genes and survival, immune(1)IgG is a significant predictor of increased survival in the NKI data, χ 2 = 4.5, p = 0.03, and marginally so in the Uppsala data, χ 2 = 3.32, p = 0.06. The genes in immune(1)IgG closely match the B-cell gene cluster that Schmidt et al. find to be prognostic of increased survival among highly proliferating tumor samples [79]. Contrary to our results, and to those of Rody et al., for tumor samples stratified by proliferation, Schmidt et al. fail to find any association between survival and the expression of their T-cell gene cluster. Rody et al. attribute this discrepancy to possible differences in cohorts and/or treatments, but the difference in results may also be due to the composition of the respective T-cell gene clusters or sets. Schmidt et al.'s T-cell cluster contains our immune(0)T-cell genes as a proper subset, but it also contains genes that belong to immune(4)STAT1 and immune(7)complement, neither of which we find to be significantly associated with survival. Hence, the significance of the T-cell genes as a predictor of survival may be attenuated by genes from other sets.
Immediately relevant to the question of T-cell genes versus IgG genes as predictors of survival, Calabro et al. assembled eighteen genes from the literature to measure the presence of lymphocytic infiltrate [80]. They found that this list is somewhat associated with diminished survival time for estrogen-positive samples, but is strongly prognostic of increased survival for estrogen-negative samples. The genes in this list range from CCL5, CD37, and CD3E ... to IGHG3 and IGJ, which effectively merges our immune(0)T-cell and immune(1)IgG sets. Therefore, the separate positive associations with survival between immune(0)T-cell and immune(1)IgG for estrogen-negative samples in the Uppsala and NKI datasets appear to confirm Calabro et al.'s results.
The positive effect of the genes in immune(0)T-cell on survival may in part explain the performance of Finak et al.'s stromal-derived classifier [81]. In that study, hierarchical clustering applied to individually matched tumor and normal stroma yielded three clusters of samples that are starkly distinguished by outcome. From the genes that best discriminate between pairs of these clusters, Finak et al. construct a 23-gene classifier. Ten of these 23 are associated with the good outcome cluster of samples. Of these ten, eight belong to the immune(0)T-cell set: GZMA, CD8A, CD52, CD247, CD48, PLEK, RUNX3, and GIMAP5. This suggests that the stromal-derived classifier, in assigning samples to the good, poor, and mixed groups, may be powered substantially by the association between T-cell genes and increased survival. If this is the case, then the stromal-derived classifier provides yet more evidence for the association between T-cell genes and survival for estrogen-negative breast cancer observed in the Uppsala and NKI data.
The capacity of Teschendorff et al.'s seven-gene immune response module to identify good outcome samples from among estrogen-negative tumors may constitute yet more support for this association. Those seven genes span at least four of our immune sets, with immune(0)T-cell represented by LY9, immune(1)IgG by IGLC2 and TNFRSF17, immune(2)MHC-I by HLA-F, and immune(7)complement by C1QA. It is interesting to note that in the attempt to validate this seven-gene module on independent data, only four of the seven genes prove significant, and of these, three belong to either immune(0)T-cell (LY9), or to immune(1)IgG (IGLC2 and TNFRSF17). If these are the genes that are driving the performance of the Teschendorff et al. immune response module, then the effectiveness of that signature for predicting increased survival among estrogen-negative samples is consistent with the immune(0)T-cell and immune(1)IgG survival results for estrogen-negative samples in the Uppsala and NKI data. Overall, the strong association between the T-cell gene set and survival, and the milder association between the immunoglobulin/B-cell gene set and survival, as identified in the enumerations, appear to converge with the important results of each of these several studies despite large differences in approach and research design.
Immune(3)/interferon
The immune(3)/interferon set closely resembles the cluster of "interferon response" genes identified by Buess et al. [82]. In that experiment fibroblasts were co-cultured with several breast cancer cell lines to investigate cell-cell signalling between stroma and malignant epithelial cells. In comparing the gene expression of co-cultures to matched monocultures, the starkest difference involved interferon-related genes, which were preferentially expressed in co-cultures of fibroblasts with estrogen-negative cell lines. Of the twelve genes in the immune(3) gene set, as realized for example in the NKI data, five belong to this "interferon response": OAS1, OAS2, MX1, MX2, and IFIT1. The remaining seven include: IFIT4, ISG15, OS4, MTAP4A, USP18, G1P3, and GS3686. Buess et al. show that these interferon genes are significantly associated with shorter survival time in the NKI data. The enumeration of the NKI295 confirms this finding.
The stromal gene sets factor stromal signalling

Efforts to delineate the interaction between stromal cells and the epithelial tumor are challenged by the complexity of the microenvironment, which has been defined as all components of the mammary gland other than luminal and/or tumor epithelial cells [83]. The effective division of the stroma-related genes into six gene sets by the enumerations may be relevant to this problem. Unlike the immune sets, which appear to be distinct, monolithic entities, the stromal sets resemble a constellation with a core entity, which we designate "stromal(0)", accompanied by several satellite sets, stromal(1), stromal(2), stromal(3) and stromal(5). Each of these is detected as a self-contained and independent set under at least one combination of the partition "size" and "tolerance" parameters, but, as the algorithm increases the partition size and relaxes the stringency of what qualifies as a match, these sets tend to quickly merge into an omnibus stromal set. Despite this, the importance of distinguishing the smaller sets becomes apparent in the survival analysis: five of the six stromal gene sets are associated with increased survival, several exceptionally so, with log-rank χ 2 values in excess of 20.
The stromal(0) set, induced in each of the four enumerations, is composed of many, if not all, of the collagen and ECM remodelling enzymes featured in West et al.'s desmoid-type fibromatosis (DTF) stromal signature [84-86]. Prominent genes include: SPARC, CSPG2, FBLN2, FBN1, and type-I, type-III, and type-VI collagen genes. That signature was devised as a proof of concept for a larger, on-going program that exploits the mono-cellular property of soft tissue tumors to inductively define subtypes (or states) of fibroblastic stroma cells [84,85,87]. The original DTF signature, comprised of genes differentially expressed between two types of soft tissue tumors, was refined for the purpose of identifying a distinctive stromal response in breast cancer. In five datasets, including three of the four used in this paper, the DTF stromal response identifies a subset of breast cancer patients who experience increased survival. The two versions of the DTF signature contain 182 and 66 genes, respectively. In an analysis of the functional relations among the proteins that correspond to these genes, this list is reduced further to a protein-protein network of 20 genes [86]. The close relationship between the DTF signature and the stromal(0) gene set as realized in each of the four data sets can be conveyed with respect to these twenty essential genes (Additional File 7).
Stromal(1)/COL11A1
Among the six stromal sets, stromal(1)/COL11A1 is the only one that is negatively related to survival, though this is apparent only when inspecting the entire family of sets detected. In the enumerations, this gene set, consisting exclusively of COL11A1 and FN1 probe sets, is almost always subsumed in a larger stromal set composed essentially of stromal(0)/DTF genes. Nevertheless it appears to be a distinct entity under some of the parameter combinations that control the stringency of the match. It is interesting to observe that the survival value of the stromal(0) set, as measured by log-rank, and which is generally positive, abruptly goes to zero as COL11A1 merges with that set. This scenario is played out in all four datasets. The implication is that important relationships between stromal expression and outcome can be lost unless the stromal genes are decomposed into their constituent sets.
Stromal(2)/LAMA2
The stromal(2) gene set is detected in the three Affymetrix data sets and is characterized by LAMA2 and COL14A1. The genes in this gene set tend to merge with the genes in stromal(3)/DARC as the size of the partition is increased and the stringency of the match is relaxed. Stromal(2)/LAMA2 is significantly associated with increased survival: χ 2 = 13.94 (p = 0.0001) for Uppsala251, χ 2 = 20.01 (p = 0.000007) for Stockholm159, and χ 2 = 3.57 (p = 0.05) for TRANSBIG198.
Stromal(3)/DARC

The merozoite of P. vivax malaria enters red blood cells via DARC; consequently, individuals who lack DARC are resistant to that form of malaria [93]. The association between the lack of expression of DARC in a large percentage of black men and increased rates of aggressive prostate cancer, compared to whites, has been established [94]. A similar relationship appears to hold between black women and the increased incidence of more aggressive breast cancer. It is estimated that 70% of blacks of West African descent lack the expression of DARC, a population that suffers high rates of both prostate and breast cancer [91,95]. A second gene in the stromal(3) gene set, tenascin B (TNXB), is down-regulated during the tumor progression of neurofibromatosis [96].
Stromal(4)/ADAMTS5
Among the smallest of the stromal sets, stromal(4) comprises ADAMTS5, ZNF288 (ZBTB20), and four Agilent probes that lack gene names. It is significantly associated with increased survival in the NKI295 data: χ 2 = 3.8 (p = 0.05) when partitioned at the median, and χ 2 = 8.63 (p = 0.003) when partitioned by first and last quartiles.
Stromal(5)/decorin
Stromal(5) consists almost exclusively of DCN and FBLN1 probe sets. Like the other small stromal sets, with the exception of stromal(4)/ADAMTS5, this set tends to merge with stromal(0), but conceptually and empirically it should be treated as a distinct entity. While stromal(0) is virtually defined by the DTF signature [85], neither DCN nor FBLN1 is found among the 493 genes in the original list of genes that discriminate between DTF and SFT [84]. In contrast, DCN and FBLN1 are both prominent on the list of myoepithelial genes identified by Allinen et al. in a SAGE-based sequential purification of cell types [97]. Therefore, stromal(5) might better be labelled "myoepithelial". In keeping with reports of the positive effect of myoepithelial cells in co-culture experiments [98], stromal(5)/decorin is highly significant in all four of the datasets, though in the NKI295 it is apparent that it merges with genes from stromal(0). This may be partially explained by the fact that DCN is spotted only once on the Agilent chip, while it is represented by four probe sets on the Affymetrix U133a. As an experiment, we used the single Agilent decorin probe as a surrogate for stromal(5). Defined in this way, the stromal(5)/decorin gene set proves exceptionally significant as a prognosticator of increased survival in the NKI295 (χ 2 = 15.7, p = 7.51e-05). Across the four datasets the substantive finding is that stromal(5), principally decorin, appears to be strongly associated with increased survival for patients at risk of early onset metastasis.
In sum, of the six stromal gene sets, five are significantly associated with increased survival in one or more of the data sets. This appears to be further evidence of the normalizing effect of stroma. Coculture/coinjection experiments have convincingly shown that aberrant stromal cells are required to promote tumor formation in epithelial cells, and the reverse has also been shown, namely that tumorigenic epithelial cells can revert to normal in the presence of normal stroma [98-100]. Decomposing stromal signalling into its constituent gene sets (and the mechanisms they reflect) may contribute to an understanding of the complex of cell types and signals that comprise the microenvironment, which is an active participant in the initiation and progression of cancer [3,101,102].
Conclusions
A research program of hierarchical clustering and data visualization has proven immensely productive for more than a decade [103,104]. But continued reliance on this script may inhibit the further exploration of large genomic data sets. Here we offer an alternative program for the unsupervised exploration of microarray data, one that delivers all that the standard script delivers, plus considerably more.
The paper is merely a proof of the concept that an enumeration of all the genes-by-samples sets in breast cancer is computationally feasible and substantively useful. By enumerating the gene sets in three data sets that use the industry-standard Affymetrix U133a, we identify gene sets that are conserved across experiments on a single platform. By enumerating the gene sets in a fourth dataset, the NKI data spotted on a custom Agilent chip, we identify gene sets that are conserved across platforms. In terms of substantive results, nearly 40% of the sets detected prove to be significantly associated with survival. These include subsets of immune response and stromal signalling genes involved in the complex interactions between the epithelial tumor and its microenvironment.
From the first microarray-based studies of cancer, a fundamental challenge has been that of data reduction. The task is to filter and factor these large matrices such that statistical modelling is possible. Over the course of a decade, expression-based analysis has progressed from single-factor designs, e.g., regressing groups (subtypes) on survival [9], to two-factor designs, as exemplified by studies that first stratify on estrogen status (or HER2 status, or proliferation), then proceed to identify sets or modules of immune genes that correlate with survival [79,80,105,106]. Using the gene sets induced by the enumerations, the stage may now be set for exploring the effects and interactions of multiple sets of genes as determinants of the progress and outcome of disease. These sets, or more specifically the partitions they induce, can be conveniently incorporated into (survival) decision trees. In this way, the best results to date regarding, for example, estrogen status and immune response, can be further qualified and extended vis-à-vis important additional factors, such as stromal status.
The rank-based matching algorithm (and the grid search scheme that generates the enumeration) can be applied to any matrix of expression values. As such, it should be useful for the exploration of a broad range of biological and medical data. Limitations of the algorithm are addressed in Additional File 8. Describing the method as a "direct" approach is a direct reference to Hartigan's original work with two-way clustering [14]. Input is a matrix of real numbers; output is a set of submatrices that can be read off the original data matrix (with rows and columns appropriately permuted). Because it works with the entire data set, it dispenses with the standard data reduction steps, e.g., restricting analysis to "differential" genes. Multiple testing is effectively controlled by classical counting rules. The submatrices (gene sets) discovered can be immediately interpreted using the original variables, namely, in the present context of breast cancer, in terms of genes and tissue samples. Because the clusters are "two-way", they reveal an association between possible biological mechanisms embodied in the gene sets and subsets or subclasses of breast cancer.
"year": 2010,
"sha1": "e72e93ce05b54e8039b4196bc587fd5c8c10a86c",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/1471-2164-11-482",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "348e94d0c4992c7cbb3bd847ac7219519cf0e948",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
237513700 | pes2o/s2orc | v3-fos-license | Stellar feedback in a clumpy galaxy at $z \sim$ 3.4
Giant star-forming regions (clumps) are widespread features of galaxies at $z \approx 1-4$. Theory predicts that they can play a crucial role in galaxy evolution if they survive stellar feedback for $>50$ Myr. Numerical simulations show that clumps' survival depends on the stellar feedback recipes that are adopted. To date, observational constraints on both the strength of clumps' outflows and their gas removal timescale remain uncertain. In this context, we study a line-emitting galaxy at redshift $z \simeq 3.4$ lensed by the foreground galaxy cluster Abell 2895. Four compact clumps with sizes $\lesssim$ 280 pc and representative of the low-mass end of the clumps' mass distribution (stellar masses $\lesssim 2\times10^8\ {\rm M}_\odot$) dominate the galaxy morphology. The clumps are likely forming stars in a starbursting mode and have a young stellar population ($\sim$ 10 Myr). The properties of the Lyman-$\alpha$ (Ly$\alpha$) emission and nebular far-ultraviolet absorption lines indicate the presence of ejected material with global outflowing velocities of $\sim$ 200-300 km/s. Assuming that the detected outflows are the consequence of star formation feedback, we infer an average mass loading factor ($\eta$) for the clumps of $\sim$ 1.8-2.4, consistent with results obtained from hydro-dynamical simulations of clumpy galaxies that assume relatively strong stellar feedback. Assuming no gas inflows (semi-closed box model), the estimates of $\eta$ suggest that the timescale over which the outflows expel the molecular gas reservoir ($\simeq 7\times 10^8\ \text{M}_\odot$) of the four detected low-mass clumps is $\lesssim$ 50 Myr.
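A back-of-the-envelope check of this timescale, assuming a steady outflow with no inflows and an illustrative total clump SFR of $\sim 7\ {\rm M}_\odot\,{\rm yr}^{-1}$ (a value not quoted above), follows from the definition of the mass loading factor, $\eta = \dot{M}_{\rm out}/{\rm SFR}$:

$$ t_{\rm out} \simeq \frac{M_{\rm gas}}{\eta\,{\rm SFR}} \approx \frac{7\times10^{8}\ {\rm M}_\odot}{2 \times 7\ {\rm M}_\odot\,{\rm yr}^{-1}} \approx 5\times10^{7}\ {\rm yr} = 50\ {\rm Myr}, $$

of the same order as the $\lesssim 50$ Myr quoted above.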
Observations have shown that clumps have sizes ≲ 1 kpc (e.g. Elmegreen et al. 2007; Förster Schreiber et al. 2011b), estimated stellar masses (M★) of ∼ 10^7 − 10^9 M⊙ (e.g. Förster Schreiber et al. 2011a; Guo et al. 2012; Soto et al. 2017), and SFRs of 0.1 − 10 M⊙/yr (e.g. Guo et al. 2012; Soto et al. 2017). Evidence also suggests that clumps are starbursting, i.e. they have a specific star formation rate (sSFR = SFR/M★) that is a few orders of magnitude higher than the integrated sSFR of their host galaxies (e.g. Bournaud et al. 2015; Zanella et al. 2015, 2019). Because of these properties, clumps are thought to trace giant star-forming regions. Several studies have highlighted how a comprehensive understanding of clumps could unveil the mechanisms driving star formation at high redshift and provide critical insights on how galaxy assembly proceeds. In particular, hydro-dynamical and cosmological simulations have suggested that if clumps survive stellar feedback for hundreds of Myr (e.g. Gabor & Bournaud 2013; Bournaud et al. 2011a, 2014; Mandelker et al. 2014, 2017), while spiralling via dynamical friction towards the centre of the galaxy potential well, they generate torque and funnel inward large amounts of gas. With time, the inflow of gas contributes to the thickening of the galaxy disk and the growth of the bulge (Noguchi 1999; Immeli et al. 2004a,b; Förster Schreiber et al. 2006; Genzel et al. 2006, 2008; Elmegreen et al. 2008; Carollo et al. 2007; Dekel et al. 2009; Bournaud et al. 2009; Ceverino et al. 2010), and possibly powers bright active galactic nucleus (AGN) episodes (Bournaud et al. 2011b; Gabor & Bournaud 2013; Dubois et al. 2012). However, not all simulations agree with the clump-survival scenario. Indeed, depending on the stellar feedback recipes adopted, clumps could retain much of their mass and survive (weak feedback, e.g. Immeli et al. 2004a; Elmegreen et al. 2008; Mandelker et al. 2014), or be blown out by their own intense stellar feedback over timescales shorter than ∼ 50 Myr (strong feedback, e.g. Murray et al. 2010; Genel et al. 2012; Hopkins et al. 2012; Tamburello et al. 2015; Buck et al. 2017; Oklopčić et al. 2017). In this scenario, clumps' mass seems to play an important role, since low-mass clumps are found to be the most affected by stellar feedback. It is therefore crucial to observationally constrain, as a function of clumps' stellar mass, the strength of stellar feedback (e.g. mass outflow rate, mass loading factor) as well as the timescale over which star formation consumes the gas reservoir and/or stellar winds and supernovae (SNe) outflows expel gas from the clumps.
In this framework, in this paper we investigate a high-redshift (z ∼ 3.4) lensed (average magnification factor of 7 ± 1) clumpy galaxy drawn from the sample of 12 gravitationally lensed galaxies of Livermore et al. (2015). We target a lensed galaxy since the lensing effects of magnification and stretching allow us to reach very faint fluxes in a short amount of observing time and to spatially resolve galaxy substructures (e.g. clumps) down to sizes of ∼ 0.1 kpc and, possibly, SFRs of ∼ 1 M⊙/yr (e.g. Jones et al. 2010; Livermore et al. 2012, 2015; Rigby et al. 2017; Cava et al. 2018; Patrício et al. 2018; Dessauges-Zavadsky & Adamo 2018; Dessauges-Zavadsky et al. 2019). Our target (dubbed Abell 2895a in Livermore et al. 2015) is lensed by the brightest cluster galaxy (BCG) residing at the very centre of the Abell 2895 (A2895, hereafter) galaxy cluster (z ≈ 0.227). The galaxy has three multiple images (M1, M2, M3, see Figure 1) located at the celestial coordinates (right ascension, declination) of (1h 18m 11.19s, −26° 58′ 04.4″), (1h 18m 10.89s, −26° 58′ 07.5″), and (1h 18m 10.57s, −26° 58′ 20.5″), respectively. Thanks to the image multiplicity and lensing magnification, we are able to probe in detail the properties of this source with the multi-wavelength dataset at our disposal, composed of HST, VLT/MUSE and SINFONI observations. This paper is organised as follows. In Section 2, we present our observations and data reduction. In Section 3, we describe the lensing model of the A2895 galaxy cluster, discuss the morphological properties of our target, present the method used to derive pseudo-narrow-band images of emission lines and to extract the integrated far-ultraviolet (FUV) and optical spectra, and describe the modelling of the target's Lyα emission. From the FUV and optical spectra, in Section 4, we derive the galaxy's physical properties (e.g. dust content, interstellar medium metallicity, SFR). Finally, in Section 5, we study the clumps' gas outflows and their properties; in particular, we derive the outflows' energetics and the clumps' gas removal timescale. We summarise our results in Section 6.

Figure 1. Cutouts of the three multiple images (M1, M2, M3) of our target A2895a (Livermore et al. 2015). Superimposed on the HST image are the contours of the 5h MUSE WFM+AO FoV (in blue) and the ∼ 5h SINFONI K-band NoAO FoV (in red). The SINFONI observations cover only two of the three multiple images, i.e. M1 and M2.
2 OBSERVATIONS AND DATA REDUCTION
To study the rest-frame FUV and optical emission of our target galaxy, we gather a multi-wavelength dataset that combines archival HST imaging and VLT/SINFONI near-IR integral-field spectroscopic data with new VLT/MUSE AO-assisted optical integral-field spectroscopy observations. In the following, we describe the characteristics of each dataset and the procedure adopted for the data reduction.
2.1 HST data
The A2895 galaxy cluster was observed with the Advanced Camera for Surveys (ACS) on board HST during Cycle 15 (SNAP program 10881, PI: G. Smith). The observations were executed with the Wide Field Camera (WFC) F606W filter for a total exposure time of 0.33h. The fully reduced F606W broad-band image was downloaded from the Hubble Legacy Archive.
To evaluate the point spread function (PSF) of the HST/ACS image, we fit two-dimensional (2D) Gaussians to 5 non-saturated stars in the HST field of view (FoV). The median value of the PSF full width at half maximum (FWHM) is 0.13″.
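A minimal sketch of this measurement with astropy is given below; the star-cutout extraction and the pixel scale are assumed to be handled beforehand, and all variable names are illustrative.

```python
import numpy as np
from astropy.modeling import models, fitting

def gaussian_fwhm_pix(cutout):
    """Fit a 2D Gaussian to a star cutout and return its mean FWHM in pixels."""
    y, x = np.mgrid[:cutout.shape[0], :cutout.shape[1]]
    init = models.Gaussian2D(amplitude=cutout.max(),
                             x_mean=cutout.shape[1] / 2,
                             y_mean=cutout.shape[0] / 2)
    best = fitting.LevMarLSQFitter()(init, x, y, cutout)
    # FWHM = 2 sqrt(2 ln 2) sigma ~ 2.355 sigma; average the two axes
    return 2.355 * 0.5 * (best.x_stddev.value + best.y_stddev.value)

# star_cutouts (list of 2D arrays) and pixscale (arcsec/pix) are assumed inputs
# psf_fwhm = np.median([gaussian_fwhm_pix(c) for c in star_cutouts]) * pixscale
```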
To assess the absolute astrometry of the HST image, we select 14 compact sources with high signal-to-noise ratio (SNR) and compare their HST sky coordinates with the GAIA DR2 catalogue (Gaia Collaboration et al. 2016). We register the HST astrometry to GAIA DR2 applying the inferred median offsets of ΔRA = 0.66″ ± 0.03″ and ΔDec = 0.06″ ± 0.05″.
2.2 MUSE data
The central region of the A2895 galaxy cluster was observed with VLT/MUSE (Bacon et al. 2010), in Wide Field Mode with Ground-Layer Adaptive Optics (GLAO) provided by the GALACSI module (Arsenault et al. 2008; Ströbele et al. 2012). We reduce the data via the ESO MUSE reduction pipeline (version 2.4.1; Weilbacher et al. 2020). We follow the standard reduction procedure, including bias correction, flat-fielding, wavelength and flux calibration, atmospheric extinction and astrometric correction. We disable the correction of telluric absorption since no suitable star in the field of view, nor a standard star close enough in time and airmass, is available. Areas of strong telluric absorption are simply discarded in our analysis. When combining individual exposures, we enable the use of the task that corrects for the background patterns caused by slightly different illumination of the MUSE slices. We remove sky residual lines from the final cube with the Zurich Atmosphere Purge software (ZAP, version 2.1; Soto et al. 2016).
As pointed out in Bacon et al. (2014), the variance of the MUSE datacube propagated by the pipeline is underestimated. To account for this, we define an extended region of the sky (∼ 96 arcsec²) where we evaluate the sky flux variance at each wavelength. We find that this is 1.35 times higher than the one estimated by the pipeline. We correct the MUSE variance cube by this factor.
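The correction can be sketched in a few lines of Python; the cube shapes and the source-free sky mask are assumptions of this illustration.

```python
import numpy as np

def variance_rescaling(cube, varcube, sky_mask):
    """Median ratio between the empirical sky variance and the pipeline
    variance, evaluated wavelength by wavelength in a source-free region."""
    # cube, varcube: (n_wave, ny, nx) arrays; sky_mask: boolean (ny, nx)
    empirical = np.nanvar(cube[:, sky_mask], axis=1)       # observed scatter
    pipeline = np.nanmedian(varcube[:, sky_mask], axis=1)  # propagated variance
    return np.nanmedian(empirical / pipeline)

# varcube *= variance_rescaling(cube, varcube, sky_mask)   # ~1.35 in our case
```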
To bring the MUSE and HST data onto the same astrometric reference system, we consider 20 point-like sources selected from the MUSE white-light image. Through 2D Gaussian modelling of the sources' surface brightness profiles, we evaluate their centroid celestial coordinates on both the HST and MUSE observations. We correct the MUSE astrometry by adopting the median value of the difference between the GAIA-corrected HST and the MUSE celestial coordinates, i.e. ΔRA = −0.49″ ± 0.07″ and ΔDec = −1.40″ ± 0.08″.
Finally, to reconstruct the MUSE PSF, we resort to the publicly available PSF reconstruction algorithm for MUSE data by Fusco et al. (2020). When applied to our observations, the software retrieves a median PSF FWHM of 0.4″.
2.3 SINFONI data
Our target galaxy was observed with the NIR integral-field spectrograph SINFONI (Eisenhauer et al. 2003; Bonnet et al. 2004), with the K-band grating, between July 25th and September 4th 2011 (Programme ID: 087.B-0875(A), PI: R. Livermore) without adaptive optics (NoAO mode). The total exposure time was 5.33h. The median seeing in the optical (at ∼ 6000Å), as evaluated by the telescope guide probe during the observations, was 0.75″ (FWHM). Hence, the seeing-limited PSF of the observations in the SINFONI K-band is ∼ 0.6″ (FWHM).
We reduce the data via the ESO SINFONI pipeline (version 3.13.2; Modigliani et al. 2007) that corrects for dark current, bad pixels and distortions. It also applies a flat field and performs a wavelength calibration. We correct the science cubes for telluric features and flux calibrate them using the standard star observed before or after each observing block (OB). The header astrometric information was used to combine science exposures within the same OB. After the reduction of the single OBs, we correct their wavelength calibration for the barycentric velocity, a step that is not automatically performed by the pipeline. As the OBs were taken during different nights, we need to tie them to a common astrometric reference system before combining them into a final cube. To this aim, for each OB, we create an [OIII]λ5008 narrow-band image of the target, fit the emission with a 2D Gaussian, and estimate its centroid. We consider the [OIII]λ5008 emission of the target, as this is the brightest line at these wavelengths and the K-band continuum of the galaxy is not detected. Furthermore, the fact that the target shows two mirrored images (due to lensing effects) in the SINFONI field of view helps us to accurately align the individual exposures. We then mean-combine the cubes after applying a 3σ clipping procedure to reject all spaxels affected by cosmic rays or displaying strong sky residuals. Finally, we match the astrometry of the final cube with the HST celestial coordinates, by minimising the spatial offset between the centroid of the [OIII]λ5008 emission and that of the HST FUV continuum. A geometrical reasoning supports this assumption: the distance between the two multiple images of [OIII]λ5008 matches the distance between the centroids of their FUV light. Because of the mirroring effect of lensing, no offset along the direction orthogonal to the lensing critical line can be assumed.
3 ANALYSES
In the following we report the procedures adopted to characterise the morphology of our target as well as its main properties.
3.1 Lensing model
The mass model we use in this work is constructed using the Lenstool software (Jullo et al. 2007), following the methodology described in Richard et al. (2010). The 2D-projected mass distribution of the cluster is modelled as a parametric combination of one cluster-scale and several galaxy-scale double pseudo-isothermal elliptical potentials (dPIE; Elíasdóttir et al. 2007), representing the large-scale and small-scale components of the mass distribution, respectively. To limit the number of parameters in the model, the centres and shapes of the galaxy-scale components are constrained to the centroid, ellipticity and position angle of the cluster members as measured on the HST image. The cluster members are assumed to follow the Faber-Jackson relation for elliptical galaxies (Faber & Jackson 1976), and are selected through the colour-magnitude diagram method (e.g. Richard et al. 2014). This parametric model is constrained by the location of the two triply-imaged systems with spectroscopic redshift presented in Livermore et al. (2015), i.e. A2895a and A2895b. The best-fit model reproduces the location of the multiply-imaged systems with an rms of 0.09″. We use the best-fit parameters to produce a 2D map of the magnification factor at the redshift of A2895a. We resample the maps of the lensing magnification to match the HST, MUSE and SINFONI spatial sampling, respectively. As a final step, we reconstruct the HST multiple images of A2895a on the galaxy source plane. This is done by using our lens model to ray-trace back each spaxel observed in every multiple image, subtracting the lensing displacement.
3.2 Galaxy morphology
The FUV continuum probed by HST shows that our target has an irregular morphology, dominated by bright star-forming regions (see the cutouts in Figure 1). The presence of substructures with an intrinsic effective radius ranging from 60 pc (∼ 0.008″) to 500 pc (∼ 0.07″) was already identified by Livermore et al. (2015) in the reconstructed SINFONI Hβ emission line map, despite the observed PSF (FWHM ∼ 0.6″, corresponding to an intrinsic FWHM of ∼ 0.2″ on the source plane). To avoid possible biases induced by the use of reconstructed line maps on the galaxy source plane, we look for clumps directly on the image plane, leveraging the dataset with the highest angular resolution, i.e. HST (observed PSF FWHM ∼ 0.13″, ∼ 0.04″ in the source plane).
To identify the clumps and understand their contribution to the overall galaxy emission, we implement an iterative modelling of the galaxy 2D surface brightness profile by means of the GALFIT software (Peng et al. 2010). The methodology we use follows the one presented in Zanella et al. (2019) but is tailored to our scientific case, i.e. it is applied to all three multiple images of our target (M1, M2, M3) and requires the additional modelling of the A2895 BCG optical light gradient that contaminates the FUV emission of our galaxy. We model the BCG 2D light profile using two Sérsic models. The first component fits the BCG extended disk (Re ∼ 100 kpc, n ∼ 2, consistent with the measurements reported by Stott et al. 2011); the second fits a central, more compact component (Re ∼ 5 kpc, n ∼ 2). After the subtraction of the BCG light profile, the background at the location of the multiple images of our target is well subtracted. We then model our target employing a 2D Gaussian profile. The map of the residuals highlights the presence of four clumps. Hence, we re-run GALFIT adding to the 2D Gaussian model of the overall galaxy (hereafter, the diffuse component) four additional 2D profiles, each intended to represent a clump. The best fit of our target, with minimum and non-structured residuals (see Figure 2), is obtained with a 2D Gaussian profile for the diffuse component, three 2D PSF profiles and one 2D Sérsic model for the clumps. Indeed, while three clumps out of four are unresolved and well reproduced by a PSF-like profile, one is marginally resolved, having a radius ∼ 0.10″ (Sérsic profile). We repeat this analysis independently on the three images of our target and reach similar conclusions.
Table 1. Results from the line-fitting procedure presented in Section 3.5. We report the parameters for the lines with a measured SNR > 3. Unless differently stated, the measurements refer to the intrinsic values, i.e. corrected for lensing magnification. Notes: the wavelengths reported are in vacuum; the error on the flux has been increased by 5% (MUSE) and 20% (SINFONI) because of the error on the absolute calibration of the dataset; the line SNR is estimated as the ratio between the flux and the error measured from the fit only (i.e. without taking into account the additional absolute calibration error); EW₀ is the rest-frame EW of the line (a † highlights lines for which the EW₀ has been estimated taking into account an upper limit on the stellar continuum flux); the MUSE instrumental resolution is derived from Eq. 8 in Bacon et al. (2017), while for the SINFONI data we adopt the value of 4.9Å (i.e. corresponding to two spectral pixels, see the SINFONI user manual); the values of σ_corr are in units of km/s.

The HST PSF gives us an upper limit on the clumps' size of ∼ 280 pc (value corrected for magnification) in radius. By summing the flux of all the clumps and comparing it with the total emission (clumps plus diffuse component), we conclude that ∼ 60% of the FUV light is emitted by the four star-forming regions. To verify that our result is not biased by the choice of the models used to fit the different components of the FUV emission (i.e. Sérsic, PSF), we carry out an independent test based on the construction of the galaxy curve of growth, see Appendix A. The results of this test confirm the findings. We assume that, similarly to the FUV continuum, clumps also dominate the FUV and optical line emission. This is a reasonable assumption, given that the emission lines probed by the MUSE and SINFONI data trace star formation, similarly to the FUV continuum. Likely, the contribution of the young clumps (age ∼ 10 Myr, see Section 4.5) to the emission lines is even higher than the 60% estimated for the continuum (Zanella et al. 2019).
3.3 Emission line pseudo-NB images
As revealed by a first inspection of the MUSE and SINFONI observations, the FUV and optical spectra of our target feature several emission lines, among which the brightest are Lyα, Hβ and the [OIII]λλ4960,5008 doublet ([OIII]db hereafter).
To investigate the spatial extent of these emission lines, and to compare them with the FUV continuum from HST, we create pseudo-narrow-band (pseudo-NB) images that maximise the lines' SNR, see Appendix B. We extract the flux and variance spectra within circular apertures of increasing size (from 0.3″−3.0″, in steps of 0.2″) centred at the position of each multiple image. Then, we convolve each spectrum with Gaussians of increasing σ (from 1.25−10Å, in steps of 1.25Å), and compute the SNR as a function of wavelength. From the convolved spectrum that maximises the line SNR, we derive the peak position of the line (λ_max) as well as its standard deviation (σ_max). The values obtained for the three multiple images are consistent with each other. Hence, we define the wavelength range within which we collapse the datacube as given by the interval λ_max ± 3σ_max. However, before obtaining the pseudo-NB image, we subtract, spaxel by spaxel, any continuum emission by fitting the spectral region adjacent to the line. Finally, we reconstruct the derived pseudo-NB image of each line on the galaxy source plane, following the same procedure as adopted for the HST FUV continuum, see Section 3.1.
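The collapse step can be sketched as follows; for brevity the continuum under the line is approximated here by the median of two side-bands rather than the fit used in the text, and all inputs are assumed to be image-plane cubes.

```python
import numpy as np

def pseudo_narrow_band(cube, wave, lam_max, sig_max, cont_width=25.0):
    """Collapse a (n_wave, ny, nx) cube within lam_max +/- 3 sig_max after
    subtracting, spaxel by spaxel, a continuum estimated from the side-bands."""
    in_line = np.abs(wave - lam_max) <= 3.0 * sig_max
    side = (~in_line) & (np.abs(wave - lam_max) <= 3.0 * sig_max + cont_width)
    continuum = np.nanmedian(cube[side], axis=0)        # per-spaxel continuum
    dlam = np.median(np.diff(wave))                     # channel width (A)
    return np.nansum(cube[in_line] - continuum, axis=0) * dlam

# nb_lya = pseudo_narrow_band(muse_cube, muse_wave, lam_max_lya, sig_max_lya)
```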
In Figure 3, we present the Lyα, Hβ and [OIII]λ5008 emission contours overlaid on the rest-frame FUV HST image. While the Hβ and [OIII] emission regions are spatially coincident with the FUV stellar continuum, the peak of Lyα is offset. To evaluate the displacement (δ_Lyα) between the Lyα emission and the centroid of the galaxy FUV light, we model with 2D Gaussian profiles the two emissions on the reconstructed map of the galaxy counter-image, i.e. the least stretched and magnified image of our target (M3), in the source plane. From the reconstructed map, we measure a Lyα-UV intrinsic offset of 0.16″ ± 0.02″ that corresponds to 1.2 ± 0.2 kpc. We resort to the reconstructed map of the galaxy counter-image since the Lyα haloes in the other two multiple images are incomplete and merged together. The Lyα emission appears to be extended and isotropic, i.e. without evidence of any clear substructure, at the resolution of our MUSE data. Despite the fact that offsets between the Lyα and UV continuum of galaxies have been widely reported in the literature (e.g. Shibuya et al. 2014; Hoag et al. 2019, and references therein), the origin of these displacements remains unclear. 3D models of Lyα radiative transfer (e.g. Laursen & Sommer-Larsen 2007; Verhamme et al. 2012; Behrens & Braun 2014; Zheng & Wallace 2014) of disk systems suggest that the Lyα-UV offset could be ascribed to the easier propagation and escape of Lyα photons in the direction perpendicular to the galaxy disk. Indeed, because of the resonant nature of Lyα photons, which makes them prone to undergo many scattering events, the distribution of neutral hydrogen and dust strongly affects the observed Lyα distribution. In this case, the offset would be a consequence of the viewing angle under which the observer sees the target. The offset we find is in good agreement with the typical displacements reported in the literature for LAEs and Lyman-break galaxies (LBGs), i.e. δ_Lyα = 1−4 kpc (e.g. Bunker et al. 2000; Fynbo et al. 2001; Shibuya et al. 2014; Hoag et al. 2019).
3.4 FUV and optical spectrum extraction
To define the spatial regions of the MUSE and SINFONI datacubes from which to extract the FUV and optical spectra of the galaxy, we resort to the Lyα and [OIII]λ5008 pseudo-NB images, i.e. the brightest lines of the FUV and optical datasets, respectively. For both line maps, we measure the background level and variance (σ²)⁶, and define the area from which to extract the galaxy spectrum as given by all the spaxels where the line flux is ≥ 2.5σ. The MUSE FUV spectrum, however, is heavily contaminated by the optical stellar continuum of the A2895 BCG. To obtain a 'clean' spectrum of our target we proceed as follows. We mask all the sources around the A2895 central galaxy, including the spaxels belonging to our target. For each MUSE spaxel with Lyα flux > 2.5σ, we estimate its elliptical angular distance from the BCG centre, consider all the unmasked spaxels lying at the same distance, and create a median-combined spectrum of the BCG. This spectrum is then subtracted from the original observed spectrum of our target. In this way, we can effectively decontaminate it from the contribution of the BCG optical light. We avoid simply using a combined spectrum of the innermost regions of the BCG since we detect variations in the BCG spectrum as a function of its radius. After correcting each spaxel for its lensing magnification factor, we sum all the spectra corresponding to the spaxels with flux ≥ 2.5σ. Finally, we average the spectra of all the available multiple images of our target to obtain a spectrum of maximum SNR. In Figure 4 we present the FUV (upper panel) and optical (lower panel) spectra of our target.

⁶ For the MUSE data we consider the variance cube produced by the pipeline, corrected as described in Section 2.2. The SINFONI pipeline instead does not return a variance cube and therefore we evaluate, at each wavelength, the standard deviation of all the spaxels that do not show emission from the target.
3.5 Emission and absorption line measurements
Besides the strong Lyα, Hβ and [OIII]db, we detect a plethora of other FUV and optical lines (both in emission and absorption). To estimate their peak position, flux, and width, we fit these lines with a Gaussian profile, after modelling the local stellar continuum with a linear slope, if present. We apply this procedure to all emission and absorption lines except for Lyα, which we analyse separately due to its peculiar properties (i.e. its resonant nature, see Section 3.6).
To estimate the uncertainties on the fit, we perform 1000 Monte Carlo realisations of the spectra. Each realisation is drawn randomly from a Gaussian distribution with mean and variance corresponding to the observed spectrum flux and variance. We then define the uncertainty on the line properties as the half distance between the 16th and 84th percentiles. In Table 1, we report the line properties obtained from our fit for all the lines with a SNR > 3. In our error budget, we include systematic uncertainties due to absolute flux calibration of 5% and 20% for MUSE and SINFONI data, respectively.
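A sketch of this Monte Carlo error estimate is given below, assuming a Gaussian-plus-linear-continuum model; the parameter names and the starting guess p0 are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_line(lam, amp, mu, sig, a, b):
    """Gaussian emission/absorption line on a linear local continuum."""
    return amp * np.exp(-0.5 * ((lam - mu) / sig) ** 2) + a * lam + b

def mc_line_errors(lam, flux, err, p0, n_mc=1000):
    """Half the 16th-84th percentile range of the best-fit parameters over
    Monte Carlo realisations of the spectrum."""
    fits = []
    for _ in range(n_mc):
        realisation = np.random.normal(flux, err)   # perturb within the variance
        try:
            popt, _ = curve_fit(gauss_line, lam, realisation, p0=p0)
            fits.append(popt)
        except RuntimeError:
            continue                                 # skip failed fits
    p16, p84 = np.percentile(fits, [16, 84], axis=0)
    return 0.5 * (p84 - p16)
```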
From the wavelength position of the emission lines' peaks we estimate the galaxy systemic redshift z_sys = 3.39535 ± 0.00025. We limit this approach to emission lines since the interstellar medium (ISM) absorption features appear blueshifted because of outflows, see Section 5.1.
Finally, we measure the rest-frame equivalent width (EW₀) of each line as:

EW_0 = \frac{1}{1+z_{\rm sys}} \int_{\lambda_0}^{\lambda_1} \frac{f_{\rm con}(\lambda) - f_{\rm line}(\lambda)}{f_{\rm con}(\lambda)} \, \mathrm{d}\lambda \qquad (1)

where λ₀ and λ₁ are the wavelength limits within which the line fit is performed, and f_line and f_con represent the flux density distributions of the line and stellar continuum as a function of wavelength. We use a definition of EW₀ in which negative values indicate emission while positive values refer to absorption. Since the optical continuum of the galaxy is not detected in our SINFONI data, we report a 3σ upper limit on the flux that, in turn, converts into a 3σ lower limit on the line EW₀. We estimate σ as the median of the error spectrum in the wavelength range within which the line fit is performed.
3.6 Lyα modelling
Contrary to the Balmer lines, which escape unobstructed from their production site following recombination, Lyα photons undergo many scattering events. The number of scatterings depends on the neutral hydrogen column density, geometry and kinematics (see, e.g. Dijkstra 2014, and references therein). Each scattering produces a slight variation in the photon frequency and direction of propagation (Osterbrock 1962). As a consequence of this diffusion process, the spectral characteristics of the emerging radiation encode the properties of the scattering medium along the paths that offered the least resistance to the photons (e.g. Dijkstra et al. 2016).
To adequately model the asymmetric spectral profile of Lyα, we resort to Equation 2 by Shibuya et al. (2014), i.e.:

f(\lambda) = A \exp\left[-\frac{(\lambda - \lambda_0^{\rm asym})^2}{2\,\sigma_{\rm asym}^2}\right] \qquad (2)

where A is the amplitude and λ₀^asym the peak wavelength of the Lyα line. The asymmetric dispersion, σ_asym, is given by σ_asym = a_asym · (λ − λ₀^asym) + d, where a_asym and d are the asymmetric parameter and the typical width of the line, respectively. An object with a positive (negative) a_asym value has a skewed line profile with a red (blue) wing.
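For reference, a minimal implementation of this profile reads as follows; the fit itself (e.g. with scipy.optimize.curve_fit) and the variable names are assumed.

```python
import numpy as np

def lya_asym_gaussian(lam, amp, lam0, a_asym, d):
    """Asymmetric Gaussian of Shibuya et al. (2014): the dispersion varies
    linearly with the distance from the peak, skewing the profile."""
    sigma_asym = a_asym * (lam - lam0) + d
    return amp * np.exp(-0.5 * ((lam - lam0) / sigma_asym) ** 2)

# a_asym > 0 produces a red wing, a_asym < 0 a blue one; the red and blue
# Lya peaks are fit independently after subtracting the stellar continuum.
```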
Before fitting the Lyα emission, we model the stellar continuum with the Starburst99 synthetic models (Leitherer et al. 1999, see Section 4.5 for further details), subtract it from our Lyα spectrum, and apply Equation 2 to the residuals. The Lyα emission is characterised by a prominent redshifted component with a relative velocity (with respect to the systemic redshift) of 403 ± 4 km/s. We also detect a blue Lyα peak with a flux equal to ∼ 5% of the red component, and a relative velocity of −294 ± 47 km/s with respect to the systemic redshift. The separation of the blue and red peaks is Δv_peak = 697 ± 50 km/s. This result is in agreement with the values reported in Verhamme et al. (2018) for Lyα emitters (LAEs) with a blue peak.
From the fit of the two peaks, we obtain a total Lyα flux of (1.41 ± 0.04) × 10⁻¹⁷ erg/s/cm² and an EW₀ = −87 ± 10Å. The equivalent width of Lyα is larger than the typical values observed in low-redshift LAEs (Rivera-Thorsen et al. 2015; Henry et al. 2015; Yang et al. 2016), but consistent with other sources at a similar redshift (Erb et al. 2014; Trainor et al. 2015; Runnholm et al. 2020). Comparing the peak separation and red peak asymmetry (as first proposed by Verhamme et al. 2017; Izotov et al. 2018) with the results from Kakiichi & Gronke (2019, see their Figure 13), we infer that the escape fraction of Lyman continuum photons, f_esc^LyC, is below 15%.
If we assume case B recombination and no dust extinction (see Section 4.2), we would expect a ratio Lyα/Hβ = 23.55 (computed with the PyNeb package by Luridiana et al. 2015 for an electron temperature T = 10⁴ K and density n = 10³ cm⁻³). However, the ratio we measure is 1.97 ± 0.40, a factor ∼ 12 below the theoretical expectation (for a visual comparison see Figure 5). This converts into an escape fraction for the Lyα emission of f_esc^Lyα ≈ 1.97/23.55 ∼ 8%. This value is in good agreement with the global Lyα escape fraction typically observed at z ∼ 3 (Gronwall et al. 2007; Ouchi et al. 2008; Hayes et al. 2011). We highlight that the Lyα spectrum is extracted within the area of the MUSE datacube where we detect the line at a minimum threshold of 2.5σ. This implies that we are neglecting part of the Lyα emission at low surface brightness. Hence, our estimates of both the Lyα flux and f_esc^Lyα are possibly lower limits.
As an alternative, the observed discrepancy between the theoretical and observed Lyα-Hβ ratio could be ascribed to dust extinction. In this case, the observed ratio could be reconciled with the theoretical expectation by taking into account a colour excess for the nebular emission E(B−V)_neb ∼ 0.36 mag. In the case of dust selective extinction, we would obtain a colour excess for the stellar continuum E(B−V)_con ∼ 0.16 mag, if we assume a conversion factor E(B−V)_con/E(B−V)_neb = 0.44 (see Calzetti et al. 2000). This estimate, however, is not compatible with the observed very steep blue slope of the FUV stellar continuum and the inferred E(B−V)_con = 0 mag (see Section 4.2). A plethora of theoretical studies have demonstrated how the observed Lyα emission profile and its equivalent width depend on the ISM metal and dust content (e.g. Charlot & Fall 1993), the relative geometries of the HI and HII regions and the kinematics of the neutral gas (e.g. Neufeld 1990; Verhamme et al. 2006; Laursen & Sommer-Larsen 2007). To extract physical information from the Lyα spectral shape, we resort to the commonly used 'shell model' (Ahn & Lee 2002; Verhamme et al. 2006). This model consists of a Lyα and continuum emitting source surrounded by a shell of neutral hydrogen and dust. It thus features four parameters describing the shell: the neutral hydrogen column density of the shell N_HI, its velocity v_exp (defined > 0 for outflowing material), an (effective) temperature T (which also includes the effect of small-scale turbulence), and the dust content, which we parametrise as a dust optical depth τ_d. In addition, we use an intrinsic Gaussian emission which we characterise via the intrinsic Lyα equivalent width EW_int and its width σ_int.
To cover this parameter space we employ an improved version of the pipeline described in Gronke et al. (2015), featuring 12960 radiative transfer models computed with the radiative transfer code tlac (Gronke & Dijkstra 2014). We carry out the fitting in wavelength space with a Gaussian prior on the redshift z. Furthermore, we smooth the synthetic spectrum by the instrument resolution evaluated at the Lyα observed wavelength (derived from Eq. 8 in Bacon et al. 2017). We show the result of this fitting procedure in Figure 6. According to the best-fit model, we derive log₁₀(N_HI [cm⁻²]) = 19.99 ± 0.09, v_exp = 211 ± 4 km/s, log₁₀(T [K]) = 5.35^{+0.18}_{−0.41} and τ_d = 0.80 ± 0.13. While it is clear that the 'shell model' is an oversimplification of the complex structure and kinematics of Lyα emitting galaxies and their surroundings, it is still unknown how much of the radiative transfer process is captured by the model, and what the fitting parameters physically mean (see discussion, e.g., in Orlitová et al. 2018; Gronke et al. 2017; Li et al. 2020). What is clear is that the 'shell model' is able to reproduce the wide range of observed Lyα spectra well, which may be surprising given its simplicity (see, e.g., Karman et al. 2017; Gronke et al. 2017, for an analysis of the fit quality in a large suite of spectra)⁹. In addition, the column density N_HI as well as the outflow velocity strongly influence the Lyα spectral shape and are much more robust predictions of the 'shell model' compared to, for instance, the dust optical depth or the effective temperature (these two parameters typically show large uncertainties and how well they can be tied to their physical counterparts is indeed more uncertain, see Verhamme et al. 2006; Laursen et al. 2009; Gronke et al. 2015). In fact, it has been shown that, at least for certain scenarios, the outflow velocity and column density of the 'shell model' correlate well with those of a more realistic multiphase medium (Gronke et al. 2017). In our analysis, we rely only on these two most robust parameters and we thus conclude that the usage of the 'shell model' to extract physical properties from the observed Lyα spectrum is well justified. We summarise the main results obtained from both the fitting procedure and the analysis of the Lyα emission in Table 2.
4 GALAXY PROPERTIES
In the following Section, we derive the physical properties (e.g. dust extinction, nebular metallicity, star formation rate) of our target, while the analysis and interpretation of the results will be discussed in the next Section.

⁹ Note that while this is a requirement for the reproducibility of the radiative transfer process occurring in nature, it is not trivial to do so, even with more complex geometries (see discussion of this fact in, e.g., Gronke et al. 2017).

Table 3. Rest-frame UV spectral windows employed for the measurement of the stellar continuum β-slope, see Section 4.2.
4.1 AGN and SF diagnostics
As a first step in the analysis of our target's spectra, we investigate which mechanism is ionising the galaxy ISM, thus driving the emission of the lines. In particular, we want to understand whether the emission lines that we detect are powered by star formation only, or if the contribution of an AGN is present. From the comparison of the emission line profiles (absence of blue/red wings or broad components) and because of the narrow width of the emission lines (≤ 200 km/s, see values in Table 1), it is unlikely that our target hosts an AGN. Moreover, according to the rest-frame FUV emission-line diagnostic diagrams of Nakajima et al. (2018), our target is securely located among the purely star-forming population (e.g. away from the type-2 AGN, composite, and LINER regions). Hence, our galaxy appears to be a purely star-forming source.
4.2 Dust extinction
We estimate the dust extinction affecting the overall galaxy by considering the UV β-slope. As widely implemented in the literature, we fit the observed UV continuum of our target with a power law, expressed as:

f_\lambda(\lambda) \propto \lambda^{\beta} \qquad (3)

Similarly to Calzetti et al. (1994), we define 9 spectral windows in the range 1200-2600Å (see Table 3) that are carefully designed to remove from the fitting procedure all the relevant absorption features, as well as the MUSE Na Notch filter and strong telluric absorption residuals. We measure the integrated flux and associated uncertainty of each spectral window and we fit them with Equation 1 from Castellano et al. (2012), see Figure 7. The value we obtain from the fit is β = −2.53 ± 0.15, in line with the results found for other low-mass galaxies at similar redshift (Castellano et al. 2012; Bouwens et al. 2016a; Vanzella et al. 2018). Such a low value of the β parameter is typical of stars with steep blue UV slopes, i.e. young and unobscured stellar populations. Indeed, if we convert the measured β into the colour excess of the stellar continuum E(B−V)_con via the relation by Meurer et al. (1999), we obtain a value that is compatible with E(B−V)_con = 0 mag within the error bar (E(B−V)_con = −0.08 ± 0.08 mag). Despite the fact that the β − E(B−V)_con relation depends on metallicity and star formation history (e.g. Kong et al. 2004; Dale et al. 2009; Muñoz-Mateos et al. 2009; Reddy et al. 2010, 2018; Schaerer et al. 2013; Zeimann et al. 2015), as well as on stellar mass and age (e.g. Buat et al. 2012), if we assumed the colour excess E(B−V)_con = 0.16 mag inferred from the Lyα-Hβ ratio (see Section 3.6), we would obtain a β-slope corrected for dust extinction that is even more extreme (β = −3.43 ± 0.14) and hardly reconcilable with any known physical scenario. Finally, a preliminary analysis of our target's far-infrared continuum emission (ALMA observations, PI: E. Iani, Zanella et al. in prep.) further corroborates our finding.

Figure 7. Result of the fitting procedure adopted to evaluate the galaxy β-slope. The black solid line shows the 1220-1980Å wavelength region of the galaxy rest-frame UV integrated spectrum; the grey-shaded area around it displays the spectrum ±1σ error. The vertical coloured-shaded areas delimit the 9 spectral regions (see Table 3) within which we evaluate the galaxy flux (red circles), and its error, used for the fit. The best-fit power law (red solid line) is presented too.
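A sketch of the window-based power-law fit is given below. The window limits shown are the classic Calzetti et al. (1994) intervals and serve only as placeholders: the exact 9 windows adopted in this work are those of Table 3.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder rest-frame windows (A) after Calzetti et al. (1994); the actual
# limits used in this work are listed in Table 3.
WINDOWS = [(1268, 1284), (1309, 1316), (1342, 1371), (1407, 1515),
           (1562, 1583), (1677, 1740), (1760, 1833), (1866, 1890), (1930, 1950)]

def window_fluxes(lam, flux, err):
    """Mean flux, uncertainty and central wavelength in each window."""
    lc, f, e = [], [], []
    for lo, hi in WINDOWS:
        m = (lam >= lo) & (lam <= hi)
        lc.append(0.5 * (lo + hi))
        f.append(np.mean(flux[m]))
        e.append(np.sqrt(np.sum(err[m] ** 2)) / m.sum())
    return np.array(lc), np.array(f), np.array(e)

def power_law(lam, ampl, beta):
    return ampl * lam ** beta

# lc, f, e = window_fluxes(lam_rest, flux_rest, err_rest)
# (ampl, beta), cov = curve_fit(power_law, lc, f, sigma=e,
#                               p0=(f[0] * lc[0] ** 2.5, -2.5))
```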
4.3 Nebular metallicity
Thanks to the variety of ISM emission lines we detect in the galaxy FUV spectrum, we can estimate the nebular metallicity of our target by considering the He2-O3C3 diagnostic diagram by Byler et al. (2020). Through Equation 8 of Byler et al. (2020), we measure a metallicity 12 + log₁₀(O/H) = 7.94 ± 0.07, which corresponds to Z = 0.18 ± 0.04 Z⊙ if we assume the solar value 12 + log₁₀(O/H)⊙ = 8.69 ± 0.05 (Allende Prieto et al. 2001). From the comparison with the model grids, we can infer a rough estimate for the ionisation parameter (U) of log₁₀(U) ∼ −2.
An independent estimate of the gas-phase metallicity can also be derived by considering the [OIII]λ5008/Hβ ratio (Maiolino et al. 2008). In this case, we obtain 12 + log₁₀(O/H) ∼ 7.89, which corresponds to Z ∼ 0.16 Z⊙. Even though this last estimator has been proven to be strongly dependent on the ionisation parameter (e.g. Kewley et al. 2019), the measurement is in good agreement with the He2-O3C3 estimate.
4.4 Star formation rate
We estimate the star formation rate (SFR) of our target in two ways: from the Hβ luminosity, and from the luminosity of the UV continuum at 1500Å. In both cases, we apply the recipes by Kennicutt (1998) after correcting them for a Chabrier IMF¹⁰ (the original relations being defined for a Salpeter IMF, Salpeter 1955).
To convert the Hβ luminosity into SFR, we use the Hα relation of Kennicutt (1998), with the Hβ luminosity scaled by the ratio Hα/Hβ = 2.86, valid for an electron temperature T = 10⁴ K and case B recombination (Osterbrock & Ferland 2006):

{\rm SFR(H\beta)}\;[M_\odot\,{\rm yr^{-1}}] = \frac{7.9 \times 10^{-42}}{1.7} \times 2.86 \times L({\rm H\beta})\;[{\rm erg\,s^{-1}}] \qquad (4)

For the UV continuum, instead:

{\rm SFR(1500\text{\AA})}\;[M_\odot\,{\rm yr^{-1}}] = \frac{1.4 \times 10^{-28}}{1.7} \times L_\nu(1500\text{\AA})\;[{\rm erg\,s^{-1}\,Hz^{-1}}] \qquad (5)

where L_ν(1500Å) = (2.30 ± 0.12) × 10²⁸ erg/s/Hz was extrapolated from the fit of the UV continuum with a power law, see Section 4.2. From the above equation, we obtain SFR(1500Å) = 1.9 ± 0.7 M⊙/yr.
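Numerically, the two calibrations read as below; the case B factor of 2.86 folded into the Hβ conversion reflects our reading of Equation 4 and should be treated as an assumption of this sketch.

```python
# Kennicutt (1998) calibrations rescaled from a Salpeter to a Chabrier IMF
# (division by 1.7, footnote 10). Luminosities are intrinsic (de-lensed).
CHABRIER = 1.7

def sfr_hbeta(L_hbeta_erg_s):
    """SFR from L(Hbeta), via Halpha/Hbeta = 2.86 (case B, T = 1e4 K)."""
    return 7.9e-42 * 2.86 * L_hbeta_erg_s / CHABRIER

def sfr_uv(L_nu_1500_erg_s_Hz):
    """SFR from the UV continuum luminosity density at 1500 A."""
    return 1.4e-28 * L_nu_1500_erg_s_Hz / CHABRIER

print(round(sfr_uv(2.30e28), 1))   # -> 1.9 Msun/yr, as quoted in the text
```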
According to our measurements, the ratio between SFR(Hβ) and SFR(1500Å) is 5.2 ± 2.3. This discrepancy is well expected, since hydrogen lines and the UV stellar continuum trace the current star formation of a galaxy over different timescales. In fact, while Balmer lines allow us to derive the instantaneous star formation, i.e. the star formation of the last 10 Myr, the UV-SFR relation implicitly assumes a continuous and well-behaved star formation history, ongoing for at least 100 Myr. To properly account for this difference in timescales, the multiplicative factor in Equation 5 has to be corrected. In particular, if the star formation timescale is ≲ 10 Myr, the SFR(1500Å) can be underestimated by up to a factor ∼ 3.5 (e.g. Calzetti 2013). With this correction, the two estimates agree within the errors.
It is useful to notice that discrepancies between the SFR estimators presented above can give insights into the timescale over which star formation processes are taking place and, hence, into the age of the youngest stellar populations (see Section 4.5 for further details).
In the following, unless differently stated, we assume as SFR the one obtained from Equation 4, i.e. SFR(Hβ) = 9.9 ± 2.3 M⊙/yr.
4.5 Stellar age and IMF
The ratio between the de-reddened Hβ luminosity and L_ν(1500Å) gives an estimate of the stellar population age, as this ratio decreases with increasing stellar age. In the left panel of Figure 8, we present the L(Hβ)/L_ν(1500Å) evolution with time, assuming a Chabrier-like IMF¹¹, different stellar metallicities (0.125 Z⊙ and 0.25 Z⊙), and different star formation histories (SFH, single burst and continuous star formation). To construct the L(Hβ)/L_ν(1500Å) tracks, we use the spectrophotometric synthetic models of Starburst99. Among the available discrete tracks, we choose those with stellar metallicity matching the nebular metallicity that we measured through the He2-O3C3 diagnostic diagram, see Section 4.3. We expect the stellar metallicity to be comparable to or lower than the metallicity of the ISM from which stars form. From the comparison of the theoretical tracks with our measurement, we conclude that our target hosts a stellar population younger than 10 Myr. The L(Hβ)/L_ν(1500Å) that we measure is high, at the limit of the ratios predicted by Starburst99 for a Chabrier-like IMF. We investigate whether the assumption of a more exotic solution (e.g. a top-heavy IMF) could alleviate the tension between observations and models. In the right panel of Figure 8 we show that a stellar population with a top-heavy IMF would show higher L(Hβ)/L_ν(1500Å) values than the Chabrier-like IMF case, and seems to be more compatible with our observations.

¹⁰ To transform from a Salpeter to a Chabrier IMF, the derived SFR has to be divided by a factor 1.7.
¹¹ The Chabrier-like IMF we adopt in this paper is:
5 DISCUSSION
According to the results presented in the previous Sections, A2895a is a lensed star-forming Lyα emitter at z ≈ 3.4 that hosts four compact clumps. Similarly to what is typically found in LAEs (e.g. Ouchi et al. 2020, and references therein), the galaxy has a low-metallicity ISM (Z ≲ 0.2 Z⊙) and a blue FUV stellar continuum (β ≈ −2.5), which implies a stellar extinction E(B−V)_con ∼ 0. The HST PSF sets an upper limit on the clumps' size of ∼ 280 pc. Based on the clumps' size - stellar mass relation by Cava et al. (2018), we can associate to the individual clumps an upper limit on their stellar mass of ∼ 2 × 10⁸ M⊙.
The clumps contribute ∼ 60% of the galaxy FUV emission, which appears to be powered by a young stellar population of hot and massive stars with an age of less than 10 Myr, as obtained from the L(Hβ)/L_ν(1500Å) ratio (e.g. Leitherer et al. 1999; Zanella et al. 2015) and in agreement with studies of LAEs (e.g. Nakajima et al. 2012). Despite the fact that the emission lines (except for Lyα) detected in the target's FUV and optical spectra are spatially coincident with the FUV continuum probed by HST, we cannot determine whether the lines arise within the single clumps or from the overall galaxy, because of the coarser spatial resolution of MUSE and SINFONI. Yet, several studies have shown that the line emission predominantly originates from the clumps if their age is ≲ 10 Myr, as in the case of our target (Genzel et al. 2011; Förster Schreiber et al. 2011b; Zanella et al. 2015, 2019). Hence, if we estimate the galaxy SFR from the conversion of the Hβ luminosity, we can assume that the galaxy star formation activity is mainly taking place within the clumps at an overall estimated rate of ∼ 10 M⊙/yr. This sets a lower limit on the clumps' sSFR of 1.25 × 10⁻⁸ yr⁻¹, which is consistent with the sSFR estimates of compact clumps in Zanella et al. (2019) and suggests that the detected clumps are forming stars in a 'starbursting mode' (e.g. Zanella et al. 2015; Bournaud et al. 2015).
Finally, the L(Hβ)/L_ν(1500Å) ratio hints at a star formation activity that follows a top-heavy IMF. A similar result was already obtained for another very young clump (age ≲ 10 Myr) hosted by a z ∼ 2 galaxy (Zanella et al. 2015). Yet, an analysis of a statistical sample is needed to draw more robust conclusions in this regard.
5.1 ISM outflows
The FUV absorption lines have a larger velocity dispersion (σ ∼ 90 km/s) than the emission lines (σ ∼ 40 km/s, see Table 1). Besides, the absorption features display an asymmetrical profile skewed towards shorter wavelengths, a blue wing, which becomes particularly evident when stacking the absorption lines together (e.g. SiIIλλ1260,1527 and SiIVλλ1394,1403), see Figure 9. Both the larger velocity dispersion and the presence of blue wings in UV absorption lines are typically ascribed to gas outflows in the galaxies' ISM (e.g. Pettini et al. 2000; Quider et al. 2009; Dessauges-Zavadsky et al. 2010; Erb et al. 2012; Patrício et al. 2016). This conclusion is also supported by our analysis of the Lyα spectral shape, according to which the Lyα photons propagate within a medium that is expanding at a velocity of v_exp = 211 ± 4 km/s, see Section 3.6.
Independently of the Lyα modelling, the analysis of the observed UV absorption line profiles is often used to infer the maximum velocity of galactic outflows. One way to achieve this is by means of the v_90 parameter (Prochaska & Wolfe 1997; Wolfe & Prochaska 1998), i.e. the blueshifted velocity at which the lines' wing intensity reaches 90% of the continuum intensity. To estimate v_90 from our FUV spectrum, we first fit the normalised and stacked absorption line profile with a skewed Gaussian. From the best-fit model, we derive a maximum outflow velocity of −363 ± 53 km/s. We estimate the error following a Monte Carlo procedure, i.e. by perturbing the normalised stacked spectrum according to its associated error 5000 times and measuring the half distance between the 16th and 84th percentiles of the output v_90 distribution. Given the galaxy SFR, the maximum outflow velocity we derive is in good agreement with what has been observed in other galaxies at lower redshifts (i.e. Chisholm et al. 2015; Heckman et al. 2015; Bordoloi et al. 2016).
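A sketch of the v_90 measurement on the stacked profile follows; the skew-normal parametrisation and the starting guesses are illustrative choices, not necessarily the exact functional form used in the text.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import skewnorm

def skewed_absorption(v, depth, loc, scale, alpha):
    """Skewed Gaussian absorption profile on a unit continuum."""
    return 1.0 - depth * skewnorm.pdf(v, alpha, loc=loc, scale=scale)

def v90(v, profile):
    """Blueshifted velocity where the blue wing reaches 90% of the continuum."""
    blue = v < v[np.argmin(profile)]           # blue side of the trough
    # reverse so the profile is increasing, as required by np.interp
    return np.interp(0.9, profile[blue][::-1], v[blue][::-1])

# popt, _ = curve_fit(skewed_absorption, v_grid, stacked, p0=(30, -100, 80, -3))
# v90_kms = v90(v_grid, skewed_absorption(v_grid, *popt))
```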
The value of v_90 is an estimate of the outflow velocity independent of the one obtained from the modelling of the Lyα emission, v_exp. Comparing the two estimates, we obtain v_90 > v_exp. This is due to the fact that, while v_90 is indicative of the maximum velocity of the outflow, the Lyα photons are likely sensitive to a mean (e.g. mass-weighted) outflow velocity. We underline that this result is not affected by the geometry and inclination of the outflow. If we assume that the outflow is not spherical, regardless of the inclination of the galaxy, our v_90 estimate would represent a lower limit. In fact, we would be measuring only the outflow component projected along the observer's line of sight. On the contrary, v_exp does not suffer from projection effects, as photons initially escaping along a path different from the line of sight can also be scattered back into the observer's direction. Therefore, even though the maximum outflow velocity could significantly increase, this would not create any tension with the actual velocity estimate derived from the Lyα modelling.
5.2 Star formation feedback and outflow energetics
If we assume that the detected ISM outflows are the direct consequence of the star formation feedback taking place only within the four star-forming regions harboured in our target, we can estimate the rate at which star formation expels the ISM from the four clumps, i.e. the gas mass-loss rate Ṁ, as (following Pettini et al. 2000):

\dot{M} = 12\pi \, \frac{m_{\rm H}}{\mu} \, N_{\rm HI} \, v_{\rm exp} \, r \qquad (6)

where N_HI is the hydrogen column density, v_exp the expansion velocity of the outflow, r the radius of the expanding shell, m_H the mass of the hydrogen atom, and μ the ratio between the mass of neutral hydrogen M_HI and the total mass M of the outflowing medium (i.e. μ = M_HI/M). For a complete description of how Equation 6 has been derived see Appendix C.
For both N_HI and v_exp, we adopt the values derived from the analysis of the Lyα emission, i.e. log₁₀(N_HI [cm⁻²]) = 19.99 ± 0.09 and v_exp = 211 ± 4 km/s (see Section 3.6). We highlight that the parameters derived from the modelling of the Lyα spectrum probe the galaxy medium along the so-called path of least resistance, i.e. the path with the lowest optical depth along which the Lyα photons diffuse out as a consequence of resonant scattering (e.g. Eide et al. 2018). Therefore, by inserting these parameters into Equation 6, we implicitly assume that the Lyα probes the wind medium, since the outflowing material is likely to have a significantly lowered optical depth (and possibly also a lower column density, Behrens & Braun 2014).
We infer the radius of the expanding shell as given by the product between the gas expansion velocity v_exp and the age of the clumps' stellar population (10 Myr, see Section 4.5), i.e. r = v_exp · age = 2.2 ± 0.2 kpc. This assumes that the outflows have been in place since the beginning of the ongoing burst of star formation and kept a constant expansion velocity through time. Even with these simplifying assumptions, we find that r encompasses most of the observed Lyα emission of our target, from which we derive the v_exp and N_HI parameters.
For the HI fraction of the outflowing medium μ, previous studies have often adopted μ = 1 (e.g. Pettini et al. 2000; Verhamme et al. 2008), thus considering that the outflowing material consists of HI only. To take into account the possible presence of heavier elements, a few studies have lowered the estimate of μ to 0.74, based on the fact that the ISM of galaxies is mainly a mixture of HI (90% of the total ISM mass) and atomic helium (10%), while the other metals contribute less than 0.1% (e.g. Genzel et al. 2008). However, star-forming regions are rich in molecular gas (mostly H₂), and spatially resolved studies of galaxies in the local Universe have shown that the molecular phase of outflows constitutes a significant amount of the ejected material (e.g. Weiß et al. 1999; Walter et al. 2002; Sakamoto et al. 2006; Bolatto et al. 2013). In particular, Smirnova et al. (2017) found that in star-forming regions of galaxies in the local Universe the masses of H₂ and HI are comparable. According to this finding, μ = 0.67. Because of the uncertainties related to the above assumptions (mainly on the metal and H₂ content), in the following we assume as reference values for μ the interval 0.6-0.8.
Finally, Equation 6 is valid if we assume an outflow geometry given by a thin spherical expanding shell. Despite the fact that a few observational and theoretical works have shown evidence that Lyα photons scatter off bipolar outflows (e.g. Blandford & Rees 1974; Suchkov et al. 1994; Duval et al. 2016), we assume a thin spherical expanding shell since we do not have any direct evidence pointing to a bipolar geometry. We also highlight that Ṁ gives an estimate of the overall mass-loss rate of the four detected clumps, and that each clump could be characterised by a bipolar outflow expanding in a different direction and with a different opening angle. We however report the effects of alternative geometries, i.e. biconical and double spherical sector outflows, in Appendix D.
In Figure 10, we present a track for the mass-loss rate normalised by the galaxy SFR (i.e. η = Ṁ/SFR), the so-called mass loading factor. This estimate can be considered as the average mass-loading factor of the clumps, if we assume that the estimated SFR and Ṁ are equally distributed among the four detected clumps. The SFR we use for this estimate is the value obtained from the conversion of the Hβ luminosity, i.e. SFR(Hβ) = 9.9 ± 2.3 M⊙/yr. Following Swinbank et al. (2007), we limit the track in Figure 10 to the minimum value of μ for which outflows are feasible, i.e. μ = 0.06, which corresponds to η ≈ 22. Independently of μ, η > 1. In particular, for μ between 0.6 and 0.8 we obtain η = 2.4 and η = 1.8, respectively. These mass loading factors are in good agreement with those found by Genzel et al. (2011), who analysed the spectral profile of optical emission lines (Hα and [OIII]) from massive clumps (10⁹−10¹⁰ M⊙) characterised by high-velocity outflows (350 − 1000 km/s) in five star-forming galaxies at z ∼ 2, and found η ranging from 1 − 9. Similar mass loading factors (η = 2 − 9) were also found by Newman et al. (2012).
We also compare our findings with the results obtained from hydrodynamical simulations of z ∼ 2 clumpy galaxies by Bournaud et al. (2014) and Fensch & Bournaud (2020). In their study, Bournaud et al. (2014) found evidence that gas clouds with masses of a few 10⁷ M⊙ are rapidly blown up by star formation feedback, while massive clumps (≈ 10⁸ M⊙) are long-lived and have lifetimes that range from 200 − 700 Myr. For such massive clumps, Bournaud et al. (2014) found that the mass loading factor of the clumps formed in simulations implementing strong SNe feedback (i.e. simulations G1, G2 and G3) follows a distribution that has a mean value of 1.6 and a tail that extends up to 10 (see their Figure 9, left panel). Such high values were hardly recovered in the case of simulations with a weaker SNe feedback (e.g. the G'2 model), which have mass loading factors in the range 0.1 − 5 with a median value of 0.7. A similar result was recently obtained by Fensch & Bournaud (2020), who found that the average mass loading factor of clumps with average stellar masses of 10⁸ M⊙, in simulations of galaxies at 1 < z < 3 implementing strong SNe feedback, is 3.5, independently of the galaxy gas mass fraction (see their Table 3). Lower values of η were found only for weak (0.3) and medium (1) stellar feedback calibrations, i.e. simulations where the energy from type II supernovae is mostly (≥ 90%) released thermally and not in kinetic form. Comparing the results by Bournaud et al. (2014) and Fensch & Bournaud (2020) with our findings, the values of η we infer seem to be consistent with the simulations implementing a strong/medium SNe feedback.
Knowing the gas mass-loss rate, we can estimate the timescale needed for the stellar feedback to expel the gas from the clumps, thus quenching their star formation activity. We derive this quantity in the case of a 'semi-closed box' model, i.e. neglecting the possible presence of inflowing gas that could replenish the reservoir of the clumps and therefore sustain star formation for a longer period (e.g. Dekel & Krumholz 2013; Bournaud 2016; Fensch & Bournaud 2020). Given this assumption, we derive a lower limit on the gas removal timescale t_exp, which is given by t_exp = M_mol/Ṁ, where M_mol is the clumps' molecular gas mass. We estimate M_mol by considering the integrated Schmidt-Kennicutt relation reported by Sargent et al. (2014). In particular, according to our findings on the clumps' sSFR, and supported by recent studies targeting young clumps (e.g. Guo et al. 2012; Wuyts et al. 2012, 2013; Bournaud et al. 2015; Zanella et al. 2015; Mieda et al. 2016; Cibinel et al. 2017; Zanella et al. 2019), we assume that our clumps form stars in a starbursting mode¹³. In this case, the amount of molecular gas locked into the clumps would be M_mol = (7.19^{+9.46}_{−2.55}) × 10⁸ M⊙. In Figure 11, we present the dependence of t_exp on μ. Also in this case we limit the track to the minimum value of μ = 0.06. Independently of μ, t_exp is always below 100 Myr. In particular, assuming the estimates of μ presented above, t_exp ranges between 20 − 50 Myr. According to these values, the detected clumps would expel their gas on a very short timescale, thus halting their star formation activity in a few tens of Myr.
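The full chain from Equation 6 to η and t_exp can be reproduced numerically; the sketch below assumes the spherical geometry and the fiducial values quoted in Sections 3.6, 4.4 and 5.2.

```python
import numpy as np

# Fiducial (intrinsic) quantities, cgs units
M_H = 1.6726e-24                 # hydrogen atom mass [g]
N_HI = 10 ** 19.99               # shell column density [cm^-2], Section 3.6
V_EXP = 211e5                    # expansion velocity [cm/s], Section 3.6
R = 2.2 * 3.086e21               # shell radius, 2.2 kpc [cm], Section 5.2
SEC_PER_YR, MSUN = 3.156e7, 1.989e33

def mdot_sphere(mu):
    """Mass-loss rate [Msun/yr] for a thin spherical shell (Equation 6)."""
    mdot_cgs = 12 * np.pi * (M_H / mu) * N_HI * V_EXP * R    # [g/s]
    return mdot_cgs * SEC_PER_YR / MSUN

for mu in (0.6, 0.8):
    eta = mdot_sphere(mu) / 9.9              # mass loading factor, SFR(Hbeta)
    t_exp = 7.19e8 / mdot_sphere(mu) / 1e6   # gas removal timescale [Myr]
    print(f"mu = {mu}: eta = {eta:.1f}, t_exp = {t_exp:.0f} Myr")
# -> eta ~ 2.4 and 1.8, t_exp ~ 31 and 41 Myr, matching Section 5.2
```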
6 CONCLUSIONS
In this paper, we have examined the physical properties of a triply-imaged line-emitting galaxy at redshift z ≈ 3.4, drawn from the sample of lensed clumpy galaxies by Livermore et al. (2015). Thanks to our analysis of integral-field spectroscopic data from VLT/MUSE and SINFONI, as well as HST rest-frame FUV imaging, we found that: • The three multiple images of the galaxy show an irregular FUV morphology that is constituted by four compact clumps, whose light accounts for ∼ 60% of the total galaxy FUV emission and that have sizes ≲ 280 pc and stellar masses ≲ 2 × 10⁸ M⊙.
• The galaxy FUV and optical spectra feature a wide variety of lines, both in emission (the brightest being Lyα, Hβ, and [OIII]λλ4960,5008) and absorption (e.g. SiII, SiIII, SiIV, as well as other fainter metal lines such as O, Al, Fe). The absorption lines have a wider velocity dispersion (σ ∼ 90 km/s) compared to the FUV and optical emission lines (σ ∼ 40 km/s). This suggests that the galaxy ISM is characterised by the presence of outflows. From the stacking of all absorption lines, we recover a mild blue asymmetry consistent with outflows with a terminal velocity of ∼ 350 km/s.
• Our target is a star-forming galaxy. The blue slope of the FUV stellar continuum (β = −2.51 ± 0.12), the relatively low metallicity (≤ 0.2 Z⊙), and the high L(Hβ)/L_ν(1500Å) ratio suggest that the galaxy hosts a young stellar population (age ≲ 10 Myr). From the conversion of the Hβ luminosity into SFR (Kennicutt 1998), we derive a star formation rate of ∼ 10 M⊙/yr. If we assume that the galaxy star formation activity is limited to the clumps (e.g. Genzel et al. 2011; Förster Schreiber et al. 2011b; Zanella et al. 2015, 2019), we find that the clumps are starbursting, having a sSFR ≥ 1.25 × 10⁻⁸ yr⁻¹.
• As typical of LAEs (e.g. Shibuya et al. 2014; Hoag et al. 2019), the Lyα emission is extended and offset with respect to the galaxy FUV emission. Besides, the Lyα spectral profile is redshifted (Δv = 403 ± 4 km/s) and asymmetric, as expected in the case of outflowing gas. The Lyα radiative transfer modelling (e.g. Gronke et al. 2015) estimates an expansion velocity of the outflowing material of v_exp = 211 ± 4 km/s. This value is in good agreement with the estimate derived from the analysis of the shape of the FUV absorption lines. We obtain a mass loading factor η ∼ 1.8 − 2.4. These values are consistent with those found in the hydrodynamical simulations of clumpy galaxies that assume strong/medium SNe feedback (e.g. Bournaud et al. 2014; Fensch & Bournaud 2020).

¹³ For the sake of completeness, we report in Appendix E the dependence of the gas removal timescale on μ if the clumps form stars in a 'main-sequence' mode (from the stellar mass - SFR relation, e.g. Elbaz et al. 2007; Rodighiero et al. 2011; Whitaker et al. 2012; Sargent et al. 2014).
• We estimate the molecular gas mass of the clumps by considering the Schmidt-Kennicutt relation (Sargent et al. 2014) and obtain M_mol = (7.19^{+9.41}_{−2.58}) × 10⁸ M⊙ (starburst case). Assuming that the detected outflows are the consequence of star formation feedback, the timescale over which the outflows expel the clumps' gas reservoir is ≲ 50 Myr. We however highlight that our estimates do not take into account the possibility of inflows, which could lengthen the clumps' gas expulsion timescale.
The results recovered by this study highlight how high-quality multi-wavelength datasets from state-of-the-art instrumentation are essential tools to investigate the properties of clumpy galaxies and understand the nature and fate of clumps. Despite the fact that current studies are still limited by the spatial resolution achievable with state-of-the-art instrumentation, in the next years both JWST and the ELT are foreseen to profoundly revolutionise clump studies, opening a new window on the rest-frame optical/NIR properties of clumpy galaxies at redshift z ≥ 2.
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
APPENDIX A: CURVE OF GROWTH METHOD
In this Section, we report the methodology, alternative to the GALFIT modelling, that we follow to estimate the contribution of the clumps to the total galaxy FUV emission detected by HST.
As a first step, we estimate the total FUV flux of our target (clumps plus diffuse emission). To this aim, we consider the BCG-subtracted image and construct a curve of growth measuring the galaxy flux encircled in concentric circular apertures with radii ranging from 0.15″ to 2″ (i.e. ∼ 15.8 kpc). From the plateau of the curve of growth, we determine the total galaxy flux (∼ 6.9 × 10⁻¹⁹ erg/s/cm²/Å) and size, ∼ 0.65″ (i.e. ∼ 4.8 kpc).
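A curve of growth of this kind can be sketched with photutils; the image, centroid and pixel scale are assumed inputs, and the plateau criterion mentioned in the comment is only one possible choice.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def curve_of_growth(image, center_pix, pixscale, r_min=0.15, r_max=2.0, step=0.05):
    """Encircled flux within concentric circular apertures (radii in arcsec)."""
    radii = np.arange(r_min, r_max + step, step)
    fluxes = [aperture_photometry(image,
                                  CircularAperture(center_pix, r / pixscale)
                                  )['aperture_sum'][0]
              for r in radii]
    return radii, np.array(fluxes)

# radii, flux = curve_of_growth(bcg_subtracted, (xc, yc), 0.05)
# The total flux is read off the plateau, e.g. where the growth between
# consecutive apertures drops below the background noise.
```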
In principle, the total FUV flux of the galaxy measured in the HST image could be biased by the contribution of the Lyα emission that, at the redshift of our target, falls within the ACS/WFC F606W bandpass. Hence, we compute the contribution of the Lyα emission to the F606W flux by considering the transmission function of the filter. However, the emission line contribution at the location of the FUV continuum is negligible (≲ 1%).
We estimate the flux of each individual clump by considering non-overlapping apertures of size r = 0.1″, consistent with the FWHM of the HST PSF, see Figure 12. Hence, we apply an aperture correction¹⁴ to the estimated flux, taking into account the HST PSF. When summing the flux of all the star-forming regions and comparing it to the total flux of the galaxy, we obtain that the clumps constitute 60% of the HST observed light, whereas the remaining 40% of the UV continuum is likely emitted by a diffuse, low surface brightness component. The results obtained from the curve of growth method are hence in excellent agreement with those obtained with the GALFIT modelling, see Section 3.2.
APPENDIX B: Lyα, Hβ AND [OIII]λ5008 PSEUDO-NB IMAGES
In this Section, we present the pseudo-NB images of the Lyα, Hβ, and [OIII]λ5008 emission derived following the methodology presented in Section 3.3. The pseudo-NB image of the Lyα emission is presented in Figure 13, while the pseudo-NB images of Hβ and [OIII]λ5008 are shown in Figure 14. Because of the wider FoV of the MUSE observations (1′ × 1′), in Figure 13 we present a cutout of the MUSE FoV. On top of each image, we report the contours of the galaxy FUV emission (in black), as observed with HST, and the size of the PSF FWHM (red circle).

¹⁴ We infer that the energy encircled within a radius of 0.1″ in the HST PSF is about 67% of the total. The estimate is in good agreement with what was found by Bohlin (2016), i.e. 66-75%.

Figure 15. Top panel: mass loading factor η as a function of the HI fraction μ for the different outflow geometries, compared with the simulations of Fensch & Bournaud (2020) implementing weak (0.3), medium (1) and strong (3.5) stellar feedback calibrations, respectively. The horizontal grey shaded area shows the range of η that can be obtained only by hydrodynamical simulations implementing recipes of strong supernovae feedback (e.g. the G1, G2 and G3 models) in Bournaud et al. (2014). Bottom panel: diagram of the timescale of survival to star formation feedback of the clumps (t_exp) as a function of the fraction of HI in the outflow μ, and depending on the different outflow geometries.
APPENDIX C: DERIVING THE EQUATION FOR THE MASS-LOSS RATE OF CLUMPS
We estimate the gas mass-loss rate Ṁ of the clumps due to star formation feedback from the equation by Pettini et al. (2000):

\dot{M} = A \, \rho \, v_{\rm exp} \qquad ({\rm C1})

where A is the surface of the expanding region (which depends on the geometry of the outflow), ρ is the matter density, ⟨m⟩ is the average mass of the particles that constitute the swept-up material, and v_exp is the speed of the outflow. If we assume that all the material within the expanding region is swept up into a shell of thickness Δr and density ρ, we have:

\rho = \frac{N \, \langle m \rangle \, A}{V} \qquad ({\rm C2})

where N is the total column density of the gas within the shell, and V is the volume of the region cleared by the outflow. Hence, we can rewrite Equation C1 as:

\dot{M} = C \, N \, \langle m \rangle \, v_{\rm exp} \qquad ({\rm C3})

where C = A²/V depends on the geometry of the outflow. In our study, we consider three different geometries that could match our observations and are usually adopted when describing feedback solutions: a sphere, a double cone, and a double spherical sector. Depending on the geometry, C is a function of the distance r swept by the outflowing material and, possibly, of the opening angle α (only in the biconical and double spherical sector cases). In particular, C can be equal to 12πr (sphere), 6πr tan²(α/2) (double cone), or 12πr (1 − cos(α/2)) (double spherical sector).
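The geometry factors can be encoded directly, with a quick consistency check that the double spherical sector recovers the spherical case for a full opening angle; the function and argument names are illustrative.

```python
import numpy as np

def geometry_factor(r, geometry="sphere", alpha=np.deg2rad(60.0)):
    """C = A^2 / V for the three outflow geometries of Appendix C.
    r: radius swept by the outflow; alpha: full opening angle [rad]."""
    if geometry == "sphere":
        return 12.0 * np.pi * r
    if geometry == "double_cone":
        return 6.0 * np.pi * r * np.tan(alpha / 2.0) ** 2
    if geometry == "double_spherical_sector":
        return 12.0 * np.pi * r * (1.0 - np.cos(alpha / 2.0))
    raise ValueError(f"unknown geometry: {geometry}")

# Sanity check: for alpha = 180 deg the double spherical sector coincides
# with the full sphere, as noted in Appendix D.
assert np.isclose(geometry_factor(1.0, "double_spherical_sector", np.pi),
                  geometry_factor(1.0, "sphere"))
```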
In Equation C3, both μ and N depend on the chemical composition of the ejected material. However, we can rewrite the equation as a function of the HI mass (M_HI) and column density (N_HI), introducing a new parameter f = N_HI/N (e.g. Swinbank et al. 2007).
APPENDIX D: ALTERNATIVE OUTFLOW GEOMETRIES
In this Section, we briefly investigate the impact of the outflow geometry (see Appendix C) on the estimates of both the mass loading factor η and the gas-removal timescale t_exp. In particular, we examine the case of bipolar outflows with a biconical and a double-spherical-sector geometry. Similarly to Figure 10, we present how both η and t_exp vary as a function of f. As in Section 5.2, we set the radius swept by the outflowing medium to r = 2.2 ± 0.2 kpc, while we arbitrarily assume an opening angle θ = 60° (e.g. Swinbank et al. 2007), since no direct estimate is available from our dataset. For the mass loading factor (top panel of Figure 15), η is always greater than unity in the case of a spherical geometry. On the contrary, the biconical and double-spherical-sector solutions have mass-loss rates comparable to the SFR (or even larger) only for f ≤ 0.55 and f ≤ 0.20, respectively, while, in the confidence range f = 0.6-0.8, both tracks take lower values (η = 0.2-0.7). In this case, star-formation feedback would be less effective in expelling the gas content of the clumps.
A similar but opposite trend is observed for t_exp (bottom panel of Figure 15), since a lowering of η translates into an increase of the timescale over which the gas is expelled from the clumps. In this case, while the spherical solution returns t_exp < 50 Myr, the bipolar geometries predict gas-removal timescales of up to 400 Myr (100-300 Myr in the f-range of confidence).
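To make the geometry dependence explicit, the following sketch (an illustration under the stated assumptions, not the paper's code) evaluates the relative mass-loss rate and gas-removal timescale for the three geometries, assuming Ṁ ∝ C(θ)/f, i.e. Equation C3 with N = N_HI/f, and absorbing all dimensional prefactors (μ, N_HI, r, v_exp, SFR, M_mol) into an arbitrary normalisation, so only the shapes and ratios of the tracks are meaningful:

```python
# Relative mass-loss rate and gas-removal timescale vs f for the three
# outflow geometries of Appendix C, normalised to the spherical case.
import numpy as np

def geometry_factor(kind, theta_deg=60.0):
    """C = A^2/(V r) for the three geometries (see Appendix C)."""
    th = np.radians(theta_deg)
    return {"sphere": 12.0 * np.pi,
            "double cone": 6.0 * np.pi * np.tan(th / 2.0) ** 2,
            "double spherical sector": 12.0 * np.pi * (1.0 - np.cos(th / 2.0))}[kind]

f_grid = np.array([0.2, 0.4, 0.6, 0.8])     # fraction of HI in the outflow
c_sphere = geometry_factor("sphere")
for kind in ("sphere", "double cone", "double spherical sector"):
    rel_mdot = geometry_factor(kind) / c_sphere / f_grid  # ∝ mass loading factor
    rel_texp = 1.0 / rel_mdot                             # t_exp ∝ M_gas / Mdot
    print(f"{kind:24s} Mdot: {np.round(rel_mdot, 2)}  t_exp: {np.round(rel_texp, 2)}")
```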
We highlight that these results are mainly driven by the choice of the outflow opening angle θ. In fact, an increase in θ brings the tracks closer to the spherical case (the two solutions coincide when θ = 180°).
APPENDIX E: GAS REMOVAL TIMESCALE IN MAIN-SEQUENCE CLUMPS
In this Section, we report the dependence of the gas-removal timescale (t_exp) on f for three different outflow geometries (spherical, biconical, double spherical sector), in the case of clumps forming stars in a 'main-sequence' mode, i.e. supposing that they lie on the stellar mass-SFR relation of star-forming galaxies (e.g. Elbaz et al. 2007; Rodighiero et al. 2011; Whitaker et al. 2012; Sargent et al. 2014). In the case of main-sequence clumps, the prescriptions by Sargent et al. (2014) predict a clump molecular gas mass M_mol = (1.06 +0.36/−0.29) × 10^10 M⊙, a value ∼15 times higher than the starburst estimate reported in Section 5.2. Because of the significant increase of M_mol, the gas-mass-removal tracks shift systematically towards longer timescales, with the gas being expelled from the clumps by star-formation feedback over several hundred Myr (see Figure 16). Independently of the geometry and of f, main-sequence clumps would retain their molecular gas long enough that they could contribute to the morphological evolution of the galaxy centre (e.g. bulge growth; Noguchi 1999; Genzel et al. 2006; Elmegreen et al. 2008; Ceverino et al. 2010) as a consequence of their inward migration through the galaxy disc, driven by dynamical friction and torques, and their coalescence at the centre of the galaxy. | 2021-09-16T01:15:35.637Z | 2021-08-23T00:00:00.000 | {
"year": 2021,
"sha1": "109e9a93eff7aa567108481f76f9b83edcb2d184",
"oa_license": null,
"oa_url": "http://dro.dur.ac.uk/34840/1/34840.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "109e9a93eff7aa567108481f76f9b83edcb2d184",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265631322 | pes2o/s2orc | v3-fos-license | Anesthetic Challenge in a Parturient With Von Hippel-Lindau Disease
Von Hippel-Lindau (VHL) syndrome is a rare autosomal dominant disease with incomplete penetrance and variable expression. The features of cerebellar and spinal tumors, pheochromocytomas, and increased intracranial pressure complicate the anesthetic management of such patients. This report describes the anesthetic management of a parturient with VHL disease and highlights the importance of proper surveillance, vigilant management, and individualized treatment plans from a multidisciplinary team.
Introduction
The prevalence of von Hippel-Lindau (VHL) disease has been reported internationally to be 1:36,000-91,000 [1]. The clinical features of the disorder include tumors or cysts arising most commonly in the cerebellum, brainstem, spinal cord, and retina. They can also be found in the adrenal glands, pancreas, kidneys, endolymphatic sac of the middle ear, broad ligament, and epididymis [2]. Anesthetists managing patients with VHL need to consider the multi-systemic involvement of the disease and follow up closely throughout pregnancy, with careful surveillance for related complications that may endanger the patient and fetus.
Case Presentation
A 40-year-old primipara was seen in the Singapore General Hospital High-Risk Obstetrics Pre-anaesthesia clinic at 24 weeks. She was diagnosed with type 1 VHL disease at 25 years old, when she first presented with an eye infection and was diagnosed with a left optic hemangioblastoma on a CT scan. She had been closely followed up by an endocrinologist and geneticist with annual whole-body MRI surveillance scans. Her clinical features include multiple pancreatic cysts, a left clear-cell papillary renal cell carcinoma (RCC) for which she underwent a partial nephrectomy in 2019, breast fibroadenoma lumps, several bilateral 0.2-0.4 cm cerebellar hemangioblastomas, and a cervicomedullary junction hemangioma with no hydrocephalus. She had no neurological symptoms and was leading an active lifestyle before conception at the time of consultation.
She conceived via in-vitro fertilization, and her antenatal care involved a multidisciplinary team consisting of Obstetrics, Anaesthesia, Endocrinology, Genetics, and Neurosurgery. The main concerns were suitability for normal vaginal delivery and the safety of central neuraxial anesthesia. Risk assessment was done with a surveillance brain MRI at 20 weeks gestation (Figure 1). It showed no new or worsening brain/spine hemangioblastomas, with no hydrocephalus, mass effect, or intracranial hemorrhage. Neurosurgery deemed it safe for her to undergo both normal vaginal delivery and spinal or epidural anesthesia, as her cerebellar hemangioblastomas were small, with no edema or radiological features of a tight posterior fossa that might suggest elevated intracranial pressure. Ophthalmology confirmed no signs of papilledema. Anesthesia concurred, as the hemangioblastomas were far from the site of epidural puncture at L3/4. With clearance from the multidisciplinary team for normal vaginal delivery and neuraxial labor analgesia, the proposed delivery plan was expectant management and normal vaginal birth under epidural anesthesia. She was followed up routinely with Obstetrics until delivery. Eventually, she had a vacuum-assisted normal vaginal delivery safely at 38 weeks, when she presented with contractions. She opted to proceed without epidural anesthesia and birthed a 3.2 kg live male with no complications.
Discussion
A pregnant VHL patient can be challenging for an anesthetist and requires assessment by a multidisciplinary team for various reasons: (1) determining the mode of delivery; (2) preventing raised intracranial pressure while straining and pushing during normal vaginal delivery; (3) the presence of cerebellar/spinal hemangiomas that may hinder the technical performance of neuraxial anesthesia.
The mode of delivery is a discussion of risks and benefits among the multidisciplinary team, and there should be shared decision-making based on the patient's clinical condition and preferences, for example, the presence or absence of intracranial tumors, raised intracranial pressure, and pheochromocytoma [3]. The most common causes of death are complications associated with RCC and central nervous system hemangioblastomas [4]. Elevated intracranial pressure and severe hypertension during the second stage of labor can cause rupture of hemangiomas. There is prolonged exposure to these cumulative risks during labor, which can last as long as 14 hours for a primipara [5]. If these red flags are present, it would not be unreasonable to opt for a C-section instead, as anesthesia would ablate these sympathetic discharges. As the patient in this case report was asymptomatic, it was reasonable to recommend normal vaginal delivery.
Next, the gold standard for labor analgesia is a lumbar epidural. This is also excellent for mitigating pain and the sympathetic discharge from contractions, thereby avoiding surges in blood pressure and intracranial pressure. Most hemangioblastomas are located in the cervical and thoracic region, within the posterior medullary cord [6]. These vascular tumors may exert a mass effect or bleed if traumatized by the spinal needle or epidural catheter. It has been proposed that an epidural is preferable to spinal anesthesia, as the dura is not intentionally punctured, resulting in less chance of hemangioblastoma penetration [6]. We recommend evaluating the safety of neuraxial anesthesia depending on the size and location of hemangioblastomas and the presence of raised intracranial pressure, as well as considering a plain epidural rather than a combined spinal-epidural technique. It is unclear whether pregnancy is associated with accelerated hemangioblastoma progression [7-9], but the VHL Alliance Handbook recommends a non-contrast MRI during the fourth month of pregnancy to check any known lesions of the brain and spine [10].
There is no consensus in the literature regarding the technique for obstetric anesthesia in VHL patients. Epidural, spinal, and general anesthesia have all been described in case reports with acceptable outcomes [11,12]. During general anesthesia, care should be taken to blunt the sympathoadrenergic response to intubation and surgical incision. Controlled hyperventilation can also help control raised intracranial pressure.
It would be worthwhile to assess risks and outline plans for both elective and emergency C-sections in case of unforeseen circumstances.The concerns and considerations should be documented so that even if the original multidisciplinary specialists are not present during non-office-hour emergencies, the emergency team is still able to achieve the intended hemodynamic targets and goals of care for the patient.
Conclusions
A parturient with VHL presents unique challenges to the anesthetist. Early assessment by the anesthetist is recommended to allow a detailed evaluation of the location and size of hemangiomas. We recommend a surveillance MRI scan in the second trimester to look for enlargement of neuraxial hemangiomas and signs of increased intracranial pressure that may affect the mode of delivery and neuraxial anesthesia. Early assessment also allows time for counseling the patient regarding the risks and benefits of each mode of delivery and the labor analgesia options. Proper surveillance, vigilant management, and individualized treatment plans from a multidisciplinary team will likely yield favorable maternal and fetal outcomes in this rare disorder.
FIGURE 1: Surveillance MRI of the brain done at 20 weeks gestation, showing a stable hemangioma (0.3 cm) in the cerebellum with no new mass and no features of increased intracranial pressure. | 2023-12-05T16:19:54.807Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "1d2e8856950b3baaa9332e4f1911e79f32f2cdb0",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/case_report/pdf/205571/20231130-19813-dit3pf.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21ad62d1b3439f6f57730babc6b20eba26cd5a3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
238236967 | pes2o/s2orc | v3-fos-license | Paper : Effect of Toe Only Rocker at 10 and 15 Degrees on Balance and Walking Speed in Elderly Adults
Objective: Balance is one of the indicators used to determine independence in performing daily activities in the elderly. One of the influential factors in postural control and balance is walking speed. This study aims to evaluate the effect of rocker soles at angles of 10 and 15 degrees on the walking speed and balance of the elderly. Materials & Methods: The study participants were 19 older adults aged 60 years or older (13 women and 6 men; mean age=66.1 years, mean height=1.63 m, mean weight=70.3 kg). Three shoe models were used: shoes with a 10-degree rocker sole angle, shoes with a 15-degree rocker sole angle, and control shoes. Walking speed was evaluated with the 10-m walking test, while balance and dynamic postural control were assessed with the Berg Balance Scale (BBS) and the Star Excursion Balance Test, respectively. The Shapiro-Wilk test was used to examine the normality of data distribution, and repeated-measures ANOVA and the Wilcoxon test were used to compare the effects of different rocker angles on balance and walking speed. The obtained data were analyzed in SPSS v. 22. Results: There was no significant difference in walking speed (P=0.993), dynamic postural balance in the anterior (P=0.835), posterolateral (P=0.86), and posteromedial (P=0.598) directions, or balance obtained from the BBS (P=0.625) among the groups using the three shoe models. Conclusion: It seems that the use of rocker-sole shoes does not affect the balance and walking speed of the elderly. This study supports the prescription of shoes with toe-only rocker soles for the elderly.
Introduction
The world's elderly population has grown significantly over the past few decades [1][2][3] and is expected to increase further in the future [2]. With aging, all factors involved in balance, including walking speed [1], are affected [4]. About 27%-65% of the elderly experience falls at least once a year [5,6]. Fear of falling is the most common fear among the elderly [7,8]. Several interventions have been used to maintain and improve walking speed and balance in the elderly. These interventions include vestibular rehabilitation, physiotherapy, exercise therapy, orthotic interventions, and special footwear [11][12][13][14][15]. Since the feet are in direct contact with the ground during walking, any change in the interface between the sole and the ground can affect postural stability [9,10]. Many features of shoe design can affect balance and walking speed [11][12][13]. Among modifications of the shoe sole, one of the common interventions used for a wide range of problems and target groups is the addition of a rocker sole [14,15]. Different rocker designs have various therapeutic effects on their target groups [19]. The geometric characteristics of a toe rocker in the anterior part of the sole are determined by three variables: the apex angle, the apex position, and the rocker angle [21]. The height and position of the apex change the rate of movement of the lower-extremity joints, especially the ankle, as well as walking speed, gait kinematics, and the rocker patterns of the gait cycle; thus, a change in apex angle and position can improve or impair these variables [21,22] and thereby affect balance and walking speed.
In examining the characteristics of the toe rocker, Chapman's study showed that a rocker with an apex angle of 90-95 degrees had favorable balance effects in the study groups [16]. The studies of Meyer et al. and van Schie et al. showed that, for optimal balance performance, the most effective apex position is at 55%-65% of the shoe length [17,18]. In another study, the effect of rocker angles on the degree of dorsiflexion and toe clearance was investigated. The results showed that, regardless of ground inclination, shoes with toe rockers at angles of 10-15 degrees significantly increased toe clearance in the elderly compared with other rockers, and consequently reduced the risk of falling [19].
Given the growing aging population and the increasing injuries caused by poor postural balance, studies of balance and walking speed in the elderly are of great value to this group. Rocker-sole shoes are a common intervention for increasing muscle strength in the elderly and in young people. Despite the few studies on the effect of rockers in the elderly population, some have concluded that adding a rocker to the sole of the shoe can improve muscle strength in the long term and thereby improve postural stability. Achieving the desired effects on muscle strength requires wearing rocker soles for at least 6 months [20][21][22]. However, there is a concern that, during this 6-month period, older people may experience balance problems and slower walking. Thus, the present study aims to evaluate the short-term effect of toe rockers at specific angles on the walking speed and postural control of the elderly.
Materials and Methods
This research is a quasi-experimental study with a pretest-posttest design. The inclusion criteria were being healthy and older than 60 years; the exclusion criteria were neuromuscular diseases, peripheral neuropathy, diabetes, acute musculoskeletal injuries, acute pain in the lower limbs or lower back, use of walking aids, balance problems, and acute heart or lung diseases. To determine the sample size, a pilot study was first performed on five older people. Assuming an effect size of 0.5, the minimum sample size was 19 to achieve a test power of 0.86. The participants' dynamic postural stability was assessed with the Star Excursion Balance Test (SEBT), a reliable test with an intraclass correlation coefficient of 0.89-0.93 and a coefficient of variation of 0.3-4.6 [23]. To normalize the data, the SEBT reach distances were divided by leg length, and their mean values were used in the analysis [24]. Balance was evaluated using the Persian version of the Berg Balance Scale (BBS), whose psychometric properties have been evaluated in a previous study [25]. The maximum BBS score is 56, indicating the highest balance ability. The internal and external reliabilities of this test in the elderly are 0.98 and 0.99, respectively [26]. Walking speed was assessed using the 10-m walking test, in which the participant walks a distance of 20 m at a safe speed. The test was repeated 3 times, and the best record was set as the test score [27].
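A minimal sketch of the SEBT leg-length normalization described above, with hypothetical reach values (not data from this study):

```python
# Normalise SEBT reach distances by leg length before analysis, expressing
# each direction as a percentage of leg length; values are placeholders.
reach_cm = {"anterior": 72.0, "posteromedial": 88.5, "posterolateral": 85.0}
leg_length_cm = 90.0

normalized = {d: 100.0 * r / leg_length_cm for d, r in reach_cm.items()}
composite = sum(normalized.values()) / len(normalized)   # mean of directions
print(normalized)
print(f"composite reach: {composite:.1f}% of leg length")
```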
Ethylene-vinyl acetate (EVA) rubber was used to prepare the toe rocker in shoes with otherwise similar insoles and soles. The rocker apex was located at 65% of the shoe length [28]. To maintain a safe environment for the elderly, the rocker angles were set at the maximum values that did not increase the risk of falls in the elderly in previous studies [19,29]. In the first test model, the rocker angle was 10 degrees (Figure 1), and in the second test model, it was 15 degrees (Figure 2). The sole retained its full thickness from the heel up to 65% of the shoe length. A shoe model similar to the test shoes, differing only in the type and thickness of the sole, was used as the control shoe (Figure 3). To analyze the data, the normality of their distribution was first examined using the Shapiro-Wilk test, which showed a normal distribution for all data (P=0.2) except the BBS data (P<0.001). Therefore, ANOVA and the Wilcoxon test were used to analyze the collected data.
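The following sketch illustrates this analysis gate with synthetic data (not the authors' code; AnovaRM and the scipy tests are stand-ins for the SPSS procedures actually used): Shapiro-Wilk normality checks decide between a repeated-measures ANOVA and Wilcoxon signed-rank tests:

```python
# Normality-gated choice between repeated-measures ANOVA and Wilcoxon tests,
# on synthetic 10-m walk speeds for 19 participants under 3 shoe conditions.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
groups = {"control": (1.14, 0.24), "rocker10": (1.15, 0.18), "rocker15": (1.15, 0.22)}
data = {name: rng.normal(mu, sd, 19) for name, (mu, sd) in groups.items()}

if all(stats.shapiro(x)[1] > 0.05 for x in data.values()):
    # Long-format table: one row per participant-condition measurement
    long = pd.DataFrame([{"id": i, "shoe": name, "speed": v}
                         for name, xs in data.items() for i, v in enumerate(xs)])
    print(AnovaRM(long, depvar="speed", subject="id", within=["shoe"]).fit())
else:
    for name in ("rocker10", "rocker15"):
        stat, p = stats.wilcoxon(data["control"], data[name])
        print(f"control vs {name}: Wilcoxon p = {p:.3f}")
```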
Results
Out of 19 participants, 6 (31.6%) were men and 13 (68.4%) were women. Their age range and the means of height, weight, body mass index (BMI), and lower-limb length are presented in Tables 1, 2, and 3. The mean±SD BBS score in the control shoe group was 55.46±0.96, ranging from 52 to 56. This score was 55.21±0.98 in the shoe group with a 10-degree toe rocker and 55.26±1.05 in the shoe group with a 15-degree toe rocker, both ranging from 52 to 56. The result of the Wilcoxon test showed no statistically significant difference between the three groups regarding this variable (P=0.625). The mean±SD score of the 10-m walking test in the control group was 1.14±0.24 m/s, ranging from 0.59 to 1.6 m/s. This score was 1.15±0.18 m/s in the shoe group with the 10-degree toe rocker (range: 0.83 to 1.38 m/s) and 1.15±0.22 m/s in the shoe group with the 15-degree toe rocker (range: 0.73 to 1.72 m/s). The result of the Wilcoxon test showed no statistically significant difference between the three groups regarding this variable (P=0.993). The Wilcoxon test also showed no statistically significant difference in SEBT scores between the three groups (Table 4).
Discussion and Conclusion
The present study revealed that adding a toe rocker at an angle of 10 or 15 degrees had no adverse effect on the balance or walking speed of the elderly. The use of these two specific rocker angles was based on previous studies showing that they positively affected toe clearance during the swing phase in the elderly [19,29]. Wearing rocker shoes reduces a person's awareness of foot position by affecting the movement of the ankle joints and reducing the base of support [30]. Thus, due to compensatory mechanisms and caution, the reach distance in the SEBT might be expected to decrease after using rocker shoes compared with control shoes. However, the participants reached relatively the same distance with all three shoe models. One reason for the lack of significant differences may be an increase in muscle activity to achieve a stable condition. Ghomian et al. [25], in a study of 17 patients with diabetic neuropathy, stated that when wearing rocker shoes, muscle activity increases in response to anterior and posterior perturbations to reach a stable condition. In other words, the lack of change in postural stability while wearing rocker shoes can be due to increased muscle activity and the person's effort to maintain a stable posture. In their study, no significant difference was observed between shoes with toe rockers and control shoes regarding postural stability [31]. Brenton-Rule et al. evaluated the effect of walking footwear on the postural stability of 21 healthy older adults and also found no significant difference between rocker shoes and regular walking shoes [32]. Ramstrand's study of 31 women over the age of 50 showed that eight weeks of wearing MBT (Masai Barefoot Technology) shoes had no significant effect on the static stability of participants [21]. In contrast, the results of Albright et al. [44], Demura et al. [45], and Arazpour et al. [46] are contrary to ours; these studies used shoes with heel-to-toe rocker soles. The discrepancy in results may also be due to differences in the postural stability system between the elderly and young people: in all three studies, the participants were 20-25 years old, while the participants in our study were older adults with a mean age of about 66 years.
The results of the 10-m walking test showed that shoes with a 10- or 15-degree toe rocker did not significantly change the walking speed of the elderly. Adding a rocker to the sole of the shoe increases the activity of the ankle plantar flexors. If the rocker material is hard, it prevents metatarsophalangeal joint motion, so the so-called forefoot break does not occur. This increases the moment arm of the plantar flexor muscles and ultimately requires extra effort to lift the heel off the ground [36,37]. On the other hand, the addition of a toe rocker increases the hip extension angle in mid- and terminal stance; as a result, step length decreases [38]. Over a given distance, if step length decreases and cadence increases, walking speed does not change. Forgani et al. and Arazpour et al. reported the same walking speed for participants using control shoes and rocker shoes [35,39]. Similar results were observed in the studies by Meyer et al. and Van Bogart [15,38], who suggested that the unchanged speed when walking with rocker shoes was due to increased cadence and decreased stride length. One confounding variable and limitation of this study was the difference in sole thickness between the rocker-soled and control shoes. Although the effect of the increased sole thickness on shoe weight is negligible, the effect of sole thickness on balance is still debated. Since the results of clinical trials depend on the carefulness of the examiner and the test location, all tests were performed in a setting with standard conditions provided by an orthotist. It is recommended that future studies evaluate other temporal and spatial gait parameters in the elderly using rocker shoes.
Compliance with ethical guidelines
All ethical principles are considered in this article. The participants were informed about the purpose of the research and its implementation stages. They were also assured about the confidentiality of their information and were free to leave the study whenever they wished, and if desired, the research results would be available to them.
Funding
This research did not receive any grant from funding agencies in the public, commercial, or non-profit sectors.
Conflict of interest
The authors declared no conflict of interest. | 2021-09-29T15:26:11.888Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "6276039c23001e210cb941497ace2cb3853563f3",
"oa_license": "CCBYNC",
"oa_url": "http://rehabilitationj.uswr.ac.ir/files/site1/user_files_581925/hodahashemi-A-10-2883-2-22409ef.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9ec672f8645b2824e18119000c9787763b9f4e4d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": []
} |
250173911 | pes2o/s2orc | v3-fos-license | Correlations between performance and shift work in the nursing activities: a pilot approach
Background and aim of the work. Performance assessment is a key administrative function and an essential component of organizational quality programs, quantifying performance in relation to set goals, standards, and expectations and guiding improvement initiatives. The present study aimed to assess differences in perceived nursing performance levels according to shift work. Methods. An on-line questionnaire was administered from June to August 2021 through nursing groups on Facebook and Instagram to all Italian nurses who voluntarily agreed to participate. The questionnaire collected both socio-demographic information and nursing performance evaluations, assessed with the "Six-Dimension Scale of Nursing Performance", covering: leadership, critical care, teaching/collaboration, planning/evaluation, interpersonal relations/communications, and personal development. Results. 305 nurses were recruited in this research. Considering nursing performance according to shift work, significant differences were recorded in the "critical care" frequency sub-dimension (p=.001), in the "interpersonal relations" frequency sub-dimension (p=.018), and in the overall frequency dimension of the Six-Dimension Scale of Nursing Performance (p=.018). Meanwhile, for the quality dimension of the scale, no significant differences were reported according to the shift variable. Conclusions. Professional commitment and performance in nursing appeared to be influenced by several organizational factors. Therefore, further studies in this field, including a wider variety of variables, are desirable. (www.actabiomedica.it)
... in the different healthcare settings. Furthermore, the literature highlights that nurses represent the largest body of health care professionals (2,3) and play a crucial role in the health care delivery system, covering both nursing interventions and the coordination and administration of all the interventions indicated by physicians and other members of the healthcare team. In this way, nurses act as guardians of health care, and nursing systems serve as important leverage points for improvement. The literature thus reports the centrality of nursing in the health care system, but also large gaps in performance appraisal policies (4,5). In fact, most health care practices include general quality performance programs that very often do not capture any nursing contribution. Health policies should aim at a robust, ongoing development of performance measurement systems for nursing, as the current set of approved measures does not adequately reflect the complexity of the nursing care system and the range of contributions nursing care makes to patients (6)(7)(8); at the same time, this represents an opportunity for the introduction of new performance indicators (9). The development of new measures has been restrained by a poor conceptualization of how nursing services are delivered and how they potentially affect patient and organizational outcomes. As early as 1978, Schwirian (10) tried to develop a model of nurses' job performance, defining it as how well the job is done according to established standards; job performance is thus an action that can be observed and assessed (11,12). However, job performance is a complex phenomenon (13), especially in the nursing context, in which multiple variables positively influence and predict nurses' job performance (14), such as: young age (15), recognition of achievement (16,17), work satisfaction and the employee's educational level and training (12,13), social support (18), supportive communication and feedback (17), and competent nursing practice (19)(20)(21). Nursing experience is also important for better job performance. On the other hand, long working shifts and heavy workloads (22)(23)(24)(25), job stress (18,23), punitive corrective actions, motivational and skill difficulties (11), and older age among shift workers (23) have been reported to negatively influence nurses' job performance. Additionally, Dubois et al. (26) identified a different conceptual framework for nursing services, with a combined total of 51 performance measures. However, no single framework was identified that actually captured the full aim of nursing services; therefore, each framework was linked to discrepancies in performance assessment. Moreover, nursing performance has been broadly defined as "the demonstrated ability of an organization or organizational unit to acquire the necessary nursing resources and use them sustainably to produce nursing services that effectively improve patients' conditions" (26). The literature suggests three subsystems of nursing care: the acquisition and implementation or maintenance of nursing resources; the transformation of nursing resources into nursing services; and the production of changes in patients' conditions, highlighting multiple dimensions and hypothesizing inter-functional correlations. Additionally, studies have noted that, despite coherent relations between nursing staffing and patient outcomes, the causal mechanisms by which staffing influences outcomes have not been sufficiently explained (6,27,28). Probably, nurse staffing, which includes the recruitment and assignment of nurses, influences patient outcomes through the ability to deploy nursing processes adequately and promptly. However, this supposed cross-functional relationship cannot be evaluated without clear conceptualizations and robust appraisals of nursing procedures (29). The literature also suggests a negative effect of night shift work in healthcare workers, especially nurses, commonly provoking tiredness, sleepiness, mood alteration, and weight gain (30)(31)(32)(33), as well as problems in job performance and psychosocial health (31,34). Night shift work also significantly modifies the circadian rhythm of affected individuals (35). Some studies have reported that night shift work is correlated with reduced performance (36,37), although there is very little evidence concerning the impact of night shift work on nurses (37,38) and the consequent challenges in nursing job performance linked to dissatisfaction and absenteeism (39). However, none of the previous studies specifically assessed the impact of night shift work on nursing performance (40). Therefore, the present study aimed to assess any differences in perceived nursing performance levels according to shift work in Italian nurses.
Recruitment and Ethical considerations
All Italian nurses who voluntarily agreed to participate in this survey were included. The questionnaire was created and administered using Google Forms from June 2021 to August 2021 through pages and nursing groups on Facebook and Instagram.
All the information collected was treated confidentially, guaranteeing complete anonymity. The study was evaluated and approved by the Ethics Committee of the University Hospital of the Policlinic of Bari, Italy (ID number: 6885/2021).
Data analysis
Data were collected in an Excel spreadsheet and subsequently processed statistically with IBM SPSS Statistics, version 20.
Categorical variables, such as sex, shift-work typology, and education level, were reported as frequencies and percentages; continuous variables, such as age and years of work experience, were summarized with means (µ) and standard deviations (SD). Descriptive analyses, with means (µ) and standard deviations (SD), were also performed for the Six-Dimension Scale, for both the frequency and quality dimensions, and a t-test for independent samples was performed according to the shift variable. All p values < .05 were considered statistically significant.
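A minimal sketch of this descriptive and inferential step (our illustration, not the authors' code; the Likert scores and the night/day split of the 305 respondents are synthetic):

```python
# Group means (SD) and an independent-samples t-test per sub-dimension.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
night = rng.normal(3.49, 0.39, 180)   # nurses who also work night shifts
day = rng.normal(3.30, 0.54, 125)     # day-shift-only nurses

for name, x in (("night", night), ("day", day)):
    print(f"{name}: mean={x.mean():.2f}, SD={x.std(ddof=1):.2f}")
t, p = stats.ttest_ind(night, day, equal_var=False)  # Welch variant
print(f"t = {t:.2f}, p = {p:.4f}")
```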
The questionnaire
The questionnaire was divided into two main sections. The first part included some socio-demographic information: sex (female or male); age, expressed in years; years of work experience, also expressed in years; shift work performed, i.e., whether the interviewee worked only morning and evening shifts (1 or 2 shifts) or also during the night; and nursing education level, i.e., whether the nurse had basic training (3 years), post-basic training of up to 5 years, or consolidated post-basic training exceeding 5 years, considering only university nursing education.
In the second section, the "Six-Dimension Scale of Nursing Performance" was administered (10,41,42). This questionnaire consists of a list of activities in which nurses engage with varying degrees of frequency and skill. It includes a total of 52 nurse behaviors grouped into six performance subscales: leadership (5 items), critical care (7 items), teaching/collaboration (11 items), planning/evaluation (7 items), interpersonal relations/communications (12 items), and personal development (10 items). The scale was used to obtain self-assessments of performance or perceived adequacy. Specifically, for the first 42 items nurses were invited to answer twice: the first answer indicated how often the interviewee performed the activity in the current nursing job, on a Likert scale ranging from "1" ("not expected in this job") to "4" ("frequently"); the second answer concerned how well the nurse performed the activity in the current nursing job, on a Likert scale ranging from "1" ("not very well") to "4" ("very well"). Additionally, in the second part, relating to the perceived quality of nursing performance, 10 additional items were included, regarding the "professional development" sub-dimension. The scale yields self-evaluations of performance, employer assessments of performance, or perceived adequacy of nursing-school training for performance. The construct and pragmatic validity of the Six-Dimension Scale have been established, and all six sub-dimensions demonstrated high reliability and validity. The instrument is recognized as suitable for performance assessment as well as a helpful research tool (10).
Results
305 nurses were recruited in this research. 157 (51.5%) were females and 148 (48.5%) were males. All socio-demographic characteristics of the respondents are collected in Table 1.
For each sub-dimension of the Six-Dimension Scale of Nursing Performance, means and standard deviations were assessed according to the shift variable, and a t-test for independent samples was performed for each sub-dimension (Table 2).
Considering nursing performance according to shift work, significant differences were recorded in the "critical care-frequency" sub-dimension (p=.001), as nurses who also worked night shifts reported higher levels in this aspect (3.49±.39) than nurses who worked only day shifts (3.30±.54). Additionally, nurses who worked night shifts also reported significantly higher levels in the "interpersonal relations-frequency" sub-dimension than their day-shift colleagues (p=.018). Finally, considering the total values of the frequency dimension of the Six-Dimension Scale of Nursing Performance, nurses who also worked night shifts reported higher levels than their day-shift colleagues (p=.018). Meanwhile, as regards the quality dimension of the scale, no significant differences were reported according to the shift work variable (Table 2).

Discussion
The present study aimed to assess any differences in perceived nursing performance levels according to shift work in Italian nurses, especially whether night shift work could influence the frequency or the quality dimension of the nursing performance appraisal.
The present findings showed that, according to shift work, significant differences were recorded in the "critical care-frequency" sub-dimension (p=.001), as nurses who also worked night shifts reported higher levels in this aspect than nurses who worked only day shifts. Additionally, nurses who worked night shifts also reported significantly higher levels in the "interpersonal relations-frequency" sub-dimension than their day-shift colleagues (p=.018). Finally, considering the total values of the frequency dimension of the Six-Dimension Scale of Nursing Performance, nurses who also worked night shifts reported higher levels than their day-shift colleagues (p=.018). Meanwhile, as regards the quality dimension of the Six-Dimension Scale of Nursing Performance, no significant differences were reported according to the shift work variable.
In the literature, there are no studies fully overlapping with the present one in either method or purpose. Indeed, nursing performance is a topic of considerable complexity, owing both to the large number of nursing services that must be considered and to its vast areas of application across the entire health organization of any country in the world. In this regard, the World Health Organization considers nurses one of the most important workforces in the healthcare sector, as nurses play a vital role in the supply of healthcare worldwide, contributing to the productivity and quality of care provided by healthcare institutions (43). Therefore, nurses can be considered the starting point of healthcare systems and should be provided with the best conditions allowing them to perform their tasks in the best possible way (10,41,42). In fact, the success of healthcare organizations depends on several significant elements, and nurses' commitment to their organizations plays an essential role, helping the organization to realize its goals, encouraging organizational efficiency and effectiveness, and developing the quality of healthcare services. However, the 2030 Agenda for Sustainable Development Goals (SDGs) report indicated that nursing staff are understaffed and unevenly distributed (10).
Our data support evidence from the scientific literature: in the frequency dimension, nursing performance differed in the critical care and interpersonal relations sub-dimensions as well as in the total frequency score, with night-shift nurses reporting significantly higher levels than colleagues who worked only day shifts (44). In this regard, the highly stressful nursing job (45) influences the physical, mental, and cognitive abilities of the individual. In fact, nurses often work long hours (46), are stressed or sleep-deprived (47,48), and bear heavy workloads (49)(50)(51); all these aspects negatively influence nursing performance during working hours and affect the timely provision of care. In any case, the literature reviewed agrees that nursing performance remains poorly defined in terms of skills, nursing-sensitive quality indicators, and task-specific performance assessments (1).
Moreover, literature reviews have encouraged further research into shift-work study approaches and methodology, comparing the available studies (52) and seeking additional information that could also benefit nursing management (53).
Conclusions
Managerial interventions will be needed to improve nursing performance. Moreover, professional nursing commitment and performance appear to be influenced by several organizational factors; therefore, further studies in this field, including a wider variety of variables, are desirable (54). Although the present study can be considered a pilot in both method and purpose, further studies with larger samples will be necessary to generalize the data and the trend between nursing performance and shift work.
Finally, the present findings may offer useful information for nursing leaders, for example with regard to the significant predictors obtained from the current analyses. However, the literature suggests that there is no perfect schedule (55) and that recommendations should be pertinent to specific groups and work systems; each setting has its own specific requirements.
"year": 2022,
"sha1": "ed1962dd469fbd3a99b3ed4b157990a849876a4b",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "031b4c6423e6c79fb1965bb3eea2e8b2589ca131",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14191596 | pes2o/s2orc | v3-fos-license | Erratum to: Theobroma cacao L. pathogenesis-related gene tandem array members show diverse expression dynamics in response to pathogen colonization
Erratum. The original version of the manuscript [1] contained an incorrectly named Criollo gene ID on chromosome 1 in the first sentence under the subheading "Organization of PR gene families into tandem arrays". The second gene on chromosome 1, Tc##_g######, should therefore be Tc01_g000020. References 1. Fister AS, et al. Theobroma cacao L. pathogenesis-related gene tandem array members show diverse expression dynamics in response to pathogen colonization.
Background
Plant-microbe interactions leading to pathogenesis or immunity rely on a complex series of interactions between host and microbial molecules. The process begins when plant membrane-bound pattern recognition receptors (PRRs) detect microbial- or pathogen-associated molecular patterns (MAMPs or PAMPs) [1], or intracellular R genes bind secreted microbial effector proteins [2][3][4]. Recognition of pathogen presence activates multiple signal transduction cascades, including several interacting phytohormone signaling systems [5], which organize local and systemic responses to the infection, including the activation of genes encoding antimicrobial proteins and enzymes involved in the synthesis of secondary metabolites with antimicrobial activities [3,6-9]. Ultimately, the plant's survival hinges on its ability to rapidly produce peptides and chemicals with antimicrobial properties. Understanding this process is integral to breeding for or engineering more resistant plant cultivars, a dire need for improved global food security and sustainable agriculture.
Pathogenesis-related (PR) proteins, or as they have more recently been called, inducible defense-related proteins, have long been studied with regard to their importance in plant immunity [10,11]. The 17 families of genes that fall under the broad 'PR' classification encode a group of proteins with various antimicrobial properties; they were originally identified because certain family members show strong induction in response to biotic stress associated with activation of systemic acquired resistance signaling [10]. Table 1 summarizes the roles of the 17 most commonly acknowledged PR families based on extensive work in a variety of species. Overall, the PR families encode a diverse array of proteins involved in pathogen defense through multiple mechanisms.
A better understanding of the defense response in crop plants is integral to increasing the sustainability of food and feed production. Cacao production around the world is severely limited by cacao's susceptibility to pathogens, with roughly 40% of the crop lost annually, representing a multi-billion-dollar annual loss to the cocoa trade and chocolate industry [12]. Two high-quality cacao genome sequences have been acquired, that of the fine-flavor Belizean Criollo genotype [13] and that of the widely cultivated Matina genotype [14]. These resources enable new genome-wide strategies for characterizing the cacao defense response. To date, a handful of cacao PR genes have been studied, providing strong evidence that they play important roles in the response of cacao plants to pathogen infection. Application of glycerol to cacao leaves was recently found to promote defense and induce PR genes, likely through a fatty-acid-related signaling pathway [15]. The PR-1s of cacao were recently identified, with at least one showing induction by Moniliophthora perniciosa, the causal agent of cacao's witches' broom disease [16]. Specific members of the PR-3 [17,18], PR-4 [19], and PR-10 [20,21] families have also been the subject of functional characterization, focusing on their enzymatic properties and roles in defense. The results of a recent RNA-seq study measuring the induction of genes by witches' broom revealed that PR gene expression was elevated in infected tissues, but their induction (and the induction of other known defense-related genes) was not sufficient to halt disease progression [22]. A study by our group used a microarray to measure the effect of salicylic acid treatment on two cacao genotypes [23]. Notably, we found that PR gene induction levels differed between two contrasting genotypes and, surprisingly, that more PR family members were induced in the more susceptible variety, ICS1, indicating that PR induction is only one piece of a successful defense response. Previously generated EST libraries [24,25] and focused gene expression measurements [19,23] have begun to characterize the genotype specificity of the defense response in cacao, but much more work is required to characterize defense mechanisms across the described cacao populations [26]. Much more work is also required to characterize the tissue specificity, induction, and function of these genes in cacao to understand and harness their potential for combating the diversity of cacao pathogens.

[Fragment of Table 1: ...[11,62,94]. PR-17, putative zinc-metalloproteinase: proteinase function probable, mechanism unclear [11,95].]

With the goal of better understanding the evolution, structure, and expression dynamics of the cacao PR gene families, we carried out a comprehensive annotation and analysis of all PR gene families and characterized their genomic organization and expression in response to pathogens. Using a comparative genomics approach, we found that in cacao and in five other diverse plant species (Arabidopsis thaliana, Brachypodium distachyon, Oryza sativa, Populus trichocarpa, and Vitis vinifera), PR gene family sizes are similar and members are often physically clustered in tandem arrays, with more than half of the family members existing in these arrays. Analyzing existing EST databases, we found support for expression of 62% of the T. cacao PR genes and identified many with expression limited to specific tissues.
Using a whole-genome microarray, we also identified PR gene family members induced by two major cacao pathogens, Phytophthora palmivora [27,28] and Colletotrichum theobromicola [29], the causal agents of black pod rot and anthracnose, respectively. Comparing our new dataset to existing cacao transcriptomic analyses, we identified several PR genes strongly induced by multiple pathogens and treatments, suggesting potential roles as broad-spectrum defense response genes.
Identification of cacao PR gene families
Using the Criollo cacao genome database (cocoagendb.cirad.fr/) [30], we developed a strategy for PR gene identification using the family type members described in van Loon et al. [11]. This bioinformatics approach resulted in a total of 359 PR genes identified in the Criollo genome (Table 2). A graphic representation of the genomic organization of these genes and the chromosomal positions of each of these loci is included in Fig. 1, and detailed information including gene IDs and chromosomal positions is provided in Additional file 1: Table S2. The process of gene identification was repeated for the Matina cacao genome [31]. The Matina PR chromosomal distribution is plotted in Additional file 2: Figure S1, and Matina gene IDs and their positions are listed in Additional file 3: Table S3. Overall, the family sizes and genomic organization of the gene families in the two genomes were similar; however, we observed some differences that could be the result of either chromosomal rearrangements or assembly errors. For the subsequent analysis, we focused on the genes identified in the Criollo genome assembly.
In order to determine whether PR family sizes in cacao were similar to those in other species, we next applied the PR gene identification pipeline to the Arabidopsis thaliana [32], Brachypodium distachyon [33], Populus trichocarpa [34], Oryza sativa [35], and Vitis vinifera [36] genomes. PR genes identified in these species are listed in Additional file 4: Table S4, Additional file 5: Table S5, Additional file 6: Table S6, Additional file 7: Table S7, and Additional file 8: Table S8. We found that in these species, as in cacao, PR genes typically existed as families rather than as single genes, a notable exception being that our strategy only identified one PR-4, PR-8, and PR-10 gene in the Arabidopsis genome. Gene family sizes in cacao (38 genes lay on unassembled scaffolds) correlated well (R² > .85, p < 0.001) with PR family sizes in the other species (Fig. 2). Family sizes in cacao were typical of those in the other dicots, with no major species-specific family expansions or reductions. We also noticed trends of family conservation across the plant genomes: PR-11s were not found in the monocots surveyed (Brachypodium distachyon and Oryza sativa), PR-12s were found only in Arabidopsis and cacao, and PR-13s were found only in the monocots and Arabidopsis. The largest size disparity was in the PR-9s, where the two monocots had ~150 members while the dicots had fewer than 100 members.
Organization of PR gene families into tandem arrays
Criollo gene IDs indicate their order on chromosomes, where the first gene on chromosome 1 is Tc01_g000010, the second Tc01_g000020, etc. We noticed that many of the cacao PR genes were clustered with other members of the same family. To quantify this phenomenon, we defined a tandem array as any two or more genes of the same family that are located within 10 genes of one another [37,38]. Using this parameter, we identified 46 PR tandem arrays containing a total of 181 genes, distributed across all chromosomes (Fig. 1 and Additional file 1: Table S2). The number of genes within each tandem array ranged from 2 to 16 across the families. The largest tandem arrays were a group of PR-10s on chromosome 4 (Chr4PR-10.6, 15 members), a group of PR-16s on chromosome 5 (Chr5PR-16.3, 14 members), a group of PR-11s on chromosome 9 (Chr9PR-11.1, 9 members), and a group of PR-9s on chromosome 2 (Chr2PR-9.5, 9 members). Next, using JBrowse [39], we manually identified tandem arrays for each of the additional five species surveyed. We found that tandem arrays were very common across PR gene families in the diverse plant taxa surveyed (Additional file 9: Table S9), with more than half of the genes in most classes existing in tandem arrays. Proportions of PR family members found in tandem arrays, particularly among dicots, were also similar.
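A minimal sketch of this tandem-array rule (gene ranks and family labels are hypothetical; the published arrays were curated from the annotated genome):

```python
# Group genes of the same family that lie within 10 genes of one another.
def tandem_arrays(genes, window=10):
    """genes: iterable of (rank_on_chromosome, family), sorted by rank."""
    by_family = {}
    for rank, family in genes:
        by_family.setdefault(family, []).append(rank)
    arrays = []
    for family, ranks in by_family.items():
        current = [ranks[0]]
        for r in ranks[1:]:
            if r - current[-1] <= window:   # within 10 genes of the last member
                current.append(r)
            else:
                if len(current) >= 2:
                    arrays.append((family, current))
                current = [r]
        if len(current) >= 2:
            arrays.append((family, current))
    return arrays

genes = [(1, "PR-3"), (3, "PR-3"), (5, "PR-1"), (30, "PR-3"),
         (33, "PR-3"), (35, "PR-3"), (80, "PR-10")]
print(tandem_arrays(genes))   # [('PR-3', [1, 3]), ('PR-3', [30, 33, 35])]
```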
To investigate this phenomenon, we created maximum-likelihood trees for the PR-3 family (Fig. 3), the PR-1 family (Additional file 10: Figure S2), and the PR-4 family (Additional file 11: Figure S3), which include the gene family members from cacao and Arabidopsis thaliana. The PR-3 phylogeny has several well-supported nodes indicating that multiple PR-3 family members existed when Arabidopsis and cacao diverged. Further, the support for the tree suggests that there are three clades within the family. Cacao has tandem arrays in both clades B and C. Bootstrap support in clade B, interestingly, suggests that Tc01_g000770 is more closely related to Tc01_g010350 than it is to its tandem array member, Tc01_g000800. This suggests that, in this scenario, a duplication led to the formation of an additional chitinase gene at the distal end of chromosome 1 after the tandem array had formed. Clade C contains tandem arrays of cacao and Arabidopsis genes. The branch support suggests that members of the Arabidopsis tandem array have continually expanded and diverged over evolutionary time, with strong support for array members split between three subclades. AT1G56690 presents another likely case of a recent non-local duplication, this one to a different chromosome. A fourth subclade contains the four remaining genes. Additional file 12: Table S10, Additional file 13: Table S11, and Additional file 14: Table S12 include matrices of percentage identity for these three PR families, and further demonstrate that tandem array members are often, but not always, most closely related to one another.
Induction of cacao PR gene expression by pathogen colonization
To further our understanding of PR gene expression in cacao, we measured global gene expression after treating plants with two pathogens, P. palmivora and C. theobromicola. Figure 4a and b show scatterplots of log₂-normalized expression for P. palmivora and C. theobromicola treatment, respectively, compared to water treatment for all probes corresponding to PR genes on a whole-genome microarray, revealing that normalized expression values detected by the microarray reflect transcript abundances ranging from very low to very high (Additional file 15: Table S13) in all treatments. As expected, a similar trend was noted when analyzing all probes on the microarray (Additional file 16: Figure S4). For both pathogens, the majority of PR gene probes revealed constitutive expression across treatments, with a large number of genes being up-regulated in pathogen-treated samples and only a few examples of PR gene down-regulation. A total of 67 PR genes were induced by P. palmivora and 45 were induced by C. theobromicola (Benjamini-Hochberg-corrected p < 0.05 [40]) (Table 3). Of the two pathogen treatments, P. palmivora had the stronger effect, in that it generally induced more genes per family and the increase in transcript abundance relative to water-treated samples was greater (Fig. 4c, Additional file 17: Table S14). One exception was the PR-10s; while more of the PR-10 genes were induced by P. palmivora, those induced by both pathogens were equally or more strongly induced by C. theobromicola. A single PR-10 gene (Tc04_g028940) was strongly induced by C. theobromicola (log₂ 3.6-fold increase) but not induced by P. palmivora. For both pathogens, statistically significant PR gene down-regulation was rare: only 7 genes (2 PR-2s, 3 PR-7s, 1 PR-9, and 1 PR-16) were repressed by P. palmivora, and none were by C. theobromicola. There was also significant overlap in the genes differentially regulated by the two pathogens. Forty-two PR genes were affected by both treatments, 32 were uniquely affected by P. palmivora, and 3 were unique to C. theobromicola. A large set of PR genes (159 in P. palmivora-treated samples and 188 in C. theobromicola-treated samples) were found to be expressed at similar levels in water- and pathogen-treated tissues, suggesting that these genes may encode a set of proteins involved in basal defense in cacao, or that they could be specifically induced in other tissues.

Fig. 4 Microarray analysis of pathogen treatment on cacao PR gene expression. Scatterplots of normalized expression values for all PR gene probes, comparing (a) P. palmivora treatment with the water-treated control and (b) C. theobromicola with the water-treated control. (c) Heatmap showing fold change in transcript abundance after pathogen treatments compared to the water-treated control for all 359 Criollo PR genes. Black bars correspond to genes with non-significant (Benjamini-Hochberg p > 0.05) fold change or genes removed from analysis in background filtration.
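For reference, a compact implementation of the Benjamini-Hochberg correction used above (the p-values here are illustrative; statsmodels' multipletests offers the same procedure off the shelf):

```python
# Benjamini-Hochberg FDR adjustment of raw p-values.
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (q-values) for an array of raw p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downwards
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty_like(q)
    out[order] = np.clip(q, 0.0, 1.0)
    return out

raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
q = benjamini_hochberg(raw)
print([f"{x:.3f}" for x in q])
print("significant at q < 0.05:", int((q < 0.05).sum()))
```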
qRT-PCR validation of microarray results
To support the findings of our microarray analysis, we performed qRT-PCR on select genes from four families. Because family members, and tandem array members in particular, often share high sequence similarity, with this analysis we sought to verify the specificity of the microarray probes, as well as to confirm the induction of genes of interest. Our analysis included 30 genes: 14 PR-1s, 6 PR-3s, 7 PR-4s, and 3 PR-10s (Table 4). Primer sequences for qRT-PCR are listed in Additional file 18: Table S15. Generally, the qRT-PCR results verified the induction of genes with statistically significant induction detected on the microarray, although the degree of induction was often underestimated by the microarray measurement, as is often observed. By designing highly specific qRT-PCR primers, we were able to verify the induction of multiple gene family members, and even tandem array members, in the PR-3 and PR-4 families. Members of a single array showed induction ranging from 8-fold to 5000-fold. Of the tested PR-10s, all verified the trend of equally strong induction by the two pathogens or greater induction by C. theobromicola.
Discussion
The role of PR genes in mediating resistance to disease has been well studied in a wide variety of model and crop plant species [11,[41][42][43]. These proteins are grouped together based on their increased accumulation in response to activation of systemic acquired resistance pathways and their roles in plant defense. Our analysis of the PR gene families of T. cacao resulted in the identification of multigene families for 15 classes of PR proteins. These gene families include about 350 genes that are distributed throughout the genome. About 50% of the cacao PR genes are found in arrays of tandemly duplicated genes, and many family members, even within tandem arrays, exhibited varying levels of inducibility by pathogen treatment. The PR gene families of five other plant species shared these structural features with cacao, suggesting that PR tandem arrays are highly conserved within most if not all higher plants. The high degree of correlation in family sizes suggests that similar evolutionary forces have likely acted on diverse plant genera, likely indicating that PR family expansions have been beneficial to land plant survival. This body of work provides strong evidence that gene duplication and neo-functionalization, particularly with regard to expression dynamics, have played major roles in shaping the genomics of the plant defense response. Local duplications arise through various mechanisms including polymerase slippage, unequal crossing over, and transposon movement, and local duplications are known to contribute to eukaryotic evolution by increasing genetic diversity [37,44]. Organization of PR genes into tandem arrays has been described for several plants and PR families, including PR-7s in tomato [45], PR-10s in grape [46], PR-12s in Arabidopsis [47], PR-1s in Arabidopsis and rice [11], and PR-16s in rice [48]. The physical clustering of PR-4s in cacao was also previously described [19]. Tandem duplications have also been shown to play a key role in the evolution of Resistance (R) gene families [49,50]; they are particularly common in the NBS-LRR class of R genes, as well as in PR-1s, thaumatins, germins, and major latex proteins in Arabidopsis [51]. Here we demonstrate that this clustering is common across PR families. Correlation analysis of family size indicates that sizes are similar across diverse plant taxa, indicating that expanded families are common and likely selectively beneficial in higher plants. Our phylogenetic analysis of the PR-1, PR-3, and PR-4 families suggests that these families have continually expanded both locally and inter-chromosomally over land plant evolution, although further investigation of the expansion of certain subclades in different species is necessary to explain the functional dynamics of family expansion. Gene family expansions have a complicated interplay with expression dynamics. Employing our microarray analyses, we observed unique expression dynamics within groups of family members with very high percent identity. The data presented here suggest that in some cases single genes within tandem arrays are induced by a given pathogen, while in other tandem arrays two or more genes can be induced by the same stimulus. Large tandem arrays of PR-10s (Chr4PR-10.6, 15 members) and PR-16s (Chr5PR-16.3, 14 members) have members ranging from constitutive low expression to constitutive high expression, with a few showing inducibility by pathogens.
Consequently, the evolutionary dynamics of family members after a duplication event remain unclear, but several mechanisms are likely at play in a scenario-specific manner. First, selection could favor a greater concentration of antimicrobial peptides produced in a given tissue, leading to multiple family members exhibiting similar protein structure and expression patterns. Our microarray analyses revealed several cases that could support this model; for example, four PR-3s that make up a tandem array were all induced by P. palmivora. Alternatively, mutations affecting nearby regulatory machinery or the coding sequence of a gene could result in new tissue specificity or new binding/enzymatic activity of a protein. Our microarray dataset showed that only one of six PR-1s in a tandem array was induced by pathogen treatment, suggesting the others have alternative functions or tissue specificities, or are in the process of becoming pseudogenes. Evolutionary studies have revealed that products of small-scale duplications diverge in expression more rapidly than they do in protein structure [52], with the age of paralogs correlating with their divergence in expression in Arabidopsis [53,54] and rice [55]. For defense genes, divergence in expression patterns could be beneficial, decreasing the metabolic burden associated with mounting a defense response in tissues distal to the site of infection. Further work, particularly RNA-seq experiments across a wide range of tissue types, would allow a more comprehensive dissection of the functional patterns associated with this gene organization. In silico promoter analysis may be a means of identifying a mechanism underlying the expression dynamics of tandem arrays.
Teixeira et al. [22] previously reported the induction of more than 67 PR genes after infection of cacao plants with Moniliophthora perniciosa, but the induction did not eliminate pathogen colonization. Similarly, the induction that we see here did not halt infection, but likely slowed the pathogens' progress. These transcriptomic experiments identify candidate genes that require functional characterization to better understand the roles of PR proteins against the diversity of cacao's pathogens. The infection and microarray analysis we performed with oomycete (P. palmivora) and fungal (C. theobromicola) pathogens confirms the induction of 67 and 45 PR genes by the respective pathogen treatments. However, the majority of the PR genes had stable expression across treatments under our experimental conditions. Analysis of other tissues may reveal that a subset of those genes have tissue specificity in their basal expression and inducibility. The existence of PR family members with constitutively high expression could suggest that certain family members have evolved to act as a preliminary line of defense. For example, two PR-3s (Tc06_g000490 and Tc04_g029180) had very high expression in water-treated samples. Constitutive high-level expression in leaves may allow the plant to begin degrading the chitin of invading pathogens before PAMP- or R-gene-mediated signal transduction can elevate the expression of induced defenses. Knockdown or deletion of these constitutive high-expressors followed by pathogen challenge would demonstrate the role of basal defense components. Broadly, we saw a more dramatic defense response in samples infected with P. palmivora than in those infected with C. theobromicola, with more genes up-regulated and their degree of induction greater. The microarray and qRT-PCR analyses indicated that the PR-10 family deviates from this trend, with members showing equal or more dramatic induction by C. theobromicola than by P. palmivora. The PR-10 member Tc04_g028860 is particularly noteworthy, showing 96-fold induction by C. theobromicola treatment, about four times its induction by P. palmivora treatment. While it is possible that these differences reflect pathogen-specific responses, we cannot rule out the possibility that they result from the different speeds with which the two pathogens colonize the host.
Induction of PR-1 genes is a hallmark of plant defense activation. While they belong to the well-studied Sperm Coating Protein/Tpx-1/Ag5/PR-1/Sc7 (SCP/TAPS) group [56], a sub-group of the cysteine-rich secretory protein superfamily, little is known about their biological function [57]. Our analysis indicates that TcPR1-g (Tc10_g000980), which was previously reported to be induced in tissue infected with witches' broom [16], was not induced under our experimental conditions. This lack of induction by P. palmivora and C. theobromicola suggests that family member activation may differ between pathogens. Another example is the PR-1 Tc02_g002410, which was not induced by witches' broom but was induced by both P. palmivora and C. theobromicola. Our qRT-PCR experiment validated strong induction of only this gene (>700-fold by P. palmivora and >50-fold by C. theobromicola), and confirmed the low expression of Tc10_g000980 across all samples. The specificity of the reaction is interesting, but all the more puzzling as the function of PR-1s in plants remains unclear.
PR-3 family member expression was also of particular interest because of our prior work with a class I chitinase (Tc02_g003890) [17]. Here we report the induction of several other PR-3s. A tandem array on chromosome four (Chr4PR-3.4) was notable in that multiple members were induced by both pathogens, suggesting that, in this case, proximity may contribute to their co-expression, and that these proteins may act in a coordinated fashion to defend the plant against both of the tested pathogens. While chitin is significantly less abundant in the cell walls of oomycetes than of fungi, and its function in oomycetes is not well understood, recent evidence suggests that chitin synthase enzymes are active in hyphal tips, where chitin may play a role in cell wall structure [58]. Further, inhibition of these chitin synthases with nikkomycin Z led to bursting of hyphal tips and cell death. Accordingly, the induction of chitinases in plants by oomycete treatment may reflect an important defense process: inhibition of hyphal tip growth.
Interestingly, our earlier work showed that stable overexpression of Tc02_g003890, a class I chitinase, in transgenic cacao plants resulted in increased resistance of leaves to Colletotrichum gloeosporioides [17]. The same gene was also up-regulated in the highly disease-susceptible genotype ICS1 by treating leaves with salicylic acid (SA) [23], and we found that its transient overexpression in cacao leaves increases resistance to P. capsici [18]. The qRT-PCR we performed here did not verify its induction by treatment with P. palmivora or C. theobromicola, suggesting that this gene may respond to SA but not to these two pathogens. This result suggests that the underlying mechanisms of these plant-pathogen interactions are complex and that further research is necessary to unravel the specific mechanisms involved. One possibility is that the pathogens are able to suppress SA-induced gene expression via secretion of pathogen effector proteins, as has been seen in other systems [59].
Cacao PR-4s were also recently identified [19]. Pereira-Menezes et al. [19] built upon an earlier EST database [25] by characterizing genotype specificity in the speed and level of induction of PR-4b (Tc05_g027210), which shows anti-fungal activity dependent on its RNase activity, in a resistant (TSH1188) and a susceptible (Catongo) genotype. Our microarray and qRT-PCR results indicate that this gene was also induced by P. palmivora (more than 1000-fold) and C. theobromicola (roughly 20-fold), showing one of the strongest inductions of the genes tested with qRT-PCR. Its induction by a variety of pathogens makes it a critical candidate for further study. Analyses similar to Pereira-Menezes et al.'s work across a broader set of genotypes are required to validate the importance of the genes described here. Assaying the effect of over-expression or knockout of this gene would be useful for defining the roles of single genes within these families.
We observed a few differences in organization when comparing two different varieties of cacao. The two varieties compared in this study are representatives of distinct genetic clusters that developed over T. cacao's evolution and are thought to have diverged because of the presence of geological barriers [31]. Consequently, it is possible that these two genotypes, having been subjected to different pathogens over their evolutionary history and having experienced unique selective pressures applied by domestication after cultivation of cacao began, have undergone unique duplications or translocations altering gene organization. Indeed, our identification of PR genes in the two genomes may support this hypothesis, as gene counts within families differ between the two genomes, and, while the positions of the genes are generally consistent, some chromosomal rearrangement appears to have occurred. It is possible, however, that these differences result from genome assembly strategies. Analysis of additional cacao genome sequences from other genetic groups [31] would help resolve these possibilities.
As induction of PR genes is a hallmark of the defense response in many plant species, their identification in cacao is critical to the study of cacao's defense response. Our finding that PR gene family size and organization into tandem arrays are consistent across diverse plant species suggests that the diverse expression patterns seen within families in other species are likely similar to those we have described in cacao. Therefore, this study lays a foundation of defense gene expression knowledge upon which functional molecular genetic approaches can be based. The genes identified here, once functionally verified, will be useful in breeding cacao cultivars with superior resistance to pathogens.
Conclusions
In this study we identified 359 PR genes in the cacao genome, and found that approximately half of these physically cluster into tandem arrays with other members of the same PR family. Physical clustering of PR genes into tandem arrays was also identified in five diverse plant species. Using a whole genome microarray and qRT-PCR to measure the induction of genes by two cacao pathogens, we identified which PR genes are induced in leaf tissue by pathogens, and we identified differences in basal expression within PR families. This work is critical in improving the understanding of the defense response in cacao, and it provides a list of key candidate defense genes that will be the focus of future molecular characterization.
Methods

Theobroma cacao PR gene identification and filtration
Amino acid sequences for the type members of each PR gene family (Additional file 19: Table S1) were used as queries to search the Criollo genome database using BLASTp (cutoff E < 1e-5, BLOSUM62 matrix) [60]. Using this strategy, we identified putative genes in 15 of the 17 known plant PR protein classes. PR-13s were not identified in the Criollo genome (they are specific to monocots and a subset of dicots [61]), and PR-15s are also considered to be monocot specific, although the BLASTp search finds them in the Criollo genome because of their homology with PR-16s [62]. Next, a custom Python (python.org) [63] script (PRAminoacidgetterASF) was used to extract protein IDs from the BLASTp output and use them to retrieve the corresponding peptide sequences from the Criollo cacao genome database.
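The custom script named above is not reproduced in the paper; as a hedged sketch of the ID-extraction step, the following code collects subject IDs from tabular BLASTp output (-outfmt 6) that pass the stated E < 1e-5 cutoff and pulls the matching peptides from the genome's protein FASTA with Biopython. File names are illustrative.

```python
# Sketch of the PRAminoacidgetterASF step (assumed file names and tabular
# BLAST output); in -outfmt 6, column 2 is the hit ID and column 11 the E-value.
from Bio import SeqIO

hits = set()
with open("pr_vs_criollo.blastp.tsv") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("\t")
        subject_id, evalue = fields[1], float(fields[10])
        if evalue < 1e-5:
            hits.add(subject_id)

records = [r for r in SeqIO.parse("criollo_proteins.fasta", "fasta")
           if r.id in hits]
SeqIO.write(records, "putative_pr_proteins.fasta", "fasta")
print(f"{len(records)} putative PR proteins extracted")
```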
The list of amino acid sequences was uploaded to the NCBI Batch Web CD-Search Tool (v3.13) [64] with an E-value cutoff of 0.01. Another script (PRdomainsorterASF) was used to sort the output of the CD-Search with the gene IDs and BLASTp E-values of putative PR genes. Polypeptides were manually curated for the presence of the domains used by Wanderly-Nogueira et al. [43] to classify each family. For the PR-6 family, we used the presence of the "potato-inhibitor family domain" (pfam00280) to screen putative cacao PR genes, as it is the only domain found in the type member sequence. Putative PR genes missing the characteristic domains were removed, and the remaining genes are listed in Additional file 1: Table S2.
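As a sketch of the domain-screening step (the PRdomainsorterASF script itself is not public), the code below keeps only candidates whose CD-Search hit table contains a family's diagnostic domain, using the PR-6 domain pfam00280 named above as the example; the column layout of the tab-separated CD-Search export is an assumption.

```python
# Screen CD-Search hits for a required domain accession (assumed column order
# of the Batch CD-Search hit table: query in column 1, accession in column 8).
REQUIRED_DOMAIN = "pfam00280"   # potato-inhibitor family domain (PR-6)

genes_with_domain = set()
with open("cdsearch_hitdata.txt") as fh:
    for line in fh:
        if line.startswith(("#", "Query")) or not line.strip():
            continue  # skip headers and blank lines
        fields = line.rstrip("\n").split("\t")
        query, accession = fields[0], fields[7]
        if accession == REQUIRED_DOMAIN:
            genes_with_domain.add(query)

print(f"{len(genes_with_domain)} putative PR-6 genes retain {REQUIRED_DOMAIN}")
```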
This process was repeated for the Matina cacao genome [14]. In order to compare PR gene distribution in the genomes, a third Python script (PRstartstopfinderASF) was used to retrieve positional information from the Criollo and Matina GFF files. These data were plotted in Fig. 1 (Criollo) and Additional file 2: Figure S1 (Matina) using the R packages ggplot2 [65] and ggbio [66], and gene positional information is also included in Additional file 1: Table S2 and Additional file 3: Table S3.
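A minimal sketch of the positional-retrieval step (the role of PRstartstopfinderASF) is given below; it assumes standard GFF3 formatting with gene IDs in an "ID=" attribute, and the curated PR gene IDs in a plain-text file.

```python
# Retrieve (chromosome, start, end, strand) for each curated PR gene from a
# GFF3 annotation; file names are illustrative.
pr_ids = {line.strip() for line in open("pr_gene_ids.txt") if line.strip()}

positions = {}
with open("criollo_genome.gff3") as gff:
    for line in gff:
        if line.startswith("#"):
            continue
        chrom, _, feature, start, end, _, strand, _, attrs = \
            line.rstrip("\n").split("\t")
        if feature != "gene":
            continue
        attr_map = dict(kv.split("=", 1) for kv in attrs.split(";") if "=" in kv)
        gene_id = attr_map.get("ID")
        if gene_id in pr_ids:
            positions[gene_id] = (chrom, int(start), int(end), strand)

print(f"positions recovered for {len(positions)} PR genes")
```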
PR gene identification in other plant species
Using the same type member queries, BLASTp searches were performed, with the same parameters, against predicted polypeptide sequences downloaded from Phytozome v10.3 (Goodstein et al., 2012) for the Arabidopsis thaliana (TAIR10), Brachypodium distachyon (v3.1), Oryza sativa (v7.0), Populus trichocarpa (v3.0), and Vitis vinifera (Genoscope 12×) genomes. The procedure described above (CD-Search screening and manual curation) was then used to organize the PR genes and count the number of genes per class. Tandem arrays were manually identified using JBrowse [39] in Phytozome v10.3 [67]. For all species, the PR-15 and PR-16 lists were largely redundant because of the homology of the two families, but PR-15s are monocot specific and should therefore only be present in Brachypodium distachyon and Oryza sativa. Therefore, for plotting gene family sizes in Fig. 2, these two families were combined. Gene IDs and BLASTp E-values for the identified genes in these species are listed in Additional file 4: Table S4, Additional file 5: Table S5, Additional file 6: Table S6, Additional file 7: Table S7, and Additional file 8: Table S8.
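The paper identified tandem arrays manually in JBrowse; as a rough programmatic proxy (an assumed criterion, not the authors'), the sketch below flags runs of two or more same-family PR genes that are adjacent in the gene order along a chromosome.

```python
# Flag candidate tandem arrays: consecutive same-family genes in positional
# order on one chromosome (toy input; a real run would load the curated list).
from itertools import groupby

# (gene_id, family) pairs already sorted by start coordinate
genes = [("g1", "PR-3"), ("g2", "PR-3"), ("g3", "PR-3"),
         ("g4", "PR-1"), ("g5", "PR-3")]

arrays = []
for family, run in groupby(genes, key=lambda g: g[1]):
    members = [gid for gid, _ in run]
    if len(members) >= 2:
        arrays.append((family, members))

print(arrays)  # -> [('PR-3', ['g1', 'g2', 'g3'])]
```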
Building PR-1, PR-3 and PR-4 phylogenies
To construct phylogenies, nucleotide sequences of family members for PR-1, PR-3, and PR-4 from the Criollo genome and primary transcripts from Arabidopsis (TAIR10) [32] were aligned using the MUSCLE [68] translational alignment function in Geneious [69] with eight iterations. Alignments were manually curated. No adjustments were made to the PR-1 or PR-3 families, but Tc05_g027340 was removed from the PR-4 alignment as it appears to contain annotation errors in intron prediction. Maximum-likelihood trees were generated in Geneious using the RAxML [70] plugin.
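The alignments and trees were produced inside Geneious; an equivalent command-line pipeline might look like the sketch below, with illustrative MUSCLE (v3-style) and RAxML options rather than the authors' exact settings.

```python
# Illustrative command-line version of the align-then-tree workflow.
import subprocess

# Multiple sequence alignment of the PR-3 coding sequences
subprocess.run(["muscle", "-in", "pr3_cds.fasta", "-out", "pr3_aln.fasta"],
               check=True)

# Maximum-likelihood tree with rapid bootstrapping in RAxML
subprocess.run(["raxmlHPC", "-s", "pr3_aln.fasta", "-n", "pr3_ml",
                "-m", "GTRGAMMA",      # nucleotide substitution model
                "-f", "a",             # rapid bootstrap + best ML tree search
                "-x", "12345", "-p", "12345", "-N", "100"],
               check=True)
```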
Plant growth, infection, and RNA extraction
The seeds used for generating the plants for the experiment were collected under Panamanian Authority of the Environment (ANAM) scientific permit SE/AH-1-11. Seeds from open-pollinated T. cacao mother trees, accession UF12, were collected from a plantation in Charagre, Bocas del Toro province, Panama. The seeds were surface sterilized by immersing them in 0.5% sodium hypochlorite for three minutes and rinsed with sterile water before being placed for germination in plastic trays with soil (a 2:1 mixture of clay-rich soil from Barro Colorado Island, Panama, and rinsed river sand). One-month-old seedlings were transplanted to individual pots (600 ml volume) containing the same soil mixture. Germination of seeds and seedling growth took place in growth chambers (model I35LL, 115 volts, 1/4 Hp, series: 8503122.16, Percival Scientific, Inc., Perry, IA) with a 12/12 h light/dark photoperiod and day/night temperatures of 30 °C and 26 °C, respectively [71].
Two-month-old seedlings, with approximately six leaves each, were spray-inoculated with conidia of Colletotrichum theobromicola or zoospores of Phytophthora palmivora. Conidia of C. theobromicola were produced using the same methods as in [71] for the production of other species of Colletotrichum, and zoospores were produced as in [72]. Whole seedlings were sprayed either with pathogen inoculum (P. palmivora isolate PTP zoospores at 5 × 10^4 per ml or C. theobromicola isolate ER08-11 conidia at 2 × 10^7 per ml) or with sterile distilled water (controls) and then placed back into the growth chamber, but only leaves in stage C [73] at the time of inoculation were considered as targets for the experiment. C. theobromicola and P. palmivora were re-isolated from lesions that developed on inoculated leaves. Samples were harvested at 72 h post-inoculation for RNA extraction, and tissue at this time point was used to re-isolate the pathogens, which was considered a measure of successful inoculation. Leaves sprayed with water remained healthy, did not develop lesions, and no pathogens were re-isolated from them. Representative photographs of infected and control leaves are shown in Additional file 20: Figure S5. Four seedlings received each treatment, and five leaf samples were collected from each group of four seedlings. Each biological replicate consisted of a single individual leaf. Target leaves were cut from the plant with scissors, immediately weighed, and placed in RNAlater solution in borosilicate vials following the manufacturer's instructions (Applied Biosystems/Ambion, Austin, TX). Vials containing samples were shipped to PSU on dry ice, where RNA extractions were performed using a previously described protocol [74]. Total RNA concentration and purity were assessed using a NanoDrop spectrophotometer, and RNA quality was determined using an Agilent Bioanalyzer.
Microarray analysis
Transcriptomic analysis was performed using a whole-genome Roche NimbleGen custom oligo expression array (platform GPL18356), which was previously described in [75]. Probe labeling, hybridization, and detection were performed at the Penn State Genomics Core Facility, and the statistical analysis of the microarray data was performed as previously described [75]. Briefly, the Bioconductor package [76] was used in R to perform quality-control checks and calculate normalized expression values using the RMA procedure. Normalized expression values were plotted to ensure that all replicates for a given treatment had similar expression patterns. These data are available on GEO (GSE73804). In calculating fold induction, probes with mean log2 expression values (across all samples) of less than 6 were removed. The LIMMA package [77,78] was then used to calculate fold induction on a per-probe basis and to calculate a Bayesian moderated test statistic for each comparison (pathogen treatments relative to the water treatment). A Benjamini-Hochberg multiple-testing correction [40] was then applied, and probes with Benjamini-Hochberg p < 0.05 were considered significant. In identifying individual PR genes with statistically significant differential regulation, for any gene with multiple probes showing statistically significant change, the fold change was recalculated by averaging across all significant probes.
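Although the published pipeline is in R (Bioconductor/LIMMA), the two filtering steps described above can be sketched in Python for illustration: dropping probes with mean log2 expression below 6, and applying the Benjamini-Hochberg correction to per-probe p-values (random numbers stand in for real data).

```python
# Background filtration + Benjamini-Hochberg correction (toy data only).
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
log2_expr = rng.normal(7.0, 2.0, size=(1000, 10))  # probes x samples
pvals = rng.uniform(size=1000)                     # per-probe p-values

keep = log2_expr.mean(axis=1) >= 6                 # remove low-expression probes
reject, p_adj, _, _ = multipletests(pvals[keep], alpha=0.05, method="fdr_bh")
print(f"{keep.sum()} probes kept; {reject.sum()} significant after BH")
```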
cDNA synthesis and qRT-PCR validation of microarray
One microgram of RNA from each of the five samples from each treatment was reverse transcribed by M-MuLV Reverse Transcriptase (New England Biolabs, Ipswich, MA, USA) with oligo-(dT)15 primers to obtain cDNA. To create highly specific primers for PR gene family members, nucleotide sequences for the PR-1, PR-3, PR-4, and PR-10 families were aligned using MUSCLE [68] in Geneious [69]. qRT-PCR primers were designed to target bases that differentiate family members. Primer sequences are listed in Additional file 18: Table S15. qRT-PCR was performed in a total reaction volume of 10 μL. Data normalization, a statistical randomization test, and relative pathogen-treated vs. water-treated expression ratios were computed using REST [64]. Fold changes with p-values less than 0.05 were considered significant.
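REST performs efficiency correction and a randomization test; as a simpler point-estimate illustration only, the standard 2^-ΔΔCt calculation is sketched below with made-up Ct values.

```python
# 2^-ddCt fold change (Livak method); Ct values are illustrative, and the
# statistical testing performed by REST is omitted here.
ct_target_treated, ct_ref_treated = 22.1, 18.0
ct_target_control, ct_ref_control = 27.4, 18.2

ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
fold_change = 2 ** (-ddct)
print(f"fold change ~ {fold_change:.1f}")   # >1 indicates induction
```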
Ethics approval
As the study did not include any human or animal participants, no ethics approval was required.
Consent to publish
As no human participants were involved in the study, no consent was required.
Availability of data
Microarray data are available at NCBI (GEO: GSE73804). The Criollo cacao genome is available at http://cocoagendb.cirad.fr/ and the Matina cacao genome, A. thaliana, B. distachyon, O. sativa, P. trichocarpa, and V. vinifera genomes are accessible through Phytozome.
Additional files
Additional file 1: Table S2. Gene IDs and positions of Criollo PR genes mapped to the ten cacao chromosomes. Those not mapped to the ten chromosomes are appended to the bottom of the list without positional information. (PDF 4169 kb) Additional file 2: Figure S1. Karyogram depicting the position of PR genes along the length of the chromosomes based on the Matina genome sequence. Due to the resolution of the image, lines representing nearby genes partially overlap. (PDF 4169 kb) Additional file 3: Table S3. Gene IDs and positions of Matina PR genes mapped to the ten cacao chromosomes. Those not mapped to the ten | 2017-08-03T01:29:56.746Z | 2016-09-07T00:00:00.000 | {
"year": 2016,
"sha1": "523d1f3e59bed9c63bfda837d18a928c27830d7f",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-016-3073-8",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cac333f30ce5aea02441d1f5b468ab80ea227e61",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
55643442 | pes2o/s2orc | v3-fos-license | Symmetry analysis of magnetic structures on the microscopic and macroscopic methods of E.F. BERTAUT
Magnetic long range ordering was questioned experimentally and theoretically very early, by Weiss [1], Dirac [2], Heisenberg [3], Van Vleck [4] and others, before the hypothesis of antiferromagnetism was developed by Néel [5], and before the hypothesis of Rutherford [6] concerning the existence of the neutron was confirmed by Chadwick [7]. As a revelation, the experimental proof of antiferromagnetic (AF) ordering in MnO demonstrated by Shull and Smart [8] highlighted the unique power of neutron scattering in definitively confirming the prediction of Néel. At that critical period, Bertaut emerged rapidly as one of the most prolific and brilliant solid state chemists and physicists aiming to relate systematically the physical properties of solid state materials to crystal structure symmetry.
INTRODUCTION
Magnetic long range ordering was questioned experimentally and theoretically very early, by Weiss [1], Dirac [2], Heisenberg [3], Van Vleck [4] and others, before the hypothesis of antiferromagnetism was developed by Néel [5], and before the hypothesis of Rutherford [6] concerning the existence of the neutron was confirmed by Chadwick [7]. As a revelation, the experimental proof of antiferromagnetic (AF) ordering in MnO demonstrated by Shull and Smart [8] highlighted the unique power of neutron scattering in definitively confirming the prediction of Néel. At that critical period, Bertaut emerged rapidly as one of the most prolific and brilliant solid state chemists and physicists aiming to relate systematically the physical properties of solid state materials to crystal structure symmetry.
E. F. BERTAUT'S LIFE BEFORE ACADEMIC RESEARCH
Erwin Lewy was born into a Jewish germanophile family on Feb. 9th, 1913, in the small town of Leobschutz (now Glubczyce, Poland), Upper Silesia. Excellent in music and mathematics, he nevertheless began law studies in Breslau (Wroclaw). He read "Mein Kampf" and experienced the fascist climate developing at the time. After being injured by "Nazi youths", in 1933 he decided to emigrate to Paris and then to Bordeaux (France), where he studied chemistry at the Ecole de Chimie de Bordeaux. A graduate chemical engineer, he obtained French citizenship in 1936. He worked as a chemist at the company "La Cellulose du Pin" where, among several activities, he investigated hydrogenation processes under high pressure assisted by catalysts such as Raney nickel.
Most of his close family who had joined him in Bordeaux were arrested in 1940, then deported and exterminated in Auschwitz.
He enlisted early in the French army and, when later demobilised, an officer provided him with a new identity card under the name Felix Bertaut. Working in Paris at the Service des Poudres in 1941-1942, Bertaut learned crystallography under the supervision of Prof. J.P. Matthieu. Prof. A. Kastler, however, suggested that he leave Paris and recommended him to Prof. L. Néel, who hosted him in Grenoble, an unforgettable opportunity that opened the exceptional career of Erwin Felix Lewy-Bertaut.
E. F. BERTAUT'S TRIBUTE TO SOLID STATE CHEMISTRY AND PHYSICS
Bertaut was soon introduced to the field of magnetic properties of materials, since his thesis work [9], supervised by Néel, consisted mostly in the determination of crystallite size in ferromagnetic compounds, a size parameter relevant to coercivity, an extrinsic property of hard magnets. Bertaut immediately developed an original approach to computing the broadened profile of X-ray diffraction lines by Fourier transform analysis, his method being more effective than the most frequently cited one by Warren and Averbach [10].
As said above, in the same year the pioneering neutron diffraction experiment by Shull and Smart [8] was published, confirming with elegance the theoretical prediction of antiferromagnetic sublattices by Néel [5]. So in 1951 Bertaut moved first to the USA; then, in 1953, he was granted a Fulbright fellowship to spend a year at Brookhaven National Laboratory studying neutron scattering science. Two years later, Néel initiated the creation of the Centre d'Etudes Nucléaires de Grenoble (CENG, later CEA-Grenoble), where two neutron research reactors, Mélusine (up to 12 MW, 1958) and Siloé (up to 32 MW, 1962), were successively operated. Bertaut was in charge of their instrumental development. During this period as well, Bertaut started his intense, continuous work on the symmetry analysis of magnetic structures. Those milestones of his long-term academic activity will be detailed further in the following parts of the present article.
Parallel to his main activity devoted to neutron diffraction and the symmetry analysis of magnetic configurations, Bertaut continued to exploit his impressive facility for crystallography. First, he built a new reciprocal-space formalism to determine the electrostatic energy of crystalline systems [11]. Bertaut's method was demonstrated to have a much better effective convergence than that of Ewald, built earlier on the basis of direct-space operations [12]. During this period, another important contribution of Bertaut to crystal structure determination was to build a general algebra of the structure factors for a given crystal, accounting for the equivalent site positions. The linearised products built from these structure factors thus led him to determine statistics and extinction rules [13]. His contribution to the so-called direct methods for crystal structure determination, just proposed by, e.g., Zachariasen [14] and Wilson [15], was based on the search for symmetry relationships existing between the structure factors. From this early work, and many following works as an eminent crystallographer, Bertaut was invited to refound the basis of the International Tables for Crystallography in close cooperation with Hahn [16].
All along his prolific career, Bertaut systematically felt concerned with all aspects of solid state chemistry and solid state physics, promptly inciting research on new materials, the use of original synthesis methods (e.g., large single crystals, as at LETI), and the study of novel phenomena in magnetism, superconductivity, metal-insulator transitions, and X-ray magnetic scattering (demonstrated experimentally in his laboratory by de Bergevin and Brunel [17]). His interests in experimental chemistry and crystallography concerned a large panel of solid state compounds, from alloys and intermetallics, hydrides, carbides, pnictides, chalcogenides, and halides, to complex oxides (e.g., the famous garnets, and many others with rare earth and actinide elements), borates, silicates, phosphates, etc.
To serve both his large experimental and conceptual scientific appetites, he actively promoted the development of novel techniques, such as neutron time-of-flight, the use of polarised neutrons at Mélusine, in-laboratory and in-beam high pressure equipment, and computer crystal structure determination in 1964. In parallel, Bertaut laid the groundwork for new large-scale instruments, participating directly, from early 1963, in initiating the Institut Laue-Langevin, erected in 1971. In 1981 he strongly supported the project SIREM (Source Intense de Rayonnement Electromagnétique), a proposal for a French synchrotron in Grenoble, rescaled a short time later into the international proposal for the European Synchrotron Radiation Facility (1984).
However, throughout this period of intense multidisciplinary activity, symmetry analysis remained his main guideline, aiming to better understand a wide panel of solid state materials, from chemistry principles up to the sophisticated instruments built to establish and finely detail the correlations within matter.
BACKGROUND
Magnetic energy, magnetic couplings, magnetic ordering, group theory representations, etc., are key expressions which were employed for different purposes at different periods, at times used in opposition to build competing models, and later recognized to converge toward a unique point of view, thanks to well-known pioneers in magnetism and crystallography, and thanks to Bertaut for his synthetic reading and interpretation.
Soon after the definition of the electron and orbital moments and the spectroscopic rules (Pauli principle, Hund's rules), which led to attributing a permanent magnetic moment to well-defined classes of elements, the thermal analysis of magnetism by Weiss opened the question of exchange forces [1], whereas Dirac [2] and then Heisenberg [3] formulated the notion of an exchange Hamiltonian to describe a ferromagnetic material via a representative energy. This can be called the analytic or thermodynamic point of view on magnetic correlations and magnetic ordering stability. It stems from the concept of the molecular field used successfully by Néel to predict antiferromagnetism [5] and developed successively by Van Vleck [4], Kittel [18], Villain [19], etc., for more and more different cases, up to non-collinear magnetic configurations.
On the other side, a fully mathematical analysis of symmetry was proposed by Herring, as a theory of group representations [20]. The Landau school of phase transitions aimed at describing magnetic space-invariant expressions to be considered in a more generalised Hamiltonian. Then Belov [21], Kovalev [22], and Dzyaloshinski [23] opened the way to a crystallographic approach. They were followed by more synthetic analyses in terms of symmetry operators by Vonsovski and Turov [24], Moriya [25], Opechowski and Guccione [26], etc.
Since the hypothesis of the neutron by Rutherford [6], confirmed experimentally by Chadwick [7], Shull and Smart proved neutron scattering to be a unique tool for visualising magnetic orderings in materials (e.g., the AF ordering of MnO) [8]. Then, during the period 1953-1958, Bertaut, as a fresh crystallographer, moved very actively between neutron diffraction experiments and the building of configuration models using the two main approaches, publishing his first synthetic analyses in 1960-1969, later rationalised in several main papers [27].
The microscopic method of Bertaut [28]
Roughly between 1961 [29] and 1973 [28], Bertaut developed successively more sophisticated mathematics enabling the determination of all possible sets of moment configurations according to the local environment of a given magnetic atom R, on the early basis of the molecular field model expressed in the Hamiltonian of Eq. (I),

W = −Σ_{R≠R'} J_{RR'} S_R · S_{R'}    (I)

The method accounts for the different Bravais lattices R carrying magnetic moments S_R; only the permutation group operators are then involved, and J is a scalar.
To determine the equilibrium condition of the "oscillating" moment S_R at position R, one differentiates (I) under the constraint S_R² = constant. At equilibrium (dS_R/dt = 0), S_R must be parallel to the molecular field Σ_{R'} J_{RR'} S_{R'}, the parallelism condition corresponding to

λ_R σ_R = Σ_{R'} J(RR') σ_{R'}    (II)

with J(RR') = S_R J_{RR'} S_{R'}, the unit spin being defined as σ = S/S. λ_R is a real number, as Bertaut said a Lagrange parameter, or the contribution of the R atom to the exchange energy via

λ_R = Σ_{R'} J(RR') σ_R · σ_{R'}    (III)

λ_R and H are invariant under the crystal symmetry. If one considers the different Bravais lattices R (now labeled r with indices i, j = 1 to n), each σ_i(r_i) is multiplied by exp(iq·r_i), and summing over all r_i of the i-th lattice leads to defining

T_i(q) = Σ_{r_i} σ_i(r_i) exp(iq·r_i)    (IV)

where q is the propagation vector. As J(r_i, r_j) only depends on the distance |r_i − r_j|, for the whole crystal of N unit cells one gets n equations of the form

λ T_i(q) = Σ_j ξ_ij(q) T_j(q)    (V)

with T_i(q) the Fourier transform of the spins σ_i(r_i) and ξ_ij(q) the interaction matrix, obtained by fixing r_i and summing over all equivalent r_j around r_i. This can be written in the form

ξ(q) T(q) = λ T(q)    (VI)

where ξ(q) is a Hermitian matrix of order n, (λ) being a diagonal matrix and T(q) a vector of order n. This column vector has components T_i(q), which are the Fourier transforms of the real spin components σ_i(r_i).
Equation (VI) is the basis of a first series of analyses written by Bertaut, allowing collinear and non-collinear magnetic moment configurations to be determined and justified. Bertaut also proposed another, easier-to-use formalism.
Since the ξ(q) matrix depends on the atomic coordinates, a new matrix ζ(q), defined by

ζ_ij(q) = ξ_ij(q) exp[iq·(r_i0 − r_j0)]    (VII)

only depends on the Bravais translations (r_i0 and r_j0 being the origins of the Bravais lattices). New eigenvectors are defined as Q_j(q) = T_j(q) exp(−iq·r_j0), leading to a form similar to (VI) but with a simpler interaction matrix:

ζ(q) Q(q) = λ Q(q)    (VIII)

The real magnetic components can be developed (assuming only one propagation vector q) as

σ(r_j) = Q_j(q) exp(−iq·r_j) + Q_j*(q) exp(iq·r_j)    (IX)

One then considers general solutions such as Q_j(q) = ½ (u + iv) Q_j, where Q_j is a phase factor of the form Q_j = exp(−iφ_j).
The magnetic moment can then be written as follows:

S(r_j0) = S_j (u cos φ_j + v sin φ_j)    (X)
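Numerically, the core of the microscopic method is the diagonalization of the Hermitian interaction matrix for a given q; the toy two-sublattice example below (invented couplings, not data for any compound discussed here) shows the mechanics with NumPy.

```python
# Diagonalize a toy 2x2 Hermitian interaction matrix zeta(q); the eigenvector
# of the extremal eigenvalue (which extremum depends on the sign convention)
# gives the stable moment configuration. Couplings J1, J2 are invented.
import numpy as np

def zeta(q, J1=1.0, J2=0.3, d=1.0):
    off = J1 * (1 + np.exp(-1j * q * d))     # inter-sublattice coupling
    diag = 2 * J2 * np.cos(q * d)            # intra-sublattice coupling
    return np.array([[diag, off],
                     [np.conj(off), diag]])

q = np.pi / 2
evals, evecs = np.linalg.eigh(zeta(q))       # eigh handles Hermitian matrices
print("eigenvalues:", evals)
print("mode for the largest eigenvalue:", evecs[:, np.argmax(evals)])
```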
The macroscopic method of Bertaut [30]
Here the whole set of symmetry operators of a space group is considered, and J can take the form of a rank-2 tensor. Again, anisotropy operators can be considered. Magnetic moments are treated properly as axial vectors, for which Belov [21] and Kovalev [22] had already defined the effects of operators and anti-operators, depending on the action of the time-reversal operator. This method aims to be rigorously absolute, being based on the full symmetry of the considered space group, which must first be correctly identified. The method is more explicitly called the "theory of space group representations", as in its most extended version [30].
Given a space group G, a representation Γ(A) of dimension d of this space group is a set of square d × d matrices (one for each symmetry operator A) which is isomorphic to G. The table of characters χ(A) for a representation is given by the traces of the Γ(A) matrices, a quantity which is invariant for all equivalent representations and is used in fine to build the basis vectors of magnetic configurations.
The purpose of the reduction procedure of a representation Γ(A) is to find a new basis in which the representation takes the form of a direct sum of diagonal matrix blocks, such as

Γ(A) = Σ_ν ⊕ d_ν Γ_ν(A)    (XI)

When it becomes impossible to operate a further reduction (non-diagonal matrix blocks, indices (i, j)), the processed representation is said to be irreducible, the corresponding sum (XI) ending as a set of Γ_ν(A), d_ν being the multiplicity of identical sub-matrices. Then, building the table of characters χ_ν(A_n) for the ν irreducible blocks and the n symmetry operators A_n of the group G allows the number and the dimension of the basis vectors to be defined. The corresponding magnetic configurations are formed from linear combinations (invariant in G) of moment components S_αm, where α = x, y, z and m numbers the multiplicity of a magnetic site position.
To derive the definitive set of basis vectors V_n, and then to build the Heisenberg-type Hamiltonian of order 2 in terms of eigenvector contributions, one has to use the projection operator technique (R_A being the matrices of the full space group representation):

P_ν = (d_ν/g) Σ_A χ_ν(A)* R_A    (XII)

with g the order of the group. As detailed in the following courses, expressly dedicated to the theory of space group representations, the description given above is intended only as an elementary outline of the succession of algebraic operations to be considered. In fact, the application of the theory of space group representations is more complicated owing to several criteria: for instance, the group G might be symmorphic or not; the magnetic cell may or may not coincide with the structural cell; in the latter case, the propagation vector k (q = 2πk) is non-zero and might be commensurate, incommensurate, or join the origin to a point of the Brillouin zone surface (the little group G_0k associated with the propagation vector must then be considered instead of the point group G_0); this might lead to loaded representations; the magnetic structure might be described by several propagation vectors; and if G allows for more than one equivalent propagation vector (belonging to k*, the star of k), this might lead to a possible mixture of elementary eigenvectors. If the phase transition is first order instead of second order (in which latter case the magnetic structure is described by a single irreducible representation), then several irreducible representations might be mixed, although not necessarily. Besides, through additional magnetoelectric or magnetoelastic couplings, there might be a lowering of the crystal symmetry associated with the magnetic ordering. So, lowering the G space group symmetry is a procedure that can be considered when the full symmetry of the starting space group does not lead to eigenvectors in agreement with the neutron diffraction data.
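As a worked toy example of the reduction step, the sketch below applies the standard character formula d_ν = (1/g) Σ_A χ(A) χ_ν(A)* (implicit in the discussion above, though not written there) to the point group C3v; the reducible characters are invented for illustration.

```python
# Multiplicities of irreducible representations from character tables.
# C3v: classes E, 2C3, 3sigma_v; group order g = 6. Toy reducible characters.
import numpy as np

class_sizes = np.array([1, 2, 3])
chi_Gamma   = np.array([3, 0, 1])            # characters of a reducible rep
irreps = {"A1": np.array([1, 1, 1]),
          "A2": np.array([1, 1, -1]),
          "E":  np.array([2, -1, 0])}

g = class_sizes.sum()
for name, chi in irreps.items():
    d = (class_sizes * chi_Gamma * chi).sum() / g   # characters are real here
    print(name, int(round(d)))
# Output: A1 1, A2 0, E 1  ->  Gamma = A1 + E
```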
The Phase Comparison Method: RMn2O5, where R = Y or a Rare Earth element

Bertaut presented in this example a simple method which does not require explicit knowledge of the eigenvectors and eigenvalues derived in the microscopic approach, but uses the fact that the linear equations belonging to the same eigenvalue can be made either identical or conjugate [31]. The space group Pbam of this series of oxides localises the cations in specific Wyckoff positions¹: Mn3+ in site 4g (Mn_I): x, y, 0 (1); −x, −y, 0 (2);

¹ For a reminder on crystallography, see the chapter by Grenier and Ballou.
For RMn2O5, the propagation vector derived from the low-temperature neutron diffraction patterns is q = 2π(a*/2 + c*/4). First let us consider YMn2O5: since Y does not carry any magnetic moment, according to (VIII) the Mn magnetic interaction matrix is an 8 × 8 matrix. System Mn_I: one can derive phase relationships (XIV), with φ_i the phase angles defined in part 4.1 and m and n integers; their solutions follow directly. System Mn_II: no phase relationship can be established between the moments on sites 5 and 7! This indetermination can be lifted from the study of the fully coupled Mn_I-Mn_II system, leading to the solutions (XVI), where u and v are two orthogonal unit vectors of the (x, y) plane. Note the orthogonal directions of the 1 and 4 (2 and 3) magnetic moments, as well as of the 5 and 7 (6 and 8) magnetic moments, corresponding to the rise of the indetermination remarked above. All details of the magnetic structure can be found in ref. [32].
System R3+: for the magnetic rare-earth atoms, analysis of the R interaction matrix leads to relationships similar to (XIV). However, the R3+ sites are subject to strong crystal field effects, with an in-plane anisotropy direction along [2̄10] for the R9 and R10 sites and along [210] for the R11 and R12 sites, both strongly marked owing to their different O2− neighbourhoods. The strong anisotropy effect results in a marked deviation from collinearity and in an oscillating structure along c of the R magnetic moment system. The total interaction matrix allows the magnetic moment arrangement to be described, e.g. that of the Mn4+ (Mn_II) system, which obeys the orthogonal arrangement issued from (XVI) before the magnetic ordering of the R3+ sublattice, but is coupled via new phase relationships at low temperature when
the R3+ moments order, the phases becoming +ψ_II, −ψ_II, −ψ_II and +ψ_II for the Mn_II sites 5 to 8 at −z, z, 1 − z and 1 + z respectively, the new phase ψ_II being introduced to account for the Mn magnetic coupling with the R3+ moments.
The stability of these complex non-commensurate magnetic structures was confirmed by solving the determinant of the second-order derivatives (see (XVII)) of the global interaction Hamiltonian W with respect to the three phases φ_I, φ_II and ψ_II. Therefore, the application of the microscopic method allows justifying: (i) the orthogonal couplings found experimentally between the magnetic moments of the two types of Mn sublattices when there is no ordered magnetic moment on the R3+ sites (R = Y, or the high-temperature magnetic structure); (ii) the low-temperature long-range oscillating magnetic structure when the R3+ ions carry an ordered 4f magnetic moment, moreover subject to a crystal electric field producing a strong magnetic anisotropy. For a complete understanding of these particularly extended applications of the microscopic "phase comparison" method, please refer to ref. [31].
The Macroscopic or Group Theory Method: Mn3GaC
The compound Mn3GaC is cubic (space group Pm-3m, with Ga in site 1a, C in site 1b and Mn in site 3c). It is antiferromagnetic (AF) below T_AF ≈ 170 K, then ferromagnetic (F) up to T_C ≈ 240 K, and finally paramagnetic at higher temperatures. Low-temperature neutron diffraction patterns indicate the antiferromagnetic propagation vector k = (1/2, 1/2, 1/2) [33]. The analysis of the reduction conditions of the complete transformation matrix Γ(G) and the search for the corresponding eigenvectors lead to its decomposition into irreducible blocks. Using the conventional spectroscopic notations, the four blocks form respectively one one-dimensional (A_2g), one two-dimensional (E_g) and two three-dimensional (T_1g and T_2g) systems. Pure modes with equal Mn moments can be derived successively from the corresponding eigenvectors; E_g does not allow an equal-moment mode, such as the one experimentally determined, to be built.
To the magnetic arrangements that can be built from the pure modes given above (and their linear combinations with respect to equal Mn moments), shown in Fig. 1 (modes a to g), corresponds one and the same value of the exchange energy when considering a Heisenberg-type interaction Hamiltonian of order 2 in the moment components. Only additional anisotropic terms could lead to different solutions. So, a question arose: why is the [111] antiferromagnetic mode found experimentally by neutron diffraction stable? This magnetic arrangement corresponds to a stacking, along the [111] direction, of ferromagnetic planes of alternating sign, the moment direction being along [111].
Lowering of the crystal symmetry was then critically analysed with respect to k = (1/2, 1/2, 1/2). The possible modes are the following: mode a occurs in the identity representation A_1g of P23 or Pm-3, and also in the A_2g irreducible representation of P432, P-43m, R3m, Pm-3m and R-3m. Modes b and c occur in T_2g of P23 and Pm-3. Mode d is the sum of the moment configurations b and c and occurs in T_1g of P432, P-43m and Pm-3m; it occurs as well in A_2g of the rhombohedral groups R3m and R-3m. Mode e is the difference of the moment configurations b and c and occurs in T_2g of P432, P-43m and Pm-3m, and also in A_1g of the rhombohedral groups R3m and R-3m. Mode f is the sum of the moment configurations a + b + c (= a + d), belonging to different representations in the cubic groups, and to the same representation A_2g in R3m and R-3m. Mode g is a triangular arrangement (Yafet-Kittel type) in the (111) plane and belongs to the representation E_g of the rhombohedral groups R3m and R-3m. The simple ferromagnetic (111) plane mode, belonging to E_g of R3m and R-3m, is not represented here.
As discussed above, it follows that none of the cubic sub-groups leads to new non-collinear antiferromagnetic configurations beyond those already built from the Pm-3m symmetry. The first non-cubic group with the highest symmetry is the rhombohedral one, R-3m, leading to a decomposition with configuration vectors V_1(A_2) and V_2(A_2). Belonging to the same irreducible representation A_2, their combination leads exactly to the collinear configuration directed along [111], as experimentally deduced from the neutron diffraction data. As a matter of proof: (1) after this analysis, a very precise X-ray diffraction experiment was conducted with Cr Kα1 radiation to realise the highest resolution conditions. A weak rhombohedral distortion, with a cell angle of about 89.7 degrees, was indeed evidenced below T_AF, thus confirming the [111] direction as the easy axis of the antiferromagnetic structure [33].
(2) Returning to group theory considerations, it is worth noting that the antiferromagnetic mode can be generated only from different irreducible representations [34] when starting from the cubic space group Pm-3m. Since a 3D irreducible representation is concerned, and the considered sub-group contains the identity representation, it means that, at T_AF, the transition cannot be of second-order type. It is indeed a first-order transition, with a clear hysteresis in the experimental magnetisation traces and a net change in the Mn magnetic moment from 1.8 μB (AF phase) to 1.2 μB (F phase). Moreover, recent measurements by X-ray magnetic circular dichroism [35] have confirmed a strong magnetocaloric effect at T_AF, related to the total entropy change from the magnetic (AF ↔ F ordering), electronic (1.8 ↔ 1.2 μB) and lattice (Pm-3m ↔ R-3m) contributions.
The theory of space group representations as developed by Bertaut proved to be a powerful tool, not only to determine univocally the magnetic configuration according to the space group symmetry operators, but also to account for additional effects influencing the magnetic ordering (e.g., anisotropy tensors), as taught in specific papers of E. F. Bertaut [30,36] and references therein.
CONCLUSION
At first, Bertaut seemed fully convinced by the microscopic point of view: ". . . they search, using group theory, to establish a basis of irreducible representations allowing them to build a phenomenological Hamiltonian which is invariant under the symmetry group operators. Our method is microscopic since it accounts for elementary operations. In our opinion, it is more powerful, . . . since it allows different modes to be distinguished from stability conditions, and possible ranges of effectiveness for exchange integrals to be determined" (in French in [29]).
But rapidly he understood that, according to Dirac, mathematics is a unique key for physics ("this result is too beautiful to be false; it is more important to have beauty in one's equations than to have them fit experiment" [37]). His growing interest in the macroscopic method was also supported by the impressive work realised by his groups, both at the Laboratoire de Cristallographie du CNRS and at the Laboratoire de Diffraction Neutronique du Centre d'Etudes Nucléaires de Grenoble (Commissariat à l'Energie Atomique).
σ_1 = u cos φ_I + v sin φ_I
σ_2 = −u cos φ_I + v sin φ_I
σ_3 = u sin φ_I + v cos φ_I
σ_4 = u sin φ_I − v cos φ_I
σ_5 = u cos φ_II + v sin φ_II
σ_6 = u cos φ_II − v sin φ_II
σ_7 = −u sin φ_II + v cos φ_II
σ_8 = u sin φ_II + v cos φ_II    (XVI)
Figure 1. Antiferromagnetic modes that can be built from the representation theory analysis.
"year": 2012,
"sha1": "5e836641766de684872d4d99391ad50c6aa02206",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2012/04/epjconf_cscm2012_00003.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5e836641766de684872d4d99391ad50c6aa02206",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259626909 | pes2o/s2orc | v3-fos-license | Exploring the Impact of Vehicle Lightweighting in Terms of Energy Consumption: Analysis and Simulation
Nowadays, the topic of reducing vehicles' energy consumption is very important. In particular, for electric vehicles, the reduction of energy consumption is necessary to remedy the most critical problems associated with this type of vehicle: the limited range of electric traction, also associated with the long recharging times of the battery packs. To reduce the use-phase impacts and energy consumption of vehicles, it is useful to reduce the vehicle mass (lightweighting). The aim of this work is to analyze the parameters of a vehicle which influence the results of lightweighting, in order to provide guidelines for the creation of a vehicle model suitable for studying the effects of lightweighting. This study was carried out through two borderline-case models, a compact car and an N1 vehicle, simulated with a consolidated vehicle simulation tool useful for consumption estimation. This study shows that the parameters that most influence the outcome of lightweighting are the rolling resistance, the battery pack characteristics, the aerodynamic coefficients, and the transmission efficiency, while the inertia contributions can be considered negligible. An analysis was also carried out with variation of the driving cycle considered.
Introduction
Nowadays, the topic of reducing consumption is very important, both for internal combustion vehicles and for electric vehicles. For internal combustion vehicles, fuel saving matters in particular because of emissions and the related stringent laws [1,2]. Meanwhile, for electric vehicles, the reduction of energy consumption is necessary to remedy the most critical problems associated with this type of vehicle: the problem of the limited range of the electric traction, also associated with the long recharging times of the battery packs [3].
The fuel or energy consumption of vehicles is due to two components [4]: the displacement of the mass of the vehicle and the contribution given by various losses (for example, aerodynamic drag, accessories, engine, and powertrain friction) [5][6][7][8][9].
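As a minimal sketch of these two components, the function below sums the mass-dependent terms (rolling resistance and inertia) and the aerodynamic loss to give the tractive power at the wheels; the parameter values are illustrative, not taken from any vehicle in the cited studies.

```python
# Tractive power demand: mass-dependent terms plus aerodynamic drag.
# All parameter values are illustrative assumptions.
def tractive_power(v, a, m, Cd=0.30, A=2.2, Crr=0.010, rho=1.225, g=9.81):
    F_aero = 0.5 * rho * Cd * A * v ** 2   # aerodynamic drag [N]
    F_roll = Crr * m * g                   # rolling resistance [N]
    F_inertia = m * a                      # acceleration demand [N]
    return (F_aero + F_roll + F_inertia) * v   # power at the wheels [W]

# Example: 1500 kg car at 25 m/s with mild acceleration -> about 17.5 kW
print(tractive_power(v=25.0, a=0.2, m=1500))
```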
To reduce the use-phase impacts and fuel or energy consumption of vehicles, several types of intervention are useful [4,10].
However, it is good to keep in mind that any intervention must be compatible with other needs, in particular with safety [10,23].
In this paper, we will focus on the lightweighting of electric vehicles as a method for reducing consumption. This issue is very important, considering the low traction range and long battery recharge times associated with fully electric vehicles. Deepening the topic of electric vehicles is essential: in fact, the European Parliament voted, on Wednesday, 8 June 2022, to stop sales of new ICE cars and vans in the EU starting from 2035 (ordinary legislative procedure 2021/0197(COD) [24]).
In particular, lightweighting can be achieved in several ways:
• The most common method is material substitution, as well as design and construction changes [4,12,18,25-27] (considering also the role of plastics in lightweighting [28]);
• Adopting solutions with alternative powertrains; for example, a fuel cell/battery pack hybrid electric system, despite the complexity and weight added by the additional components, can allow the total weight of the powertrain to be reduced thanks to the downsizing of the battery pack;
• Implementing suitable regenerative braking logics and range management [29];
• Improving energy-dense battery chemistries [29] and, in general, battery weight optimization;
• Improving battery efficiency, for example through different and more efficient battery cooling systems, in such a way that battery size can be reduced for the same vehicle range;
• Adopting other, more weight-efficient battery forms and shapes, such as blade batteries and structural battery packs;
• Secondary mass savings and resizing.
Paper [4] describes 10 lightweighting principles, focused on environmental sustainability but also considering economic and social aspects. Principle 9 of [4] concerns the evaluation of additional benefits resulting from component and vehicle lightweighting, such as secondary mass savings and resizing, and alternative powertrains [27,30,31]. In fact, the mass of some vehicle components depends on the mass of others [30,31]. The total mass reduction in these dependent components is known as secondary mass saving. Mass reduction alters the vehicle's performance, so the powertrain can be resized to re-establish the original vehicle performance. This results in improved fuel efficiency [6,32-34]. Increased fuel efficiency enables an increase in vehicle range with the same tank capacity or the same battery pack capacity.
Paper [10] investigates the lightweighting strategy of material substitution and mass reduction, without ignoring shape optimization, the controls, and the production processes. In particular, the study discriminates the environmental benefits according to the size of the vehicle and its power supply (i.e., gasoline, hybrid, and electric). Paper [10] distinguishes vehicles according to their size but does not specify whether it is the mass of the vehicle that causes the different results of lightweighting or some other parameter of the vehicle that varies with the vehicle class. In this paper, therefore, we ask why reducing the mass of two vehicles of different classes by the same amount produces different variations in consumption. In particular, we want to understand whether this is due to the non-linearity of the vehicle consumption-weight curve or whether other vehicle parameters come into play. As will be shown, the answer lies also in this second aspect. The parameters of the vehicle which lead to a different behavior following lightweighting will therefore be investigated in this paper.
Furthermore, the study [10] covers the lightweighting aspects associated with different components of a vehicle and adds up all their beneficial contributions:
• Engine compartment, where the improvement concerns the aesthetic cover of the engine, replacing the traditional fiberglass one with one made of bio-based fiber materials [35];
• Frame, in particular the substitution of its main constituent parts;
• Bodywork, considering material substitution, the production processes, and modified geometries;
• Wheels, considering different types of tires, brakes, and suspension arms;
• Passenger compartment;
• Electronics and electrical system, considering the introduction of a speed control system [36] (reduction of the maximum speed of the vehicle on motorways from 130 to 120 km/h, with a 6% consumption reduction in a medium-sized gasoline car) and the replacement of traditional copper electrical cables with reduced-diameter, reduced-mass copper-tin (Cu-Sn) ones [37]; it should be noted, however, that Cu-Sn can only be used in low-current or signal applications (e.g., measurement signals of the voltages of the single cells of the battery pack) and not in the power connection cables, due to the resistance increase [38].
However, this last aspect (the speed limitation) does not concern lightweighting and also alters the vehicle's maximum performance.
Ref. [10] says that the greatest advantage obtained thanks to lightweighting is found in internal combustion vehicles, while the advantage is smaller for electric vehicles. Meanwhile, in terms of size, it is small cars that benefit most from weight reduction [10]. Considering that, due to the recent stringent laws, internal combustion vehicles are destined to disappear, it is important to analyze in more detail the benefits that the various lightweighting techniques can bring to electric vehicles. In this paper, we will focus precisely on the latter.
Paper [39] considers different alloys and component manufacturing technologies with the aim of lightweighting, considering the transition from internal combustion engine to electric vehicles. Ref. [39] states that the goal of a lightweight design is to build structures with minimal use of materials and an optimized use of material strength.
In the literature, many papers express the results of lightweighting using the Fuel Reduction Value (FRV), expressed in L/(100 km · 100 kg), where L represents the liters of gasoline or diesel saved to travel 100 km following a vehicle mass reduction of 100 kg. Typically, FRV indices calculated through experimental tests in previous works are used [34].
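As a minimal illustration of how such an index is obtained from two consumption values at different masses (a Python sketch; the numbers are hypothetical placeholders, not values from the cited works):

    def fuel_reduction_value(fc_heavy, fc_light, mass_heavy, mass_light):
        # FRV in L/(100 km * 100 kg): fuel saved per 100 km of travel and
        # per 100 kg of mass removed. fc_* in L/100 km, mass_* in kg.
        return (fc_heavy - fc_light) / ((mass_heavy - mass_light) / 100.0)

    # 6.00 L/100 km at 1400 kg and 5.85 L/100 km at 1300 kg -> FRV = 0.15
    print(fuel_reduction_value(6.00, 5.85, 1400, 1300))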
Paper [34] estimates fuel consumption during the use phase, associated with a vehicle lightweighting process, calculating the FRV using an ad hoc method based on the U.S. Environmental Protection Agency (EPA) databases. Paper [6] presents a work similar to that reported in [34] but specific to electric vehicles, thus considering the FRV expressed in equivalent liters per 100 km and per 100 kg of lightweighting, since the traction of electric vehicles is guaranteed by the electric energy of the battery pack and not by the liters of fuel used to feed an internal combustion engine. Both articles [6,34] evaluate the aspect of lightweighting from an LCA (Life Cycle Assessment) perspective [40][41][42].
Papers [43,44] also evaluate the effects of lightweighting in the automotive sector from an LCA perspective. Ref. [43] calculates the FRV coefficient for a wide range of gasoline turbocharged vehicle case studies and [44] for diesel turbocharged vehicles. In particular, both papers show how the FRV varies according to the vehicle class considered, distinguishing vehicles in A/B, C, and D classes, but without showing which vehicle parameters actually lead to this variation. Instead, the research and analysis of the vehicle parameters that influence the results of lightweighting represent the work proposed in this paper.
Other scientific articles that deal with the lightweighting topic from an LCA perspective are [5,45]. Ref. [5] calculates the FRV coefficient by also considering the secondary lightweight effects. Ref. [45] evaluates the vehicle use stage for both internal combustion engine vehicles (ICEVs) and electric vehicles (EVs): in particular, the classical FRV coefficient is used for ICEVs, while the ERV index is used for EVs. In fact, the ERV coefficient is more suitable when electric vehicles are considered, being expressed in kWh/(100 km · 100 kg). Indeed, it is possible to have a more immediate representation, considering the energy consumption savings (expressed in kWh/100 km) associated with 100 kg of mass reduction, without having to go through the equivalent liters of fuel.
Finally, paper [46] focuses on electric vehicles to evaluate the results of lightweighting, proposing a methodology for calculating the ERV index. Also in the work proposed in this paper, we focus on electric vehicles; therefore, it was decided to evaluate the effects of lightweighting based on the ERV coefficient instead of the FRV, the ERV index, expressed in kWh/(100 km · 100 kg), being more suitable and more convenient, as it does not refer to the equivalent liters of petrol or diesel, which are not directly involved in electric traction.
As seen above, the topic of vehicle lightweighting is treated in the literature in various forms, often referring to or calculating the FRV index (or ERV in the case of electric vehicles). These indices differ according to the class of the vehicle being studied, but there is no study in the literature concerning which vehicle parameters lead to this variability. This last aspect is precisely the subject matter of the study presented in this paper, which, being focused on full electric vehicles, deals with the lightweighting topic by referring to the ERV coefficient. The ultimate objective of this work is therefore to evaluate which are the parameters of the vehicle to be estimated more accurately for the realization of a model useful for evaluating the effects of lightweighting.
This paper is organized as follows:
• Section 2 shows the methodology adopted, in particular the reference vehicles of this study, the driving cycles used for the energy consumption estimation, the simulation tool adopted, the vehicle parameters that are the object of investigation, and a brief explanation of the simulations carried out;
• Section 3 presents the results of the study and the considerations that derive from them;
• In Section 4, the results obtained in Section 3 are discussed and reorganized, and some future works are presented;
• In Section 5, some concluding remarks are reported, and the most relevant information in Section 4 is summarized.
In particular, it has been found that for a correct calculation of the ERV index, it is important, first, to establish the correct definition of the rolling resistance coefficient, followed by the aerodynamics, and then the battery pack parameters and the transmission efficiency. On the other hand, the inertia contribution can be considered negligible.
Materials and Methods
The objective of this research is to evaluate which are the parameters of the vehicle (and its model) that influence the results of lightweighting, all with reference to vehicle categories M and N1 [47]. For the analysis, various simulations were carried out with the model described in [48] (with the integrations described in [49,50]), for the estimation of energy consumption, on standard driving cycles, as the mass of the vehicle varies.
Reference Vehicles
Two opposite cases were considered, a utility car (compact car, segment B, category M) and a light commercial vehicle (category N1).
In particular, the N1 category vehicle is the one adopted in [48] for the validation of the model with a low-performance vehicle, but with a total vehicle transmission ratio equal to 6.22. Despite the modification of the transmission ratio, the vehicle in question fails to follow the standard driving cycles presented later at high vehicle weights (around 3500 kg). To exclude the variability of the results given by the limitations imposed by the maximum performance of the electric motor and the battery pack, appropriate measures were adopted to avoid the occurrence of these limitations: an increase of the maximum motor torque and of the maximum current that can be supplied by the battery pack.
The compact vehicle of VI-CarRealTime (VI-Grade), the "CompactCar", was considered as the B-segment vehicle. VI-Grade vehicle models are validated: as the VI-CarRealTime documentation explains, all VI system data come either from experimental tests performed in a lab or from virtual tests performed within Adams Car. However, this VI-Grade vehicle is equipped with an internal combustion engine; therefore, only the characteristics of the vehicle layout have been maintained (wheels, aerodynamics, etc.), while the driveline has been replaced with that of a compact electric car widely marketed in Italy and Europe, the "Fiat 500e Hatchback 42 kWh" [51,52], which has an electric motor power of 87 kW, a single transmission with a total reduction ratio of 9.6, and a battery pack of 42 kWh.
For more details on the vehicle parameters (compact car and N1 vehicle), see Table 1. Table 1 notes: 3 The N1 vehicle considered has an efficiency map for the electric motor [48]; the mean efficiency is approximately 98%, and for simplicity this constant value was used as an approximation. 4 Frontal area (Af) multiplied by the longitudinal aerodynamic drag coefficient (Cx). 5 Internal resistance of the battery pack.
Initially, both vehicles (compact car and N1 vehicle) were considered equipped with a classic benchmark regenerative braking logic, with a trend as a function of time typically found in the literature [48,49]. In particular, the maximum possible regenerative torque is equal to 50 Nm for both vehicles, and the regenerative recovery begins when the accelerator pedal is released or, in any case, when the driver presses the brake pedal (and the accelerator pedal is not pressed), with a linear increment equal to 22.5 Nm/s. The simulations with vehicles equipped with a regenerative braking logic led to considerations (discussed later) because of which the analysis of the results of lightweighting was then carried out on the same vehicles, but without regenerative recovery under braking.
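A minimal Python sketch of this time-based ramp (function name and signature are ours, not the TEST model's API):

    def regen_torque_request(t_since_release, accelerator_pressed,
                             ramp_rate=22.5, torque_max=50.0):
        # Zero torque while the accelerator is pressed; otherwise a linear
        # ramp of 22.5 Nm/s, saturated at the 50 Nm plateau.
        if accelerator_pressed:
            return 0.0
        return min(ramp_rate * t_since_release, torque_max)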
Driving Cycle
The effects of lightweighting on the two vehicles were evaluated on the following standard driving cycles (the full set used across the simulation campaigns described below):
• WLTC (class 3b);
• US06;
• FTP75;
• HWFET;
• JC08;
• Artemis Urban, Rural Road, and Motorway (130) [57].
Simulation Tool
For the simulation, the TEST (Target-speed EV Simulation Tool) model [48] was used, a vehicle longitudinal dynamics simulation tool that allows the simulation of both the mechanical and the electrical parts of full electric or hybrid electric vehicles.
However, further improvements have been made to the tool, aimed at facilitating the setting up of the simulations (and of the vehicle model parameters) and their iterations as the mass of the vehicle varies:
• The possibility to save and use vehicle databases;
• The automation of iterations according to three logics.
First, thanks to an additional panel of the graphical user interface, an on/off switch makes it possible to choose whether to use a pre-set database or to manually enter the parameters of the vehicle being simulated. Through a list, it is possible to choose one of the available databases, while dedicated buttons allow creating a new database or modifying the constant parameters of existing ones. The panel also has several other buttons for modifying the vectorial parameters of the vehicle model related to the chosen database. Through another button, it is possible to load the constant vehicle model parameters related to the chosen database and use them for the TEST model simulations.
The TEST model has also been integrated with the possibility of iterating simulations according to the following three logics (see the sketch after this list):
• Defining the number of simulations to be performed, where the initial SOC of each simulation is equal to the final SOC of the previous one;
• Defining a minimum SOC: the initial SOC of each simulation is equal to the final SOC of the previous one, and the iterations continue until the final SOC falls below the minimum SOC set;
• Iterating by varying the weight of the vehicle: an iteration is performed with the empty weight of the vehicle, defined in the model variables, followed by a settable number of iterations with a constant, settable weight increase for each simulation with respect to the previous one; the same procedure is conducted for weight reduction, starting from the unladen weight of the vehicle set as the default value.
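A minimal Python sketch of the second and third iteration logics (run_cycle and run_at_weight are hypothetical stand-ins for a single TEST simulation, not the tool's actual API):

    def iterate_until_min_soc(run_cycle, soc_initial, soc_min):
        # Logic 2: chain simulations, feeding each final SOC into the next
        # run as its initial SOC, until the final SOC drops below soc_min.
        soc_history = [soc_initial]
        while soc_history[-1] > soc_min:
            soc_history.append(run_cycle(soc_history[-1]))
        return soc_history

    def weight_sweep(run_at_weight, kerb_weight, step_kg, n_heavier, n_lighter):
        # Logic 3: one run at the unladen (kerb) weight, plus heavier and
        # lighter variants spaced by a constant step in kg.
        weights = [kerb_weight + k * step_kg
                   for k in range(-n_lighter, n_heavier + 1)]
        return {w: run_at_weight(w) for w in weights}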
The main features of this tool are the short computation times and the execution of closed-loop simulations that are more efficient than other tools reported in the literature [58]. This instrument is reliable, robust, and numerically stable. It is also intuitive and easy to use for people without specific training. Finally, the graphical user interface is simple and straightforward.
However, due to various approximations adopted (for example, the absence of the Pacejka tire model, the assumption of perfect rolling of the tires without slip, and the fact that the variation of the wheel radius during the simulations is not considered), the TEST model results are less accurate than those of other simulation tools, such as the one presented in [58], widely used by our research group. This is not a problem for the project proposed here. The TEST model simulates low-performance road vehicles very well and has limitations only for vehicles with very high performance (e.g., the hypercar considered in the validation phase of the model [48]), for which a calibration of the model is required.
Parameters of the Vehicle (and of Its Model) Which Can Affect the Lightweighting Results
Analyzing the parameters and the structure of the TEST model [48], the following parameters were identified, which can influence the variability of the results of lightweighting according to the type of vehicle and its characteristics.
• Driving cycle considered: different phases of acceleration and deceleration, different intensities of the latter, different powers involved, and variation of the possibility of regenerative recovery.
• Rolling resistance coefficient: this appears in the mathematical formula of the rolling resistance together with the mass of the vehicle.
• Coefficients of aerodynamic resistance: these determine the aerodynamic resistance force acting on the vehicle. The mass does not appear in the mathematical formula of this force but, for the same driving cycle, this force modifies the proportion between the phases in which the electric motor delivers torque (both in acceleration and deceleration) and those (in deceleration) in which the electric motor delivers no torque or acts as a generator recharging the battery pack. In fact, during deceleration, the deceleration can be lower than that which would occur with the electric motor delivering no torque, i.e., due solely to the resisting forces, inertia, etc.; in that case, the electric motor must still deliver torque, and the result is not actual braking but a partial release of the accelerator pedal. Furthermore, the aerodynamic resistance is a function of speed, and at different speeds of the driving cycle it is possible to have different acceleration values, with which the contribution of the vehicle mass is correlated through the resulting inertia. The acceleration contribution can therefore have a different influence at different points of the driving cycle, as can the mass contribution, but not in a corresponding way.
Other parameters that may be useful to investigate are listed below (a sketch of how these terms enter the force balance follows this list):
• Inertias of the electric motor and of the rotating parts of the driveline: the contribution of the inertias appears in the equation of the resisting force, which is a function of the angular acceleration of the rotating component considered.
• Gear ratios and wheel radius: these parameters modify the rotation speed of the various components of the driveline at the same vehicle speed.
• Type of battery pack, in particular the internal resistance of the cells of the pack itself.
• Efficiencies of the transmission and of the rest of the driveline (e.g., efficiency of the motor and inverter in charging and discharging).
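As a reference for the terms just listed, the following minimal Python sketch (parameter names are ours; it is the standard flat-road force balance, not the TEST model code) shows how rolling resistance, aerodynamic drag, and the translational and rotating inertia contributions enter the total resisting force:

    RHO_AIR = 1.225  # air density, kg/m^3 (assumed standard value)
    G = 9.81         # gravitational acceleration, m/s^2

    def longitudinal_resistance(v, a, mass, f_roll, cx_af, j_eq=0.0, r_wheel=0.3):
        # v: speed (m/s); a: acceleration (m/s^2); mass: vehicle mass (kg);
        # f_roll: rolling resistance coefficient; cx_af: Cx * Af (m^2);
        # j_eq: equivalent rotating inertia referred to the wheel (kg*m^2).
        rolling = f_roll * mass * G                 # grows with vehicle mass
        aero = 0.5 * RHO_AIR * cx_af * v ** 2       # mass-independent
        inertia = (mass + j_eq / r_wheel ** 2) * a  # translational + rotating
        return rolling + aero + inertia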
The effect of the variation of all the parameters mentioned above is investigated below with regard to the modification to the results of a vehicle lightweighting action.
Set of Simulations
Simulations were initially carried out, using the TEST model, on the WLTC (class 3b) and US06 driving cycles, for the N1 category vehicle model and for the compact car model. These simulations were repeated, for both vehicles, setting all the inertia contributions to zero.
Further sets of simulations were also carried out on the WLTC (class 3b) and US06 driving cycles for the compact vehicle, in which one or more parameters of the vehicle under examination (the "CompactCar") were replaced by the corresponding values of the N1 category vehicle. In fact, the contribution provided by lightweighting is often differentiated in the literature according to the vehicle class. In this study, we therefore wanted to analyze how the different parameters of the vehicle model affect the results of lightweighting, in such a way as to disengage from the vehicle class and identify which parameters lead to this differentiation. The parameters initially chosen for comparison are the following:
• Battery pack parameters (nominal voltage, capacity, and internal resistance);
• Aerodynamics (Af · Cx, where Af is the frontal area of the vehicle and Cx is the longitudinal aerodynamic coefficient);
• Efficiency of the transmission;
• Rolling resistance (in particular, in the reference models, this resistance is a function of a rolling friction coefficient; the analysis therefore focuses on the value of this coefficient).
The investigation on the WLTC and US06 cycles was further deepened thanks to simulation sets with the following vehicle models:
• Compact car with the moments of inertia of the N1 vehicle;
• Compact car with all previously listed parameters and the moments of inertia of the N1 vehicle;
• Compact car with the total traction ratio and wheel radii of the N1 vehicle;
• Compact car with all previously listed parameters, moments of inertia, traction ratio, and wheel radii of the N1 vehicle.
Thanks to all the simulations mentioned above, it was possible to establish that, for a better analysis, it is more sensible to analyze the results of lightweighting for vehicle models without regenerative braking. In fact, the regenerative braking logic defined in Section 2.1 leads to a different amount of regenerative recovery according to the transmission ratios; this aspect will be better explained in Section 3.1. All the previously presented simulations were therefore also repeated for the same vehicle models but without regenerative braking.
All previous simulations were performed for vehicle weights from 700 to 3500 kg for the N1 vehicle and from 700 to 2500 kg for the compact car, in order to analyze the behavior over the widest possible weight range, thus also analyzing 700 kg as an extreme case for the compact car. In this way, any weight reduction considered is certainly included in the range analyzed in this paper. A study was also carried out for the N1 vehicle assuming a weight reduction down to 700 kg, so as to be able to make a comparison with the results obtained for the compact car.
Finally, additional sets of simulations were carried out for the N1 vehicle and the compact car model under the conditions of an absence of regenerative recovery, on further regulated driving cycles: FTP75; HWFET; JC08; and the Artemis Urban, Rural Road, and Motorway (130) Cycle [57].
For the compact car, and for the simulations on the further driving cycles, a weight range from 700 to 2500 kg was considered. For the N1 category vehicle, however, it was considered sufficient to restrict the weight range from 1100 kg to 3500 kg.
Results
This section shows the results of the simulations carried out, together with the considerations that derive from them.
Benchmark Regenerative Braking Logic
For the WLTC cycle, a set of simulations was carried out for the N1 category vehicle and repeated with the same vehicle but with all the inertia contributions set equal to zero, to see whether the inertia is negligible for the consumption analysis and, in particular, for a study on lightweighting. The same process was conducted for the compact car model. Figure 1 shows the results of the simulation sets described above, in particular in terms of average energy consumption over the WLTC cycle, class 3b, as a function of vehicle weight.
Figure 1.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight for the N1 category vehicle model ("N1") and for the same model, but with zero inertia contributions ("N1-NO inertias"); for the compact vehicle model ("CompactCar") and for the same model, but with zero inertia contributions ("CompactCar-NO inertias").
From Figure 1, it can be seen how the inertia can be considered negligible for the study in question.
In particular, in the TEST model [48], the inertia contributions given by the wheels, by the electric motor and by the rotating parts in input and in output to the motor reducer, have been implemented. The resistive force, due to the inertia of each wheel, is calculated by multiplying the moment of inertia of the wheel by the angular velocity variation of the wheel and finally dividing this by the wheel radius. Similarly, the resistant torques, due to the inertia of the motor and of the other rotating parts, are calculated by multiplying the relative moment of inertia by the relative variation of angular speed.
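In symbols (notation ours), the contributions just described read F_in,wheel = J_wheel · (∆ω_wheel/∆t) / r_wheel for each wheel and T_in,rot = J_rot · (∆ω_rot/∆t) for each rotating part, where J is the moment of inertia, ω the angular speed, and r_wheel the wheel radius.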
During several projects undertaken by our research team, including the validation phase of the TEST model [48], it was observed that these inertia contributions are generally negligible. Therefore, even by varying the transmission ratio of the vehicle, which modifies the angular speeds, these contributions tend to remain negligible.
Other sets of simulations were also carried out for the compact car, in which the parameters of only one aspect among those mentioned in Section "2.5 Sets of simulations" were varied, setting the relative parameters equal to those of the N1 category vehicle. These values, relating to the N1 category vehicle, have also all been set simultaneously in the "CompactCar" model to obtain a further set of simulations. The results of all these simulations, in terms of the average energy consumption of the WLTC (class 3b) cycle, as a function of the weight of the vehicle, are shown in Figure 2, together with the results of the simulations carried out for the N1 category vehicle, so as to be able to make a comparison.
Figure 2.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight, for the following vehicle models: vehicle of N1 category ("N1"), compact car ("CompactCar"); compact car with the N1 battery pack on board ("CompactCar-N1 battery pack"); compact car with the aerodynamic coefficients of the N1 vehicle ("CompactCar-N1 aerodynamics"); compact car with transmission efficiency equal to that of the N1 vehicle ("CompactCar-N1 transmission efficiency"); compact car with rolling resistance coefficient of N1 vehicle ("CompactCar-N1 rolling resistance"); and, finally, compact car with all the parameters previously mentioned equal to those of the N1 vehicle ("CompactCar-N1 values").
From Figure 2, the aspect that least affects consumption is the battery pack, followed by the efficiency of the transmission. The parameter that has the greatest influence is aerodynamics, followed by the rolling resistance coefficient.
In particular, the contribution given by aerodynamics significantly affects the increase in vehicle consumption, but, as we will see better below, it does not involve a particular variation of the slope of the original curve given by the results of the simulations on the "CompactCar" model. The other three components (rolling resistance, transmission efficiency, and battery pack) instead involve, in addition to an increase in consumption, also a variation in the slope of the curve relating to the original compact car model.
If the four aspects considered were actually the only ones to substantially act on the effects of lightweighting, the curve relating to the set of simulations with the compact car model with all the parameters mentioned above set equal to those of the N1 category vehicle should be superimposed on the curve obtained by means of the set of simulations with the N1 vehicle model itself. What has been obtained (see Figure 2) does not perfectly reflect this expectation. In particular, the two curves overlap well for a vehicle weight range between approximately 700 and 1000 kg; above 1000 kg, the values of the two curves diverge more and more as the mass increases. The two curves therefore have two different slopes, and a further aspect or parameter that justifies this behavior must be sought.
With the sets of simulations presented in Figure 1, it can be seen that the inertias are negligible with regard to the variation in consumption; however, it is worthwhile to investigate whether the latter can instead have an influence by modifying the slope of the consumption curve according to the vehicle weight (Figure 3). Furthermore, it may also be useful to investigate the influence of the transmission ratios and wheel radii on the results, since the wheel radii also act as a transmission ratio to discharge the forces to the ground (Figure 3).
Figure 3.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight, for the following vehicle models: vehicle of N1 category ("N1"), compact car ("CompactCar"); compact car with the battery pack, with the aerodynamic coefficients, the efficiency of the transmission, and with the rolling resistance coefficient of the N1 vehicle ("CompactCar-N1 values"); compact car with the same moments of inertia as vehicle N1 ("CompactCar-N1 inertias"); compact car with the battery pack, aerodynamics, transmission efficiency, rolling resistance, and moments of inertia of vehicle N1 ("CompactCar-N1 values (also inertia)"); compact car with the transmission ratios and wheel radii of the N1 vehicle ("CompactCar-N1 traction ratios"); and, finally, compact car with all the parameters related to the previously mentioned aspects equal to those of the N1 vehicle, i.e., the parameters relating to the battery pack, aerodynamics, transmission efficiency, rolling resistance, moments of inertia, transmission ratios, and wheel radii ("CompactCar-N1 values (all)").
From Figure 3, it can be seen that the contribution given by the moments of inertia alone is negligible as regards the average energy consumption of the WLTC cycle. The modification of the transmission ratios and of the wheel radii, set equal to the values of the N1 class vehicle, instead causes a variation of the slope of the consumption curve, with a consequently more marked difference in consumption, at higher vehicle weights, between the original compact car and the one with the values of the N1 vehicle. Therefore, the transmission ratios (and the wheel radii, which also act as a transmission ratio) are also important parameters for monitoring the effects of a hypothetical lightweighting of the starting base vehicle. This can also be seen from the very good overlap of the curve relating to the light commercial vehicle (N1) with the curve relating to the compact car with the battery pack, aerodynamics, motor efficiency, rolling resistance, moments of inertia, gear ratios, and wheel radii of the N1 vehicle. All the latter aspects must therefore be taken into consideration in evaluating the benefits of lightweighting. Among all these aspects, the inertias are, in any case, the least influential and, therefore, the aspect that could possibly be neglected, as can also be seen from the graph in Figure 1.
The transmission ratios, by modifying the angular speeds of the wheels and of the various rotating parts of the transmission, modify the contribution made by the various moments of inertia. In fact, the resistant torques due to inertia are proportional to the moment of inertia and to the rate of change of the angular velocity of the affected component. It may be useful to investigate whether the only effect brought about by the gear ratios is the one associated with the inertias. A new set of simulations was therefore carried out, for the compact car, with all moments of inertia null and with the transmission ratios and wheel radii of the N1 vehicle, to obtain the graph shown in Figure 4.
The two curves reported in Figure 4 do not overlap. In particular, they diverge more and more as the weight of the vehicle increases. Therefore, the transmission ratios necessarily involve a further effect on consumption, in addition to that associated with inertia. It is therefore necessary to identify the reason for this further effect. To do this, the individual simulations with a vehicle weight of 2500 kg were analyzed, for the compact car without inertia and for the compact car without inertia and with the total transmission ratio (and wheel radii) of the N1 vehicle (Figure 5).
From Figure 5, it can be seen how, for the two simulations, the traction powers (or rather the discharge powers of the battery pack) are superimposed on the graph. What varies is the charging power. This aspect is due to the different contributions that regenerative braking makes according to the transmission ratios. In fact, by varying the transmission ratio, covering the same driving cycle, the angular speeds involved vary, including the angular speed of the electric motor, as can be seen from Figure 6a. In each operating point of the driving cycle under examination, thus varying the transmission ratio and in particular the angular speed of the motor, with the same power required, the motor torque varies, as can be seen from the positive values (traction motor torque) of the graph in Figure 6b. Conversely, for the braking phases, as the regenerative braking logic is set, the motor reaches a maximum value (in module) of regenerative torque with a certain ramp as a function of time. However, depending on the angular speed of the wheels (and motor), this latter torque will translate into a different power sent to the battery pack (Figure 5). Therefore, in the case of a regenerative braking logic defined as a function of time and of the maximum motor torque, with the same constant parameters of the logic (maximum torque and slope of the time-torque straight line before the plateau value), the transmission ratio will have an effect by modifying the energy recovery contribution brought about by the regenerative braking logic.
In practice, when a regenerative braking logic of the type presented in Section 2.1 is implemented in the vehicle control unit, the logic must be calibrated according to the vehicle parameters, in particular according to the performance of the battery pack (maximum recharge power), and therefore also according to the transmission ratios. For this reason, it does not make much sense to consider two different vehicles equipped with a regenerative braking logic characterized by the same parameters.
What has been shown in this section was also repeated for the US06 driving cycle, for which similar results and considerations were obtained.
Compact Car and N1 Vehicle in Absence of Regenerative Braking Recovery
Therefore, by varying the transmission ratios, the hypothesis according to which the vehicles are equipped with the same regenerative braking is no longer valid. The logic is in fact the same, as are the constant parameters of the logic itself, but the energy recovery is different. For this reason, to avoid dependence on this aspect, the previously presented simulations are repeated, but with the relative vehicles without regenerative braking.
Consumption Analysis
Figure 7 shows the results of the simulation sets described above, in particular, in terms of average energy consumption over the WLTC cycle, class 3b, as a function of vehicle weight, for vehicle N1 and for the compact car without the regenerative braking logic, with and without inertia contributions. From Figure 7, it can be seen that the inertia can be considered negligible for the study in question.
Figure 7.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight, with vehicles without regenerative braking, for the N1 category vehicle model ("N1") and for the same model, but with zero inertia contributions ("N1-NO inertias"); for the compact car model ("CompactCar") and for the same model, but with zero inertia contributions ("CompactCar-NO inertias").
Figure 8 shows the results in terms of energy consumption on the WLTC cycle (class 3b) as a function of the vehicle weight, for the different sets of simulations in which, for the compact car, the parameters relating to each of the aspects considered significant, and then the parameters of all aspects simultaneously, are imposed equal to the respective values of the N1 vehicle. The graph in Figure 8 also shows the results relating to the simulations carried out with the original compact car and with the N1 category vehicle.
Thanks to the analysis of the graphs shown in Figure 8, it is possible to draw the same considerations made for vehicles equipped with the regenerative braking logic (Figure 2) as regards the dependence on the four aspects considered (battery pack, aerodynamics, transmission efficiency, and rolling resistance). In the absence of regenerative braking, it is also possible to see that the correct setting of the parameters relating to the four aspects mentioned above is sufficient to correctly define the vehicle model and the related consumption as a function of the vehicle mass on the WLTC cycle (class 3b). In fact, in the graph of Figure 8, the curve relating to the compact car with the battery pack, aerodynamics, transmission efficiency, and rolling resistance parameters equal to those of the N1 vehicle matches well with the curve relating to the N1 class vehicle (both vehicles without regenerative braking). In fact, the inertia, as already defined, is negligible, and the transmission ratios (and the wheel radii) do not modify the contribution made by the regenerative braking logic, which is absent. The transmission ratios, in this case, affect only the contribution made by the inertia (see Figures 9 and 10).
Figure 8.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight, for the following vehicle models without regenerative braking: vehicle of N1 category ("N1"), compact car ("CompactCar"); compact car with the N1 battery pack on board ("CompactCar-N1 battery pack"); compact car with the aerodynamic coefficients of the N1 vehicle ("CompactCar-N1 aerodynamics"); compact car with the transmission efficiency equal to that of the N1 vehicle ("CompactCar-N1 transmission efficiency"); compact car with the rolling resistance coefficient equal to that of the N1 vehicle ("CompactCar-N1 rolling resistance"); and, finally, compact car with all the parameters previously mentioned equal to those of vehicle N1 ("CompactCar-N1 values").
Figure 9.
Average energy consumption of the WLTC cycle (class 3b), according to the vehicle weight, for the following vehicle models without regenerative braking: compact car without inertia ("CompactCar-NO inertias"); compact car with transmission ratios (and wheel radii) equal to those of the N1 vehicle and without inertia ("CompactCar-NO inertias-N1 traction ratios").
Figure 10.
Average energy consumption of the WLTC cycle (class 3b), as a function of vehicle weight, for the following vehicle models without regenerative braking: vehicle of category N1 ("N1"), compact car ("CompactCar"); compact car with the battery pack of the N1 vehicle on board, with the aerodynamic coefficients, the efficiency of the transmission, and with the rolling resistance coefficient of the N1 vehicle ("CompactCar-N1 values"); compact car with the same moments of inertia as vehicle N1 ("CompactCar-N1 inertias"); compact car with the battery pack, aerodynamics, transmission efficiency, rolling resistance, and moments of inertia of the N1 vehicle ("CompactCar-N1 values (also inertia)"); compact car with the transmission ratios and wheel radii of the N1 vehicle ("CompactCar-N1 traction ratios"); and, finally, compact car with all the previously mentioned parameters equal to those of the N1 vehicle, i.e., the parameters relating to the battery pack, aerodynamics, transmission efficiency, rolling resistance, moments of inertia, transmission ratios, and wheel radii ("CompactCar-N1 values (all)").
What has been shown in this section was also repeated for the US06 driving cycle, for which similar results and considerations were obtained.
Polynomial Interpolation and ERV Index
In this section, we will look at the polynomial functions that best represent the curves shown previously in Section 3.2.1. In particular, the first-, second-, and third-degree polynomial functions that approximate the chosen curve were analyzed, whose parameters were obtained by means of the "polyfit" function of MATLAB®.
Below, in Equation (1), the function used for polynomial interpolation is shown:

y = c3·x³ + c2·x² + c1·x + c0 (1)

where y is the energy consumption expressed in kWh/100 km, relating to the curve chosen for the analysis; x corresponds to the vehicle weight (in 100 kg); and c3, c2, c1, and c0 are the coefficients of the polynomial, identified by the "polyfit" MATLAB function. In particular, for the first-degree polynomial, c3 and c2 are equal to zero; for the second-degree polynomial, in general, only c3 is equal to zero; and for the third-degree polynomial, in general, all four coefficients are different from zero. Figure 11 shows the curves, relating, respectively, to the N1 vehicle and the compact car, both without regenerative braking, obtained by means of the polynomial functions.
Figure 11. Average energy consumption on the WLTC cycle (class 3b), as a function of the vehicle weight, for vehicles without regenerative braking. Curves relating to the results obtained by means of simulations ("Results"), and curves obtained thanks to the polynomial approximation of the first ("n = 1"), second ("n = 2"), and third degree ("n = 3"), for (a) the N1 vehicle; (b) the compact car.
As can be seen from Figure 11, the third-degree and second-degree curves are those that most precisely approximate the curve of the results of the simulations carried out with the TEST model; however, even the first-degree polynomial can be useful for a more approximate analysis. The same result was also found for all the other simulations carried out in the absence of regenerative recovery (with changed parameters). Table 2 shows the values of the coefficients of the polynomials which approximate the consumption-weight curves of the vehicle, with the coefficients obtained by means of the "polyfit" MATLAB function. Only the coefficients relating to the first- and second-degree polynomials have been reported, since the third-degree polynomials have a negligible c3 coefficient, several orders of magnitude lower than the other three coefficients, so that they reduce almost to second-degree polynomials. In fact, as can be seen from Figure 11, the second- and third-degree polynomials coincide quite well, and the curve under examination can therefore be approximated with sufficient precision simply by the second-degree polynomial. The same situation is found for the polynomials relating to the curves of all the simulations previously carried out, with vehicles without regenerative braking.
As already mentioned, in the literature, reference is often made, when calculating the energy savings associated with vehicle lightweighting, to the FRV index. Considering electric vehicles, it is better to calculate an equivalent index, the ERV index, which corresponds to the c1 coefficient of the first-degree polynomial, shown in Table 2 for each vehicle model simulated. Table 2. Coefficients of the polynomial functions that approximate the consumption curves as a function of the vehicle weight, obtained by means of simulations on the various vehicle models without regenerative braking. The items under the label "VEHICLE MODEL" refer to the polynomials that approximate the curves shown in Figures 7, 8 and 10 (in this table, the name associated with each vehicle model is the same shown in the legend of the graphs of these figures, with the addition of "CompactCar-N1 inertias and traction ratios", which refers to the compact car with the inertia values, transmission ratios, and wheel radii of the N1 vehicle).
Figure 12 shows the ERV index, for the WLTC cycle, calculated as in Equation (2), as a function of the vehicle weight, for vehicles without regenerative braking.
ERV_i = (EC_i − EC_{i−1}) / ΔM · 100,  (2)

where ERV_i, expressed in kWh/(100 km·100 kg), is the ERV index associated with the vehicle weight of the i-th simulation; EC_i is the energy consumption on the WLTC cycle (class 3b), expressed in kWh/100 km, of the i-th simulation; EC_{i−1} is the average energy consumption of the WLTC cycle (class 3b), expressed in kWh/100 km, of the simulation with a vehicle weight immediately lower than that of the i-th simulation; and ΔM (in kg) is the mass variation of the vehicle between the i-th simulation mass and the immediately lower mass being simulated.

Figure 12. ERV index, calculated for the WLTC cycle (class 3b) between each simulation performed at a given vehicle weight and the simulation with the vehicle weight immediately lower than that under examination (considering the set of simulations performed), as a function of the vehicle weight, for the following vehicle models without regenerative braking: N1 category vehicle ("N1"); N1 vehicle with zero inertia ("N1-NO Inertias"); compact car ("CompactCar"); compact car with the N1 battery pack on board ("CompactCar-N1 battery pack"); compact car with the aerodynamic coefficients of the N1 vehicle ("CompactCar-N1 aerodynamics"); compact car with transmission efficiency equal to that of the N1 vehicle ("CompactCar-N1 transmission efficiency"); compact car with the N1 vehicle rolling resistance coefficient ("CompactCar-N1 rolling resistance"); and, finally, compact car with moments of inertia equal to those of the N1 vehicle ("CompactCar-N1 inertia").

From Figure 12, it can be seen that the inertias have little influence on the variation of consumption and, consequently, are negligible for the study under examination, i.e., the evaluation of the results of the lightweighting of a vehicle. Furthermore, as could already be seen from the slight concavity of the curves presented in Figures 7, 8 and 10, the ERV index (the slope of the tangent line to the consumption curve) increases as weight increases. This means that for greater vehicle weights, we can benefit more from lightweighting for the same weight reduction. In addition, it can be observed which parameters (if set equal to those of a higher-class vehicle, class N1 in the case in question) raise the ERV index and which lower it.
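A minimal sketch of this finite-difference computation (again with placeholder data of ours):

    % ERV between consecutive simulations, as in Equation (2).
    mass_kg = 1000:100:2000;             % simulated vehicle masses, in kg
    EC      = [13.1 13.8 14.5 15.3 16.1 17.0 17.9 18.9 19.9 21.0 22.1];  % kWh/100 km

    dM  = diff(mass_kg);                 % mass variation between simulations, kg
    ERV = diff(EC) ./ dM * 100;          % kWh/(100 km * 100 kg)
    % ERV(i) is associated with the vehicle weight of the (i+1)-th simulation.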
Now, we carry out an energy saving analysis following a hypothetical lightweighting of a specific vehicle. The results of lightweighting are analyzed considering the ERV index, obtained as is typically done in the literature [46], i.e., as the coefficient c1 of the first-degree polynomial (see Table 2). Furthermore, the energy consumption of the real vehicle under examination, on a reference driving cycle, for example the WLTC (class 3b), is assumed to be known. The objective is to ascertain how an incorrect setting of the model (following the implementation of incorrect parameters) can influence the evaluation of the results of lightweighting. A considerable reduction of 300 kg is therefore considered. The compact car is taken as the "real" vehicle, assuming that its vehicle model perfectly represents a corresponding real vehicle. Finally, the ERV indices obtained by means of the c1 coefficients of the first-degree polynomials, shown in Table 2, for the various vehicle models simulated, are considered. Figure 13 shows the results of the 300 kg reduction, in terms of average energy consumption of the WLTC cycle (class 3b), calculated as defined above, starting from different vehicle weights.

Figure 13. Average energy consumption of the WLTC cycle, as a function of vehicle weight, for the CompactCar without regenerative braking, obtained through simulations with the TEST model ("Real consumption") and considering the ERV obtained for the following vehicle models without regenerative braking: CompactCar ("Calculated consumption"); CompactCar with the N1 battery pack on board ("Calculated consumption (ERV with N1 battery pack)"); CompactCar with the aerodynamic coefficients of vehicle N1 ("Calculated consumption (ERV with the N1 aerodynamics)"); CompactCar with transmission efficiency equal to that of the N1 vehicle ("Calculated consumption (ERV with N1 efficiencies)"); CompactCar with the rolling resistance coefficient of vehicle N1 ("Calculated consumption (ERV with N1 rolling resistance)"); and, finally, CompactCar with the moments of inertia of vehicle N1 ("Calculated consumption (ERV with N1 inertias)").
From Figure 13, it is possible to see that the incorrect definition of the model does not cause excessive damage as regards the evaluation of the results of a lightweighting of 300 kg, provided, however, that the real consumption of the vehicle on the cycle in question is known. Furthermore, the aspects that lead to fewer errors are the inertias, the battery pack, and the efficiency of the transmission. The characteristics of the vehicle and its model that most alter the results, if the study is carried out by adopting the approach described above, are instead the aerodynamics and, above all, the rolling resistance. Therefore, adopting this approach reveals a contrasting situation with respect to evaluating lightweighting by considering the consumption curve obtained by means of simulations with the vehicle model. In the latter case, as observed in Figure 8, the incorrect evaluation of the rolling resistance coefficient leads to only a marginal error in the evaluation of consumption compared to an incorrect definition of the aerodynamics. In fact, the increase in the aerodynamic resistance coefficient, as can be seen from Figure 8, raises the consumption curve; however, it varies its slope less than a modified rolling resistance coefficient does. Therefore, if we have the consumption available for the reference cycle on which we want to evaluate the results of a hypothetical lightweighting, it is convenient to use the constant ERV index approach to draw conclusions close to reality. However, as an alternative, it is also possible to use the known consumptions to calibrate the vehicle model and thus obtain, through simulations, a realistic consumption curve as a function of the weight of the vehicle.
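The constant-ERV evaluation used above amounts to a linear extrapolation from the known real consumption; a minimal sketch (placeholder numbers of ours):

    % Estimated consumption after lightweighting, using a constant ERV index.
    EC_real   = 18.0;   % known real consumption at the baseline mass, kWh/100 km
    ERV_const = 0.75;   % c1 of the first-degree fit (cf. Table 2), kWh/(100 km*100 kg)
    dM_kg     = 300;    % considered mass reduction, kg

    EC_light  = EC_real - ERV_const * dM_kg / 100;   % estimated kWh/100 km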
The work relating to polynomial interpolation and ERV index was also repeated for the US06 driving cycle; this led to results and considerations similar to those obtained for the WLTC cycle.
Comparison between Different Driving Cycles
In this section, the lightweighting results evaluated on different standardized driving cycles will be compared [61], for the N1 vehicle and for the compact car, without regenerative braking. Figure 14 shows the average energy consumption obtained for different sets of simulations on different regulated cycles. In particular, the average energy consumption represented by each point of the graph corresponds to the average consumption on the cycle analyzed, obtained by carrying out a simulation with the TEST model on the cycle in question, with a pre-set weight (equal to that indicated on the abscissa axis of the graph).
From Figure 14, it is possible to observe that, as the driving cycle considered varies, the average consumption varies, as do the slope of the curves obtained and, therefore, the ERV index and the results of lightweighting. It can also be observed that whether one driving cycle is more energy intensive than another is not absolute but depends on the weight of the vehicle; for example, for the compact car, the US06 cycle is more intensive than the Artemis Motorway cycle for a vehicle mass greater than about 1750 kg, and less intensive below this weight.
From Figure 14a, it can be seen that the N1 vehicle without regenerative braking has a very similar energy consumption, as the weight varies, for the FTP75 and JC08 driving cycles, while for the compact car (see Figure 14b), the difference in consumption between the two cycles becomes more marked. Therefore, in addition to being dependent on the cycle considered, the difference between one cycle and another also depends on the vehicle in question.
Using the "polyfit" function of MATLAB, the polynomials of the first-, second-, and third-degree, which approximate the curves presented in Figure 14, were found. Moreover, this time, the functions which best approximate the curves are the polynomials of second and third degree, which are almost equivalent since the coefficient c 3 of the third-degree polynomial is approximately zero, while the straight line can still be significant for evaluating the results of lightweighting, in particular through its coefficient c 1 which represents the ERV index commonly used in the literature. Tables 3 and 4 show the coefficients of the first-and second-degree polynomials, obtained on the various standardized driving cycles, respectively, for the N1 category vehicle and for the compact car of M category ("CompactCar").
Discussion
In this paper the effects of vehicle lightweighting were analyzed, in particular by monitoring which parameters of the vehicle model have the greatest influence on the results and, therefore, which must be estimated more precisely for a correct study.
In particular, the inertia contribution can be considered negligible. Furthermore, considering the consumption curve (average energy consumption vs. vehicle mass), obtained by means of simulations with a model consolidated in the literature (the TEST model [48]), it was found that the aspect that least affects consumption is the battery pack, followed by the efficiency of the transmission, while the parameter that has the greatest influence is the aerodynamics, followed by the rolling resistance coefficient. The contribution given by an increase in aerodynamic drag significantly raises vehicle consumption, but it does not involve a particular variation of the slope of the consumption curve. Moreover, by increasing all previously mentioned contributions, i.e., setting them to values closer to those of a vehicle of a higher class, consumption obviously increases.
Finally, for the realization of a fairly precise consumption curve, the correct setting of the battery pack parameters, aerodynamic coefficients, transmission efficiency, and rolling resistance coefficient is sufficient to correctly define the vehicle model useful for a lightweighting study and the related energy consumption as a function of the vehicle mass. In fact, the inertia contribution is negligible and, in the absence of a regenerative braking logic (or with an implementation of regenerative recovery which does not depend on the transmission ratio), the transmission ratios affect only the negligible inertia contribution.
Then, the polynomials that best approximate the consumption curves identified through simulations were investigated. The third-degree and second-degree curves are those that most precisely approximate the curve of the results of the simulations carried out with the TEST model; however, even the first-degree polynomial can be useful for a more approximate analysis. In particular, for an accurate approximation, it is sufficient to consider the second-degree polynomial. In fact, the third-degree polynomials have a negligible c3 coefficient (the coefficient that multiplies the cube of the mass), several orders of magnitude lower than the other three coefficients.
In the literature, reference is often made to the FRV index, expressed in L/(100 km·100 kg), when calculating the energy savings associated with vehicle lightweighting. Considering electric vehicles, it is better to calculate an equivalent index, the ERV index, expressed in kWh/(100 km·100 kg), which approximately corresponds to the c1 coefficient (the coefficient that multiplies the mass) of the first-degree polynomial.
The real ERV increases as vehicle weight increases for the same vehicle model. This means that for greater vehicle weights, we can benefit more from lightweighting for the same weight reduction.
The inertia contribution can also be considered negligible for the calculation of the ERV index. Furthermore, considering the ERV index for evaluating the results of a hypothetical vehicle lightweighting, knowing the real consumption of the baseline vehicle, the aspects that lead to fewer errors are the incorrect definition of the battery pack parameters and of the efficiency of the transmission. The characteristics of the vehicle and its model that most alter the results, if the study is carried out by adopting the ERV approach, are instead the aerodynamics and, above all, the rolling resistance. Therefore, adopting this approach reveals a contrasting situation with respect to evaluating the lightweighting by considering the consumption curve. In the latter case, the incorrect evaluation of the rolling resistance coefficient leads to only a marginal error in the evaluation of consumption compared to an incorrect definition of the aerodynamics. In fact, increasing the aerodynamic resistance coefficient raises the consumption curve, but changes its slope less than a modified value of the rolling resistance coefficient does. However, assuming that the real consumption of the baseline vehicle is known, it is possible to calibrate the coefficients of the model, and in particular the aerodynamic coefficients, in such a way as to obtain a more realistic consumption curve.
Therefore, when the vehicle consumption on the reference cycle on which we want to evaluate the results of a hypothetical weight reduction is available, and we want to avoid calibrating the aerodynamic coefficients of the vehicle model, we should use the constant ERV index approach to draw conclusions close to reality. Figure 15 summarizes the above, presenting the ERV indices obtained on the WLTC (class 3b) driving cycle, for the different vehicle models.
Figure 15. ERV index, obtained from the "polyfit" MATLAB function, on the WLTC (class 3b) driving cycle, for the N1 vehicle, for the compact car, and for the compact car with the following parameters, aspects, and components of the N1 vehicle: aerodynamics; moments of inertia; battery pack; transmission efficiency; and rolling resistance.
Finally, an initial study was also carried out on the variability of the results of the vehicle lightweighting according to the driving cycle adopted as a reference to evaluate this result. This study will eventually be further explored in a future paper. Figure 16 summarizes the study on the different driving cycles, presenting the ERV indices obtained for the different cycles, for the N1 category vehicle and for the compact car.
As the driving cycle considered varies, the average consumption varies, as do the slope of the curves obtained and, therefore, the ERV index and the results of lightweighting. It has also been observed that whether one driving cycle is more energy intensive than another is not absolute but depends on the weight of the vehicle; for example, for the compact car studied here (without regenerative braking), the US06 cycle is more intensive than the Artemis Motorway cycle for a vehicle mass greater than about 1750 kg, and less intensive below this weight. Furthermore, the N1 vehicle considered, without regenerative braking, has a very similar energy consumption, as the weight varies, for the FTP75 and JC08 driving cycles, while for the compact car, the difference in consumption between the two cycles becomes more marked. Therefore, in addition to being dependent on the cycle considered, the difference between one cycle and another also depends on the vehicle in question.
By means of the information obtained in the work presented in this paper, it is possible to obtain guidelines for the preparation of an effective vehicle model for the evaluation of lightweighting.
In future work, what is presented in this paper will be taken into account, in particular the influence of the different parameters of the vehicle model on the results of lightweighting, to build a database of models of electric vehicles of different classes, which will be used to calculate the relative ERV indices. The material obtained with this work will then be used for the evaluation of the lightweighting obtained by means of various technologies, currently in the research phase at the University of Brescia.

Figure 16. ERV index, obtained from the "polyfit" MATLAB function, for different standard driving cycles, for the N1 vehicle and for the compact car.
Conclusions
In the literature, reference is often made to the FRV index, expressed in L/(100 km · 100 kg), when calculating the energy savings associated with vehicle lightweighting. Considering electric vehicles, it is better to calculate an equivalent index, the ERV index, expressed in kWh/(100 km · 100 kg). The real ERV, for the same vehicle model, increases as vehicle weight increases. This means that for greater vehicle weights, we can benefit more from lightweighting, for the same weight reduction. However, considering a constant ERV as the mass varies is, in any case, a good approximation for the same vehicle model.
In this work, it has been found that, for a correct calculation of the ERV index, it is important to establish the correct definition of the rolling resistance coefficient, followed by the aerodynamics, and then the battery pack parameters and the transmission efficiency. On the other hand, the inertia contribution can be considered negligible.
In general, for the realization of a fairly precise consumption curve, the correct setting of the battery pack parameters, aerodynamic coefficients, transmission efficiency, and rolling resistance coefficient is sufficient to correctly define the vehicle model useful for a lightweighting study and the related energy consumption as a function of the vehicle mass.
This work does not consist in the creation of databases for the two vehicles, but rather in the identification, by means of an analytical method, of the parameters that most influence the results of lightweighting.
This work thus lays the foundations and guidelines for the identification of a vehicle model that reflects reality, as regards the evaluation of the results of lightweighting.
To the best of our knowledge, there is no similar study in the literature. Some works calculate the FRV [5,6,34,43-45] and ERV [45,46] indices for different vehicles, in particular within various vehicle categories (e.g., A/B, C, and D classes). The work we propose is instead the first that observes, in more detail, which parameters influence the variability of the results of vehicle lightweighting. This aspect is precisely the novelty of the proposed work.
A future project will consist precisely in the creation of a database of vehicles of different classes; if possible, the obtained databases will be validated experimentally, which would also validate the considerations obtained in the work proposed in this paper.
∆M
Vehicle mass variation between the i-th simulation and the simulation with the immediately lower mass.
"year": 2023,
"sha1": "3f99367af9aa5cfb0342ccd63beaa1b2f52f4775",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/16/13/5157/pdf?version=1688548915",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "cc3e14b355df8a6bfa772a9dbfcdb528c965942a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Normal forms of vector fields on Poisson manifolds
We study formal and analytic normal forms of radial and Hamiltonian vector fields on Poisson manifolds near a singular point.
Introduction
This paper is devoted to the study of normal forms à la Poincaré-Birkhoff for analytic or formal vector fields on Poisson manifolds. We will be interested in two kinds of vector fields, namely Hamiltonian vector fields, and "radial" vector fields, i.e. those vector fields X such that [X, Π] = L_X Π = −Π, where Π denotes the Poisson structure, and the bracket is the Schouten bracket. Our motivation for studying radial vector fields comes from Jacobi structures [7], while of course the main motivation for studying Hamiltonian vector fields comes from Hamiltonian dynamics. We will assume that our vector field X vanishes at a point, X(0) = 0, and that the linear part of Π or of its transverse structure at 0 corresponds to a semisimple Lie algebra. In this case, it is well known [13,4] that Π admits a formal or analytic linearization in a neighborhood of 0. We are interested in a simultaneous linearization or normalization of Π and X.
In Section 2, we study the problem of simultaneous linearization of couples (Π, X), where Π is a Poisson structure and X is a vector field such that L_X Π = −Π. Such couples are called homogeneous Poisson structures in the sense of Dazord, Lichnerowicz and Marle [7], and they are closely related to Jacobi manifolds. More precisely, a 1-codimensional submanifold of a homogeneous Poisson manifold (M, Π, X) which is transverse to the vector field X has an induced Jacobi structure, and all Jacobi manifolds can be obtained in this way. On the other hand, a 1-codimensional submanifold of a Jacobi manifold (N, Λ, E) transverse to the structural vector field E has an induced homogeneous Poisson structure, and all homogeneous Poisson manifolds can be obtained in this way (see [7]). Our first result is the following (see Theorem 2.4):

Theorem A. Let (Π, X) be a formal homogeneous Poisson structure on K^n (where K is C or R) such that the linear part Π_1 of Π corresponds to a semisimple Lie algebra g. Suppose that its linear part (Π_1, X^(1)) is semisimple nonresonant. Then there exists a formal diffeomorphism which sends (Π, X) to (Π_1, X^(1)).
The semisimple nonresonant condition in the above theorem is a generic position condition on X^(1): the set of X^(1) which do not satisfy this condition is of codimension 1; moreover, if X^(1) − I is diagonalizable and small enough, where I = Σ_i x_i ∂/∂x_i denotes the standard radial (Euler) vector field, then the semisimple nonresonance condition is automatically satisfied.
For analytic linearization, due to the possible presence of small divisors, we need a Diophantine-type condition. Here we choose to work with a modified version of Bruno's ω-condition [2,3], adapted to our case. See Definition 2.5 for the precise definition of our ω-condition. The set of (Π_1, X^(1)) which satisfy this ω-condition is of full measure. We have (see Theorem 2.7):

Theorem B. Let (Π, X) be an analytic homogeneous Poisson structure on K^n (where K is C or R) such that the linear part Π_1 of Π corresponds to a semisimple Lie algebra. Suppose moreover that its linear part (Π_1, X^(1)) is semisimple nonresonant and satisfies the ω-condition. Then there exists a local analytic diffeomorphism which sends (Π, X) to (Π_1, X^(1)).
In Section 3, we study local normal forms of Hamiltonian systems on Poisson manifolds. According to Weinstein's splitting theorem [13], our local Poisson manifold ((K^n, 0), Π), where K = R or C, is a direct product ((K^{2l}, 0), Π_symp) × ((K^m, 0), Π_trans) of two Poisson manifolds, where the Poisson structure Π_symp is nondegenerate (symplectic), and the Poisson structure Π_trans (the transverse structure of Π at 0) vanishes at 0. If Π_trans is trivial, i.e. the Poisson structure Π is regular near 0, then the problem of local normal forms of Hamiltonian vector fields near 0 is reduced to the usual problem of normal forms of Hamiltonian vector fields (with parameters) on a symplectic manifold. Here we are interested in the case when Π_trans is not trivial. We will restrict our attention to the case when the linear part of Π_trans corresponds to a semisimple Lie algebra g. According to linearization theorems of Weinstein [13] and Conn [4], we may identify ((K^m, 0), Π_trans) with a neighborhood of 0 of the dual g* of g equipped with the associated linear (Lie-Poisson) structure. In other words, there is a local system of coordinates (x_1, y_1, ..., x_l, y_l, z_1, ..., z_m) on K^{2l+m} such that

Π_symp = Σ_{i=1}^{l} ∂/∂x_i ∧ ∂/∂y_i and Π_trans = (1/2) Σ_{i,j,k} c_{ij}^k z_k ∂/∂z_i ∧ ∂/∂z_j,

with c_{ij}^k being the structural constants of g, and Π = Π_symp + Π_trans. Such a coordinate system will be called a canonical coordinate system of Π near 0. Let H be a formal or analytic function on ((K^n, 0), Π). We will assume that the Hamiltonian vector field X_H of H vanishes at 0. Note that the differential of H does not necessarily vanish at 0 (for example, if l = 0 then we always have X_H(0) = 0 for any H). We may assume that H(0) = 0.
We have the following generalization of the Birkhoff normal form [1] (see Theorem 3.1):

Theorem C. With the above notations and assumptions, there is a formal canonical coordinate system (x̂_i, ŷ_i, ẑ_j) in which H satisfies the following equation:

{H, H_ss} = 0,

where H_ss is a (nonhomogeneous quadratic) function such that its Hamiltonian vector field X_{H_ss} is linear and is the semisimple part of the linear part X^(1)_H of X_H (in this coordinate system). In particular, the semisimple part of the linear part of X_H is a Hamiltonian vector field.
Note that the normalizing canonical coordinates given in the above theorem are only formal in general. The problem of the existence of a local analytic normalization for a Hamiltonian vector field (even in the symplectic case) is much more delicate than for a general vector field, due to "auto-resonances" (e.g., if λ is an eigenvalue of a Hamiltonian vector field, then −λ also is). However, there is one particular situation where one knows that a local analytic normalization always exists, namely when the Hamiltonian vector field is analytically integrable. See [16] for the case of integrable Hamiltonian vector fields on symplectic manifolds. Here we can generalize the main result of [16] to our situation (see Theorem 3.8):

Theorem D. Assume that K = C, and that the Hamiltonian function H in Theorem C is locally analytic and analytically integrable in the generalized Liouville sense. Then the normalizing canonical coordinate system (x̂_i, ŷ_i, ẑ_j) can be chosen locally analytic.
We conjecture that the above theorem remains true in the real case (K = R). Recall (see, e.g., [15] and references therein) that a Hamiltonian vector field X_H on a Poisson manifold (M, Π) of dimension n is called integrable in the generalized Liouville sense if there are nonnegative integers p, q with p + q = n, p pairwise commuting Hamiltonian functions H_1, ..., H_p ({H_i, H_j} = 0 ∀ i, j) with H_1 = H, and q first integrals F_1, ..., F_q, such that X_{H_i}(F_j) = 0 ∀ i, j, and dF_1 ∧ ... ∧ dF_q ≠ 0 and X_{H_1} ∧ ... ∧ X_{H_p} ≠ 0 almost everywhere. (The Liouville case corresponds to p = q = n/2 and F_i = H_i.) Analytic integrability means that all Hamiltonian functions and vector fields in question are analytic.
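As a simple illustration of this definition (our example, not taken from [15]): on the dual of g = so(3) with its Lie-Poisson structure, {z_1, z_2} = z_3, {z_2, z_3} = z_1, {z_3, z_1} = z_2, the Hamiltonian H = z_3 is integrable in the generalized Liouville sense with p = 1, q = 2:
\[
X_H = z_2\frac{\partial}{\partial z_1} - z_1\frac{\partial}{\partial z_2},\qquad
F_1 = z_3,\qquad F_2 = z_1^2+z_2^2+z_3^2,
\]
since \(X_H(F_1)=X_H(F_2)=0\), \(dF_1\wedge dF_2\neq 0\) and \(X_H\neq 0\) away from the \(z_3\)-axis, and \(p+q=3=\dim\mathfrak{so}(3)^*\).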
Homogeneous Poisson structures
Following [7], we will use the following terminology: a homogeneous Poisson structure on a manifold M is a couple (Π, X), where Π is a Poisson structure and X a vector field which satisfies the relation

(2.1) [X, Π] = −Π,

where the bracket is the Schouten bracket.
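As a quick illustration (our own check, not taken from [7]): any linear Poisson structure, together with the Euler vector field, satisfies (2.1). Indeed, with
\[
\Pi_1 = \tfrac{1}{2}\sum_{i,j,k} c^k_{ij}\, x_k\, \frac{\partial}{\partial x_i}\wedge\frac{\partial}{\partial x_j},
\qquad
I = \sum_i x_i \frac{\partial}{\partial x_i},
\]
and since \(\mathcal{L}_I x_k = x_k\) while \(\mathcal{L}_I \frac{\partial}{\partial x_i} = -\frac{\partial}{\partial x_i}\), we get
\[
[I,\Pi_1] = \mathcal{L}_I \Pi_1 = (1-1-1)\,\Pi_1 = -\Pi_1 .
\]
Hence every linear Poisson structure is homogeneous, with X = I.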
Remark 2.1. Poisson structures which satisfy the above condition are also called exact, in the sense that the Poisson tensor is a coboundary in the associated Lichnerowicz complex which defines Poisson cohomology. They have nothing to do with another kind of homogeneous spaces, namely those which admit a transitive group action.
An analog of Weinstein's splitting theorem for homogeneous Poisson structures is given in [7], and it reduces the study of normal forms of homogeneous Poisson structures to the case when both Π and X vanish at a point. So we will assume that (Π, X) is a homogeneous Poisson structure defined in a neighborhood of 0 in K n , where K = R or C, such that (2.2) Π(0) = 0 and X(0) = 0 .
We are interested in the linearization of these structures, i.e. simultaneous linearization of Π and X. Denote by Π 1 and X (1) the linear parts of Π and X respectively. Then the terms of degree 1 of Equation (2.1) imply that (Π 1 , X (1) ) is again a homogeneous Poisson structure.
In this paper, we will assume that the linear Poisson structure Π_1 corresponds to a semisimple Lie algebra, which we denote by g. Then, according to linearization results of Weinstein [13] (for the formal case) and Conn [4] (for the analytic case), the Poisson structure Π can be linearized. In other words, there is a local coordinate system (x_1, ..., x_n) on (K^n, 0), in which

Π = Π_1 = (1/2) Σ_{i,j,k} c_{ij}^k x_k ∂/∂x_i ∧ ∂/∂x_j,

where c_{ij}^k are the structural constants of g. In order to linearize (Π, X) = (Π_1, X), it remains to linearize X by local (formal or analytic) diffeomorphisms which preserve the linear Poisson structure Π_1.
2.1. Formal linearization.
First consider the complex case (K = C). Let X be a formal vector field on C^n such that (Π_1, X) forms a homogeneous Poisson structure on C^n. Denote by

I = Σ_{i=1}^n x_i ∂/∂x_i

the Euler vector field written in the coordinates (x_1, ..., x_n). Since this vector field satisfies the relation [I, Π_1] = −Π_1, we can write X as

X = I + Y,

where Y is a Poisson vector field with respect to Π_1, i.e., [Y, Π_1] = 0. It is well-known that, since the complex Lie algebra g is semisimple by assumption, the first formal Poisson cohomology space of Π_1 is trivial (see, e.g., [5]), i.e. any formal Poisson vector field is Hamiltonian. In particular, we have

Y = X_h

for some formal function h. Writing the Taylor expansion h = h_1 + h_2 + h_3 + ···, where each h_r is a homogeneous polynomial of degree r, we have

X = I + X_{h_1} + X_{h_2} + X_{h_3} + ··· .

Denote by X^(1) = I + X_{h_1} the linear part of X. In order to linearize X (while preserving the linearity of Π = Π_1), we want to kill all the terms X_{h_r} with r ≥ 2, using a sequence of changes of coordinates defined by flows of Hamiltonian vector fields with respect to Π_1. Working degree by degree, we want to find for each r a homogeneous polynomial g_r of degree r such that

(2.9) [X^(1), X_{g_r}] = X_{h_r}.
Note that [X_{h_1}, X_{g_r}] = X_{{h_1,g_r}}, and [I, X_{g_r}] = (r − 1)X_{g_r} because X_{g_r} is homogeneous of degree r. Hence Relation (2.9) will be satisfied if g_r satisfies the following relation:

(2.10) (r − 1)g_r + {h_1, g_r} = h_r.

Remark that h_1 can be viewed as an element of g, and h_r, g_r may be identified with elements of the symmetric power S^r(g) of g. Under this identification, {h_1, g_r} is nothing but the result of the adjoint action of h_1 ∈ g on g_r ∈ S^r(g).
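Explicitly (our one-line check), for g_r homogeneous of degree r:
\[
[X^{(1)},X_{g_r}] = [I,X_{g_r}]+[X_{h_1},X_{g_r}] = (r-1)X_{g_r}+X_{\{h_1,g_r\}} = X_{(r-1)g_r+\{h_1,g_r\}},
\]
which equals \(X_{h_r}\) whenever (2.10) holds.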
We will suppose that h_1 is a semisimple element of g, and denote by h a Cartan subalgebra of g which contains h_1. According to the root decomposition of g with respect to h, we can choose a basis (x_1, ..., x_n) of g, and elements α_1, ..., α_n of h*, such that

(2.11) [y, x_i] = ⟨α_i, y⟩ x_i for all y ∈ h, i = 1, ..., n.

Each α_i is either 0 (in which case x_i ∈ h) or a root of g (in which case x_i belongs to the root subspace g_{α_i} of g).
We define for each r ≥ 2 the linear operator

(2.12) Θ_r : S^r(g) → S^r(g), Θ_r(g_r) = (r − 1)g_r + {h_1, g_r}.

Each monomial Π_i x_i^{λ_i} of degree |λ| = Σ_i λ_i = r is an eigenvector of this linear operator, with eigenvalue r − 1 + ⟨Σ_{i=1}^n λ_i α_i, h_1⟩.

Definition 2.2. With the above notations, we will say that (Π_1, X^(1)) is semisimple nonresonant if h_1 is a semisimple element of g and the eigenvalues of Θ_r do not vanish, i.e., for any r ≥ 2 and any (λ_1, ..., λ_n) ∈ Z_+^n such that Σ_i λ_i = r we have r − 1 + ⟨Σ_{i=1}^n λ_i α_i, h_1⟩ ≠ 0.

Remark 2.3. It is easy to see that the above nonresonance condition is a generic position condition, and the subset of elements which do not satisfy this condition is of codimension 1. In fact, if the Cartan subalgebra h is fixed, then the set of elements h_1 ∈ h such that (Π_1, I + X_{h_1}) is resonant is a countable union of affine hyperplanes in h which do not contain the origin, and there is a neighborhood of 0 in h such that if h_1 belongs to this neighborhood then (Π_1, X^(1)) is automatically nonresonant.
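To make Definition 2.2 concrete, consider the following small example, worked out by us for g = sl(2, C) with the standard basis (e, f, h), [h, e] = 2e, [h, f] = −2f, [e, f] = h. Taking \((x_1,x_2,x_3)=(e,f,h)\) gives \(\alpha_1(h)=2\), \(\alpha_2(h)=-2\), \(\alpha_3=0\). For \(h_1 = c\,h\) (\(c\in\mathbb{C}\)), the eigenvalue of \(\Theta_r\) on the monomial \(x_1^{\lambda_1}x_2^{\lambda_2}x_3^{\lambda_3}\), \(\lambda_1+\lambda_2+\lambda_3=r\), is
\[
r - 1 + 2c(\lambda_1-\lambda_2),
\]
so \((\Pi_1, I + X_{h_1})\) is semisimple nonresonant iff \(2c(\lambda_1-\lambda_2)\neq 1-r\) for all such \(\lambda\). This holds, for instance, for every irrational real \(c\) and for every \(c\) with \(|c| < 1/4\), which illustrates the neighborhood of 0 mentioned in Remark 2.3.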
The algorithm of formal linearization. We now show how to linearize (Π 1 , X), by killing the nonlinear terms of h step by step, provided that (Π 1 , X (1) ) is nonresonant. Actually, at each step, we will kill not just one term h d , but a whole block of 2 d consecutive terms. This "block killing" will be important in the next section when we want to show that, under some Diophantine-type condition, our formal linearization process actually yields a local analytic linearization.
For each q ≥ 0, denote by Ô_q the space of formal power series on C^n of order greater than or equal to q, i.e. without terms of degree < q.
We begin with X = X^(1) mod Ô_2, and will construct a sequence of formal vector fields (X_d)_d and diffeomorphisms (ϕ_d)_d such that X_0 = X and, for all d ≥ 0, X_{d+1} = (ϕ_d)_* X_d, with

X_d = X^(1) + X_{H_d} mod Ô_{2^{d+1}+1},

where H_d is a sum of homogeneous polynomials of degrees between 2^d + 1 and 2^{d+1} (we also could write H_d = Σ_u H_d^{(u)}, where H_d^{(u)} is homogeneous of degree u). Assuming that we already have X_d for some d ≥ 0, we construct ϕ_d as follows: solving Relation (2.10) degree by degree, we find a polynomial G_d = Σ_u G_d^{(u)} (with G_d^{(u)} homogeneous of degree u, the degrees being the same as for H_d) such that [X^(1), X_{G_d}] = X_{H_d}, and we let ϕ_d be the time-1 flow of the Hamiltonian vector field X_{G_d}. We then have

X_{d+1} = (ϕ_d)_* X_d = X^(1) + X_{H_{d+1}} mod Ô_{2^{d+2}+1},

where H_{d+1} is a polynomial of degree 2^{d+2} in Ô_{2^{d+1}+1}.
Constructed in this way, it is clear that the successive compositions of the diffeomorphisms ϕ d converge in the formal category to a formal diffeomorphism Φ ∞ which satisfies Φ ∞ * X = X (1) and which preserves the linear Poisson structure Π 1 .
Consider now the real case (K = R, and g is a real semisimple Lie algebra). By complexification, we can view real objects as holomorphic objects with real coefficients, and then repeat the above algorithm. In particular, under the nonresonance condition, at each step we find the homogeneous polynomials G_d^{(u)} by solving the linear equation (2.10); since this equation has real coefficients and a unique solution, each G_d^{(u)} is also real. This means that the coordinate transformations constructed above are real in the real case.
We have proved the following: Theorem 2.4. Let (Π, X) be a formal homogeneous Poisson structure on K n (where K is C or R) such that the linear part Π 1 of Π corresponds to a semisimple Lie algebra. Assume that its linear part (Π 1 , X (1) ) is semisimple nonresonant. Then there exists a formal diffeomorphism which sends (Π, X) to (Π 1 , X (1) ).
2.2. Analytic linearization. Now we work in the local analytic context, i.e. the vector field X is supposed to be analytic on (K^n, 0). In order to show that the algorithm given in the previous subsection leads to a local analytic linearization, in addition to the nonresonance condition we will need a Diophantine-type condition, similar to Bruno's ω-condition for the analytic linearization of vector fields [2,3].
Keeping the notations of the previous subsection, for each d ≥ 1, put

(2.20) ω_d = min { |r − 1 + ⟨Σ_{i=1}^n λ_i α_i, h_1⟩| : (λ_1, ..., λ_n) ∈ Z_+^n, 2 ≤ r = Σ_i λ_i ≤ 2^{d+1} }.

Definition 2.5. We will say that X^(1), or more precisely that a semisimple nonresonant linear homogeneous Poisson structure (Π_1, X^(1)), satisfies the ω-condition if

(2.21) Σ_{d≥1} log(1/ω_d) / 2^d < ∞.

Remark that, similarly to other situations, the set of X^(1) which satisfy the above ω-condition is of full measure. More precisely, we have:

Proposition 2.6. The set of elements h of a given Cartan subalgebra h such that (Π_1, I + X_h) is semisimple nonresonant and satisfies the ω-condition is of full Lebesgue measure in h.

See the Appendix for a straightforward proof of the above proposition.
Using the same analytical tools as in the proof of Bruno's theorems about linearization of analytic vector fields [2,3], we will show the following theorem: Theorem 2.7. Let (Π, X) be an analytic homogeneous Poisson structure on (K n , 0) (where K is C or R) such that the linear part Π 1 of Π corresponds to a semisimple Lie algebra. Suppose that its linear part (Π 1 , X (1) ) is semisimple nonresonant and satisfies the ω-condition. Then there exists a local analytic diffeomorphism which sends (Π, X) to (Π 1 , X (1) ).
Proof. Due to Conn's theorem [4], we can assume that Π = Π_1 is already linear. The process to linearize the vector field X is the same as in the formal case, noting that if we start with an analytic vector field, the diffeomorphisms ϕ_d that we construct will be analytic too (as are the vector fields X_d). We just have to check the convergence of the sequence Φ_d = ϕ_d ∘ . . . ∘ ϕ_1 in the analytic setup.
We will assume that K = C (the real case can be reduced to the complex case by the same argument as given in the previous subsection). Denote by O q the vector space of local analytic functions of (K n , 0) of order greater or equal to q (i.e. without terms of degree < q).
For each positive real number ρ > 0, denote by D_ρ the polydisc {x = (x_1, ..., x_n) ∈ C^n ; |x_i| < ρ}, and if f = Σ_{λ∈N^n} a_λ x^λ is an analytic function on D_ρ, we define two norms, |f|_ρ and ‖f‖_ρ, the first of which is |f|_ρ := Σ_{λ∈N^n} |a_λ| ρ^{|λ|}. In the same way, if F = (F_1, ..., F_n) is a vector-valued local map, then we put |F|_ρ := max{|F_1|_ρ, ..., |F_n|_ρ} and similarly for ‖F‖_ρ. These norms satisfy the following properties.
Lemma 2.8. Let ρ and ρ′ be two real numbers such that 0 < ρ′ < ρ. If f ∈ O_q is an analytic function on D_ρ, then: a) ... b) ... c) let R > 0 be a positive constant; then there is a natural number N such that, for ... The proof of the above lemma is elementary (see the Appendix).
It is important to remark that, with the same notations as in the formal case, for ρ > 0, we have, by (2.12),

|G_d|_ρ ≤ (1/ω_d) |H_d|_ρ,

since ω_d bounds from below the absolute values of the eigenvalues of Θ_r for the degrees r involved in H_d. Put ρ_0 = 1, and define two decreasing sequences of radii (r_d)_d and (ρ_d)_d by ... ; it is clear, by the ω-condition (2.21), that the sequences (r_d)_d and (ρ_d)_d converge to a strictly positive limit R > 0. Moreover, they satisfy the properties collected in Lemma 2.9, involving in particular the inequalities (2.24)-(2.27) used below. The proof of Lemma 2.9 is elementary (see the Appendix).
Lemma 2.10. ... and moreover, we have ...

Proof. • We first prove the second inclusion. We have ..., where H_d is a polynomial formed by homogeneous terms of degree between 2^d + 1 and 2^{d+1}. By (2.27), we write ...; then, by (2.25), we get ...; and, using the assumption ..., this map is continuous and is the identity on the boundary of D_{r_d}; thus, by ..., φ̂_d is surjective. As a conclusion, if y is in D_{r_{d+1}}, then, by the surjectivity of φ̂_d, y = φ̂_d(z) with, a priori, z in D_{r_d}. We saw above that in fact z cannot be in D_{r_d} \ D_{ρ_d}. Therefore, ...

• We write the obvious inequality ... . By (2.25), we have ... . To do that, we use the inequalities of Lemma 2.8. The drawback of these inequalities is that they sometimes induce a change of radius. Therefore, we define the following intermediate radii (between ρ_d and r_d): ... . Let us explain a little bit the definitions of these radii: ... is defined in order to use inequality (2.26). Finally, if d is sufficiently large, the differences ... are strictly smaller than r_d/(5d^2), and then ... .

We can write ϕ_d^t = Id + ξ_d^t, where the n components of ξ_d^t are functions in O_{2^d+1}. We have the estimates ... . We then deduce by (2.43) that ... . Finally, we just have to estimate |[X_{G_d}, X_d]|_{r_d}. We first have ..., by (2.24), which gives, by (2.26), recalling that ω_d ≤ 1/(2d), ... . Using (2.27) and (2.25), we get ... and then ... . In the same way, one can prove that ... . In addition, by (2.25), we get ... . We deduce finally that ... . This gives the estimate ..., and the conclusion follows.
End of the proof of Theorem 2.7. Let d_0 be a positive integer such that Lemmas 2.9 and 2.10 are satisfied for d ≥ d_0. By the homothety trick (dilating a given coordinate system by appropriate linear transformations), we can assume that |X_{d_0} − X^(1)|_{ρ_{d_0−1}} < 1.
By recurrence, for all d ≥ d_0, we have ... . We consider the sequence (Ψ_d)_d given by ... . Let x be an element of D_R; recall that R > 0 is the limit of the decreasing sequences (r_d)_d and (ρ_d)_d. Then x belongs to the ball D_{r_{d+1}} for any d, and if d > d_0, we get ..., by (2.58); iterating this process, we obtain ... . We then obtain, for all x in D_R and all d > d_0, ... . The theorem follows.
Hamiltonian vector fields on Poisson manifolds
In this section, we study normal forms of formal or analytic Hamiltonian vector fields in the neighborhood of the origin of the Poisson manifold (K^{2l+m}, Π), where Π = Π_symp + Π_g. Here Π_symp = Σ_{i=1}^l ∂/∂x_i ∧ ∂/∂y_i is the standard symplectic Poisson structure on K^{2l}, and Π_trans = Π_g = (1/2) Σ_{i,j,k} c_{ij}^k z_k ∂/∂z_i ∧ ∂/∂z_j is the associated linear Poisson structure on the dual of a given semisimple Lie algebra g of dimension m over K.
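Written out in these coordinates (our own spelling-out of the standard formulas, with the sign convention \(X_f = \{f,\cdot\}\)):
\[
X_H \;=\; \sum_{i=1}^{l}\Bigl(\frac{\partial H}{\partial x_i}\frac{\partial}{\partial y_i} - \frac{\partial H}{\partial y_i}\frac{\partial}{\partial x_i}\Bigr)
\;+\; \sum_{j=1}^{m}\Bigl(\sum_{i,k} c^k_{ij}\, z_k \frac{\partial H}{\partial z_i}\Bigr)\frac{\partial}{\partial z_j}.
\]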
Let H : (K^{2l+m}, 0) → (K, 0) be a formal or local analytic function with H(0) = 0, and consider the Hamiltonian vector field X_H of H with respect to the above Poisson structure Π = Π_symp + Π_g. If X_H(0) ≠ 0, then it is well-known that it can be rectified, i.e. there is a local canonical coordinate system (x_1, y_1, ..., x_l, y_l, z_1, ..., z_m) in which H = x_1 and X_H = ∂/∂y_1. Here we will assume that X_H(0) = 0.

3.1. Formal Poincaré-Birkhoff normalization. In this subsection, we will show that the vector field X_H can be put formally into Poincaré-Birkhoff normal form.
More precisely, we have:

Theorem 3.1. With the above notations, for any formal or local analytic function H : (K^{2l+m}, 0) → (K, 0), there is a formal canonical coordinate system (x̂_i, ŷ_i, ẑ_j) in which the Poisson structure Π has the form

Π = Σ_{i=1}^l ∂/∂x̂_i ∧ ∂/∂ŷ_i + (1/2) Σ_{i,j,k} c_{ij}^k ẑ_k ∂/∂ẑ_i ∧ ∂/∂ẑ_j,

and in which we have

{H, H_ss} = 0,

where H_ss is a function such that X_{H_ss} is linear and is the semisimple part of the linear part of X_H.

Proof. For any function f on K^{2l+m}, we write X_f = X_f^symp + X_f^g, where X_f^symp (resp. X_f^g) denotes the Hamiltonian vector field of f with respect to Π_symp (resp. Π_g). We can write H = Σ_{p,q} H^{p,q}, where H^{p,q} is a polynomial of degree p in x, y and of degree q in z.
A difficulty of our situation comes from the fact that Π is not homogeneous. If p > 0 then X_{H^{p,q}} is not a homogeneous vector field but the sum of a homogeneous vector field of degree p + q (given by X^g_{H^{p,q}}) and a homogeneous vector field of degree p + q − 1 (given by X^symp_{H^{p,q}}). Note that X^g_{H^{0,q}} is homogeneous of degree q and of course X^symp_{H^{0,q}} = 0. Denoting by X^(1) the linear part of X_H, we have

X^(1) = X^symp_{H^{2,0}} + X^symp_{H^{1,1}} + X^g_{H^{0,1}}.

This linear vector field X^(1) is not a Hamiltonian vector field in general, but we will show that its semisimple part is Hamiltonian. By complexifying the system if necessary, we will assume that K = C. By a linear canonical change of coordinates, we can suppose that the semisimple part of X_{H^{2,0}} is X_{h_2}, where h_2(x, y) = Σ_{j=1}^l γ_j x_j y_j (γ_j ∈ C), and that the semisimple part of X_{H^{0,1}} is X_{h_1}, where h_1 belongs to a Cartan subalgebra h of g. We write, in a basis (z_1, ..., z_m) adapted to the root decomposition,

{h_1, z_j} = α_j z_j, j = 1, ..., m.

Remark that we can assume that α_{s+1} = ... = α_m = 0, where m − s is the dimension of the Cartan subalgebra h. Denote α = (α_1, ..., α_m) and γ = (γ_1, ..., γ_l). If λ, μ ∈ Z_+^l and ν ∈ Z_+^m, then

(3.7) {h_1 + h_2, x^λ y^μ z^ν} = (⟨γ, μ − λ⟩ + ⟨α, ν⟩) x^λ y^μ z^ν,

where, for example, ⟨α, ν⟩ = Σ_j α_j ν_j denotes the standard scalar product of α and ν. In particular, {h_1 + h_2, ·} acts in a "diagonal" way on monomials.
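The elementary brackets behind (3.7), written out by us as a check:
\[
\{x_jy_j,\,x_j\} = -x_j,\qquad \{x_jy_j,\,y_j\} = y_j,
\]
hence
\[
\{h_2,\;x^\lambda y^\mu\} = \langle\gamma,\;\mu-\lambda\rangle\, x^\lambda y^\mu,
\qquad
\{h_1,\;z^\nu\} = \langle\alpha,\;\nu\rangle\, z^\nu .
\]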
We can arrange so that, written as matrices, the terms coming from X_{H^{0,1} − h_1}, X_{H^{2,0} − h_2} and X^symp_{H^{1,1}} in the expression of X^(1) are off-diagonal upper-triangular (and the terms coming from X_{h_1+h_2} are on the diagonal).
If {h_1 + h_2, H^{1,1}} ≠ 0, then we can apply some canonical changes of coordinates to make (the new) H^{1,1} commute with h_1 + h_2, as follows. According to (3.7), there exist two polynomials G^{1,1}_{(1)} and Ḡ^{1,1}_{(1)}, of degree 1 in x, y and degree 1 in z, such that

(3.8) H^{1,1} = Ḡ^{1,1}_{(1)} + {h_1 + h_2, G^{1,1}_{(1)}}, with {h_1 + h_2, Ḡ^{1,1}_{(1)}} = 0.

Remark that, for any homogeneous polynomials K^{0,1}, K^{2,0}, K^{1,1} of corresponding degrees in (x, y) and z, we have the commutation relations (3.10): ...

Change the coordinate system by the push-forward of the time-1 flow ϕ^(1) = exp X_{G^{1,1}_{(1)}} of the Hamiltonian vector field X_{G^{1,1}_{(1)}}, i.e., x^new_i = x_i ∘ ϕ^(1) and so on. The new coordinate system is still a canonical coordinate system, because ϕ^(1) preserves the Poisson structure Π. By this canonical change of coordinates, we can replace H by H ∘ ϕ^(1), and X_H by

(3.11) X^new_H = X_H + [X_{G^{1,1}_{(1)}}, X_H] + (1/2!)[X_{G^{1,1}_{(1)}}, [X_{G^{1,1}_{(1)}}, X_H]] + ... .
It follows from (3.11) and (3.10) that the linear part of X^new_H is X^(1) + X_{{G^{1,1}_{(1)}, F_1}}, where F_1 denotes the nilpotent remainder H^{0,1} − h_1 + H^{2,0} − h_2 of the quadratic data. In particular, by the above canonical change of coordinates, we have replaced H^{1,1} by Ḡ^{1,1}_{(1)} + {G^{1,1}_{(1)}, F_1} (note that {G^{1,1}_{(1)}, F_1} is homogeneous of degree 1 in (x, y) and degree 1 in z).
Since F_1 is "nilpotent", by iterating the above process a finite number of times, we arrive at a canonical coordinate system in which {H_ss, H_1} = 0. By recurrence, assume that, for some r ≥ 2, we have {H_ss, H_k} = 0 for all k ≤ r − 1. We will change H_r by a canonical coordinate transformation to get the same equality for k = r.
In order to put H_r in normal form, we use the same method that we used to normalize H^{1,1}. Similarly to (3.8), we can write

H_r = K̄_r + {H_ss, K_r},

where K_r and K̄_r are of the same type as H_r (i.e., they are sums of monomials of bidegrees (0, r) and (p, r + 1 − p) with p > 0), and {H_ss, K̄_r} = 0. Note that K_r can be written as K_r = {H_ss, K_{(2)r}} for some K_{(2)r}.
The canonical coordinate transformation given by the time-1 flow exp X_{K_r} of X_{K_r} leaves H_1, ..., H_{r−1} intact, and changes H_r = K̄_r + {H_ss, K_r} to the sum of K̄_r with the terms of appropriate bidegrees in {K_r, F_1 + H^{1,1}}. We will write it as

(3.20) K̄_r + {K_r, F_1 + H^{1,1}} mod (terms of higher bidegrees).
It can also be written as

(3.21) K̄_r + {H_ss, {K_{(2)r}, F_1 + H^{1,1}}} mod (terms of higher bidegrees).

Now apply the canonical coordinate transformation given by exp X_{{K_{(2)r}, F_1 + H^{1,1}}}, and so on. Since F_1 + H^{1,1} is "nilpotent", after a finite number of coordinate transformations like that, we can change H_r to K̄_r, which commutes with H_ss. Denote the composition of these coordinate changes (for a given r) as φ_r. Note that φ_r differs from the identity only by terms of degree at least r. Thus, the sequence of local or formal Poisson-structure-preserving diffeomorphisms (Φ_r)_{r≥2}, where Φ_r = φ_r ∘ . . . ∘ φ_2, converges formally and gives a formal normalization of H.
Finally, notice that, in the real case (K = R), by an argument similar to the one given in the previous section, all canonical coordinate transformations constructed above can be chosen real.
Theorem 3.1 is proved.
Remark 3.2. In Theorem 3.1, if we forget the Lie algebra g and just keep the symplectic structure, then we recover the classical Birkhoff normalization for Hamiltonian vector fields on symplectic manifolds (see, e.g., [1,3,11,16]). On the other hand, if we forget the symplectic part and just deal with g*, then we get the following result as a particular case:

Corollary 3.3. Let h be a local analytic or formal function, with h(0) = 0 and dh(0) ≠ 0, on the dual g* of a semisimple Lie algebra, with the associated Lie-Poisson structure. Then the Hamiltonian vector field X_h admits a formal Poincaré-Birkhoff normalization, i.e., there exists a formal coordinate system in which the Poisson structure is linear and in which we have

{h, h_ss} = 0,

where h_ss is the semisimple part of dh(0) in g.
Example 3.4. The monomials x^λ y^μ z^ν such that ⟨γ, μ − λ⟩ + ⟨α, ν⟩ = 0 in (3.7) may be called resonant terms. In the two following examples we give the set of all resonant terms in the case of a trivial symplectic part. a) g = sl(2). In this case, a Cartan subalgebra h of g is of dimension 1 and there are only two roots {α, −α}. Denote by z_1, z_2, z_3 a basis of g (or a coordinate system on g*) such that z_1 (resp. z_2) spans the root space associated to α (resp. −α) and z_3 spans the Cartan subalgebra. We suppose that in the decomposition (3.16) we have h_1 = z_3. Then the resonant terms are formal power expansions in the variables ω = z_1 z_2 and z_3. b) g = sl(3). Here a Cartan subalgebra h is of dimension 2 (see for instance [8]). There are 6 roots {α_1, α_2, α_3, −α_1, −α_2, −α_3}, and the relations between these roots are of the type α_3 = α_1 + α_2. If {ξ_1, ξ_2, ξ_3, ζ_1, ζ_2, ζ_3, z_1, z_2} is a basis of g such that ξ_j (resp. ζ_j) spans the root space associated to α_j (resp. −α_j) and {z_1, z_2} spans h, then, supposing that in the decomposition (3.16) h_1 is a generic linear combination of z_1 and z_2, we may write the resonant terms as formal power expansions formed by monomials of the type ξ_j ζ_j (j = 1, 2, 3), ξ_1 ξ_2 ζ_3, ζ_1 ζ_2 ξ_3, z_1 and z_2.
Analytic normalization for integrable Hamiltonian systems.
Here, we assume that we work in the complex analytic setup.
Let R denote the sublattice of Z^{2l+m} generated by the vectors (λ, μ, ν) with ⟨γ, μ − λ⟩ + ⟨α, ν⟩ = 0. Of course, the elements (λ, μ, ν) of R correspond to the resonant monomials, i.e. terms of type x^λ y^μ z^ν such that {H_ss, x^λ y^μ z^ν} = 0. The dimension of R may be called the degree of resonance of H. Now, we consider the sublattice Q of Z^{2l+m} formed by the vectors a ∈ Z^{2l+m} such that ⟨a | u⟩ = 0 for all u in R. Let {ρ^(1), ..., ρ^(r)} be a basis of Q. The dimension r of Q is called the toric degree of X_H at 0. We then put, for all k = 1, ..., r,

Z_k = Σ_{j=1}^{l} ρ^{(k)}_j x_j ∂/∂x_j + Σ_{j=1}^{l} ρ^{(k)}_{l+j} y_j ∂/∂y_j + Σ_{j=1}^{m} ρ^{(k)}_{2l+j} z_j ∂/∂z_j.

The vector fields iZ_1, ..., iZ_r (i = √−1) are periodic with a real period, in the sense that the real part of these vector fields is a periodic real vector field in C^{2l+m} = R^{2(2l+m)}; they commute pairwise and are linearly independent almost everywhere. Moreover, the vector field X_{H_ss} is a linear combination (with coefficients in C a priori) of the iZ_k. We also have the following trivial property.

Lemma 3.5. ...

Proof: We just give here the idea of the proof of this lemma, supposing that Λ is a 2-vector, for instance; it works exactly in the same way for other multivectors. If Y is a vector field of type Σ_{j=1}^l a_j x_j ∂/∂x_j + Σ_{j=1}^l a_{l+j} y_j ∂/∂y_j + Σ_{j=1}^m a_{2l+j} z_j ∂/∂z_j and Λ is of type Λ = x^λ y^μ z^ν ∂/∂x_u ∧ ∂/∂x_v, then

[Y, Λ] = ⟨a | (λ − e_u − e_v, μ, ν)⟩ Λ,

where e_u = (0, ..., 1, ..., 0) is the vector of Z^l whose unique nonzero component is the u-component. Of course, we get the same type of relation with 2-vectors in ∂/∂x ∧ ∂/∂z, ∂/∂x ∧ ∂/∂x, etc. Using this remark and the definition of the vectors ρ^(1), ..., ρ^(r), the equivalence of the lemma is direct.
According to this Lemma, since X_{H_ss} preserves the Poisson structure, the vector fields Z_1, ..., Z_r will be Poisson vector fields for (C^{2l} × C^m, { , }_symp + { , }_{g*}). But according to Proposition 4.1 (see the Appendix), the first Poisson cohomology space is trivial; therefore, these vector fields are actually Hamiltonian: Z_k = X_{f_k}, k = 1, ..., r, for some functions f_k. Finally, we have r periodic Hamiltonian linear vector fields iZ_k which commute pairwise and are linearly independent almost everywhere. The real parts of these vector fields generate a Hamiltonian action of the real torus T^r on (C^{2l} × C^m, { , }_symp + { , }_g). With all these notations, we can state the following proposition:

Proposition 3.6. With the above notation, the following conditions are equivalent: a) There exists a holomorphic Poincaré-Birkhoff normalization of X_H in a neighborhood of 0 in C^{2l+m}. b) There exists an analytic Hamiltonian action of the real torus T^r in a neighborhood of 0 in C^{2l+m}, which preserves X_H and whose linear part is generated by the (Hamiltonian) vector fields iZ_k, k = 1, ..., r.
Proof : Suppose that H is in holomorphic Poincaré-Birkhoff normal form. By Lemma 3.5, since {H, H ss } = 0, the vector fields iZ k preserve X H .
Conversely, if point b) is satisfied, then according to the holomorphic version of the Splitting Theorem (see [10]) we can consider that the action of the torus is "diagonal", i.e. the product of an action on (C^{2l}, { , }_symp) by an action on (C^m, { , }_g), and moreover that the action on the symplectic part is linear. According to Proposition 4.2 (in the Appendix), we can linearize the second part of the action by a Poisson diffeomorphism. We then can consider that the action of T^r is generated by the vector fields iZ_k, k = 1, ..., r. This action preserves X_H, so we have [iZ_k, X_H] = 0 for all k. To conclude, just recall that X_{H_ss} is a linear combination of the Z_k.

Now, we are going to use Proposition 3.6 to clarify a link between the integrability of a Hamiltonian vector field X_H on an analytic Poisson manifold (K^n, { , }) and the existence of a convergent Poincaré-Birkhoff normalization. Recall first the definition (see for instance [15]) of the notion of integrability used here: a) the vector fields X_1 = X_{H_1}, ..., X_p = X_{H_p} commute pairwise and are linearly independent almost everywhere (3.32); b) the functions F_1, ..., F_q are common first integrals for X_1, ..., X_p, i.e. X_i(F_j) = 0 for all i, j, and they are functionally independent almost everywhere: dF_1 ∧ ... ∧ dF_q ≠ 0. Of course this definition makes sense in the smooth category as well as in the analytic category. We can speak about smooth or analytic integrability.
Consider now a Hamiltonian vector field X_H in a neighborhood of 0 in (K^{2l+m}, {,}_symp + {,}_g), where {,}_symp is a symplectic Poisson structure, g is a semisimple Lie algebra and {,}_g the standard Lie-Poisson structure on g*. In this setting, integrability implies the existence of a convergent Poincaré-Birkhoff normalization. Indeed, if X_H is integrable then, forgetting for one moment the Hamiltonian feature, Theorem 1.1 and Proposition 2.1 in [14] give the existence of an action of a real torus T^r on (K^{2l+m}, 0) generated by vector fields Y_1, ..., Y_r (r is the toric degree of X_H), where the linear parts of these vector fields are the iZ_k (see 3.28), and which preserves X_H. Moreover, the semisimple part X_H^{ss} of X_H is a linear combination of the Y_j: X_H^{ss} = Σ_j β_j Y_j, without any resonance relation between the β_j. Now, let us recall that we work on a Poisson manifold with a Hamiltonian vector field. Since the vector field X_H preserves the Poisson structure, its semisimple part does too, and then we have [Y_j, Π] = 0 for all j = 1, ..., r. Therefore, the action of the torus also preserves the Poisson structure. Proposition 3.6 allows us to conclude.

Remark 3.9. If we suppose that H and the Poisson structure are real, then it is natural to ask whether all that we did is still valid. Note that in this case, we can consider H (and the Poisson structure) as complex analytic, with real coefficients.
Actually, in the same way as in [14,16], we conjecture that we have the equivalence: A real analytic Hamiltonian vector field X H with respect to a real analytic Poisson structure admits a local real analytic Poincaré-Birkhoff normalization iff it admits a local holomorphic Poincaré-Birkhoff normalization.
Appendix
In this appendix, we give proofs of auxiliary results used in the previous sections. We first compute the first Poisson cohomology space of the Poisson manifold we considered in Section 3. Suppose that Π_S is a symplectic (i.e., nondegenerate) Poisson structure on K^{2l} (K is R or C). If (x_1, ..., x_l, y_1, ..., y_l) are coordinates on K^{2l}, we can write Π_S = Σ_{i=1}^{l} ∂/∂x_i ∧ ∂/∂y_i. Let g be an m-dimensional (real or complex) semisimple Lie algebra and consider Π_g the corresponding linear Poisson structure on K^m. Suppose that (z_1, ..., z_m) are coordinates on K^m. We then show the following:

Proposition 4.1. Under the hypotheses above, if H^1(K^{2l} × K^m, Π_S + Π_g) denotes the first (formal or analytic) Poisson cohomology space of the product of (K^{2l}, Π_S) by (K^m, Π_g), then H^1(K^{2l} × K^m, Π_S + Π_g) = 0.

Proof: If X is a (formal or analytic) vector field on K^{2l} × K^m, we write X = X^S + X^g, where X^S is a vector field which only has components in the ∂/∂x_i and ∂/∂y_i and, in the same way, X^g only has components in the ∂/∂z_i. Before computing the Poisson cohomology space, let us make the following two remarks. First, if [X^S, Π_S] = 0 then X^S = [f, Π_S], where f is a (formal or analytic) function on K^{2l+m}. Indeed, recalling that (because Π_S is symplectic) the Poisson cohomology of (K^{2l}, Π_S) is isomorphic to the de Rham cohomology of K^{2l} (see for instance [12]), the relation [X^S, Π_S] = 0 may be translated as dα = 0, where α is a 1-form on K^{2l} depending (formally or analytically) on the parameters z_1, ..., z_m. Then we can write α = df, where f is a function on K^{2l} depending (formally or analytically) on the parameters z_1, ..., z_m.
In the same way, if [X^g, Π_S] = 0 then, writing X^g = Σ_i X^g_i(x, y, z) ∂/∂z_i, we get [X^g_i, Π_S] = 0 for all i. Thus, each X^g_i depends only on z. Indeed, here X^g_i may be seen as a function on K^{2l} depending (formally or analytically) on the parameters z_1, ..., z_m such that dX^g_i = 0.
Now if X = X^S + X^g is a vector field on K^{2l} × K^m, it is easy to see that [X, Π_S + Π_g] = 0 is equivalent to the three equations

(4.1) [X^S, Π_S] = 0 ,
(4.2) [X^S, Π_g] + [X^g, Π_S] = 0 ,
(4.3) [X^g, Π_g] = 0 .

According to the first remark we made above, equation (4.1) gives X^S = [f, Π_S], where f is a (formal or analytic) function on K^{2l+m}. Now, replacing X^S by [f, Π_S] in (4.2) and using the graded Jacobi identity of the Schouten bracket, we get

(4.4) [X^g − [f, Π_g], Π_S] = 0 .
Since X^g − [f, Π_g] is a vector field which only has components in the ∂/∂z_i, the second remark we made above gives

X^g − [f, Π_g] = Y ,

where Y is a vector field on K^m (i.e., it only has components in the ∂/∂z_i and its coefficients are functions of z alone). Finally, (4.3) gives [Y, Π_g] = 0, i.e., Y is a 1-cocycle for the Poisson cohomology of (K^m, Π_g). Since the Lie algebra g is semisimple, the Poisson cohomology space H^1(K^m, Π_g) is trivial (see for instance [4]). We then obtain Y = [h, Π_g], where h is a function on K^m.
To sum up, we get X = X^S + X^g = [f + h, Π_S + Π_g] (note that [h, Π_S] = 0 since h depends only on z), which means that X is a 1-coboundary for the Poisson cohomology of (K^{2l} × K^m, Π_S + Π_g).
The second result is an analytic version of a smooth linearization theorem due to V. Ginzburg. In the Appendix of [6], he states that the action of a compact Lie group G on a (smooth) Poisson manifold (P, Π), fixing a point x of P and such that the Poisson structure is linearizable at x, can be linearized by a diffeomorphism which preserves the Poisson structure. Here, we state the following:

Proposition 4.2. Consider an analytic action of a compact (analytic) Lie group G on (K^n, Π) (K is R or C), where Π is an analytic Poisson structure on K^n. Suppose that the action fixes the origin 0 and that the Poisson structure is linearizable at 0. Then the action can be linearized by a Poisson diffeomorphism.
Proof: The proof is the same as in the smooth case: we use Moser's path method. Since Π is linearizable at 0, we may suppose that Π is linear. If g is an element of G, we denote by φ_g the corresponding diffeomorphism of K^n and by φ_g^lin its linear part at 0. We construct a path of analytic actions of G on (K^n, Π) given by the following diffeomorphisms:

φ_g^t(x) = (1/t) φ_g(tx) for t ∈ (0, 1], and φ_g^0 = φ_g^lin ,

for any g in G and x in K^n. These actions preserve Π and fix 0. We now want to show that there exists a path of diffeomorphisms ψ_t, with ψ_0 = Id, preserving the Poisson structure Π and such that

(4.7) ψ_t ∘ φ_g^t ∘ ψ_t^{−1} = φ_g^0 = φ_g^lin , for all t ∈ [0, 1] and all g in G.
Let C_t(g) be the time-dependent vector field associated to φ_g^t:

(4.8) C_t(g)(φ_g^t(x)) = ∂φ_g^t/∂t (x) .
Differentiating the condition (4.7), we are led to look for a time-dependent vector field X_t (corresponding to ψ_t) verifying

(4.9) C_t(g) = (φ_g^t)_* X_t − X_t , for all t ∈ [0, 1] and all g in G.
We put

(4.10) X_t = − ∫_G C_t(h) dh ,

where dh is a bi-invariant Haar measure on G such that the volume of G is 1. This vector field is analytic and depends smoothly on t. Moreover, since each C_t(h) preserves the Poisson structure Π, so does X_t. Finally, one can check, using the cocycle relation C_t(gh) = C_t(g) + (φ_g^t)_* C_t(h) and the invariance of the Haar measure, that X_t satisfies the condition (4.9).
Proof of Proposition 2.6. We denote by α the linear map from the Cartan subalgebra h to K^n defined by α(h) = (α_1(h), ..., α_n(h)) for any h in h, and by W its image. We show that the subset of W formed by the elements γ such that the ω_d(γ) (defined as in (2.20), replacing the α_i(h_1) by the γ_i) do not satisfy the ω-condition is of measure 0 (in W). Since α is a linear surjection from h to W, this will prove Proposition 2.6.
For any positive integer k, if ‖·‖ denotes the norm associated to ⟨ , ⟩, we put W_k = {γ ∈ W ; ‖γ‖ ≤ k}, and we denote by V the subset of W formed by the elements γ which do not satisfy the ω-condition. Actually, we show here that W_1 ∩ V is of measure 0, but the same technique works to prove that W_k ∩ V is also of measure 0 for each k. Therefore ∪_k (W_k ∩ V) is of measure 0 too, which proves the proposition. Now, for any λ in Z^n_+ we consider the affine subspace V_λ of K^n formed by the vectors γ such that ⟨γ, λ⟩ = 1 − |λ|, and we put, for c > 0,

(4.12) V_{λ,c} = {γ ∈ K^n ; | |λ| − 1 + ⟨γ, λ⟩ | ≤ c/|λ|^s} .

This last set is like a tubular neighborhood of V_λ of thickness 2c/|λ|^s. We look now at K_{λ,c} = V_{λ,c} ∩ W_1. If it is not empty, it is a kind of "band" in W_1 of thickness smaller than 2Sc/|λ|^s, where S is a positive constant which only depends on the dimension of W (and on the metric). Therefore, writing V_c = ∪_{λ∈Z^n_+} V_{λ,c}, we get

Vol(W_1 ∩ V_c) ≤ Σ_{λ∈Z^n_+} 2Sc/|λ|^s .

This last sum converges (because s > n), and we then get Vol(W_1 ∩ V) = Vol(∩_{c>0} (W_1 ∩ V_c)) = 0.
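For completeness, here is a short verification of the convergence claim (our addition; we take |λ| = λ_1 + ... + λ_n for λ ∈ Z^n_+):

```latex
% Convergence of the series controlling the volume estimate when s > n.
\sum_{\lambda\in\mathbb{Z}^{n}_{+}} \frac{1}{|\lambda|^{s}}
  \;=\; \sum_{N\geq 1} \frac{\#\{\lambda\in\mathbb{Z}^{n}_{+} : |\lambda| = N\}}{N^{s}}
  \;\leq\; \sum_{N\geq 1} \frac{C_{n}\,N^{\,n-1}}{N^{s}}
  \;=\; C_{n} \sum_{N\geq 1} N^{\,n-1-s} \;<\; \infty ,
```

where C_n depends only on n (the number of lattice points of Z^n_+ at level N grows like N^{n−1}), and the last series converges because n − 1 − s < −1, i.e., exactly when s > n.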
The point b) is obvious.
It is easy to see that these numbers are smaller than 1, provided that d is large enough.
Proof of Lemma 2.9. a) Since the sequence (r_d)_d decreases and converges to a positive real number R > 0, we have r_d > R for all d. We write r_d − ρ_d = r_d/d² > R/d²; thus, for d sufficiently large, we get r_d − ρ_d > 1/2^d. b) We have ρ_d − r_{d+1} = ρ_d (1 − (ω_{d+1}/2^{d+1})^{1/(2^{d+1}+1)}). Since the sequence (ρ_d)_d decreases and converges to R > 0, we have ρ_d > R > 0 for all d. We then show that, if d is sufficiently large, then R(1 − (ω_{d+1}/2^{d+1})^{1/(2^{d+1}+1)}) > 1/2^{d+1}. We write (ω_d/2^d)^{1/(2^d+1)} = e^{γ_d}, with γ_d = ln(ω_d/2^d)/(2^d + 1). By the ω-condition, the sequence (γ_d)_d converges to 0, and it is negative for all d sufficiently large. Then, if ε is a small positive real number (for instance ε = 1/2), we have, for all d sufficiently large, 1 − e^{γ_d} > −(1 − ε)γ_d. We deduce that (4.14) R(1 − e^{γ_d}) > −R(1 − ε)γ_d. Therefore, for d sufficiently large, R(1 − e^{γ_d}) > 1/2^d. | 2014-10-01T00:00:00.000Z | 2005-09-07T00:00:00.000 | {
"year": 2005,
"sha1": "8c65e9833c1bc87b7d8b06eb48f1a6a8e3993178",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5802/ambp.221",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "8c65e9833c1bc87b7d8b06eb48f1a6a8e3993178",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
225313399 | pes2o/s2orc | v3-fos-license | A Novel Grape Downy Mildew Resistance Locus from Vitis rupestris
The viticulture industry needs advanced grape cultivars with genes that enhance disease resistance and environmental stress tolerance to meet the challenges of a changing climate. To discover beneficial allelic variants of grape genes, we established an F1 mapping population from a cross between two North American grapevines, Vitis rupestris Scheele and Vitis riparia Michx. We generated genotyping-by-sequencing (GBS) markers and constructed parental linkage maps consisting of 1177 and 1115 GBS markers, respectively (LOD threshold ≥ 14), which were validated by mapping the sex-determining locus to chromosome 2. Taking advantage of loci heterozygous in both parents, we also constructed an integrated map containing 2583 markers. We mapped a major quantitative trait locus (QTL) for downy mildew (Plasmopara viticola) resistance to chromosome 10 of V. rupestris using both greenhouse- and in vitro-generated leaf resistance data. This QTL explains 66.5% of the phenotypic variance under greenhouse conditions, and its 2-LOD confidence interval corresponds to region 2,470,297 to 3,024,940 bp on chromosome 10 in the Vitis vinifera L. PN40024 reference genome sequence (assembly 12X.v2). We provide PN40024-projected positions of the GBS markers, which can be used as anchors to develop additional markers for the introgression of this V. rupestris haplotype into cultivated grape varieties.
Grape (Vitis vinifera L.) cultivation relies heavily on the recurrent application of fungicides, a disease-control method that is both costly and potentially harmful to the environment and human health. Cultivation of disease-resistant grape varieties carrying genes that enhance defense against pathogens is an approach that helps reduce the amount of fungicides applied in viticulture. Traditionally, the source of such defense-related genes has been the North American wild relatives of V. vinifera, as they coevolved with now-pandemic pathogens and acquired allelic diversity that strengthens resistance against fungal and oomycete diseases (Alleweldt and Possingham 1988). Grapevine breeding has a long history of introgressing genetic information from wild grapevines (Vitis species) into V. vinifera. The first interspecific grapevine crosses were made in the United States during the early and mid-19th century, followed by a surge of breeding activity in Europe in the wake of the phylloxera epidemic in the late 19th and early 20th centuries (Reisch et al. 2012). This pioneering work focused on developing disease-resistant, fruit-producing varieties through crosses between V. vinifera and North American grape species and on breeding phylloxera- and lime-tolerant rootstocks through interspecific crosses among various American grapevines. These efforts met with great success, to the extent that hybrids from the late 19th century are still popular fruit-producing varieties in the eastern and midwestern US. Several of the interspecific American hybrids are among the most widely used rootstocks worldwide today (Di Gaspero et al. 2012, Migicovsky et al. 2016, Riaz et al. 2019).
While introgression of disease resistance traits continued through the 20th century, exploration of wild grape relatives for new resistance sources has lagged. Most breeding work during the past century focused on germplasm that was introduced to Europe from North America during the 1800s. This resulted in a narrow genetic base for both fruit-producing (Di Gaspero et al. 2012) and rootstock (Riaz et al. 2019) hybrids. The high degree of polymorphism reported in North American wild grapevines (Liang et al. 2019) suggests that the genetic diversity of this germplasm is vastly more extensive than what is represented in hybrid cultivars today. Recent examples in which a broader exploration of this germplasm led to the discovery and deployment of valuable haplotypes include the introgression of Pierce's disease resistance from Vitis arizonica (Riaz et al. 2009) and powdery and downy mildew resistance from Muscadinia rotundifolia (Feechan et al. 2013, Agurto et al. 2017). In this paper, we describe linkage map construction and quantitative trait locus (QTL) analysis in an F1 family from a cross between Vitis rupestris and Vitis riparia, two species that vary in habitat and in adaptation to different environmental conditions. The seed parent used in this cross is V. rupestris B38, which was collected by Herbert C. Barrett in Texas in 1951 and donated to the National Germplasm Repository in Geneva, NY (PI588160) by Bruce Reisch in 1985. The pollen parent is V. riparia HP-1, which was collected by Neils E. Hansen in Bismarck, ND and donated to the National Germplasm Repository in Geneva, NY (PI588271) by Ronald Peterson in 1987. V. rupestris forms shrubs that sprawl along the surface of nutrient-poor sand or gravel bars in intermittent streams or stony outcroppings and grows in small populations within a limited geographic area. V. riparia, a sister taxon of V. rupestris, forms high-climbing lianas in moist, but well-drained, alluvial soils along rivers and thrives in large populations across great expanses of the continent. As these evolutionarily closely-related grapevine species (Klein et al. 2018) have adapted to contrasting environmental conditions in their native habitats, their alleles will likely influence horticultural traits in different ways. The parent plants were selected because they have large differences in fall photoperiod response, which is likely tied to cold tolerance, and because we wanted to pseudo-replicate the specific cross that produced the commercial rootstock 3309C. We expected to find in the F1 hybrid progeny of this cross abundant potential for segregation of other important viticultural traits, such as branching, angle of growth, leaf shape, root growth, periderm formation, and disease resistance. We hypothesized, therefore, that their F1 hybrid progeny would allow mapping of economically relevant genomic loci.
Materials and Methods
Mapping population. An F1 mapping population was developed in 2014 by crossing V. rupestris accession PI588160 (female parent) with V. riparia accession PI588271 (male parent) (Germplasm Resources Information Network 2019). Crosses were made in the field by manually removing floral caps on the V. rupestris parent and applying dried collected pollen from the V. riparia parent. Inflorescences were covered with paper bags for three weeks to prevent unintended pollination. Seeds were collected from berries (fully colored and soft), vernalized four weeks at 4°C, and germinated under greenhouse conditions in a 1:1 mix of PRO-MIX and perlite. In spring 2015, seedlings were planted in the vineyard nursery and grown without irrigation or fertilizer, using pest management treatments as needed for powdery mildew, downy mildew (DM), black rot, phomopsis, and anthracnose. In 2018, the surviving seedlings (n = 257) were transplanted into a permanent research vineyard at the USDA clonal germplasm repository in Geneva, NY (42.89°N; 77.00°W). An additional 100 seedlings were germinated and maintained as potted plants at South Dakota State University as part of a phenotyping project. The combined 357 vines were genotyped for the development of the genetic maps.
Genotyping. A young leaf was collected from each vine into a well of a 96-well plate, frozen promptly, and stored in a -80°C freezer until processing. Two grinder beads were placed into each tube and leaf tissue was ground using a Geno/Grinder 2000 (OPS Diagnostics LLC). Genomic DNA was extracted from each F1 progeny plant and the parents with DNeasy 96-well DNA extraction kits (Qiagen). Genotyping-by-sequencing (GBS) was performed following a previously described protocol design (Elshire et al. 2011) with modification (Hyma et al. 2015). Barcoded adapters were ligated for each individual sample and single-end sequencing of 100 bp was performed using a HiSeq 2000 (Illumina Inc.) at the Institute of Biotechnology, Genomics Facility at Cornell University in Ithaca, NY. Illumina reads were submitted to the NCBI BioSample Database (SAMN13512746 through SAMN13513110). The raw reads were demultiplexed, parsed, and trimmed for quality. Processed reads were aligned using BWA version 0.6.2-r126 (Li and Durbin 2009) against the 12X.v2 V. vinifera PN40024 reference genome sequence (RefSeq) (Jaillon et al. 2007, Canaguier et al. 2017). SNP (single nucleotide polymorphism) genotypes were called using the TASSEL-GBS pipeline version 3.0.139 (Glaubitz et al. 2014).
Marker generation and map construction. GBS reads that aligned to the V. vinifera RefSeq were screened for SNPs. The identified SNPs were filtered using VCFtools v0.1.13 (Danecek et al. 2011) to retain only biallelic SNPs at a sequencing depth of ≥6. Further, only SNPs with missing genotypes of ≤20% and with minor allele frequency of ≥0.2 were retained. The resulting SNP data in the VCF file were then converted to JOINMAP 5.0 (Van Ooijen 2006) format using NGSEP (Duitama et al. 2014). F1 genotypes with more than 10% missing SNP markers were discarded, and a goodness-of-fit (χ²) test was performed to filter out test-cross and intercross markers deviating from the 1:1 and 1:2:1 segregation ratios in the progeny, respectively. Because segregation distortion is a natural phenomenon in outcrossing species such as grape, markers showing a moderate degree of segregation distortion were retained for the map construction and only significantly distorted markers (p < 0.0005) were discarded. Identical markers were identified and removed from the analysis. Maternal and paternal population nodes were created in JOINMAP 5.0 with marker types "ll × lm" and "nn × np", respectively, and parental maps were constructed following the two-way pseudo-testcross approach (Grattapaglia and Sederoff 1994). Only second-round maps were accepted for each parent, using the default jump threshold of five to maximize the number of markers included in the maps while limiting inclusion of markers with weak linkage. Markers of the "hk × hk" type were then used to integrate the parental linkage maps into a consensus map. Each linkage group was constructed with a threshold logarithm of odds (LOD) value of 14, maximum recombination frequency of 0.4, and jump threshold of 5. Marker order was determined with a regression mapping algorithm and genetic distances were expressed in Kosambi map units with parameters at default settings. Linkage maps were visualized using the software LinkageMapView (Ouellette et al. 2018).
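As an illustration of the filtering and segregation-distortion steps just described, here is a minimal Python sketch (ours, not the study's pipeline; the 0/1/2 genotype coding and the function names are assumptions, while the thresholds come from the text):

```python
# Hypothetical sketch of the marker filters described above.
# Genotypes are coded as minor-allele counts (0/1/2), -1 = missing.
from scipy.stats import chisquare

def passes_quality(genos, max_missing=0.20, min_maf=0.20):
    """Keep a biallelic SNP only if <=20% of genotypes are missing and
    the minor allele frequency is >=0.2 (the depth >=6 filter is assumed
    to have been applied upstream, as in the text)."""
    called = [g for g in genos if g >= 0]
    if not called or 1.0 - len(called) / len(genos) > max_missing:
        return False
    p = sum(called) / (2.0 * len(called))  # minor-allele frequency
    return min(p, 1.0 - p) >= min_maf

def testcross_not_distorted(genos, alpha=0.0005):
    """Goodness-of-fit (chi-square) test against the 1:1 ratio expected
    for 'll x lm' / 'nn x np' testcross markers; only markers with
    p < 0.0005 are discarded, as in the text."""
    het = sum(1 for g in genos if g == 1)
    hom = sum(1 for g in genos if g == 0)
    _, p = chisquare([het, hom])  # default expectation: equal counts
    return p >= alpha
```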
Phenotyping the F1 progeny for DM resistance. In the greenhouse, phenotyping was carried out by quantifying DM resistance on naturally infected leaves of two replicate plants for each of 136 F1 genotypes five days after symptoms first appeared. Disease that developed on naturally infected plants in the greenhouse was monitored during development and then evaluated at a single time point, when shoots were at the eight- to 10-node stage. Scoring was performed using a disease resistance scale of 1 to 10, where 1 represented the greatest susceptibility (100% of leaves had >50% of the leaf area on the abaxial side covered with sporangiophores) and 10 the greatest resistance (all leaves had minimal or no sporangial growth). All leaves on a shoot were used to provide the score (coverage over the entire shoot). Plants were then stripped, pruned to two buds, and sprayed with Dithane.
An in vitro disease assay was performed to determine if the symptoms seen in the greenhouse were reproducible under more tightly controlled conditions. The 86 individuals phenotyped in vitro were part of the same F1 population as those phenotyped following natural infection, but only 20 individuals were shared between the two cohorts. Healthy leaves from the third and fourth nodes from the apical meristem were surface-sterilized in 1% NaOCl solution for 2 min and then rinsed four times in sterile deionized water (dH2O) for 5 min per rinse. Four circular leaf disks, 2 cm in diameter, were excised from each leaf and placed abaxial-side up on 0.8% water-agar plates in petri dishes. DM was collected from infected leaves in the greenhouse and propagated on susceptible leaves to amplify the inoculum. A sporangial suspension of Plasmopara viticola was prepared by suspending sporangia in dH2O at a density of 70,000 sporangia/mL, which was then sprayed uniformly over the leaf disks. Inoculated leaf disks were incubated overnight in darkness under axenic conditions and transferred to a growth chamber set at 21°C with a 5-hr/19-hr dark/light diurnal cycle. The leaf disks were scored visually for disease resistance seven days after inoculation. Leaf surface area coverage was estimated using OIV standard disease resistance chart 452 (International Organization of Vine and Wine 2009), which uses a scale of 1 to 9, where 1 and 9 represent the greatest susceptibility and greatest resistance, respectively (Supplemental Figure 1). Each leaf disk was evaluated by two observers independently and their scores were averaged. Though the greenhouse and in vitro phenotyping scales had slightly different gradings, they ran in the same direction.
Characterization of the DM strain. To characterize the P. viticola strain responsible for this infection (named MO-1), the genomic DNA of the pathogen was extracted by boiling sporangiophores in the presence of 5% Chelex, and a 235 bp-long internal transcribed spacer-1 (ITS-1) sequence of the 5.8S ribosomal RNA gene was PCR-amplified using a specific ITS-1 primer pair (Rouxel et al. 2014). The PCR product was then sequenced and aligned to the corresponding ITS nucleotide sequence of other P. viticola cryptic species. To assess the virulence of MO-1 on different grapevine species, the in vitro disease assay was performed using leaf disks of three different grapevines, V. riparia Gloire de Montpellier, M. rotundifolia Thomas, and V. vinifera F2-35, plus the parents of the F1 population. Leaf surface area coverage was estimated using the 1-to-9 scale of the OIV standard disease resistance chart 452 (International Organization of Vine and Wine 2009). QTL analysis. QTL analysis was performed in MapQTL 6.0 (Van Ooijen 2009) using the integrated map. The interval mapping method was applied to detect significant associations between phenotypic traits and markers using a regression approach. Genome-wide LOD thresholds (p < 0.05) were determined for each phenotype by performing 1000 permutations. The genetic regions for significant LOD peaks were identified with corresponding 2-LOD intervals, the predicted gene content in this region was identified using the most recent annotation of the RefSeq (Grimplet et al. 2012, Canaguier et al. 2017), and the percentage of phenotypic variance explained by each QTL was calculated. QTL graphs were generated in MapChart version 2.32 (Voorrips 2002).
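The permutation procedure for the genome-wide significance threshold can be approximated along these lines (a minimal sketch; `lod_scan` is a hypothetical stand-in for the interval-mapping scan performed in MapQTL 6.0, which is not reproduced here):

```python
# Minimal sketch of a 1000-permutation genome-wide LOD threshold.
import numpy as np

def permutation_threshold(phenotype, genotypes, lod_scan,
                          n_perm=1000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    max_lods = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(phenotype)  # break marker-trait linkage
        max_lods[i] = lod_scan(shuffled, genotypes).max()
    # (1 - alpha) quantile of the null maxima = genome-wide threshold
    return float(np.quantile(max_lods, 1.0 - alpha))
```

Any observed LOD peak exceeding this threshold is then declared genome-wide significant at the chosen alpha.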
Results
Linkage map construction. The removal of F 1 individuals with >10% missing data reduced the number of individuals in the mapping population to 294. Filtering 348,888 SNPs across this population for various quality parameters yielded 11,063 SNPs. Of the SNPs that satisfied the filtering criteria, 3436 were discarded because both parents were homozygous for these sites. An additional 1276 sites with unexpected genotypes were excluded from downstream analysis. First, "ll × lm"-and "nn × np"-type SNP markers were used to construct parental maps. Population nodes were created in JOINMAP 5.0 for each parent separately. An additional 331 and 360 markers were removed from the maternal and paternal nodes, respectively, because their segregation was distorted from the expected 1:1 ratio as determined by ꭓ 2 test (p < 0.0005). Upon the removal of identical markers from each parental node, 1462 female parent-and 1351 male parent-informative markers following "ll × lm" and "nn × np" segregation types were used for linkage map construction. For the female and male parents, 1177 and 1115 significant markers (LOD threshold ≥ 14) were grouped into 19 different linkage groups covering 1401.3 cM and 1657.4 cM of genetic distance (Table 1), respectively. Linkage groups were numbered according to the assignment of V. vinifera RefSeq chromosome map-anchored SNP markers. For V. rupestris, the number of SNP markers on each linkage group varied, from a maximum of 114 on LG14 to a minimum of 31 on LG6. The longest and shortest linkage groups for V. rupestris were LG18 (108.7 cM) and LG6 (58.6 cM), respectively. In V. riparia, LG7 and LG10 had the most (92) and fewest (33) SNP markers, respectively, and LG18 (125.1 cM) and LG9 (63 cM) were the longest and shortest linkage groups, respectively. Although the female map contains more markers than the male map, it spans a shorter genetic length. Furthermore, 291 "hk × hk"-type markers were combined with the male and female maps to construct an integrated map. The integrated linkage map consists of 2583 markers distributed on 19 linkage groups and spans a genetic distance of 1634.1 cM, with an average marker interval of 0.63 cM (Supplemental Figure 2). Synteny between marker genetic positions on the linkage maps and their corresponding physical coordinates in the RefSeq are shown (Supplemental Figure 3). The detailed genotype information for each marker across 294 F 1 progeny were compiled for the V. rupestris, V. riparia, and integrated maps (Supplemental Tables 1, 2, and 3, respectively). Parental map quality was further tested using R/qtl (Broman et al. 2003, script provided in Supplemental File 1). Pairwise recombination fractions demonstrated tight linkage within, but not across, different linkage groups (Supplemental Figure 4).
QTL mapping of the sex-determining locus. To verify the correctness of the linkage maps, pistillate/staminate flower data were used to map the sex-determining locus through interval mapping. Of 203 flower-bearing F 1 individuals, 101 had pistillate, 102 had staminate, and none had hermaphroditic flowers, indicating that the female parent was homozygous for the recessive female allele and the male parent was heterozygous for the dominant male allele. A single major QTL was detected at a genetic position of 21.99 cM on chromosome 2 (chr2) in the integrated map with a peak LOD score of 60.32 (Supplemental Figure 5). This QTL (QTL.Sex) explained 80.7% of the phenotypic variance, and its localization to chr2 is in agreement with earlier reports (Dalbó et al. 2000, Riaz et al. 2006, Marguerit et al. 2009).
Characterization of the DM pathogen. The nucleotide sequence of the ITS-1 fragment amplified from the DM strain was identical to the corresponding fragment of Clade-A of the P. viticola species complex (Supplemental Figure 6), which established it as a member of the riparia cryptic species of P. viticola (Rouxel et al. 2014). To characterize the virulence of the MO-1 strain, it was used to inoculate three different grapevines: V. riparia Gloire de Montpellier, M. rotundifolia Thomas, and V. vinifera F2-35. M. rotundifolia Thomas appeared immune to the strain, V. riparia Gloire de Montpellier proved partially resistant, while V. vinifera F2-35 was highly susceptible, indicating that the strain represents an aggressive pathogen of cultivated grapes (Figure 1). Both parents of the F1 progeny had greater resistance to MO-1 than V. vinifera (Figure 1). While we consider the DM population used in this study to be a single strain based on its appearance, we did not propagate it from a single sporangium to ensure that inoculations were done with a pure culture.
QTL mapping of DM resistance. Segregation of the DM resistance phenotype in the F 1 progeny suggested that this trait was quantitative and determined by multiple loci (Supplemental Figure 7). Of the 20 individuals in both the naturally and in vitro-infected cohorts, 16 had similarly moderate resistance ratings under both conditions. The four that differed substantially in DM coverage were all rated as highly susceptible (1 on a 1-to-10 point scale) in response to natural infection, but moderately resistant under in vitro conditions. Analysis of resistance levels in naturally-infected vines led to the detection of a major QTL at a genetic position of 12.46 cM on chr10 in the integrated map (Rpv28.1, Figure 2). This QTL, which independently mapped to the female parent but not to the male parent, had an LOD value of 32.32, and explained 66.5% of the phenotypic variance for disease resistance. QTL analysis of resistance levels in in vitro-inoculated leaf disks led to similar results: a significant QTL for resistance was detected at a genetic position of 15.09 cM on chr10 on the integrated map, explaining 24.3% of the phenotypic variance (Rpv28.2, Figure 2; mean and standard deviation scores for each genotype are reported in Supplemental Table 4). The in vitro-mapped QTL encompassed the entire Rpv28.1 interval. GBS markers that fall within the 2-LOD interval of Rpv28.1 and Rpv28.2, their LOD scores, and their projected position in the 12X.v3 assembly of V. vinifera (Canaguier et al. 2017) are listed (Supplemental Table 5). The 2-LOD interval surrounding Rpv28.2 is delimited by the GBS markers S10_419927 and S10_3959571, which correspond to the physical interval of chr10:419,927..3,959,571. Predicted genes within Rpv28.1 as projected to the V. vinifera 12X.v3 reference genome sequence are listed (Supplemental Table 6). No significant QTL for resistance was detected in the V. riparia parent under natural or in vitro conditions. Results of QTL analyses are summarized (Table 2) and effect plots are shown (Figure 3).
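To illustrate how 2-LOD confidence intervals such as those reported above are read off a LOD profile, here is a minimal sketch (ours; `positions_cm` and `lods` are assumed per-marker vectors from the scan, and this is not the MapQTL implementation):

```python
# Hypothetical helper: 2-LOD support interval around the peak of a LOD
# profile, i.e., the outermost flanking positions at which the LOD score
# still lies within 2 units of the maximum.
def two_lod_interval(positions_cm, lods):
    peak = max(range(len(lods)), key=lods.__getitem__)
    cutoff = lods[peak] - 2.0
    left = peak
    while left > 0 and lods[left - 1] >= cutoff:
        left -= 1
    right = peak
    while right < len(lods) - 1 and lods[right + 1] >= cutoff:
        right += 1
    return positions_cm[left], positions_cm[right]
```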
Discussion
The American grape species V. rupestris and V. riparia have adapted to disparate environmental conditions and occupy different but overlapping geographic ranges (Callen et al. 2016). They also evolved to have contrasting characteristics in dormancy, morphology, and growth habits (Munson 1909). Exploration of the genetic basis of their environmental adaptation is warranted because the viticulture industry is in need of genetic resources to mitigate the environmental impact of global climate change. The economic value of these species is evidenced by their status as cornerstone resources for developing disease-resistant, phylloxera- and stress-tolerant, fruit-bearing, and rootstock cultivars during the past century and a half (Reisch et al. 2012). Despite their proven value, V. rupestris and V. riparia have yet to be explored for their vast genetic diversity across North America. V. rupestris has been under pressure due to habitat loss and is threatened by genetic erosion (Pap et al. 2015). These conditions add urgency to a broader examination of its native populations. In a recent study, GBS markers were used to examine the genetic diversity of 27 V. rupestris and 80 V. riparia accessions housed at the USDA-ARS Grape Germplasm Collection (Klein et al. 2018). While their data are limited to accessions maintained in the repository, their work set the technological and phylogenetic foundations for a broader exploration of the natural populations of these and other wild grape relatives (Klein et al. 2018).

Figure 2: LG 10 with marker ID and corresponding genetic position (cM). The right panel shows the logarithm of odds (LOD) scores obtained from interval mapping for downy mildew resistance (red: greenhouse inoculation and green: in vitro inoculation) for each marker in the integrated genetic map. The solid boundary of the red and green box plots, and the extreme boundary represented by their whiskers, indicate the 1- and 2-LOD intervals for Rpv28.1 and Rpv28.2, respectively. The horizontal black dashed-dotted line represents the genome-wide LOD threshold (1000 permutations) at a 5% level of significance. Markers in red and green color on the map represent the markers with the largest LOD value.
We report here the construction of genetic linkage maps for V. rupestris and V. riparia based on an F1 population produced from a cross between these two species. The genomes of both parents had a high degree of synteny with the genome of V. vinifera for marker position (Supplemental Figure 3). Only 9.94% and 8.87% of V. rupestris and V. riparia markers, respectively, were assigned to a linkage group that was different from the V. vinifera RefSeq linkage group assignment (Supplemental Figure 2). Similarly conflicting results were reported between genetic and RefSeq positions for 18.3% and 13.7% of SNP markers in apple (Antanaviciute et al. 2012, Gardner et al. 2014). Such disagreements between linkage maps and RefSeqs do not necessarily indicate mapping errors, but may result from the presence of paralogous genomic regions or incorrect RefSeq sequence assembly. By performing a map validation step using flower sex phenotype, we verified the linkage maps we generated, since the genomic position of the sex locus is known. This gave us confidence in the accuracy of our map and the ability to reproduce the mapping of a well-known locus with our set of markers, placed as they are on the genetic maps. Defense-related genes tend to be in the heterozygous state in plants (McDowell and Simon 2006), and genes that confer resistance to the same pathogen are often located at different loci in various grape genotypes (Gadoury et al. 2012, Buonassisi et al. 2017). Consequently, two resistant grapevine parents will likely produce an F1 population in which defense-related traits will segregate, as recently demonstrated (Divilov et al. 2018). We followed a similar approach as described by Divilov et al. (2018), in that we established an F1 hybrid population from a cross between two DM-resistant accessions. DM resistance segregated in the F1 progeny, which enabled us to map a major QTL (LOD of 32.32), Rpv28.1, in the female parent that accounted for 66.5% of the phenotypic variance in naturally infected plants under greenhouse conditions. Repeating this analysis using a leaf disk DM inoculation assay led to mapping of another resistance QTL from the female parent, Rpv28.2, which overlaps with Rpv28.1, confirming the contribution of this locus to defense against DM. This QTL, however, explained only 24.3% of the phenotypic variance and had an LOD value of 5.2, indicating that the trait is strongly influenced by the environment. Other possible reasons for the lower LOD value with respect to the QTL detected in the greenhouse assay might include limited population size and/or phenotyping errors. Notably, of the 20 vines shared between the naturally infected and the in vitro-inoculated cohorts, 16 had similar and four had different phenotypes. All four of the latter were rated highly susceptible in the greenhouse, but moderately resistant in vitro. Unfortunately, the number of plants was too low to determine why these four individuals had so much lower resistance in the greenhouse. No QTL were identified from the male parent, which is surprising given the results of our virulence assay on multiple species (Figure 1), in which the V. riparia parent showed even greater resistance to this DM strain than the V. rupestris parent.

Table 2 (fragment): V. rupestris; QTL peak 12.456 cM; 2-LOD interval(a) 11.86-14.85 cM; nearest marker S10_1285522; LOD 32.32; genome-wide LOD threshold(b) 4.8; phenotypic variance explained(c) 66.5%; flanking markers(d) S10_2470297, S10_2868961. (a) 2-LOD interval on the integrated genetic map. (b) Genome-wide LOD threshold obtained with 1000 permutations at p = 0.05. (c) Percentage of phenotypic variance explained by quantitative trait locus. (d) Markers on each side of the largest LOD peak.

Figure 3: Effect plots showing the relative contribution to downy mildew resistance of quantitative trait loci Rpv28.1 (S10_1285522) and Rpv28.2 (S17_17189484) in the homozygous and heterozygous states.

One possible reason for this finding
may be the presence of multiple components of resistance that contribute at low levels to the observed resistance phenotype in the male parent, which were below the threshold of detection in the offspring. Other possibilities include a major QTL that was present in a homozygous state or in a region of the genome with low marker coverage. We report here the identification of two overlapping DM resistance loci, Rpv28.1 and Rpv28.2. While these may in fact represent a single resistance locus, we identified them in separate assays that produced different LOD scores, different levels of variance explained, and different numbers of markers included under the peaks. Therefore, we think it more prudent to retain separate nomenclature for these loci. While the high LOD value for Rpv28.1 and its reproducibility lend strong support for the presence of a resistance QTL on chr10, our experiments have limitations. Importantly, resistance was likely assessed against a single strain of P. viticola. Furthermore, our results lack multi-season field data. At the time of writing, the entire F 1 population has been established in the field in both New York and Missouri. Both locations have high DM disease pressure, but represent different climates where the prevailing DM populations are likely dominated by different cryptic species of P. viticola (Rouxel et al. 2014). In the future, it will be important to test the resistance of this progeny under vineyard conditions. The New York and Missouri plantings will enable us to collect data on how various P. viticola strains and climatic conditions influence the performance of this QTL in the field.
Rpv28.1 is responsible for 66.5% of resistance against an aggressive DM pathogen and, therefore, it may be used for breeding grape cultivars with a reduced requirement for fungicide input. The applicability of this locus is even more relevant because, to our knowledge, it is the first defense-related QTL in this region of chr10. Previously, it was hypothesized on the basis of gene expression measurements that DM resistance in the hybrid cultivar Regent was encoded by three CC-NBS-LRR-type resistance genes on chr10 (Kortekamp et al. 2008). However, these genes mapped to a physical distance of at least 13 Mb away from Rpv28.1 and Rpv28.2, near the end of the opposite arm of chr10. Interestingly, the DM-resistant parent, V. rupestris B38, harbored another QTL for DM resistance, Rpv19 on chromosome 14, different from the QTL identified in our study (Divilov et al. 2018). We detected no QTL for DM resistance at this locus in either the greenhouse or in vitro assays performed in this study. The most likely reason that we did not observe this QTL is that the DM isolate in this study was different from the isolate that elicits disease resistance conveyed by Rpv19. Additionally, Rpv19 resistance is characterized by a hypersensitive reaction that was not observed in the Rpv28 resistance phenotype. Therefore, V. rupestris B38 is a promising source for multiple resistance loci and may prove a useful component of "gene pyramiding" schemes. The premise of gene pyramiding is that combining various defense mechanisms against the same class of pathogen will result in more stable resistance than introgression of a single resistance gene, particularly when considering the ability of different resistance gene products to recognize different pathogen isolates. Based on insight into the evolution of R genes in several plant species (McDowell and Simon 2006), it is not surprising that V. rupestris B38 carries several resistance factors against the same pathogen. The combination of potentially multiple resistance mechanisms represented by Rpv19 and Rpv28 affords protection against different isolates of the pathogen and provides a survival advantage in nature. Future work will assess the virulence of other DM isolates from different P. viticola clades on Rpv28 DM-resistant individuals to ascertain the breadth of recognition for this locus.
Introgression of several loci to provide resistance against the same pathogen is only possible with marker-assisted selection (MAS). The GBS markers that define the new QTL may prove valuable for the development of molecular markers for MAS (Table 2 and Supplemental Table 5). Only 24.3% of SNPs segregated within both V. vinifera and wild Vitis germplasm (Myles et al. 2010), suggesting that a portion of the heterozygous V. rupestris GBS markers can be selected readily when this resistance haplotype is introgressed into a predominantly V. vinifera background. Although SNP-based genotyping has gained popularity in grape breeding, many breeding programs still rely on simple sequence repeat (SSR) markers. A significant number of SSR markers are transferable from V. vinifera to wild Vitis species and hybrid grapes (Garris et al. 2009, Pap et al. 2015, Hammers et al. 2017). We identified six SSR markers that fall within or closely flank the region spanning Rpv28.1 and Rpv28.2 (Supplemental Table 7). These markers were developed originally for V. vinifera, and their applicability and polymorphism in V. rupestris B38 remain to be tested. Because V. rupestris B38 is now known to harbor multiple DM resistance genes (Rpv19 and Rpv28), marker-assisted selection is essential to identify which resistance alleles are passed to its progeny.
Highly effective MAS, however, will require development of markers closely linked to Rpv28.1. The strong synteny (Supplemental Figure 3) and partial conservation of SSR markers between V. vinifera and V. rupestris indicate that designing SSR primers based on the orthologous V. vinifera sequences may be a workable, though potentially ineffective, way to achieve this goal. A genomic library or a genome assembly of V. rupestris would make this approach more fruitful. Other marker types, including rhAmpSeq and KASP, could also be useful for this purpose and for further mapping work with this population. In addition, it may prove useful for future breeding efforts to survey the extant grapevine germplasm repositories for the markers associated with Rpv28 resistance and assess allelic diversity at this locus. Considering the importance of this species as a resource for grape breeding, the establishment of genomic tools would be a well-justified investment for the grape research community.
Conclusion
The hypothesis that a V. rupestris × V. riparia F 1 progeny can facilitate mapping of economically relevant loci was supported by the identification of the DM resistance QTL Rpv28.1 and Rpv28.2 in the V. rupestris genome. The novelty of this resistance locus suggests that the biological diversity of North American Vitis remains an extensive and still largely unexplored resource for grapevine breeding. This paper and its supplemental material provide a valuable resource for grape breeders, geneticists, and those teaching genetic mapping in an outcrossing species. | 2020-09-03T09:02:55.665Z | 2020-08-20T00:00:00.000 | {
"year": 2020,
"sha1": "d0d2677e4c8c88e8a1eb88c59eded7efdfdee2bd",
"oa_license": "CCBY",
"oa_url": "https://www.ajevonline.org/content/ajev/72/1/12.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "bc52599a6a354d048f0092f0e636437aa286ee20",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
56400560 | pes2o/s2orc | v3-fos-license | Antioxidant Activities and Phenolic Compounds of Various Extracts of Rhus typhina Fruits and Leaves
The antioxidant activities of various extracts (methanol, hexane, dichloromethane, ethyl acetate, n-butanol, water) of Rhus typhina fruits and leaves were investigated using different methods, and the main phenolic compounds were analyzed by LC-MS. The ethyl acetate extracts from fruits and leaves of R. typhina exhibited the highest DPPH, hydroxyl radical and nitrite scavenging activities, reducing potential and protein protection ability. The phenolic and flavonoid contents were highest in the ethyl acetate fraction. The LC-MS analysis showed that the contents of luteolin and luteolin-7-O-glucuronide in leaves are slightly higher (34.49 and 32.69%, respectively) than those in the fruits (32.49 and 27.89%, respectively), whereas the content of rutin in fruits (16.73%) is higher than that in the leaves (7.79%). These results imply that the leaves of R. typhina, as well as the fruits, might serve as a natural source of antioxidants for use as a food additive, given their good nutritional value.
INTRODUCTION
Rhus typhina (staghorn sumac) is used to make a beverage termed "sumac-ade" or "Rhus juice" prepared from its fruits, and it also serves as a traditional medicine with pharmacological functions such as antihaemorrhoidal, antiseptic, diuretic, stomachic and tonic (Foster and Duke, 1990; Moerman, 1998). Some phenolic compounds, including gallic acid and gallotannin, have been identified in the leaves (Frohlich et al., 2002; Werner et al., 2004). The fruits of R. typhina are rich in polyphenols (Kossah et al., 2010) and have also been found to abound in oleic and linoleic acids, vitamins (B1, B2, B6 and Vc), minerals and organic acids (Kossah et al., 2009). However, studies of the antioxidant activity of R. typhina leaves are limited. The present work focused on comparing the antioxidant activities of the methanol extract and its various fractions (n-hexane, dichloromethane, ethyl acetate, n-butanol and water) from R. typhina fruits and leaves using different in vitro assays, such as 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging ability, hydroxyl radical scavenging ability, nitrite scavenging ability, reducing potential and protection against protein damage. Furthermore, the main polyphenolic compounds were analyzed by analytical HPLC-MS.
The objectives of this study were to determine the differences in chemical composition and antioxidant activity between R. typhina fruit and leaf extracts and to provide evidence for using the leaves, as well as the fruits, as a functional food source.
MATERIALS AND METHODS
Preparation of plant extracts: R. typhina fruits and leaves were harvested from Xinxiang city in Henan province, China. The fruits and leaves were dried in the shade for 15 d and 10 d, respectively, before solvent extraction. The dried R. typhina fruits and leaves were extracted three times, sequentially with 70, 85 and 95% methanol, at 45°C for 12 h each, and then filtered through filter paper (100 mm; Whatman, Maidstone, UK). The methanol extracts (RT-M) were concentrated under reduced pressure on a rotary evaporator (CCA-1110; EYELA, Tokyo, Japan). The RT-M extract was suspended in water and then partitioned with n-hexane, dichloromethane, ethyl acetate and n-butanol, repeating three times with each solvent. Removal of the solvents afforded the n-hexane (RT-M-H), dichloromethane (RT-M-D), ethyl acetate (RT-M-E), n-butanol (RT-M-B) and water (RT-M-W) fractions, respectively.
Determination of total phenolic content: Total Phenolic Content (TPC) was estimated using the Folin-Ciocalteu method according to Ragazzi and Veronese (1973). Briefly, 1.0 mL of extract solution containing 1.0 mg of extract was diluted with 5 mL of deionized water, and 0.5 mL of 50% Folin-Ciocalteu reagent was added. Five min later, 1 mL of 5% Na2CO3 was added; the mixture was then mixed thoroughly and left to stand for 1 h in the dark. The absorbance was measured at 725 nm. The concentration of total phenolic compounds was expressed as mg Gallic Acid Equivalents (GAE)/g of extract.
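As an illustration of how such absorbance readings are converted into GAE units, here is a minimal sketch assuming a linear gallic acid standard curve (the calibration itself is hypothetical; only the 725 nm reading and the mg GAE/g unit come from the protocol above):

```python
# Hypothetical gallic-acid calibration: fit A725 vs. standard concentration,
# then convert a sample absorbance to mg GAE per g of extract.
import numpy as np

def total_phenolics_gae(a725_sample, std_conc_mg_per_ml, std_a725,
                        extract_mg_per_ml):
    """Return mg GAE per g extract from a linear standard curve."""
    slope, intercept = np.polyfit(std_conc_mg_per_ml, std_a725, 1)
    gae_mg_per_ml = (a725_sample - intercept) / slope  # invert the curve
    return gae_mg_per_ml / extract_mg_per_ml * 1000.0  # mg GAE / g extract
```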
Determination of total flavonoid content: Total Flavonoid Content (TFC) was measured using the method described by Park et al. (1997) with a slight modification. An aliquot of 0.5 mL of the solution containing 1 mg of extract was added to test tubes containing 0.1 mL of 10% aluminium chloride hexahydrate, 0.1 mL of 1 M potassium acetate, 2.8 mL of deionized water and 1.5 mL of 95% ethanol. After 40 min at room temperature, the absorbance was determined at 415 nm. The concentration of flavonoid compounds was expressed as mg Quercetin Equivalents (QUE)/g of extract.
DPPH radical scavenging activity: A 2.0 mL aliquot of extract was added to 2.0 mL of 0.2 mM DPPH methanolic solution. The mixture was left to stand at room temperature for 30 min and the absorbance was read at 517 nm. The ability to scavenge the DPPH radical was calculated using the following equation:

Scavenging effect (%) = [1 − (A_sample − A_blank)/A_control] × 100

The synthetic antioxidant L-ascorbic acid was used as a positive control. The values of scavenging effect were expressed as EC50 (µg/mL), the concentration of the sample antioxidant required to scavenge 50% of the DPPH radical in the mixture.
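The scavenging formula above, together with an EC50 estimated by interpolating the dose-response curve, can be sketched as follows (the interpolation approach is our assumption; the study does not state how EC50 was derived):

```python
# Sketch: DPPH scavenging percentage and EC50 by linear interpolation.
import numpy as np

def scavenging_pct(a_sample, a_blank, a_control):
    return (1.0 - (a_sample - a_blank) / a_control) * 100.0

def ec50(concs_ug_ml, scavenging):
    """Concentration at which interpolated scavenging crosses 50%.
    Assumes scavenging increases with concentration (np.interp needs
    its x-coordinates, here the scavenging values, to be ascending)."""
    s = np.asarray(scavenging, float)
    c = np.asarray(concs_ug_ml, float)
    return float(np.interp(50.0, s, c))
```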
Hydroxyl radical (•OH) scavenging activity:
The Fenton reaction mixture consisted of 200 µL each of FeSO4 (10 mM), ethylenediaminetetraacetic acid (EDTA, 10 mM) and 2-deoxyribose (10 mM). 200 µL of the sample and 1 mL of 0.1 M phosphate buffer (pH 7.4) were added to a total volume of 1.8 mL. Subsequently, 200 µL of H2O2 was added and the reaction mixture was incubated for 4 h at 37°C. After incubation, 1 mL of 2.8% TCA and 1 mL of 1% TBA were added and the mixture was placed in a boiling water bath for 10 min; the mixture was then centrifuged (5 min, 3000 rpm) and the absorbance was measured at 532 nm. The hydroxyl radical scavenging activity was calculated according to the following equation:

Scavenging effect (%) = [1 − (A_sample − A_blank)/A_control] × 100

The synthetic antioxidant BHT was used as a positive control. The values of scavenging effect were calculated for the various concentrations of extract. Tests were carried out in triplicate.
Reducing power assay:
The reducing power of extracts was determined as described by Singh and Rajini (2004). A 1 mL aliquot of the extract, 2.5 mL of 0.2 M phosphate buffer (pH 6.6) and 2.5 mL of 1% (w/v) K3Fe(CN)6 were mixed. The mixture was incubated at 50°C for 30 min. Trichloroacetic acid (10%, w/v; 2.5 mL) was added and the resulting mixture was centrifuged at 3000 rpm for 10 min. 2.5 mL of the supernatant was mixed with 2.5 mL of distilled water and 0.5 mL of 0.1% (w/v) FeCl3 solution. The absorbance was measured at 700 nm using a spectrophotometer. Assays were performed in triplicate. The reducing power of Vc was also determined as the positive control.
Measurement of nitrite scavenging ability:
The nitrite scavenging ability of the extracts was determined according to a method using Griess reagent (Kato et al., 1987). Briefly, 1 mL of extract was mixed with 1 mL of 1 mM sodium nitrite. The mixture was then added to 8 mL of 0.2 M citrate buffer (pH 1.2) and incubated for 1 h at 37°C. After incubation, 1 mL of the solution/supernatant was withdrawn and added to 2 mL of 2% acetic acid and 0.4 mL of Griess reagent (1% sulfanilic acid and 1% naphthylamine in a methanol solution containing 30% acetic acid). After vigorous mixing, the mixture was kept at room temperature for 15 min and the absorbance was measured at 520 nm. Quercetin was used as a positive control.
Protein protection assay: Protein oxidation was assayed as described by Kwon et al. (2000) with minor modifications. Oxidation of Bovine Serum Albumin (BSA) in PBS was initiated by AAPH, and the reaction mixture was incubated with various concentrations of the extract or gallic acid (positive standard). After incubation for 24 h at 37°C, 0.02% BHT was added to prevent the formation of further peroxyl radicals. The proteins were then analyzed by standard SDS-PAGE.
Statistical analysis:
Statistical analysis software (SPSS version 11.50; SPSS Inc., Chicago, IL, USA) was used for all analyses. The data were subjected to ANOVA, and significant differences were reported at the level of p < 0.05. All experiments were performed with three replications. One-way ANOVA followed by Tukey's HSD test or paired Student's t-test was used to assess the statistical significance of changes in all indices, with the level of significant difference set at p < 0.05.
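For illustration, the same comparison could be reproduced outside SPSS along these lines (a minimal sketch using SciPy and statsmodels; the data layout is hypothetical):

```python
# Sketch: one-way ANOVA followed by Tukey's HSD across extract fractions,
# mirroring the SPSS workflow described above.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_fractions(values_by_fraction):
    """values_by_fraction: dict mapping fraction name -> replicate values."""
    f_stat, p = f_oneway(*values_by_fraction.values())  # overall ANOVA
    labels, data = [], []
    for name, vals in values_by_fraction.items():
        labels.extend([name] * len(vals))
        data.extend(vals)
    tukey = pairwise_tukeyhsd(data, labels, alpha=0.05)  # pairwise HSD
    return f_stat, p, tukey
```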
RESULTS
Total polyphenol and flavonoid contents: Among the six different solvent extracts of fruits and leaves, the TPC showed significant differences (p < 0.05), ranging from 19.57 to 184.26 mg GAE/g extract. In fruits, the TPC of RT-M-E was highest at 184.42 ± 1.65 mg GAE/g extract; the TPC of RT-M (183.42 ± 2.29) was slightly lower than that of RT-M-E, followed by RT-M-D at 181.66 ± 1.38 mg GAE/g extract and RT-M-B at 168.05 ± 0.97 mg GAE/g extract.
However, for the same solvent extract, the TPC of R. typhina leaves was lower than that of the fruits. In leaves, the TPC of RT-M-E was highest at 184.26 ± 2.17 mg GAE/g extract, followed by RT-M-D at 180.15 ± 1.98 mg GAE/g extract and RT-M at 163.02 ± 1.2 mg GAE/g extract; the lowest content was that of RT-M-W at 19.57 ± 1.31 mg GAE/g extract. A similar tendency was found in fruits. The TFC in R. typhina leaves ranged from 15.71 to 62.53 mg QUE/g extract, and in R. typhina fruits from 11.31 to 71.46 mg QUE/g extract. The two RT-M-E extracts, of leaves and fruits, possessed the highest polyphenol contents compared to the other fractions, owing to the polarity of ethyl acetate (Table 1).
Table 1 footnotes: RT-M-H: n-hexane fraction of RT-M; RT-M-D: dichloromethane fraction of RT-M; RT-M-E: ethyl acetate fraction of RT-M; RT-M-B: n-butanol fraction of RT-M; RT-M-W: water fraction of RT-M. The results are presented as the mean ± SD of three independent experiments in triplicate. A p-value < 0.05 was considered significant.

DPPH radical scavenging activity: DPPH is a free radical and accepts an electron or hydrogen radical to
DPPH is a free radical and accepts an electron or hydrogen radical to M-H: n-hexane fraction of RT-M; RT-M-D: dichloromethane fraction of RT-M; RT-M-E: ethyl acetate fraction of RT-M; RT-M-B: n-butanol fraction of RT-M; RT-M-W: water fractions of RT-M.The results are presented as the mean±SD of three independent experiments in triplicate.A p-value<0.05 was considered significant become a stable diamagnetic molecule (Soares et al., 1997).Various concentration of extract was taken as a measure of antiradical activity, the lower the EC 50 , the higher the antioxidant potential (Brand et al., 1995).Lascorbic acid was the reagent used as standard.The EC 50 of all extracts of R. typhina leaves and fruits and L-ascorbic acid against the DPPH radical were in Table 2.The lowest EC 50 (7.85and 8.6 µg/mL) were found in RT-M-E of R. typhina leaves and fruits respectively.
It was observed that the RT-M-E of R. typhina leaves and fruits exhibited the best DPPH radical scavenging activity, consistent with their highest levels of TPC and TFC, whereas RT-M-W showed relatively weak DPPH scavenging ability, because few polyphenolic compounds remained in this fraction. Comparing the same solvent fractions of leaves and fruits, all the fruit extracts exhibited better DPPH radical scavenging activity than the corresponding leaf extracts, because the fruits contain slightly more TPC and TFC than the leaves. All these results suggested that polyphenols and flavonoids may be the main constituents responsible for the DPPH radical scavenging activity.
Hydroxyl radical scavenging activity:
Active hydrogen peroxide can be toxic to cells; therefore, removing H2O2 as well as O2•− is very important for antioxidant defense. The hydroxyl radical scavenging ability of all extracts is shown in Fig. 1. It was noticed that most of the extracts were capable of scavenging hydroxyl radicals in a dose-dependent manner, except RT-M-W. The results showed that the RT-M-E of R. typhina leaves and fruits had higher scavenging ability than the other fractions, while the RT-M-W of R. typhina leaves and fruits had very weak capacity to inhibit deoxy-D-ribose degradation.
There was a significant correlation between the TPC and hydroxyl radical scavenging activity (Fig. 1).
Reducing power assay:
The reduction of an oxidized antioxidant molecule to regenerate the reduced antioxidant is another reaction pathway involving electron donation. The reducing power of all samples increased in a dose-dependent manner (Fig. 2) and followed the order shown there, with RT-M-E the strongest.

Nitrite scavenging ability: Nitric oxide is a free radical which interacts with oxygen to produce nitrite ions at physiological pH, and these can be estimated using the Griess reagent. NO• reacts with O2•− to produce reactive peroxynitrite (ONOO−), which causes serious damage to lipids, proteins and nucleic acids (Moncada et al., 1991). Our data revealed that all the extracts exhibited better nitric oxide scavenging activity at pH 1.2 than at pH 4.6 and pH 6.0 (Fig. 3). Of the leaf extracts, RT-M-E exhibited the best nitric oxide scavenging activity, followed by RT-M. Of the fruit extracts, RT-M-E showed the best scavenging activity, followed by RT-M, RT-M-B and RT-M-D. Both the leaf and fruit water fractions showed weak nitric oxide scavenging ability.
Protein protection ability:
AAPH is a water-soluble initiator that decomposes into alkyl radicals under physiological conditions; these react with oxygen to produce alkyl peroxyl radicals that initiate oxidative fragmentation (Cai et al., 2003). Under many pathological conditions cellular proteins become oxidized, and the vulnerability of the various amino acid residues of proteins to oxidation varies with the reactive oxygen species (Ames et al., 1993). The protection against protein oxidative damage was determined from the oxidation of BSA initiated by AAPH. The SDS-PAGE results demonstrated that RT-M-E of leaves and fruits exhibited a significant protective effect against oxidation of BSA in a concentration-dependent manner (Fig. 4). At the same concentration, the effect of the R. typhina fruit extracts was slightly stronger than that of the leaves.
RT-M-D and RT-M also showed some protective effect against oxidation of BSA (data not shown).
HPLC-MS identification of the individual phenolic compounds in the RT-M-E of R. typhina fruits and leaves:
Nine compounds, namely gallic acid, catechin, EGCG, caffeic acid, p-coumaric acid, luteolin-7-β-D-glucopyranoside, luteolin, rutin and quercetin, were used as standard substances to identify and quantify the phenolic compounds in the RT-M-E of R. typhina fruits and leaves. The RT-M-E of both fruits and leaves is rich in phenolic compounds, and 8 of the standards were identified based on comparison of their retention times and masses with the authentic compounds. In the RT-M-E of R. typhina fruit, luteolin is the most abundant compound at 32.493%, luteolin-7-β-D-glucopyranoside the second most abundant at 27.899% and rutin the third at 16.734%; gallic acid and p-coumaric acid are present at 1.496 and 0.792%, respectively, while the other standards are present only in trace amounts (Table 3). In the RT-M-E of R. typhina leaves, luteolin, luteolin-7-β-D-glucopyranoside and rutin are likewise the most abundant compounds. As shown in Fig. 5 and Table 3, the contents of luteolin, luteolin-7-β-D-glucopyranoside and p-coumaric acid are higher in leaves than in fruits, whereas the contents of rutin and gallic acid are higher in fruits than in leaves.
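Peak assignment against authentic standards, as described above, amounts to a tolerance search on retention time and mass. The sketch below illustrates the idea; the tolerance values, retention times and masses are placeholders, not data from this study.

```python
# Minimal standard-based peak assignment: a detected peak matches a
# standard when both retention time and mass fall within tolerances.
standards = {            # name: (retention time [min], mass [Da]) - illustrative
    "gallic acid": (4.2, 170.02),
    "rutin":       (18.7, 610.15),
    "luteolin":    (27.9, 286.05),
}

def assign_peaks(peaks, standards, rt_tol=0.2, mass_tol=0.02):
    """Assign detected (rt, mass, area) peaks to standards."""
    hits = []
    for rt, mass, area in peaks:
        for name, (rt_ref, m_ref) in standards.items():
            if abs(rt - rt_ref) <= rt_tol and abs(mass - m_ref) <= mass_tol:
                hits.append((name, rt, area))
    return hits

detected = [(4.25, 170.03, 1.5e5), (27.88, 286.04, 3.2e6)]
for name, rt, area in assign_peaks(detected, standards):
    print(f"{name}: RT {rt} min, area {area:.2e}")
```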
DISCUSSION
The antioxidant activities of the methanol extracts and their five fractions from the leaves and fruits of R. typhina were evaluated through radical scavenging, reducing power and protein protection assays, and all these activities were positively correlated with the TPC and TFC. The RT-M-D of R. typhina leaves and fruits also exhibited good antioxidant behaviour in many of the assays, to which their comparatively high TPC and TFC contribute. The TPC and TFC in fruits are slightly higher than in leaves. LC-MS analysis showed that the leaves and fruits of R. typhina possess large amounts of flavonoids, including luteolin, luteolin-7-β-D-glucopyranoside and rutin. Flavonoids can help prevent diseases such as cancers, diabetes and Alzheimer's disease through antioxidative action and/or the modulation of several protein functions. The high concentrations of flavonoid derivatives enhance the nutraceutical value in terms of health-promoting effects. Taken together, the results indicate that the leaves of R. typhina, like the fruits, could be used as an effective functional foodstuff resource owing to their antioxidative and bioactive flavonoids. A study on the inhibition of human tumor cell proliferation by leaves and fruits of R. typhina is currently in progress.
Table 1:
Total polyphenol (mg GAE/g dry extract) and flavonoid (mg QUE/g dry extract) contents of R. typhina leaf and fruit extracts | 2018-12-15T13:59:21.883Z | 2015-02-01T00:00:00.000 | {
"year": 2015,
"sha1": "20e9705d627a73fb53f375eaf501f3c447c4c401",
"oa_license": "CCBY",
"oa_url": "https://www.maxwellsci.com/announce/AJFST/7-223-229.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fb97c3791ed4bc0797edff5ba0d043c07a3b7ad2",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
232265798 | pes2o/s2orc | v3-fos-license | Lightweight, Flexible Cellulose-Derived Carbon Aerogel@Reduced Graphene Oxide/PDMS Composites with Outstanding EMI Shielding Performances and Excellent Thermal Conductivities
Highlights Cellulose aerogels were prepared by hydrogen bonding driven self-assembly, gelation and freeze-drying. The skin-core structure of CCA@rGO aerogels can form a perfect three-dimensional bilayer conductive network. Outstanding EMI SE (51 dB) is achieved with 3.05 wt% CCA@rGO, which is 3.9 times higher than that of the co-blended composites. Supplementary Information The online version contains supplementary material available at 10.1007/s40820-021-00624-4.
Introduction
While electronic and electrical equipment have brought great convenience to our lives, they have also caused increasingly serious electromagnetic pollution, such as electronic noise, electromagnetic interference and radio frequency interference [1][2][3]. Electromagnetic waves not only couple with and interfere with the normal use of other electronic components, preventing electronic equipment from functioning properly and posing a serious threat to information security, but also affect human health. Studies have shown that when people are exposed to electromagnetic radiation for a long time, the risk of diseases such as cancer, heart disease, skin problems, headaches and other mild or acute conditions increases. Therefore, the design and development of lightweight, economical and efficient EMI shielding materials is imperative to address the problem of electromagnetic pollution [4][5][6].
Compared with traditional metal-based EMI shielding composites, polymer-based EMI shielding composites have attracted much attention from the scientific and industrial communities due to their light weight, high specific strength, easy molding and processing, excellent chemical stability, low cost and good sealing properties [7][8][9]. Commonly used polymer matrixes are epoxy resin, phenolic resin, polyvinylidene fluoride (PVDF) and polydimethylsiloxane (PDMS). Among them, PDMS has good mechanical properties, high and low temperature resistance, excellent weather resistance, chemical stability and easy processing and molding characteristics, and is widely used in many fields such as aerospace, the automotive industry and microelectronics [10][11][12]. In addition, PDMS has excellent flexibility compared to rigid matrixes such as epoxy resins and can meet the flexibility requirements of wearable electronic devices. In recent years, PDMS-based EMI shielding composites have made considerable research progress, but achieving the desired EMI shielding effectiveness (EMI SE) usually requires high filler loadings, which seriously affect cost, processability and mechanical properties, largely limiting the application of PDMS-based EMI shielding composites in the fields of microelectronics, aircraft and spacecraft [13][14][15]. Therefore, the development of PDMS-based EMI shielding composites with excellent EMI shielding performance at low filler loading is a research hotspot.
As an abundant renewable bioresource on the earth, biomass (such as straw, wood, sugarcane and cotton) is easy and fast to obtain from a wide variety of sources [16][17][18][19]. Biomass-based carbon aerogel/polymer composites prepared by suitable methods have a wide range of applications in the fields of flexible conductive materials, supercapacitors, energy storage materials and EMI shielding materials [20][21][22]. Shen et al. [23] prepared aerogel (Cs)/epoxy EMI shielding composites by carbonizing natural wood at 1200 °C to obtain Cs and then backfilling with epoxy resin. The results showed that the electrical conductivity (σ) and EMI SE T of the Cs/epoxy EMI shielding composites reached 12.5 S m−1 and 28 dB, respectively. Li et al. [24] prepared aerogel-like carbon (ALC)/PDMS EMI shielding composites by hydrothermal carbonization of sugarcane to obtain ALC, followed by backfilling with PDMS. The results showed that the EMI SE T of the ALC/PDMS EMI shielding composites reached 51 dB at a thickness of 10 mm. Ma et al. [25] obtained straw-derived carbon (SC) aerogel by carbonizing wheat straw at 1500 °C and then prepared SC/epoxy EMI shielding composites by backfilling with epoxy resin. The results showed that the EMI SE T of the SC/epoxy EMI shielding composites reached 58 dB at a thickness of 3.3 mm.
It has been shown that the EMI SE of biomass-based carbon aerogel/polymer EMI shielding composites can be further enhanced by compounding the carbon aerogel with highly conductive materials (such as silver wire, MXene and graphene) or magnetic materials (such as iron, cobalt, nickel and their oxides) [26,27]. The introduction of reduced graphene oxide (rGO) into cellulose carbon aerogels (CCA) can further improve the 3D conductive network and significantly enhance the σ of biomass-based carbon aerogel/polymer EMI shielding composites, thus effectively improving their EMI SE [28]. Zeng et al. [29] prepared ultra-lightweight and highly elastic rGO/lignin-derived carbon (LDC) aerogel EMI shielding composites by freeze-drying. The results showed that the EMI SE T of the rGO/LDC aerogel EMI shielding composites reached 49 dB at a thickness of 2 mm. Wan et al. [30] prepared ultra-lightweight cellulose fiber (CF)/rGO aerogel EMI shielding composites by freeze-drying and carbonization. The results showed that the EMI SE T of the CF/rGO aerogel EMI shielding composites reached 48 dB at a thickness of 5 mm. In our previous research, Gu et al. [31] prepared annealed sugarcane (ASC) by a hydrothermal method and annealing, followed by vacuum-assisted impregnation to prepare ASC/rGO aerogel EMI shielding composites. The results showed that the EMI SE T of the ASC/rGO aerogel EMI shielding composites reached 53 dB at a thickness of 3 mm.
In this paper, NaOH/urea solution was used to dissolve cotton via hydrogen-bond-driven self-assembly to obtain a cellulose solution, and CA was then prepared by combining gelation and freeze-drying. The optimized CA was impregnated in GO solution and freeze-dried to produce CA@GO aerogel with GO loaded on the CA backbone, then carbonized at high temperature to produce CCA@rGO aerogel with rGO loaded on the CCA backbone, and finally backfilled with PDMS to produce CCA@rGO/PDMS EMI shielding composites. On this basis, the effects of CCA and rGO loading on the electrical conductivities, EMI SE, thermal conductivities, and mechanical and thermal properties of the CCA@rGO/PDMS EMI shielding composites were investigated.
Preparation of CA
NaOH/urea solution was used to dissolve the cotton via hydrogen-bond-driven self-assembly to obtain the cellulose solution, as described below. The NaOH/urea solution (NaOH/urea/water = 7/12/81, wt/wt/wt) was first prepared and pre-cooled to 0 °C. An appropriate amount of dried cotton was then weighed, immersed in the pre-cooled solution (cotton/pre-cooled solution = 1/99, 2/98, 3/97, 4/96 and 5/95, wt/wt) and mechanically stirred in an ice bath (0 °C) for 48 h to obtain cellulose solutions with concentrations of 1 wt%, 2 wt%, 3 wt%, 4 wt% and 5 wt%, respectively. The cellulose hydrogel was obtained by adding a certain amount of cellulose solution to a three-necked flask equipped with a condensing unit and heating at 70 °C for 24 h. The cellulose hydrogel was soaked in deionized water, with the water changed at 12 h intervals until pH = 7, then frozen in liquid nitrogen (−56 °C) and freeze-dried for 72 h to obtain the cellulose aerogel (CA).
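For convenience, the batching implied by the ratios above can be expressed as a small helper. This is only a bookkeeping sketch; the 7/12/81 solvent ratio and the cotton loadings come from the text, while the 500 g batch size is an arbitrary example.

```python
def naoh_urea_batch(total_g, ratio=(7, 12, 81)):
    """Masses of NaOH, urea and water for a given solvent batch (g),
    following the 7/12/81 wt/wt/wt recipe."""
    s = sum(ratio)
    return {k: total_g * r / s for k, r in zip(("NaOH", "urea", "water"), ratio)}

def cellulose_solution(cotton_wt_pct, total_g=500.0):
    """Cotton and pre-cooled solvent masses for a target concentration."""
    cotton = total_g * cotton_wt_pct / 100.0
    return cotton, total_g - cotton

print(naoh_urea_batch(500.0))       # solvent composition, g
print(cellulose_solution(4.0))      # 4 wt% cellulose solution
```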
Preparation of CCA@rGO/PDMS
GO was prepared by a modified Hummers method, and a range of GO solutions at different concentrations (2.5, 5, 7.5 and 10 mg mL−1) was prepared. CA@GO was obtained by impregnating the pre-prepared CA in the above aqueous GO solution, evacuating until no air bubbles emerged, then freezing (−56 °C) and freeze-drying for 72 h to obtain CA@GO with GO loaded on the CA backbone. After carbonization at 1500 °C for 2 h under a nitrogen atmosphere, CCA@rGO with rGO supported on the CCA framework was obtained. The prepared CCA@rGO foam has excellent flexibility and can withstand bending deformations up to 180° (Fig. 1b). It also has excellent mechanical load-bearing performance and resilience: it can carry a 500 g weight, and the original shape is restored immediately after the weight is removed (Fig. 1c-c'').
A certain amount of PDMS and n-hexane (PDMS/n-hexane = 1/2, vol/vol) was weighed and mechanically stirred at room temperature for 30 min to obtain the PDMS/n-hexane solution. The CCA@rGO was placed in a mould, a portion of the PDMS/n-hexane solution was poured in, and vacuum impregnation was carried out at room temperature until there were no bubbles. Further PDMS/n-hexane solution was then poured into the mould and vacuum impregnation was continued at room temperature until there were no bubbles. This was repeated until the PDMS/n-hexane solution had completely submerged the CCA@rGO, after which the temperature was raised to 65 °C for 4 h. The CCA@rGO/PDMS EMI shielding composites were obtained by simple processing after natural cooling to room temperature. The schematic diagram is shown in Fig. 1a.
At the same time, CCA@rGO was crushed to obtain P(CCA@rGO). A series of P(CCA@rGO)/PDMS EMI shielding composites containing the same amount of filler as the CCA@rGO/PDMS EMI shielding composites was prepared by controlling the amount of P(CCA@rGO) added. The prepared CA was carbonized at 1500 °C under a nitrogen atmosphere for 2 h to obtain cellulose carbon aerogel (CCA), and the same PDMS casting process was adopted to obtain CCA/PDMS EMI shielding composites.
Characterization on CA, CCA, CA@GO and CCA@rGO
Figure S1a illustrates the thermogravimetric analysis curves of CA and CCA. CA shows a significant thermal weight loss from 200 to 400 °C, with a residual carbon fraction at 1000 °C of 5.4%, mainly attributable to the low thermal stability of CA caused by the hydrogen- and oxygen-rich cellulose molecular chains within it. In contrast, CCA shows no significant thermal weight loss, with a residual carbon percentage of 98.6% at 1000 °C. This is mainly because, after carbonization at 1500 °C, CCA has lost most of its oxygen-containing functional groups and has a very high degree of carbonization. Figure S1b shows the Fourier transform infrared spectroscopy (FTIR) spectra of CA and CCA. In the FTIR spectrum of CA, the bands at 3358, 2903, 1470~1320, 1450, 1173 and 1058 cm−1 are the vibrational peaks of O-H, C-H, C-H, C=O, C-O-H and C-O-C, respectively. In the FTIR spectrum of CCA, the characteristic absorption peaks of the above functional groups have almost all disappeared, mainly owing to the chemical inertness of CCA. Figure S1c shows the X-ray diffraction (XRD) patterns of CA and CCA. The main diffraction peaks of CA appear at 14.7° (101), 16.7° (101) and 22.5° (002), the characteristic diffraction peaks of type I cellulose [32]. The main diffraction peaks of CCA appear at 23.5° (002) and 43.8° (100), formed by the reflections of graphitic carbon on the (002) and (100) planes and mainly attributable to the high-temperature carbonization of CA into CCA containing graphitic carbon. Figure S1d shows the Raman spectra of CA and CCA. The D peak (1340 cm−1), G peak (1590 cm−1) and 2D peak (2500~3000 cm−1) correspond to defective/disordered carbon, the tangential in-plane stretching vibration of sp2-hybridized carbon and the characteristic peak of graphitic carbon, respectively. The Raman spectrum of CA has only the G peak, attributed to the regularity of the cellulose network within CA. The Raman spectrum of CCA contains both D and G peaks, attributed to the irregular graphitic carbon produced in CCA during high-temperature carbonization; it also shows a 2D peak, further evidence of the formation of graphitic carbon. In the XPS spectra (Fig. S1e), the C 1s peak of CA is weaker and the O 1s peak stronger, with a C/O ratio of 1.61. Compared to CA, the C 1s peak of CCA is more intense and the O 1s peak less intense, with the C/O ratio correspondingly increased to 13.90. This is mainly due to the gradual removal of oxygen-containing functional groups and the carbonization of the cellulose molecular chains under high-temperature conditions. In addition, the three characteristic peaks in the high-resolution C 1s spectra of CA and CCA (Fig. S1e') are at 284.6 eV (sp2 C-sp2 C), 285.6 eV (sp3 C-sp3 C) and 287 eV (C=O), respectively. Compared to CA, the sp2 C-sp2 C and sp3 C-sp3 C peaks of CCA are enhanced, while the C=O peak is weakened, mainly because most of the C=O is removed at high temperature and converted to graphitic carbon [33]. Figure S2 further supports the removal of oxygen-containing functional groups from CCA. Figure 2a shows the FTIR spectra of CA, CA@GO and CCA@rGO.
In the FTIR spectrum of CA, 3358 cm−1 is the stretching vibration peak of O-H, 2903 cm−1 the stretching vibration of C-H in CH2, 1470~1320 cm−1 the bending vibration of C-H, 1450 cm−1 the stretching vibration of C=O, 1173 cm−1 the stretching vibration of C-O-H, and 1058 cm−1 the C-O-C stretching vibration. In the FTIR spectrum of CA@GO, in addition to the characteristic peaks mentioned above, a stretching vibration peak of O-C=O appears at 1652 cm−1, attributed to the introduction of GO [34]. In contrast, in the FTIR spectrum of CCA@rGO these functional groups almost completely disappear, mainly because CCA@rGO is chemically inert and therefore shows almost no characteristic absorption peaks. Figure 2b shows the Raman spectra of CA, CA@GO and CCA@rGO. Only the G peak is present in the Raman spectrum of CA, attributed to the regularity of the cellulose network within CA. A faint D peak starts to appear in the Raman spectrum of CA@GO, attributed to the irregular graphitic carbon structure introduced with GO [28]. The Raman spectrum of CCA@rGO contains D, G and 2D peaks, with the D peak stronger than the G peak. This is mainly attributed to the irregular graphitic carbon produced in CCA during carbonization; at the same time, GO is reduced to rGO by thermal annealing, producing a large amount of irregular graphitic carbon structure in CCA@rGO. Figure 2c shows the XPS spectra of CA, CA@GO and CCA@rGO. The C 1s (284.0 eV) and O 1s (530.0 eV) peaks are evident in all three materials. The C 1s peaks are weaker and the O 1s peaks stronger in CA and CA@GO. Compared to CA and CA@GO, CCA@rGO has a significantly higher C 1s peak intensity and a significantly lower O 1s peak intensity, mainly owing to the gradual removal of oxygen-containing functional groups, the carbonization of the cellulose molecular chains and the reduction of GO to rGO under high-temperature conditions [35]. In addition, the high-resolution C 1s spectra of CA and CCA@rGO (Fig. 2c') have three characteristic peaks at 284.6 eV (sp2 C-sp2 C), 285.6 eV (sp3 C-sp3 C) and 287 eV (C=O), respectively. In contrast to CA, a new characteristic peak at 288.6 eV appears in the high-resolution C 1s spectrum of CA@GO, which is characteristic of O-C=O in GO. Compared with CA@GO, the sp2 C-sp2 C and sp3 C-sp3 C peaks of CCA@rGO are enhanced, the O-C=O peak disappears, and the C=O peak is very weak, mainly due to the removal of most of the C=O and its conversion to graphitic carbon, together with the reduction of GO to rGO [36].
Morphologies of CA, CCA, CCA@rGO and CCA@rGO/PDMS
As shown in Fig. 3a, CA is a 3D aerogel formed by fibers lapping onto each other, with fiber diameters of approximately 12 µm. When the mass ratio of cotton to pre-cooled solution is 4:96, CCA is likewise a 3D carbon aerogel formed by fibers lapping onto each other; unlike CA, however, the single fibers of CCA have a twisted, twist-like structure and are approximately 6 µm in diameter (Fig. 3b), mainly because of the removal of oxygen-containing functional groups and the carbonization of the cellulose. When the mass ratio of cotton to pre-cooled solution is 5:95, the tangling of fibers within the CCA is more severe (Fig. 3b'). This is due to the limited solubility of the cotton in the NaOH/urea solution, which results in fiber tangling within the CCA. As shown in Fig. 3c, when the GO solution concentration is 7.5 mg mL−1, CCA@rGO has a homogeneous network structure, with the CCA forming the main framework of CCA@rGO and the rGO lamellae completely wrapping the fibers, forming a skin-core structure similar to that of a cable. The rGO acts as the skin, densely wrapped around the CCA fibers to provide sufficient structural stability for CCA@rGO, while the CCA acts as the core, wrapped by the rGO sheets and providing attachment points and support for them.
When the GO solution concentration is 10 mg mL−1, rGO agglomerates in CCA@rGO, the CCA is not uniformly wrapped and CCA@rGO has an uneven network structure (Fig. 3c'). This is due to the high viscosity of the GO solution, which limits its diffusion inside the CA and eventually leads to agglomeration of rGO in the CCA. After backfilling with PDMS, the skin-core structure of CCA@rGO is well preserved, the 3D double-layer conductive network structure of CCA@rGO is not significantly damaged (Fig. 3d), and PDMS is uniformly dispersed in the gaps of the 3D conductive network of CCA@rGO.
Electrical Conductivities and EMI Shielding Performances
Figure 4a shows the σ of the PCCA/PDMS and CCA/PDMS EMI shielding composites. The σ of the PCCA/PDMS EMI shielding composites increases gradually with the amount of PCCA. When the loading of PCCA is 2.80 wt%, the σ of the PCCA/PDMS EMI shielding composites reaches 0.094 S cm−1, mainly because the conductive network inside the PCCA/PDMS EMI shielding composites is gradually improved as the PCCA content increases. As the loading of CCA increases, the σ of the CCA/PDMS EMI shielding composites tends to increase and then decrease. When the loading of CCA is 2.24 wt%, the CCA/PDMS EMI shielding composites have the largest σ value (0.47 S cm−1), mainly attributable to the gradual improvement of the CCA-CCA conductive network within the CCA/PDMS EMI shielding composites with increasing CCA loading [37]. However, a further increase in the amount of CCA causes the fibers within the CCA to twist into knots, which hinders the formation of complete conductive pathways and thus negatively affects the σ. In addition, the σ of the CCA/PDMS EMI shielding composites is consistently much larger than that of the PCCA/PDMS EMI shielding composites at the same amount of CCA or PCCA [38]. At a CCA loading of 2.24 wt%, the σ of the CCA/PDMS EMI shielding composites (0.47 S cm−1) is 6.3 times that of the PCCA/PDMS EMI shielding composites (0.075 S cm−1) with the same loading of PCCA. This is mainly due to the random distribution of PCCA in the PCCA/PDMS EMI shielding composites, which makes it difficult to form an effective PCCA-PCCA conductive network. Within the CCA/PDMS EMI shielding composites, CCA has a more complete 3D conductive network structure, giving it a much better σ. Figure 4b compares the σ of the P(CCA@rGO)/PDMS and CCA@rGO/PDMS EMI shielding composites. The σ of the P(CCA@rGO)/PDMS EMI shielding composites tends to increase as the loading of P(CCA@rGO) increases, reaching 0.117 S cm−1 at a P(CCA@rGO) loading of 3.32 wt%, mainly owing to the gradual improvement of the conductive network inside the P(CCA@rGO)/PDMS EMI shielding composites with increasing P(CCA@rGO) loading. With increasing loading of CCA@rGO, the σ of the CCA@rGO/PDMS EMI shielding composites tends to increase and then decrease [39]. When the loading of CCA@rGO is 3.05 wt%, the CCA@rGO/PDMS EMI shielding composites have the largest σ value (0.75 S cm−1), 59.6% higher than the σ (0.47 S cm−1) of the CCA/PDMS EMI shielding composites (2.24 wt% CCA). This is mainly because the rGO wrapped around the CCA gradually forms a second conductive network, on top of the first conductive network (2.24 wt% CCA), as the amount of rGO increases.
The synergy of the two conductive networks results in the gradual improvement of the internal conductive network of the CCA@rGO/PDMS EMI shielding composites and a consequent increase in σ [40]. However, as the amount of rGO increases further, the rGO is prone to agglomeration inside the CCA@rGO and the CCA is not uniformly wrapped, resulting in an imperfect second conductive network, which negatively affects the σ. It can also be seen that the σ of the CCA@rGO/PDMS EMI shielding composites is consistently much greater than that of the P(CCA@rGO)/PDMS EMI shielding composites at the same loading of CCA@rGO or P(CCA@rGO). When the loading of CCA@rGO is 3.05 wt%, the σ of the CCA@rGO/PDMS EMI shielding composites (0.75 S cm−1) is 7.1 times that of the P(CCA@rGO)/PDMS EMI shielding composites (0.106 S cm−1) with the same loading of P(CCA@rGO). This is mainly due to the random distribution of P(CCA@rGO) in the P(CCA@rGO)/PDMS EMI shielding composites, which makes it difficult to form an effective P(CCA@rGO)-P(CCA@rGO) conductive network through point-point contacts [41]. In the CCA@rGO/PDMS EMI shielding composites, the CCA@rGO has a more complete 3D conductive network; at the same time, the rGO sheets are wrapped around the CCA fibers to form a double-layer conductive network with the skin-core structure. The CCA-CCA (wire-wire), CCA-rGO (wire-surface) and rGO-rGO (surface-surface) contacts form a very complete 3D double-layer conductive network, giving it a much better σ. Figure 5 compares the EMI shielding effectiveness (EMI SE) results of the PCCA/PDMS, CCA/PDMS, P(CCA@rGO)/PDMS and CCA@rGO/PDMS EMI shielding composites. As shown in Fig. 5a, the EMI SE T of the PCCA/PDMS EMI shielding composites tends to increase as the loading of PCCA increases. When the loading of PCCA is 2.80 wt%, the EMI SE T of the PCCA/PDMS EMI shielding composites is 12 dB, mainly because the PCCA-PCCA conductive network within the composites gradually improves with increasing PCCA loading, increasing the ability to reflect and absorb incident electromagnetic waves, which is reflected in the increased EMI SE T value [42]. Figure 5b shows that, as the loading of CCA increases, the EMI SE T of the CCA/PDMS EMI shielding composites first increases and then decreases. When the loading of CCA is 2.24 wt%, the CCA/PDMS EMI shielding composites have the best EMI SE T (40 dB), 20 times that of pure PDMS (2 dB). This is because the density of the CCA-CCA conductive network within the CCA/PDMS EMI shielding composites increases with the amount of CCA [43]; at the same time, the two-phase interface with the PDMS matrix increases, resulting in enhanced conductive losses, impedance mismatch and interfacial polarization losses between the incident electromagnetic waves and the CCA-CCA conductive network, thus significantly improving the EMI SE T of the CCA/PDMS EMI shielding composites [44]. However, when the amount of CCA is too high, the fibers in CCA tend to twist into knots, which reduces the conductive network density of CCA and the two-phase interface between CCA and the PDMS matrix, thus reducing the EMI SE T . As shown in Fig. 5c, the EMI SE T of the P(CCA@rGO)/PDMS EMI shielding composites tends to increase gradually as the loading of P(CCA@rGO) increases, reaching 14 dB at a P(CCA@rGO) loading of 3.32 wt%.
This is mainly due to the gradual improvement of the conductive network inside the P(CCA@rGO)/PDMS EMI shielding composites with increasing P(CCA@rGO) loading, which enhances their ability to reflect and absorb incident electromagnetic waves and manifests as an increase in the EMI SE T [45,46]. Figure 5d shows that the EMI SE T of the CCA@rGO/PDMS EMI shielding composites tends to increase and then decrease as the loading of CCA@rGO increases. When the loading of CCA@rGO is 3.05 wt%, the CCA@rGO/PDMS EMI shielding composites have the best EMI SE T (51 dB), which is 27.5% higher than the EMI SE T (40 dB) of the CCA/PDMS EMI shielding composites (2.24 wt% CCA) and 25.5 times that of pure PDMS (2 dB). This is because, as the amount of CCA@rGO increases, the rGO wrapped around the CCA (the first conductive network) gradually forms a well-developed second conductive network, and the skin-core structure of CCA@rGO makes the two conductive networks work together to form a complete 3D double-layer conductive network [47]. At the same time, the interfaces between rGO and CCA, rGO and rGO, and CCA@rGO and the PDMS matrix increase, so that the conductive loss, impedance mismatch and interfacial polarization loss between the CCA@rGO/PDMS EMI shielding composites and incident electromagnetic waves are enhanced, significantly improving the EMI SE T of the CCA@rGO/PDMS EMI shielding composites. However, when the loading of rGO is too high, rGO tends to agglomerate in CCA@rGO, resulting in an imperfect second conductive network and a reduced conductive network density; it also reduces the two-phase interface between CCA@rGO and the PDMS matrix, which adversely affects the EMI SE T of the CCA@rGO/PDMS EMI shielding composites [48].
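For reference, the EMI SE values discussed in this section are conventionally obtained from measured scattering parameters. The sketch below uses the decomposition common in the EMI shielding literature (SE_T = SE_R + SE_A, with multiple internal reflections neglected for SE_T above about 10 dB); the S-parameter values are illustrative, chosen only to land near the reported 51 dB level, and the measurement details are not taken from this paper.

```python
import numpy as np

def emi_se(s11, s21):
    """EMI shielding terms from scattering parameters (linear values)."""
    R = np.abs(s11) ** 2            # reflected power fraction
    T = np.abs(s21) ** 2            # transmitted power fraction
    se_t = -10 * np.log10(T)        # total shielding effectiveness
    se_r = -10 * np.log10(1 - R)    # reflection contribution
    se_a = se_t - se_r              # absorption contribution
    return se_t, se_r, se_a

# Illustrative inputs roughly reproducing a 51/7/44 dB split
se_t, se_r, se_a = emi_se(s11=0.90, s21=0.0028)
print(f"SE_T={se_t:.1f} dB, SE_R={se_r:.1f} dB, SE_A={se_a:.1f} dB")
```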
A comparison of Fig. 5c, d also shows that the EMI SE T of the CCA@rGO/PDMS EMI shielding composites is always better than that of the P(CCA@rGO)/PDMS EMI shielding composites at the same CCA@rGO and P(CCA@rGO) loading. When the amount of CCA@rGO is 3.05 wt%, the EMI SE T of the CCA@rGO/PDMS EMI shielding composites is 51 dB, 3.9 times higher than that of the P(CCA@rGO)/PDMS EMI shielding composites (13 dB) with the same loading of filler. This is because the conductive fillers in the P(CCA@rGO)/PDMS EMI shielding composites are randomly distributed, and the efficiency of lap bonding through P(CCA@rGO)-P(CCA@rGO) (point-point) contacts is extremely low [49]. At the same time, P(CCA@rGO) has a high surface energy and is prone to agglomeration within the PDMS matrix, which makes it difficult to form an effective conductive network, impairing the reflectivity and dissipation ability of the P(CCA@rGO)/PDMS EMI shielding composites towards incident electromagnetic waves; the EMI SE T enhancement is therefore poor [50]. For the CCA@rGO/PDMS EMI shielding composites, the skin-core structure allows CCA@rGO to form a 3D double-layer conductive network with a high conductive network density, which enhances the conductive loss and impedance mismatch between the incident electromagnetic waves and the CCA@rGO/PDMS EMI shielding composites (Fig. 5e). Meanwhile, the introduction of rGO leads to more two-phase interfaces between rGO and CCA, rGO and rGO, and CCA@rGO and the PDMS matrix, which significantly improves the interfacial polarization loss of the CCA@rGO/PDMS EMI shielding composites towards incident electromagnetic waves [51]. The synergistic effect of these two aspects gives the CCA@rGO/PDMS EMI shielding composites relatively stronger reflection, scattering and absorption of incident electromagnetic waves, so that their EMI SE T is consistently better than that of the P(CCA@rGO)/PDMS and CCA/PDMS EMI shielding composites [52]. Figure 5d' shows that the EMI SE A and EMI SE R of the CCA@rGO/PDMS EMI shielding composites also tend to increase and then decrease as the loading of CCA@rGO increases. When the loading of CCA@rGO is 3.05 wt%, the EMI SE R and EMI SE A of the CCA@rGO/PDMS EMI shielding composites reach maximum values of 7 dB and 44 dB, respectively. This is because the continuous increase in CCA@rGO provides more mobile charge, which enhances the impedance mismatch between the CCA@rGO/PDMS EMI shielding composites and the incident electromagnetic wave; hence the EMI SE R increases [53]. Meanwhile, the CCA@rGO-CCA@rGO double-layer conductive network is gradually improved with increasing CCA@rGO loading and can provide more carriers for dissipating electromagnetic waves, so the EMI SE A improves [54]. However, as the loading of CCA@rGO increases further, rGO tends to agglomerate inside CCA@rGO and the CCA is not uniformly wrapped, reducing the internal conductive network density of the CCA@rGO/PDMS EMI shielding composites and decreasing the two-phase interface between CCA@rGO and the PDMS matrix [55]. This weakens the ability of the CCA@rGO/PDMS EMI shielding composites to reflect, scatter and absorb incident electromagnetic waves, resulting in lower EMI SE R and EMI SE A.
Thermal Conductivities
Figure 6 shows the λ (a), thermal diffusivity (α, b), 3D infrared thermal images (c) and surface temperature curves vs heating time (d) of the CCA@rGO/PDMS EMI shielding composites.
Figure 6a, b shows that both the λ and the α of the CCA@rGO/PDMS EMI shielding composites tend to increase and then decrease as the amount of CCA@rGO increases. When the loading of CCA@rGO is 3.05 wt%, the CCA@rGO/PDMS EMI shielding composites have the largest λ (0.65 W mK−1) and α (1.082 mm2 s−1), which are 3.3 and 3.4 times those of pure PDMS (λ of 0.20 W mK−1 and α of 0.3185 mm2 s−1). This is because, as the loading of CCA@rGO increases, rGO gradually wraps the CCA fibers to form a 3D double-layer thermally conductive network with the skin-core structure, which improves the thermal conductivities of the CCA@rGO/PDMS EMI shielding composites. However, with a further increase in the loading of CCA@rGO, the rGO inside CCA@rGO tends to agglomerate, which decreases the density of the thermally conductive network inside the CCA@rGO/PDMS EMI shielding composites and thus adversely affects their λ and α [56][57][58].
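The λ and α quoted above are linked by the relation λ = α·ρ·c_p that underlies laser-flash thermal measurements. The sketch below applies it; the density and specific heat values are assumed placeholders (not data from this paper), chosen only to show the order of magnitude.

```python
def thermal_conductivity(alpha_mm2_s, rho_g_cm3, cp_J_gK):
    """Thermal conductivity lambda = alpha * rho * c_p, in W/(m*K)."""
    alpha = alpha_mm2_s * 1e-6      # mm^2/s  -> m^2/s
    rho = rho_g_cm3 * 1e3           # g/cm^3  -> kg/m^3
    cp = cp_J_gK * 1e3              # J/(g*K) -> J/(kg*K)
    return alpha * rho * cp

# Assumed rho = 1.05 g/cm^3 and c_p = 0.57 J/(g*K) for illustration
print(f"{thermal_conductivity(1.082, 1.05, 0.57):.2f} W/(m*K)")
```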
As shown in Fig. 6c, the heat flow conduction rate is significantly higher inside the CCA@rGO/PDMS EMI shielding composites compared to the CCA/PDMS EMI shielding composites for the same temperature thermal stage and heating time, indicating their excellent thermal conductivities [59]. Meanwhile, with the increase in the amount of CCA@ rGO, the heat flow conduction rate inside the CCA@rGO/ PDMS EMI shielding composites becomes faster and then slower, indicating that the appropriate amount of CCA@ rGO (3.05 wt%) is beneficial to further improving the thermal conductivities of the CCA@rGO/PDMS EMI shielding composites, which is consistent with the experimental results of Fig. 6a, b. In addition, the heat flow is uniformly conducted inside the CCA@rGO/PDMS EMI shielding composites, indicating the relatively uniform dispersion of CCA@rGO in the CCA@rGO/PDMS EMI shielding composites (consistent with Fig. 3d).
The surface temperature change of the CCA@rGO/PDMS EMI shielding composites divides into two stages as the heating time increases (Fig. 6d). The first stage is the 0 to 40 s heating period, in which the surface temperature of the CCA@rGO/PDMS EMI shielding composites increases rapidly. This is mainly attributed to the low initial temperature of the CCA@rGO/PDMS EMI shielding composites, which causes a large temperature difference between them and the hot stage, so the heat propagation rate is fast [60]. The second stage is the 40 to 80 s heating period, in which the surface temperature of the CCA@rGO/PDMS EMI shielding composites increases slowly. This is mainly because, after 40 s of heating, the temperature of the CCA@rGO/PDMS EMI shielding composites has already risen and the temperature difference between them and the hot stage is smaller, so the heat propagation rate becomes slower [61]. It is also observed that the heating rate of the surface temperature of the CCA@rGO/PDMS EMI shielding composites in the first stage tends to increase and then decrease with increasing CCA@rGO loading. When the heating time is 40 s and the loading of CCA@rGO is 3.05 wt%, the surface temperature of the CCA@rGO/PDMS EMI shielding composites reaches the maximum value of 89.2 °C, indicating that an appropriate CCA@rGO loading (3.05 wt%) efficiently enhances the thermal conductivities of the CCA@rGO/PDMS EMI shielding composites [62].
Mechanical Properties
The tensile strength and elongation at break of the CCA@rGO/PDMS EMI shielding composites decrease as CCA@rGO is introduced (Fig. 7a-c). When the loading of CCA@rGO is 3.05 wt%, the tensile strength and elongation at break of the CCA@rGO/PDMS EMI shielding composites are 4.1 MPa and 77.3%, respectively, 36.9% and 35.6% lower than the tensile strength (6.5 MPa) and elongation at break (120%) of pure PDMS. This is mainly attributed to the greater number of two-phase interfaces (weak interfacial connections) between rGO and CCA, rGO and rGO, and CCA@rGO and the PDMS matrix as the CCA@rGO loading increases. Microcracks and voids develop easily inside the CCA@rGO/PDMS EMI shielding composites, reducing the bond strength. When subjected to external forces, these internal defects become stress concentration points and rapidly trigger the propagation of internal microcracks and fracture, thus reducing the tensile strength and elongation at break of the CCA@rGO/PDMS EMI shielding composites. As shown in Fig. 7d, the hardness of the CCA@rGO/PDMS EMI shielding composites increases gradually with the loading of CCA@rGO. When the loading of CCA@rGO is 3.05 wt%, the hardness of the CCA@rGO/PDMS EMI shielding composites reaches 42 HA, 50% higher than that of pure PDMS (28 HA). This is mainly because the network density of the rigid CCA@rGO skeleton gradually increases with the CCA@rGO loading, forming more hard two-phase interfacial layers with the PDMS matrix, which effectively hinders deformation of the CCA@rGO/PDMS EMI shielding composites under pressure and thus increases the hardness. Figure 8 shows the σ (a) and EMI SE T (b) of the CCA@rGO/PDMS EMI shielding composites after bending fatigue. The σ and EMI SE T of the CCA@rGO/PDMS EMI shielding composites show a slight decrease with increasing number of bending cycles.
After 2000 bending cycles, the σ and EMI SE T of the CCA@rGO/PDMS EMI shielding composites (3.05 wt% CCA@rGO) are 0.745 S cm−1 and 50 dB, respectively, only 0.7% and 2.0% lower than the σ (0.75 S cm−1) and EMI SE T (51 dB) of the CCA@rGO/PDMS EMI shielding composites without bending fatigue, indicating that the CCA@rGO/PDMS EMI shielding composites have good bending fatigue resistance.
Thermal Stabilities
Figure 9a, b shows the DSC and TGA curves of the CCA@rGO/PDMS EMI shielding composites, respectively, and Table 1 lists the corresponding thermal characteristic data. Figure 9a and Table 1 show that the T g of the CCA@rGO/PDMS EMI shielding composites gradually increases with the loading of CCA@rGO. When the loading of CCA@rGO is 3.05 wt%, the T g of the CCA@rGO/PDMS EMI shielding composites is −43.4 °C, 5.7 °C higher than that of pure PDMS. This is mainly attributed to the hard two-phase interfacial layer between CCA@rGO and the PDMS matrix, which grows with increasing CCA@rGO loading, restricting the movement of the PDMS molecular chains and raising T g [63]. As shown in Fig. 9b and Table 1, the heat resistance of the CCA@rGO/PDMS EMI shielding composites likewise improves with increasing CCA@rGO loading. This is mainly because the introduction of rGO, with its excellent heat resistance, helps improve the heat resistance of the CCA@rGO/PDMS EMI shielding composites [64]. Meanwhile, the good interfacial compatibility between CCA@rGO and the PDMS matrix can effectively hinder oxygen penetration and thermal degradation of the CCA@rGO/PDMS EMI shielding composites [65,66]. The synergy of these two aspects leads to a significant improvement in the heat resistance of the CCA@rGO/PDMS EMI shielding composites compared to pure PDMS.
Conclusions
rGO was successfully wrapped on the surface of CCA to form CCA@rGO with a 3D double-layer conductive network and skin-core structure, and this 3D conductive network structure was not significantly damaged during backfilling with PDMS. When the loading of CCA@rGO is 3.05 wt%, the CCA@rGO/PDMS EMI shielding composites have the best EMI SE T (51.0 dB). At this loading, the CCA@rGO/PDMS EMI shielding composites also have outstanding thermal conductivities (λ of 0.65 W mK−1), excellent mechanical properties (tensile strength and hardness of 4.1 MPa and 42 HA, respectively) and excellent thermal stabilities (T HRI of 178.3 °C). Excellent EMI shielding performances and thermal stabilities, together with good thermal conductivities, give the CCA@rGO/PDMS EMI shielding composites great application prospects in lightweight, flexible electromagnetic shielding composites and in portable and wearable electronic devices. | 2021-03-18T13:52:18.028Z | 2021-03-16T00:00:00.000 | {
"year": 2021,
"sha1": "9b4440a553f154a18818268dab14bbd0270b71c2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40820-021-00624-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "84e9f8f1c54a8aed933edea69fae6bb2b0e02c75",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238200024 | pes2o/s2orc | v3-fos-license | UAV Block Geometry Design and Camera Calibration: A Simulation Study
Acknowledged guidelines and standards such as those formerly governing project planning in analogue aerial photogrammetry are still missing in UAV photogrammetry. The reasons are many, from a great variety of projects goals to the number of parameters involved: camera features, flight plan design, block control and georeferencing options, Structure from Motion settings, etc. Above all, perhaps, stands camera calibration with the alternative between pre- and on-the-job approaches. In this paper we present a Monte Carlo simulation study where the accuracy estimation of camera parameters and tie points’ ground coordinates is evaluated as a function of various project parameters. A set of UAV (Unmanned Aerial Vehicle) synthetic photogrammetric blocks, built by varying terrain shape, surveyed area shape, block control (ground and aerial), strip type (longitudinal, cross and oblique), image observation and control data precision has been synthetically generated, overall considering 144 combinations in on-the-job self-calibration. Bias in ground coordinates (dome effect) due to inaccurate pre-calibration has also been investigated. Under the test scenario, the accuracy gap between different block configurations can be close to an order of magnitude. Oblique imaging is confirmed as key requisite in flat terrain, while ground control density is not. Aerial control by accurate camera station positions is overall more accurate and efficient than GCP in flat terrain.
Introduction
Accurate knowledge of camera interior orientation elements and proper mathematical modelling of the image formation process are key elements for image metrology. UAV photogrammetry is no exception in this respect [1]. Camera calibration, the process leading to the estimation of such model parameters, has long been (and still is) one of the most researched topics in close range photogrammetry [2] as well as in computer vision [3,4]. At least in the former area, there is general agreement on conditions providing optimal results [1,5]: camera parameters should be estimated in a Least Squares Bundle Block Adjustment (BBA) of a highly redundant camera network with strong geometry (highly convergent images, orthogonal roll angles, more than six rays per point, and large scale variations in images), a testfield with appropriate targets, highly accurate image matching of targets, image points covering full frame format, and significance tests to avoid overparametrization [6][7][8]. Not all conditions need to be satisfied nor are Ground Control Points (GCP) generally necessary.
In the context of UAV camera calibration, assessing the accuracy of calibration parameters computed in various image block configurations by on-the-job self-calibration is still a disputed argument. Current technology also allows, besides the traditional case of block control by GCP, GNSS-assisted self-calibration. Evaluating the effects of residual calibration errors on tie point accuracy, in the case of pre-calibration as well as of on-the-job self-calibration, on the other hand, is of relevant interest, especially from a practical point of view.
In this paper a set of UAV photogrammetric blocks, built by varying terrain shape, surveyed area shape, block control (ground and aerial), strip type (longitudinal, cross and oblique), and image observation and control data precision, has been synthetically generated. Through a set of Monte Carlo simulations, the actual performance of each configuration has been investigated. From an operational standpoint, analytical camera calibration comes in two versions: pre-calibration or on-the-job calibration. Both use a BBA with additional parameters; the former is normally executed in a laboratory test field under optimal camera network geometry, with the estimated parameters kept fixed in subsequent surveys; the latter estimates camera parameters as a by-product of the BBA of the actual survey block [9].
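As a concrete illustration of the "additional parameters" estimated in such a BBA, the sketch below implements the widely used Brown-type radial and decentring distortion model; the parameter values are invented for the example, and the paper does not prescribe this exact parametrization.

```python
def brown_distortion(x, y, k1, k2, k3, p1, p2):
    """Image-point displacement (same units as x, y, measured from the
    principal point) under Brown-type radial + decentring distortion."""
    r2 = x**2 + y**2
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    dx = x * radial + p1 * (r2 + 2 * x**2) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y**2) + 2 * p1 * x * y
    return dx, dy

# Invented parameter values, for illustration only
dx, dy = brown_distortion(1.5, -2.0, k1=-2e-4, k2=5e-7, k3=0.0, p1=1e-5, p2=-2e-5)
print(f"dx = {dx:.5f}, dy = {dy:.5f}")
```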
How to transfer close-range expertise on camera calibration, with its strong roots in industrial and metrology applications, to UAV photogrammetry is still an investigated topic. In a way, UAV photogrammetry is indeed a mix of close-range and aerial photogrammetry, as it inherits consumer cameras from the former and block geometry features from the latter (e.g., a basic flight plan made of nadir imagery along parallel strips). To complicate matters, UAV platforms come in two versions, fixed-wing and rotary-wing, with marked differences in flight management and camera pointing flexibility. The wealth of ongoing research devoted to UAV camera calibration witnesses a not-yet-settled issue, with many questions still open and even "old" certainties put under scrutiny [10].
Pre-calibration is well suited when the camera is mechanically stable and repeatable in focusing operations [11]; a further constraint is that it should be operated in the field under similar conditions (image scale, scene depth, etc.) to that of calibration. Most software packages provide specific camera calibration tools, with calibration patterns and automatic target detection to speed up operations. With fixed-wing platforms, cameras are easily removed from the drone body and so pre-calibration can take place in laboratory settings. With rotary-wing platforms both indoor and outdoor options are generally feasible. It should be noted, however, that if similarity of image scale between calibration and survey block is sought, indoor or laboratory calibration can be troublesome, especially with longer focal length optics.
As far as the alternative between pre-calibration and on-the-job calibration is concerned, the outcomes of the many study cases on UAV camera calibration are not all consistent, and the situation looks poised to remain so. The results of [12] found that, with dense ground control, differences between on-the-job and pre-calibration were not substantial. In [13], proper distortion modelling is the goal to pursue to avoid systematic errors; pre-calibration is recommended together with an after-flight calibration check based on k1-k2 parameters' equifinality. Oblique imaging in the range of 20 • to 45 • with respect to nadir amounting to at least 10% of block images should be included to reduce doming. The authors of [14] recommend robust pre-calibration (longitudinal and double cross with a few oblique ones) and claim that an on-site block as small as 20 images, with four oblique images at block corners, in a scene with sufficient height variations, might be enough to achieve this aim. Additionally, using pre-calibrated parameters, they found virtually the same residuals on GCP for two flights executed at a three-day distance over the same test field, implying a good short-term stability of camera parameters. In a rectangular block with high-overlap nadir imagery, [15] found pre-calibration to be more accurate than on-the-job calibration, though the main improvement came from accurate camera distortion modelling. On the other hand, it has been found in empirical tests [16][17][18] that Interior Orientation (IO) elements are not stable or that the reliability of the pre-computed parameters is questionable, due perhaps to poor repeatability of focusing, shocks in landing or different ambient temperatures. According to [1], pre-calibration remains the best option in the case that basic conditions for self-calibration cannot be met on site. However, in practice, on-the-job calibration is the method of choice, perhaps optimizing flight parameters to meet both survey requirements and safe conditions for self-calibration.
The progress in feature-based matching, with tens of thousands of tie points extracted and often matched across more than a dozen images, makes self-calibration without targets possible [19,20], on condition of a reasonably textured scene. Therefore, tie points' distribution over the full frame format and accurate image matching can be taken for granted in most survey flights. In his analysis, [1] highlights the importance of scale changes within images as a key factor allowing, even with limited geometric block strength, full or partial recovery of IO and distortion parameters. Flight planning software for UAVs commonly incorporates the so-called double-grid option, with cross strips providing the orthogonal roll angles to reduce projective coupling between Exterior Orientation (EO) and IO parameters. Multi-scale self-calibration, with scale changes between images arising from blocks flown at different altitudes, has been shown [14] being less effective, at least unless GCP are introduced [10,21].
Simulations, as well as empirical studies, showed that large systematic elevation errors (the so-called doming effect) could arise from inaccurate estimations of calibration parameters [22]. The addition of oblique imaging to nadir imagery along parallel strips has been proposed and shown to be beneficial [13,[23][24][25]. Rather than adding another flight layer, even flying the longitudinal strips with moderate-to-strong (30 • to 45 • ) camera axis inclination along flight direction [12,21,26] proved effective in eschewing systematic errors in elevation. The effectiveness of the gently oblique (20 • camera pitch) double grid proposed by [26] has also been confirmed by [27]. More radically, the very advantage of using nadir images at all, as well as of the large overlaps of UAV blocks, has been questioned: from homologous ray intersection analysis, [10] suggests switching, whenever feasible, to a simple or double grid image acquisition mode where the UAV camera always points towards the center of the area of interest at ground level. On the other hand, a simulation study [28] showed that, with only gently inclined camera axes, otherwise negligible correlations among decentring and radial distortion parameters may arise and affect calibration results as well as reduce the doming effect mitigation of oblique imaging.
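One simple way to quantify doming, sketched below, is to fit a radially symmetric quadratic surface to elevation errors at check points and report its amplitude. This is a generic diagnostic under stated assumptions, not a method taken from the cited studies; the input data are synthetic.

```python
import numpy as np

def dome_amplitude(x, y, dz):
    """Fit dz ~ a + b*r^2 (r = distance from block centre) to elevation
    errors; b * max(r^2) approximates the dome amplitude."""
    r2 = (x - x.mean())**2 + (y - y.mean())**2
    A = np.column_stack([np.ones_like(r2), r2])
    (a, b), *_ = np.linalg.lstsq(A, dz, rcond=None)
    return a, b, b * r2.max()

# Synthetic example: a dome over a 200 m block plus 1 cm noise
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 200, 200), rng.uniform(0, 200, 200)
dz = 0.05 - 5e-6 * ((x - 100)**2 + (y - 100)**2) + rng.normal(0, 0.01, 200)
print(dome_amplitude(x, y, dz))
```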
Most flight planning software allows for simple and double grid schemes and, for multi rotors, for Point Of Interest (POI) mode, where the UAV takes a circular path around a ground target that is always kept centred in the camera frame. It should be noted, however, that (to the best of authors' knowledge) all experimental studies with oblique imaging have been performed with multi-rotor platforms, where the camera is normally mounted on a gimbal. Oblique imaging with fixed wings, though an option available in some platforms, is more difficult to achieve in practice, so meeting optimal conditions for on-the-job self-calibration with these platforms may be harder; [24] suggest including gently banked turns in the flight plan to this aim.
In aerial blocks, the basic camera network geometry is determined by image overlap (side and forward), as the area of interest is typically covered by nadir imagery along parallel strips. Increasing overlap to a much higher degree than necessary for stereo coverage is common in UAV blocks; due to high repeatability of extracted key points, it increases ray multiplicity and so network strength. How effective this larger overlap is in improving self-calibration is, however, questioned [10], as the average ray intersection angle decreases with increasing overlap.
In aerial and UAV photogrammetry, block georeferencing and block control by GCP are intertwined and enforced in the BBA. Finding rules for determining the most efficient density and distribution of GCP in a UAV survey is not a trivial task, given the number of parameters involved. Indeed, the topic is still a debated subject of investigation [29,30] and is further complicated if accurate camera station positions are employed. Using Camera Stations (CS) determined by on-board Global Navigation Satellite System (GNSS) receivers to georeference and control the block is indeed a more than 30-year-old technique [31], known as GPS-supported or GPS-assisted aerial triangulation [32,33]. In many of today's papers this technique is (improperly, in the author's opinion) referred to as Direct Georeferencing (DG), a term that should be restricted to blocks where camera E.O. data are all determined by GNSS-assisted inertial navigation, and in principle there is no need for tie points. The availability on the market of both fixed-wing and multi-rotor platforms equipped with dual frequency GNSS receivers with Real Time Kinematic (RTK) technology enables GNSS-assisted block georeferencing and control, minimising the need for control at ground level [34,35]. As this technology becomes less expensive and satellite constellations improve their coverage, ensuring cm-level accuracy, it can be expected that it will gain ground, especially whenever site conditions make GCP survey difficult [36,37]. Notice that RTK is not strictly necessary, though it allows quick, on-site checking of the positioning quality. Indeed, the GNSS observations might as well be recorded on board and elaborated later in Post Processing Kinematic (PPK) mode, exploiting more sophisticated processing options and possibly improving positioning accuracy [35,38].
Agreeing with the Computer Vision approach, [13] believe that GCP or GNSS-determined CS need not to be involved in the BBA but instead used to compute an Helmert transformation from the BBA arbitrary reference frame and the mapping reference frame. However, it is also acknowledged in the paper that GCP or GNSS-determined CS help to refine calibration or limit block deformations that may arise from un-modelled systematic errors (such as residual calibration errors) and, to some extent, might also improve calibration parameter estimation. It is therefore worth investigating whether moving the control from ground points to CS changes the accuracy of the calibration parameters in a self-calibrating BBA. A few experiences [11] as well as previous simulation studies [13,39] suggest camera calibration with UAV blocks flown with GNSS-assisted block georeferencing and control deserves a more systematic investigation. In particular, in early tests [34,40] and later ones [35] it has consistently been found that in nadir-only imagery blocks a bias in elevation could arise using self-calibration and that a way to cope with this problem is to use at least one GCP. Lately, however, no need for such single GCPs has been found if oblique images are added [27].
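To make the Helmert-based alternative concrete, the sketch below computes a closed-form 7-parameter similarity transform (scale, rotation, translation) between an arbitrary BBA frame and the mapping frame using the Umeyama solution; it is a generic implementation, not the specific procedure of [13].

```python
import numpy as np

def helmert(src, dst):
    """Closed-form (Umeyama) similarity transform: dst ~ s * R @ src + t,
    with src/dst given as n x 3 arrays of corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:     # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

src = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
dst = 2.0 * src + np.array([10.0, 20.0, 5.0])   # known scale + shift
s, R, t = helmert(src, dst)
print(s, t)                                     # ~2.0, ~[10, 20, 5]
```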
In the context of UAV camera calibration, this paper therefore has two objectives. The main one is to assess the accuracy of calibration parameters computed in various image block configurations by on-the-job self-calibration under realistic conditions, representative of two widespread operating scenarios in UAV surveys. Besides the traditional case of block control by GCP, a well-searched topic, of special interest in authors' view is the performance of a GNSS-assisted self-calibrating BBA as a function of the number of GCP; more precisely, just one at block centre or none at all.
The second goal of the paper is to assess the effects of residual calibration errors on tie point accuracy in the case of pre-calibration as well as of on-the-job self-calibration, again as a function of different block configurations.
Compared to other papers on the subject, the experiments herein attempt a more systematic approach through simulations, to gain insight into the influence of several factors affecting UAV camera calibration. To this aim, a set of synthetic UAV photogrammetric blocks has been generated that encompasses overall 144 different combinations of landform, surveyed area shape, block control type (ground and aerial), number and type of strip layers, and precision of image coordinates and control data. In a Monte Carlo (MC) scheme, each simulated block combination has been adjusted by a self-calibrating BBA where the simulated, true values of image and control data have been corrupted with random errors, executing 1000 runs for each combination. A similar approach, here applied in a more comprehensive test setup, has already been proposed by [41,42], applied to GNSS-assisted block orientation by [39] and also adopted by [35] to generate precision maps. Another example of a Monte Carlo simulation study, focused on the dome effect, is presented in [43].
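The structure of such an MC scheme can be sketched as follows; `run_bba` stands in for the self-calibrating adjustment engine, which is not reproduced here, and the dictionary keys are illustrative, not the actual data layout used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

def monte_carlo(block, sigmas, run_bba, n_runs=1000):
    """Repeat a self-calibrating BBA on noise-corrupted copies of one
    synthetic block; `run_bba` is a placeholder for the adjustment engine."""
    results = []
    for _ in range(n_runs):
        noisy = {
            key: block[key] + rng.normal(0.0, sigmas[key], block[key].shape)
            for key in ("img_xy", "gcp_xyz", "cs_xyz")  # true values + errors
        }
        results.append(run_bba(noisy))   # estimated IO/EO and tie points
    return results
```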
Of course, the dimensionality of the problem is so large that many other factors could have been considered in the simulations (first of all image overlap, here kept fixed to values frequently adopted in today's UAV surveys). These choices were made to limit computing time and memory requirements.
Materials and Methods
For the simulated blocks to be as realistic as possible, it has been decided to build them from the BBA output of two real blocks, each flown over a different landform according to the same flight plan. The motivation for this choice is to avoid an unrealistic distribution of the tie points over a regular grid and, most of all, an artificially high and fairly homogeneous distribution of the tie point ray multiplicity, i.e., of the number of images an object point is observed in. As the two sites present rather different characteristics, the tie point distribution and multiplicity can be expected to differ as well; this should help in clarifying whether and how these two factors affect the calibration accuracy. In the following, first the characteristics of the real blocks are described, then the procedure to build the synthetic blocks is illustrated.
Characteristics of the Two Real Survey Flights
The first block (Flat) images the Torrente Baganza riverbed, while the second block (Hilly) covers a hilly site; both were flown according to the same flight plan. Each full block consists of:
- 7 nadir-imaging longitudinal strips with 80% forward overlap and 70% sidelap;
- 12 nadir-imaging cross strips with 80% forward overlap and 70% sidelap, matching the longitudinal strips;
- 2 rings of 36 oblique images, regularly spaced along a horizontal circle, with camera axes pointing downwards at the ground projection of the circle centre (POI mode), at an angle from nadir close to 49 degrees.
As the longitudinal strip length is designed to be twice the block width, the centre of each ring has been designed to be close to the centre of a (square) half-block, while the circle radius is slightly larger than half the half-block diagonal.
Pix4D Capture flight planning software has been used, with the double grid option, for shooting the longitudinal and cross strips as well as the rings of oblique images. Two separate flights have been executed at each site, one for the double grid, the other for the rings. The flight elevation above ground level (a.g.l.) is computed with respect to the lowest terrain point in the Flat block and to the highest in the Hilly block. Both full blocks (i.e., including all images of all strip types) have been oriented with Agisoft's Metashape v. 1.5.3 and georeferenced on navigation data only. Figure 1 shows the orthophotos (top) and the DEMs (bottom) of both areas. The camera stations are shown, color-coded according to strip type, superimposed on the orthophotos. Table 1 summarizes the main characteristics of the two flights, which show different average GSD, number of extracted tie points, average image overlap and reprojection error. Figure 2 shows the GCP arrangements (see also Table 3), where GCP are represented by triangles; different triangle colours refer to control tightness: Basic (red) or Enhanced (red + green). On-the-job self-calibration has been executed in the BBA, enabling the estimation of the camera parameters listed in Table 2. Notice that the camera mount of the Phantom 3 is such that the largest side of the sensor is perpendicular to the flight direction. As such, the Y axis is "along strip" and the X axis is "across strip": the image coordinate system is oriented with the Y axis in the flight direction and the X axis 90° clockwise with respect to Y. No particular refinement of the BBA results has been carried out, as the goal of the operation was simply to provide data for the simulations.
Generation of Block Configurations for the Simulation
Different block configurations are generated by selectively removing one or more strip types (namely the cross strips, the oblique images or both) from the full block in the original projects. Each block configuration is labeled according to the strip types it contains, using the letters L, C and O for longitudinal strips, cross strips and POI images, respectively. For instance, an LCO configuration corresponds to the original full block, while an LC configuration represents a block made of longitudinal and cross strips only, and so on. A block configuration is therefore made of one, two or three different strip types.
In order to account for the effect of different area shapes on calibration, exploiting the 1:2 width-to-height ratio of the original rectangular block, the second half of the original full block (LCO) has been cut out, allowing the generation of square block configurations from the (original) first half-block. Configurations derived from this square block receive the prefix H. As such, the HLO configuration is made of a longitudinal square block complemented with a ring of oblique images; HO is a POI (single ring) over the square area, and so on. For each block configuration, camera station positions and attitudes, tie point ground coordinates and camera calibration parameters are exported to act as true data for the simulation.
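As a minimal illustration of how such configurations can be derived from the full block, the sketch below filters an image list by strip-type label and, for H-prefixed configurations, keeps only the first (square) half of the block; the 'strip' and 'x' field names are hypothetical, and the cut direction is assumed to follow the longitudinal axis.

```python
def select_configuration(images, label):
    """Keep only the images belonging to the strip types named in `label`
    (e.g., 'HLO' -> longitudinal and oblique); an 'H' prefix restricts the
    block to its first, square half along the longitudinal direction."""
    half = label.startswith("H")
    strips = set(label.lstrip("H"))            # subset of {'L', 'C', 'O'}
    kept = [im for im in images if im["strip"] in strips]
    if half:
        xs = [im["x"] for im in kept]          # camera station abscissas
        x_mid = (min(xs) + max(xs)) / 2.0
        kept = [im for im in kept if im["x"] <= x_mid]
    return kept
```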
Overall, four block configurations (LCO, LC, LO and L) have been considered for the rectangular area and five (HLCO, HLC, HLO, HL and HO) for the square area. Each configuration has been generated for both the Flat and Hilly areas.
As far as block control is concerned, both ground-only (GCP case) and GNSS-assisted (GNSS case) control have been tested, each with both Basic and Enhanced tightness (see Table 3). In GCP Basic (see red triangles in Figure 2), 8 and 5 GCP are placed at the corners and in the middle of the block square(s), respectively, for the rectangular and square areas. In GCP Enhanced, 15 and 9 GCP are arranged in three rows along the longitudinal flight lines, respectively, for the rectangular and square blocks (see red and green triangles in Figure 2). In GNSS Basic as well as in GNSS Enhanced, all camera station positions are used as control information; however, in the former no additional GCP are used, while in the latter a single GCP, located at the block centre, is fixed.
Table 3 therefore summarizes the four control scenarios: GCP (Ground) with Basic (8/5 GCP) or Enhanced (15/9 GCP) tightness, and GNSS (Aerial) with Basic (no GCP) or Enhanced (1 GCP) tightness.
Two sets of observation precisions have been considered, to simulate both a medium- and a high-precision data set (see Table 4). Though "average" users cannot do much to improve the precision of the control data, in principle a better quality of positioning data can be foreseen. As far as the GNSS case is concerned, better hardware (especially a better antenna), a good satellite configuration and expertise in PPK GNSS data processing might do the job. As far as the GCP case is concerned, using a Total Station shifts the accuracy range below the cm level [30]. Tie point image coordinate precisions depend on image quality and object texture characteristics, so, once the image block is acquired, the user has a limited ability to intervene. Some differences can be expected in point identification performance if different Structure from Motion algorithms (i.e., different software packages) are used, or if different processing parameters are chosen (e.g., the Orientation quality parameter in Metashape). Matching precisions of 1 pixel and of 1/3rd of a pixel have been considered. Table 5 summarizes the combinations of BBA configurations tested in the MC simulation. The combination of Area shape and Strip types yields nine different configurations. Combining them in all possible ways (16 combinations) with the two-level parameters Landform, Measurement precision, Block control type and Block control tightness, a total of 144 different cases were investigated in the MC simulations.
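The bookkeeping of the 144 cases amounts to a Cartesian product of the nine configurations with the four two-level factors, as the following sketch shows.

```python
from itertools import product

configurations = ["LCO", "LC", "LO", "L", "HLCO", "HLC", "HLO", "HL", "HO"]
landforms = ["Flat", "Hilly"]
precisions = ["Medium", "High"]
control_types = ["GCP", "GNSS"]
tightnesses = ["Basic", "Enhanced"]

cases = list(product(configurations, landforms, precisions,
                     control_types, tightnesses))
assert len(cases) == 9 * 16 == 144   # nine configurations x sixteen factor combos
```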
Generation of True Values and True Errors for the Synthetic Data
The true values of the exterior and interior orientation parameters (including the camera optical and sensor distortion parameters) and of the tie points' ground coordinates of the simulated blocks are taken from the real blocks, i.e., from the parameter values estimated by a free-net self-calibrating BBA executed with Agisoft Metashape (Agisoft, St. Petersburg, Russia) on the two real blocks. The tie points' distribution and multiplicity in the simulated blocks are also taken from the real blocks. To this aim, the list of tie points in each image has been exported from Metashape, and the true values of the tie point image coordinates have been generated by projecting the ground point coordinates with the collinearity equations, according to the estimated exterior orientation and camera parameters. The synthetic image coordinates so obtained incorporate the optical and sensor frame distortion estimated for the real block. Therefore, though the same camera has been used in both real flights, the IO parameters of the two synthetic blocks are slightly different. For instance, the true value of the focal length for the Flat terrain block is 2335 pixels (21.01 mm 35 mm-equivalent focal length) while for the Hilly terrain block it is 2320 pixels (20.88 mm 35 mm-equivalent focal length). Normally distributed errors with standard deviations according to Table 4 have been generated in each run of the MC simulation and added to the true observations. When running the BBA in the MC simulations, the standard deviations assigned to the observations should be the same as those reported in Table 4. This is true for the tie points' image coordinates; for the GCP and CS coordinates, however, in a real block it might be advisable to reduce their standard deviations (e.g., by a factor of three or four) to reduce the impact of unmodelled errors, as the much larger number of image observations with respect to the other observation types needs to be counterbalanced by increasing the weights of the latter [1,40,42,44]. In this case, however, since the observations are affected by zero-mean Gaussian errors, varying the observation weights to some extent has a negligible effect on the final results.
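The generation of the synthetic observations can be illustrated by the following simplified sketch: a ground point is projected with the collinearity equations using a basic two-term radial (Brown) distortion, and zero-mean Gaussian noise with the Table 4 standard deviations is then added. Sign conventions and the distortion model are deliberately simplified with respect to the full Metashape camera model.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(point_w, R, X0, f, cx, cy, k1, k2):
    """Collinearity projection with a simple Brown radial distortion;
    a simplified stand-in for the full camera model."""
    p = R @ (point_w - X0)                    # world -> camera coordinates
    x = f * p[0] / p[2]                       # ideal image coordinates
    y = f * p[1] / p[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2          # radial distortion factor
    return np.array([cx + d * x, cy + d * y])

def observe(point_w, R, X0, cam, sigma_px=1.0):
    """True projection corrupted by zero-mean Gaussian image noise."""
    return project(point_w, R, X0, *cam) + rng.normal(0.0, sigma_px, size=2)
```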
Accuracy Evaluation of Camera Calibration Parameters
The calibration parameters' accuracy will be investigated at the single parameter level as well as at a global (image) level. The former analysis will focus primarily on the three IO parameters, the latter on the largest residual distortion over the whole image frame.
Correlations between parameters, which play a key role in estimation errors and are not affected by the MC simulations, will also be considered in the evaluation of the nine configurations. Given this paper's objectives, special attention will be given to the comparison between GNSS-assisted and traditional GCP block control.
To present the results, the nine block configurations have been (albeit somewhat arbitrarily) ranked according to a decreasing block "strength score" (see Table 6). The overall calibration accuracy of each block configuration will be measured by the largest residual distortion. To this aim, a grid of 20 by 15 points has been set over the image frame. At each iteration of the MC scheme, the maximum residual distortion error on the grid points (i.e., the largest distance between the true image distortion correction and the one computed with the estimated distortion parameters) is recorded. Upon completion of the MC simulations, the average and standard deviation of the per-iteration maxima are computed. To weigh the alternatives of GNSS and GCP block control in calibration, the percentage gain in modelling distortion (i.e., in reducing the average maximum residual distortion) will be computed for identical block configurations and similar block control tightness.
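A sketch of the per-iteration metric is given below: the true and estimated distortion-correction functions are compared on a 20 × 15 grid over the frame and the largest discrepancy is returned. The two callables are placeholders for the actual distortion models.

```python
import numpy as np

def max_residual_distortion(correct_true, correct_est, width, height):
    """Largest discrepancy (pixels) between the true and the estimated
    distortion correction, sampled on a 20 x 15 grid over the frame;
    `correct_true`/`correct_est` map an (x, y) image point to a
    2D correction vector."""
    worst = 0.0
    for x in np.linspace(0.0, width, 20):
        for y in np.linspace(0.0, height, 15):
            diff = np.asarray(correct_true(x, y)) - np.asarray(correct_est(x, y))
            worst = max(worst, float(np.linalg.norm(diff)))
    return worst
```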
Accuracy Evaluation of Ground Coordinates
The accuracy of the ground coordinates in each of the 144 cases is evaluated by comparing, for each tie point coordinate, the true value against the value estimated in each block adjustment. For each check point coordinate, the mean error, the error standard deviation and the RMSE obtained in the 1000 MC iterations are computed and averaged over all the tie points common to all block configurations. As the number of tie points depends on the block configuration, only points common to all configurations have been used in computing the error statistics, in order for the comparisons to be made on an equal basis. As such points are fairly evenly distributed over the survey area, restricting the analysis to the common set does not affect the significance of the statistics. However, this common tie point set has been built excluding the HO configuration (POI case with oblique images only), as very few tie points turned out to be common to the other blocks. Using such a small set would, in our opinion, have affected the significance of the results for all configurations too much. As a consequence, the sample size of the error statistics for the HO configuration is not homogeneous with the other configurations [10]. As a matter of fact, in our test the HO case turned out in most cases to yield quite singular results, hardly in agreement with the trends that could be spotted in the other configurations, and mostly quite poor [10]. Additionally, in our experiment design we did not think of the POI as a real standalone configuration, but rather as a complement to nadir imagery. A reason for such disappointing results might be that the POI is not that far from an "orbital motion" critical configuration [45]. We present the results anyway, with this caveat and without further comment.
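The per-coordinate statistics can be computed as in the following sketch, where `estimates` collects the adjusted coordinates of the common tie points over all MC runs; the array layout is illustrative.

```python
import numpy as np

def error_stats(true_xyz, estimates):
    """Per-coordinate mean error, standard deviation and RMSE over the MC
    runs, then averaged over the common tie points; `estimates` is an
    array of shape (n_runs, n_points, 3)."""
    err = np.asarray(estimates) - np.asarray(true_xyz)  # broadcast over runs
    mean_err = err.mean(axis=0)                         # (n_points, 3)
    std_err = err.std(axis=0, ddof=1)
    rmse = np.sqrt((err ** 2).mean(axis=0))
    return mean_err.mean(axis=0), std_err.mean(axis=0), rmse.mean(axis=0)
```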
Dome Effect and Pre-Calibration
Our test has not been specifically designed to study the so-called dome effect [22] that may show up when residual calibration errors and weak block geometry produce systematic errors in tie point coordinates, mostly apparent in elevation. Indeed, except in one case (the GNSS control case with no GCP), the block control applied in the simulations (see Figure 2) always foresees at least one GCP at the block centre, therefore limiting the magnitude of the Z coordinate error there. However, taking advantage of the 144,000 camera calibration parameter sets estimated in the BBA of the MC simulations over the nine block configurations of the experiment, we investigated the sensitivity of the 3D tie point coordinates to (inaccurately) pre-calibrated camera parameters, i.e., the dependence of the dome size on the pre-calibration block configuration, through a second MC simulation.
To set "better" conditions for the dome effect to show up, a slightly modified L (ongitudinal) image block configuration has been extracted from the Flat block. In addition to the original tie points, in this block a set of more than 1600 check points has also been generated as follows. The horizontal coordinates of each check point are taken from the nodes of a regular 5 × 5 m grid set over the area, while their elevation is set equal to the average elevation of all the tie points in the original Flat block. The synthetic image coordinates of tie points and check points are then generated error-free, i.e., by projection on the images according to true values of camera parameters, ground coordinates and EO parameters. Finally, only four GCP located at the block corners are used as control in the modified L block.
In the new MC simulation, consisting of 144,000 runs over the modified L block, random errors with a standard deviation of 1 pixel (Medium precision in Table 4) are applied to the tie points' image coordinates only, while the check point image coordinates are left unperturbed. The modified L block observations are then adjusted, fixing the four GCP at the corners, in pre-calibration mode, i.e., using fixed camera calibration parameters. Such camera parameters are taken, in each run of the new MC simulation, from one of the 144,000 calibration parameter sets estimated in the first MC simulation. After the BBA, the ground coordinates of the check points are computed by forward intersection (i.e., keeping the estimated EO parameters fixed). In this way, only the effect of the tie point image errors and of the pre-calibrated parameters is transferred via the EO parameters to the check point ground coordinates, as the check point image coordinates are error-free. On completion of the MC simulation we therefore obtain 144,000 Z error sets for the check points, each set representing the dome effect generated in the flat area by applying to the modified L block a pre-calibrated camera parameter set coming from one of the 144 originally tested configurations.
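Forward intersection with fixed EO parameters reduces to a small linear least-squares problem per check point, as sketched below: each image observation is back-projected to a ray (origin at the camera station, unit direction through the image point), and the point minimizing the sum of squared orthogonal distances to all rays is found.

```python
import numpy as np

def forward_intersection(rays):
    """Least-squares intersection of image rays with fixed EO: each ray is
    a (C, d) pair with camera station C and unit direction d; the returned
    point X minimizes sum_i ||(I - d_i d_i^T)(X - C_i)||^2."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for C, d in rays:
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(C, dtype=float)
    return np.linalg.solve(A, b)
```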
We divide the 144,000 error sets into 144 groups, according to the block configuration the camera parameters have been estimated on. To summarize the results, for each group (block configuration) the average Z error is computed as the average, over all check points, of the mean Z error of the 1000 MC runs at each check point. Moreover, the error range due to each pre-calibration configuration is computed as follows. Out of the 1000 runs at each check point, the largest positive Z error, the largest negative Z error and the Z error standard deviation (the mean being approximately zero for all points) are recorded. Finally, the difference between the largest positive and largest negative error (i.e., the maximum range) is averaged over all check points; this quantity is hereafter named the Z error range. The analysis of both quantities should highlight the influence of the pre-calibration block configuration on a dome-effect-prone block such as the modified L block.
Results
In the following, all figures and tables, unless explicitly stated, refer to simulations with random errors generated under Medium precision (see Table 4): 3 cm for CS, 0.5 cm for GCP and 1 pixel for image coordinates.
Principal Distance
The plots (see Figure 3) show an accuracy deterioration from strong to weak block configurations which is comparatively larger in flat terrain, especially in the GCP case. Oblique images are necessary for accurate estimation of the principal distance: if they are included, the accuracy ranges from 0.05 to 0.29 pixels irrespective of control type and tightness as well as terrain type. If they are missing, a sharp decrease in accuracy may occur, from 0.4 to 2.9 pixels. The importance of oblique images becomes apparent when computing the ratio between the principal distance average RMSE of the four configurations without oblique images (LC, HLC, L and HL) and the corresponding average RMSE of the configurations with oblique images (LCO, LO, HLCO and HLO) (see Table 7). As can be seen, the accuracy gap in principal distance determination without and with oblique images ranges from a factor of 5 to a factor of 9. In the GCP case the gap is largest and almost the same irrespective of terrain type and control tightness. In the GNSS case, if no GCP is fixed (Basic) the gap is quite significant, while it is lowest if 1 GCP is fixed (Enhanced), especially in flat terrain. On the one hand this means that, in flat terrain, oblique images are even more necessary than in hilly terrain; on the other hand, that GNSS control with 1 GCP partly compensates for a geometrically weaker block configuration.
Without oblique images, in the GCP case it is the terrain type that ensures (Hilly) or prevents (Flat) accurate determination of the principal distance, while control tightness plays only a minor role; flat terrain is critical also in the GNSS case as, unless a single GCP is employed, the estimation error rises quickly to well above 1 pixel. In both the GCP and GNSS cases, the same block configurations in hilly terrain provide better results than in flat terrain (on average about two times better in our test settings). The HO case (a single ring of oblique images) stands out: it is the only case where, with GCP control, Flat is more precise than Hilly, and where, in the GNSS case, Flat Basic (no GCP) is not markedly worse than Flat Enhanced (1 GCP). With both GCP and GNSS block control, control tightness is not critical for PPx estimation, as the values for the Basic and Enhanced cases are very similar. In the GCP case, the accuracy gap between hilly and flat terrain widens markedly moving from strong to weak block configurations, up to a factor of 3.8 in HL. On the contrary, in the GNSS case, for both flat and hilly terrain, the accuracy level depends only weakly on block configuration and control tightness: indeed, the accuracy gap between the two terrain types is quite stable and never exceeds a factor of 1.7. Finally, the HO case appears again as a singular and critical one, with both GCP and GNSS-assisted block control, and particularly so in the latter case, with an eightfold decrease in accuracy.
Principal Point Location
PPy accuracy is overall substantially worse than PPx accuracy, at least in weak block configurations. In hilly terrain the accuracy gap with respect to PPx is limited for both the GCP and GNSS cases: the ratio RMSE_PPx/RMSE_PPy ranges from 1 to 1.7; in flat terrain, on the contrary, the PPy RMSE is worse by a factor ranging from 1.5 (LCO Enhanced) up to 9 (HL Basic).
The plot of the GCP case shows that ground control tightness is not critical for PPy estimation in either terrain type. In the GNSS case, however, this is true only in hilly terrain, while in flat terrain, for block configurations lacking oblique images, the 1 GCP case is significantly more accurate. In both the GCP and GNSS cases, in hilly terrain the PPy accuracy is very stable with respect to block configuration and always better than in flat terrain under the same block configuration. In the GNSS case and flat terrain, moreover, the error increases when moving from strong to weak block configurations, with a marked jump and at a higher rate when oblique images are removed; a growing gap also opens between Basic (no GCP) and Enhanced (1 GCP) control tightness. The overall relative accuracy gap between the strongest and the weakest block configurations is significant: in the GCP case the error increases by a factor of 3 in hilly terrain and by a factor of 9 in flat terrain, while the respective figures for the GNSS case are 2.4 and 15. Finally, also for PPy, the HO case is critical.
Figure 6 shows the average of the maximum distortion error value registered over the image frame as a function of the nine block configurations, of the terrain type (Flat, Hilly) and of the control tightness (Basic or Enhanced), in the GCP and GNSS cases, respectively. Both the GCP and GNSS cases show similar trends, with a slight degradation of accuracy for decreasing block configuration strength. Hilly terrain yields more accurate distortion modelling than flat terrain: by a factor of 1.8 to 2.3 in the GCP case and of 1.4 to 2.9 in the GNSS case. The HO case stands somewhat apart, with the largest values about four times worse than the worst result of the other eight block configurations. As far as the block control type is concerned, while in the GCP case the control tightness has little or no influence on distortion accuracy, in the GNSS case with flat terrain the Enhanced control (1 GCP) is clearly more effective when oblique images are missing.
Calibration Overall Accuracy
To measure the overall calibration accuracy gap, if any, between the GCP and GNSS cases, Figure 7 plots the percentage accuracy gain of performing camera calibration in the GCP or GNSS case, for the nine block configurations. More precisely, for each pair of identical GCP and GNSS configurations, the difference of the average maximum distortions is computed and expressed as a percentage of the distortion in the GCP case, for each of the nine block configurations, two terrain types and two control cases:

$$\Delta_{\mathrm{maxD}} = \frac{\mathrm{maxD}(\mathrm{GCP}) - \mathrm{maxD}(\mathrm{GNSS})}{\mathrm{maxD}(\mathrm{GCP})} \times 100\%$$

where maxD(GCP) is the average value of the maximum distortion error over the image frame in the 1000 MC runs when the block configuration is adjusted with the GCP control type, and maxD(GNSS) is the corresponding value with the GNSS control type. In Figure 7 a positive value means the GNSS case is more accurate in modelling the overall image distortion than the GCP case, and vice versa for negative values. Overall, GNSS delivers a better calibration in most cases, sometimes with quite a significant improvement (up to 45%). In the four strongest block configurations (all with oblique images) GNSS performs markedly better in flat terrain (+23% on average), while GCP is better in hilly terrain (+14% on average). In weaker blocks GNSS performs almost always better than GCP (+20% on average). The largest gains are in flat terrain if at least 1 GCP is used (the Enhanced tightness case), with three cases exceeding a 30% gain.
Ground Point Coordinate Accuracy
The ground coordinate accuracy is evaluated by comparing the true against the estimated coordinates for a set of tie points common to all block configurations (see Section 2.5). Such coordinates are influenced by the estimated interior orientation and distortion parameters, whose accuracy, as shown in the previous sections, can vary strongly with the block configuration and control type. At the same time, different block configurations (e.g., LCO vs. HO) have different tie point projection redundancy, ray intersection angles and image multiplicity, which affect the accuracy of the tie points as well.
Rather than the magnitude (absolute values) of the coordinate RMSEs, it seems more appropriate here to present a relative comparison among the different block configurations, as this provides a measure of the accuracy gain when flying according to one block configuration or another. More precisely, the relative accuracy loss $\Delta_{\mathrm{RMSE}}$ has been computed as:

$$\Delta_{\mathrm{RMSE}} = \frac{\mathrm{RMSE}(\mathrm{CFG}_i) - \mathrm{RMSE}(\mathrm{LCO})}{\mathrm{RMSE}(\mathrm{LCO})} \times 100\%$$

where RMSE(CFG_i) is the average RMSE on tie points in the CFG_i configuration, with CFG_i = LCO, LO, ..., HL and HO, and RMSE(LCO) is the average RMSE on tie points in the LCO configuration. Figure 8 shows the percentage loss $\Delta_{\mathrm{RMSE}}$ of the ground coordinate RMSE of every block configuration with respect to the reference configuration (LCO), as a function of terrain type and control tightness, in the GCP case and in the GNSS case. The top plots refer to horizontal coordinates and the bottom ones to elevation. Please note that the previously used ordering of the block configuration labels in the graphs has been modified so as to obtain a monotonically decreasing accuracy. In both the GCP and GNSS cases, the block configurations split into three groups of similar accuracy: (1) LCO, HLCO, LC and HLC; (2) LO, HLO, L and HL; and (3) HO, which is a singular case. This suggests that cross strips are more important than oblique images to ensure accurate ground coordinates, while the opposite is true for the accuracy of camera calibration parameter estimation (see Table 7).
For the horizontal coordinates, in the GCP case and hilly terrain, group (1) blocks are roughly equally accurate (differences below 10%); group (2) blocks are 20% to 30% less accurate than LCO; and block HO is 120% less accurate than LCO. In flat terrain the accuracy gap range in group (2) is larger (30% to 50%). In the GNSS case the accuracy gap pattern is basically the same as in the GCP case, with a larger group (2) gap (from 35% to 60%).
As far as elevations are concerned, in the GCP case the accuracy gaps in group (2) range from 24% to 35% in hilly terrain and from 35% to 70% in flat terrain. Moreover, in flat terrain a noticeable dependence on control tightness is apparent. A smaller accuracy gap is found in the HO case (from 60% to 80%) with respect to horizontal coordinates. In the GNSS case the picture is more complex. In group (2) the rate of accuracy decrease in flat terrain is larger than in hilly terrain, and even more so between Basic and Enhanced control tightness (the accuracy gap reaches 180%). In the HO configuration the accuracy gap goes from 60% (Hilly Enhanced) to 150% (Flat Basic).
For a comparison between the GCP and GNSS cases, Figure 9 reports, for the tie point coordinate RMSEs, the percentage gain (or loss) relative to the GCP case. More precisely, the relative accuracy gaps $\Delta_{\mathrm{RMSE\_CT}}$ between the GCP and GNSS RMSEs for the same configuration have been computed as:

$$\Delta_{\mathrm{RMSE\_CT}} = \frac{\mathrm{RMSE}_{\mathrm{GCP}}(\mathrm{CFG}_i) - \mathrm{RMSE}_{\mathrm{GNSS}}(\mathrm{CFG}_i)}{\mathrm{RMSE}_{\mathrm{GCP}}(\mathrm{CFG}_i)} \times 100\%$$

where RMSE_GCP(CFG_i) is the average RMSE on tie points in the CFG_i configuration with GCP block control and RMSE_GNSS(CFG_i) is the average RMSE on tie points in the CFG_i configuration with GNSS block control. A positive value means the GNSS case is more accurate than the GCP case and vice versa for negative values.
From Figure 9 it can be seen that the horizontal coordinate accuracy does not show significant differences between the GCP and GNSS cases: the largest differences (less than 5% for all configurations) can be expected in flat terrain with Basic control; in hilly terrain the differences are below 1%. The HL and HO configurations are (partial) exceptions, with differences up to 8% and 16%, respectively. In elevation the pattern is somewhat similar, with even smaller differences in hilly terrain. However, in flat terrain there is a clear distinction between blocks with and without oblique images. In the former case the GNSS case is better (up to 14%), while in the latter the GCP case is markedly better unless the single GCP (Enhanced control case) is fixed: the gap grows from 14% (LC) to almost 55% (HL).
The comparison of GNSS-controlled versus GCP-controlled blocks is strongly influenced by the actual precision of the instruments in Camera Station and GCP coordinate determination and by the weights assigned to such information in the BBA (see Section 2.3). Such precisions, in the authors' opinion, are representative of the current state of the art of most UAV surveys. In our test context the two solutions (GCP control network vs. GNSS-assisted orientation) are largely balanced and provide similar tie point accuracy results. Should this not be the case (e.g., should a less precise on-board receiver be used), one solution would provide significantly better performance than the other.
Effect on Tie Points RMSE of Increased Block Control Precision
As pointed out at the beginning of this section, all the results presented above refer to errors in GNSS-determined camera stations, GCP coordinates and image coordinates generated according to the Medium precision of Table 4. With error magnitudes three times smaller, i.e., generated according to the High precision observations reported in Table 4, the tie point RMSE patterns as a function of block configuration are broadly similar to those shown in the previous paragraphs. To measure the improvement (if any) brought by the increased measurement precision, Figure 10 shows the percentage accuracy gain for the tie point ground coordinates achievable with High precision measurements as opposed to Medium precision measurements, i.e., when the precisions of CS, GCP and image coordinates improve from 3 cm, 0.5 cm and 1 pixel (Medium precision in Table 4) to 1 cm, 0.17 cm and 0.33 pixel, respectively (High precision in Table 4). More precisely, the accuracy gain $\Delta_i$ has been computed separately for the horizontal (X, Y) and vertical (Z) tie point coordinates as:

$$\Delta_i = \frac{\mathrm{RMSE}_{\mathrm{Medium}}(i) - \mathrm{RMSE}_{\mathrm{High}}(i)}{\mathrm{RMSE}_{\mathrm{Medium}}(i)} \times 100\%, \quad i = XY,\ Z$$

Overall, the ground coordinate accuracy increase is very limited in hilly terrain in both the GNSS and GCP cases, where it is lower than 2% in almost all cases, and even less for the horizontal coordinates. In flat areas, accuracy gains, though mostly small in absolute terms, are larger than in hilly terrain in both control cases and, again in both control cases, are more significant for elevations. The pattern of the gains as a function of the block configuration is, however, different. In the GCP case, perhaps surprisingly, the largest gains (from 5 to 7% in horizontal coordinates and from 6 to 12% in elevation) are found for the stronger configurations with cross strips (LCO, LC, HLCO and HLC). In the GNSS case the only noticeable gains in horizontal coordinates are for the square blocks (from 6 to 12%). Still in the GNSS case, the largest accuracy gains (from 20% to 26%) are registered for the elevations, in flat terrain and Basic control tightness (no GCP), in block configurations without oblique images.
Dome Effect
As anticipated in Section 2, a second MC simulation has been carried out to evaluate whether, and to what extent, applying pre-calibrated parameters may still cause the dome effect to occur in the block being adjusted. In particular, the influence of the characteristics of the pre-calibration block is investigated. From this new simulation, 144,000 Z error sets, each computed on 1600 check points, have been obtained. Every error set represents the dome effect generated on the check points by the application of a pre-calibrated camera parameter set, obtained in one of the 144,000 runs of the first MC simulation, in the adjustment of the simulated image observations of an L (longitudinal) block configuration flown over a flat area, with four GCPs at the corners (see Section 2.6 for details).
The 144,000 error sets have been divided into 144 groups, according to the configuration type of the pre-calibration block. To summarize the results, for each group the average Z error and the Z error range have been computed. Rather than measuring the magnitude of the elevation distortion, here the focus is on the effectiveness of the pre-calibration block configurations in preventing it. Therefore, Figure 11 shows the percentage increase of the average Z error and of the Z error range of each pre-calibration block configuration with respect to the reference configuration LCO. Both values refer to Z errors computed over the 1600 check points in the 1000 adjustments of the modified L (longitudinal) block configuration with the 1000 camera parameter sets obtained in the former MC simulation. More precisely, the relative percentage differences $\Delta_Z$ for the average Z error have been computed as:

$$\Delta_Z = \frac{Z_{\mathrm{error}}(\mathrm{CFG}_i\ \text{pre-cal.}) - Z_{\mathrm{error}}(\mathrm{LCO}\ \text{pre-cal.})}{Z_{\mathrm{error}}(\mathrm{LCO}\ \text{pre-cal.})} \times 100\%$$

where Z_error(CFG_i pre-cal.) is the average Z error on the 1600 check points of an L block in a flat area adjusted with camera parameters from a pre-calibration block in the CFG_i configuration, with CFG_i = LCO, LO, ..., HL and HO, and Z_error(LCO pre-cal.) is the corresponding error with camera parameters from the reference pre-calibration block. The reference LCO varies according to block control type (GCP or GNSS), control tightness and terrain type. Likewise, the relative percentage differences $\Delta_{Z\_\mathrm{range}}$ for the Z error range have been computed as:

$$\Delta_{Z\_\mathrm{range}} = \frac{eZ_{\mathrm{range}}(\mathrm{CFG}_i\ \text{pre-cal.}) - eZ_{\mathrm{range}}(\mathrm{LCO}\ \text{pre-cal.})}{eZ_{\mathrm{range}}(\mathrm{LCO}\ \text{pre-cal.})} \times 100\%$$

where eZ_range(CFG_i pre-cal.) is the average Z error range (difference between the largest positive and the largest negative Z error) on the 1600 check points of an L block in a flat area adjusted with camera parameters from a pre-calibration block in the CFG_i configuration, and eZ_range(LCO pre-cal.) is the corresponding range with camera parameters from the reference pre-calibration block; again, the reference LCO varies according to block control type (GCP or GNSS), control tightness and terrain type.
Figure 11. Average Z error increment (left) and Z error range increment (right) when applying to an L configuration block a pre-calibrated camera parameter set estimated with a given configuration, with respect to a pre-calibration set estimated with the LCO configuration. The results are presented for both the GCP (top) and GNSS (bottom) block control cases, as a function of block configuration, block control and terrain type.
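The grouping and the $\Delta_Z$ computation amount to a simple aggregation, sketched below; `runs` is assumed to be an iterable of (configuration key, per-check-point Z errors) pairs, with the key encoding configuration, control type, tightness and terrain.

```python
import numpy as np

def delta_z_per_configuration(runs, ref_key):
    """Group the 144,000 error sets by pre-calibration configuration,
    average the Z error within each group and express each group's
    average as a percentage increase over the reference (LCO) group."""
    groups = {}
    for cfg, z_errors in runs:           # z_errors: per-check-point Z errors
        groups.setdefault(cfg, []).append(np.mean(z_errors))
    avg = {cfg: float(np.mean(v)) for cfg, v in groups.items()}
    ref = avg[ref_key]
    return {cfg: 100.0 * (v - ref) / ref for cfg, v in avg.items()}
```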
From Figure 11 (left), it is apparent that the percentage error gap can be dramatic, especially in flat terrain and in square blocks, if oblique images are missing: the worst case is HL, with 150% and 90% increases in the GCP and GNSS cases, respectively. In all cases, pre-calibration parameters estimated on hilly terrain perform better than those estimated on flat terrain: except for the HO case, the percentage increase of the Z error is always less than half that from a calibration over flat terrain, and much less so in the strongest block configurations.
In the GCP case, with pre-calibration executed over hilly terrain, the LO, HLCO and HLO configurations are on a par with LCO pre-calibration. This applies also to LC and, perhaps surprisingly, to L (only 7% worse than LCO). Square blocks without oblique images (HLC or HL), on the other hand, deliver calibration parameters that produce Z errors 20% to 30% worse. Pre-calibration parameters estimated over flat terrain with square blocks are not as effective even with oblique images (HLCO +13% and HLO +20%), and much worse without (+97% and +160% in HLC and HL, respectively).
In the GNSS case with pre-calibration executed over hilly terrain, all rectangular configurations (LO, LC and L) and the square configurations with oblique imaging (HLCO and HLO) are on a par with LCO. As in the GCP case, HLC (+17%) and HL (+33%) instead produce significantly larger Z errors. In flat terrain, pre-calibrations with GNSS rectangular blocks perform better than with square ones, as in the GCP case, even if they include oblique images (HLCO +23% and HLO +39%). Comparing GNSS and GCP pre-calibration, GNSS is always better in rectangular blocks and in all square blocks except those including oblique images.
In both the GNSS and the GCP case, the pre-calibration block control tightness seems to play a marginal role (i.e., increasing block control does not significantly reduce the gap with respect to the reference case LCO).
The Z error range looks rather independent of the terrain type and block control; it is larger for weaker configurations, but without a clear monotonic trend (i.e., one matching the decreasing "block strength" emerging from the previous analysis). The ratio between the average height of the dome (Volume/Area) and the error range (difference between the maximum and minimum height of the dome) is almost constant in flat terrain, from 6 to 7; in hilly terrain, instead, it increases from 1.9 to 4.9 with decreasing block strength.
A comparison between the effectiveness of camera calibration when taking advantage of GNSS-determined camera stations and when using GCP is among the paper's objectives. The average Z error obtained using camera parameters from a given calibration block configuration adjusted with GNSS-determined camera stations, compared with the equivalent error obtained using camera parameters estimated with GCP control, is shown in Figure 12. To compare the two pre-calibrations, the percentage difference $\Delta_{\text{pre-cal}}$ has been computed as:

$$\Delta_{\text{pre-cal}} = \frac{Z_{\mathrm{error}}^{\mathrm{GCP}}(\mathrm{CFG}_i) - Z_{\mathrm{error}}^{\mathrm{GNSS}}(\mathrm{CFG}_i)}{Z_{\mathrm{error}}^{\mathrm{GCP}}(\mathrm{CFG}_i)} \times 100\%$$

Positive $\Delta_{\text{pre-cal}}$ values mark comparatively smaller Z errors for GNSS pre-calibration with respect to GCP pre-calibration, and vice versa for negative values.
It is apparent that in hilly terrain both control types are basically equivalent, as the differences are below 5%. Likewise, block control tightness is not very important, as differences between Basic and Enhanced are also below 5%. In flat terrain with oblique images GCP performs better, though only slightly so, with differences ranging from almost insignificant (LO, less than 1%) to small (HLO, 13%). Without oblique images, GNSS pre-calibration is better, with improvements up to 30% for square blocks (HLC and HL) and a bit smaller (up to 19%) in rectangular blocks (L and LC). Interestingly, even with Basic tightness (no GCP) the GNSS case seems to deliver better calibration parameters than GCP when flying over flat terrain.
Discussion
Camera Calibration Parameters
Overall, as far as the estimation accuracy of the IO parameters is concerned, the GNSS and GCP cases show similar trends with respect to block configuration, terrain type and block control. Accurate estimation of the principal distance (see Figure 3) is ensured if POI oblique images are included to complement the nadir-imaging longitudinal (and possibly cross) strips; if they are missing, the accuracy becomes two to five times worse. The HL case in flat terrain is particularly critical for both the GNSS and GCP cases (up to ten times worse than the best case). It should also be noted, in line with [10], that in a single POI block (HO case) the accuracy is five times worse than the best case. Cross strips, on the other hand, provide only a marginal improvement. Although this might at first seem surprising, the image block being much more rigid with cross strips, it is actually in line with findings from [10,18], where cross strips attained smaller-than-expected improvements or even worse results. It should be noted, as far as the principal distance is concerned, that nadir-imaging cross strips do not introduce significant new geometrical constraints (from a projective point of view) for its estimation. On the contrary, a more significant depth change in the scene pictured by the oblique images (as well as one due to the object geometry, e.g., as in the hilly study area) drastically increases the accuracy of the estimation.
The accuracy of the Principal Point (PP; see Figures 4 and 5) estimation in hilly terrain is very stable with respect to block configuration and control tightness, while in flat terrain the accuracy worsens with weak block geometries. It should be noted, in this context, that the use of cross strips improves, although not drastically, the determination of the PP location. In fact, it is well known (see for instance [8]) that the use of 90-degree-rolled images in a calibration image block prevents, or at least reduces, the emergence of unwanted correlations between the parameters, in particular those associated with the PP.
The average and standard deviation of the maximum residual distortion affecting the image coordinates after camera calibration parameter estimation (see Figure 6) show trends quite similar to those of PP accuracy. It is worth pointing out that, in this analysis, the distortion error accounts both for the effects of an inaccurate estimation of the radial and tangential calibration parameters and for those induced by an inaccurate Principal Distance and Principal Point estimation. In other words, the reported errors represent the image coordinate error on the image plane due to all the estimated parameters. It is therefore intuitive that this analysis shows trends similar to those in Figures 4 and 5. The HO case is a striking exception in the GNSS case as, even in hilly terrain, the accuracy is more than ten times worse than the best case. This is also true for the maximum average distortion, where HO shows a clear gap compared to the other configurations.
To summarize the comparison between block control by GNSS and by GCP, there is perhaps no outright winner, but a clear edge for the GNSS case, which performs better especially in weaker block geometry configurations. In agreement with the findings of [10,27], accurate determination of all interior orientation parameters is possible with GNSS even without GCP, if oblique images can be included. At first sight, this seems to contradict the authors' previous tests [40] as well as others' [34]. However, it should be noted that in both the cited cases the GNSS-assisted blocks were made of nadir images only, as the flights were performed with fixed-wing platforms. Moreover, the authors of [35], flying only longitudinal strips, found that adding 1 GCP was necessary and sufficient to recover the bias in elevation due to inaccurate determination of the principal distance.
The question of the optimal survey block configuration is likely to remain open, as the variety of parameters to explore is simply too large. As far as our contribution to this point is concerned, a few basic configurations and their combinations have been taken into account. However, some promising variants of the imaging geometry, i.e., flying the longitudinal strips with moderate-to-strong (30° to 45°) camera axis inclination along the flight direction [26], which recently received attention [10,27,28], were not considered. Another caveat applies to block size and shape, especially in the GNSS case, as pointed out in [27]: should large blocks be composed by juxtaposing basic, optimized sub-block tiles? Do the results found with this and other simulations apply to any block size and shape? Is a complete layer of oblique images necessary to complement a basic longitudinal strip layer or, as suggested in [14], is taking just one at each block corner enough? From the results, longitudinal nadir-only blocks should be limited to hilly terrain (in the presented case the largest image scale was three times bigger than the smallest one), where calibration is still fine and the accuracy loss on ground coordinates (see the next section) compared to LCO is negligible in horizontal coordinates and does not exceed 20% in elevation. This agrees with [1]. Adding two flight layers (C and O) to the basic longitudinal one of course delivers the top results. It should be noted that, in most cases (see Figures 4-6), dropping one of the two results in significantly worse accuracy of the estimated camera model parameters (at least as far as the percentage error increment is considered), but does not result in a significant accuracy loss for the ground coordinates (see Figure 8), except for the GNSS control case on flat terrain. If a choice is to be made between cross and oblique, our results are ambiguous. In flat terrain oblique images are necessary for the accurate determination of all IO parameters, while cross strips are only effective for PP coordinate estimation. On the other hand, the LC and HLC configurations for rectangular and square blocks show significantly better RMSEs on tie point coordinates in hilly terrain, and better or comparable ones in flat terrain, compared to LO and HLO.
Do GNSS-based and GCP-based image blocks deliver equivalent calibration accuracy? Broadly speaking the answer is negative, as the former always performs better than the latter in flat terrain if 1 GCP is used, with improvements up to 30%, while the latter is 10% to 20% better with strong block configurations in hilly terrain. It should be noted, however, that from a practical standpoint GNSS-assisted UAV surveys come with significantly fewer operational constraints than traditional GCP-based ones, especially if the surveyed area presents accessibility issues and if the total time of operation is critical. The simulations seem to confirm what several of the previously cited authors illustrate in their contributions: the GNSS technologies implemented in most modern RTK UAV systems are already precise enough to enable accurate, and maybe also reliable, GCP-free surveys in most (if not all) operational conditions. In the authors' opinion, acquiring some GCP (at least one) remains an important requirement nonetheless: as far as the accuracy of the ground points is concerned, introducing at least one GCP might highlight a bias in the RTK solution and reduce it to some extent. In the authors' experience, the GNSS UAV navigation solution is sometimes affected by systematic errors, easily masked in a pure GNSS-assisted solution. Additional independent ground control constraints can therefore dramatically increase the survey reliability. At the same time, as the simulations highlighted, including at least one GCP in the GNSS-assisted block might also increase significantly (though not drastically) the quality of the IO and distortion parameter estimation, especially for the weaker image block geometries.
Check Point Coordinates Accuracy
With the exception of the HO case, the accuracy loss of the horizontal coordinates as a function of the block configuration grows from just 1% (LC) to 30% (HL) in hilly terrain, but reaches 60% (HL) in flat terrain. The double grid configurations show the lowest loss (Figure 8). The pattern is similar for the GNSS and GCP cases, though in flat terrain the loss rate is more pronounced for the former.
As far as elevations are concerned, in hilly terrain the pattern is similar to that of the horizontal coordinates, though the loss is higher (38% in the HL case) in both the GCP and GNSS cases. In flat terrain, however, the GNSS and GCP cases show clear differences. In the former, without oblique images, the accuracy loss due to the lack of ground control is quite severe (up to 175% in HL). Adding (at least) a single GCP does not really solve the problem, as the overall loss remains very high (75% in HL). On the contrary, with the inclusion of oblique images, there is no difference between adding the single GCP or not, and the overall loss is below 50% in the worst case (HLO). This suggests that adding the GCP as proposed in [40] is not the best remedy for errors in principal distance estimation: using a stronger block configuration is more effective. In the light of the results of [27] and of the authors' findings, a double grid with a moderate pitch angle seems the best trade-off, though perhaps not yet an operational solution for many fixed-wing platforms.
In the GCP case, increasing the control tightness does not bring substantial improvements in the horizontal coordinates; in elevation the gains are a bit higher, but not much. Though a meaningful comparison is difficult, this result only partly agrees with the findings in [12]. Accuracy gains from increasing the control precision by a factor of three (Figure 10) are very limited in hilly terrain, being less than 5% in both horizontal and vertical coordinates. In flat terrain the situation is more complex. In the GCP case the improvement is between 5% and 10% in elevation and mostly less than 5% in the horizontal coordinates. This agrees with the results of [12]. With aerial control, the improvement in the horizontal coordinates is still modest, below 10%. In elevation, on the contrary, configurations without oblique images gain from 15% (with a single GCP fixed) to 25% (without GCP), while the remaining ones are basically not affected.
Dome Effect
Before discussing the results of Section 3.4, it should be stressed again that they refer to the case of pre-calibration only. In other words, what has been presented is an analysis of the performance of the pre-calibration block configurations in delivering an effective camera calibration parameter set. All the IO and distortion parameter sets evaluated in the different image block configurations were applied (i.e., used as pre-calibrated parameters) to a single (always the same) L (longitudinal) image block. For the results presented in Figure 11, the LCO configuration has been taken as the "gold standard" and the results of the other configuration types have been measured relative to that case, in order to quantify the calibration accuracy loss when pre-calibrating with a weaker block configuration.
As far as the Z error increase is concerned (see Figure 11, left), a pre-calibration over hilly terrain with either control type (GCP or GNSS) is always better than one over flat terrain. Moreover, except for the weaker configurations (i.e., HLC and HL), the increase in Z error is very limited, i.e., up to 7% worse. For HLC the increase is ca. 19%, while for HL it is stronger (32%). In flat terrain, on the other hand, if the configuration includes oblique imaging, the accuracy loss is minimal only for rectangular block shapes (LO), while in square blocks (HLCO and HLO) the gap is noticeable (10% and 20% in the GCP case and 23% to 40% in the GNSS case). Without oblique imaging, there are again similarities between GCP and GNSS control, but the gap with respect to LCO is reversed (GCP is now almost twice as bad as GNSS). In other words, the weaker the pre-calibration block configuration, the more an accurate camera station position helps in camera calibration. Again, square blocks are less effective than rectangular ones: LC and L are about three to four times better (GCP case), or even more (GNSS case), than HLC and HL. The motivation for this behaviour should be further investigated. In fact, the difference between the square and rectangular image blocks lies only, in the authors' opinion, in a number of observations approximately two times larger, which should not be enough to justify the results. At the same time, analysing the results of the camera model parameter estimation and the connected ground point accuracy (see the previous sections), even if square (H) blocks usually provide worse results, the differences with respect to the rectangular configurations are much smaller.
It is interesting to compare the results presented in Figure 11 with those concerning the actual accuracy in determining the IO and calibration parameters (shown in Figures 3-6) and the associated behaviour of the different image block configurations.
Looking at the increase in the Z error range (see Figure 11, right), three points can be stressed: the loss with respect to LCO is generally much larger; pre-calibration on hilly terrain does not rule out the chance of large errors; and the gap between flat and hilly terrain is mostly small in the GCP case, whereas in the GNSS case, for the HLC and HL configurations, the error range for pre-calibration in hilly terrain is even larger than in flat terrain (a fact as yet without a clear explanation).
The comparison between GCP-based and GNSS-assisted (camera station-based) block control shows that pre-calibration with the latter is generally the better option, as smaller Z errors are obtained compared to GCP control. Indeed, the results shown in Figure 12 indicate that, as long as oblique imaging is included in the block, it makes little difference in terms of Z error whether block control is achieved with GCP or GNSS, as the LCO, LO, HLCO and HLO configurations all obtain similar errors with both control types. This can be seen as in agreement with the claim of [13] that calibration is first and foremost a matter of block imaging geometry and camera modelling, and that oblique imaging is an essential element of such imaging geometry in blocks flown over flat terrain, as well as, generally, with all previous works on optimal imaging for camera calibration. On the other hand, looking at the weaker configurations, when the imaging geometry is less robust (LC, HLC, L and HL), camera projection centres are more helpful than GCP in flat terrain. There also seems to be a dependence of the amount of improvement on the block shape, while cross strips seem less important. Indeed, with our test settings the gain is limited (from 12% to 19%) in rectangular blocks (LC and L), while it is larger (up to 30%) with square blocks (HLC and HL). In short, if, for whatever reason, oblique imaging is not applicable in a survey over flat terrain, using GNSS-assisted orientation is more advisable than using GCP; remarkably, this applies also when no GCP are available on the ground, i.e., in the Basic tightness control case, which is in agreement with the results shown in Figures 6 and 7.
Conclusions
Drawing conclusions on a topic as complex as UAV camera calibration, with reasonable confidence in their scope and validity, is never easy, as the results always come out of given experiment settings, never exhaustive of the multi-dimensional space of the process-relevant parameters. As such, keeping in mind the test characteristics described in Section 2, a few conclusions are presented in the following.
As far as the accuracy of the interior orientation parameters is concerned, though trivial to say, the calibration block configuration matters a lot: the accuracy decrease can be as high as 30 times in the worst case for the principal distance, though less (nine times) for the PP coordinates. Oblique images help a lot (LO is almost as good as LCO), though a POI-only (HO) calibration is not recommendable: in our findings, nadir-looking images are also necessary. The comparison between ground (GCP) and aerial (GNSS-assisted) block control configurations shows that over flat terrain the latter delivers 20% to 60% more accurate calibration parameters than the former, in almost all configurations and for all IO parameters. In hilly terrain GCP control is generally better, though by no more than 20%. Unless oblique images are included, estimation of the principal distance in the GNSS case over flat terrain might result in large errors.
Estimation errors of the calibration parameters in a pre-calibration block, when applied as fixed parameters in a subsequent BBA, affect the ground point coordinates. In this respect, our conclusions are that the configuration of the pre-calibration block matters in general, and particularly when flying over flat terrain. The average Z error increase of weaker configurations compared to LCO can be as large as 150% with GCP control; less so, but still up to 90%, with GNSS control. GNSS-assisted block control is in most cases a better option than GCP control in pre-calibration (only when oblique imaging is included is the difference minimal). In weaker configurations over flat terrain, camera station positions constrain the block more than GCP.
As far as tie point ground coordinate RMSEs are concerned, weakening the calibration block configuration leads to sizeable but limited percentage accuracy losses in hilly terrain (below 35%), while losses reach 70% in elevation over flat terrain. Block control by GNSS and by GCP are in practice equally accurate in horizontal coordinates, while in elevation GNSS without oblique imaging and without GCP may perform up to 50% worse.
Simulations also confirm that, as many practical experiments have shown, in the GNSS case GCP are generally not necessary for either horizontal coordinates or elevations; however, over flat terrain oblique imaging is necessary to avoid errors in the latter. | 2021-09-29T05:19:25.767Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "9977555b9a826d51605f6565f6fe03868b662fa7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/21/18/6090/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9977555b9a826d51605f6565f6fe03868b662fa7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
220484765 | pes2o/s2orc | v3-fos-license | An integrated 3D fluidic device with bubble guidance mechanism for long-term primary and secondary cell recordings on multi-electrode array platform
A 3D fluidic device (3D-FD) is designed and developed with the capability of auto bubble guidance via a helical pathway in a 3D geometry. This assembly is integrated with a multi-electrode array (MEA) to maintain secondary cell lines, primary cells and primary retinal tissue explants of chick embryos for continuous monitoring of growth and for electrophysiology recording. The ability to maintain retinal tissue explants, extracted from day 14 (E-14) and day 21 (E-21) chick embryos, in the integrated 3D-FD MEA for long durations (>100 h) and to study their development is demonstrated. The enhanced duration of monitoring offered by this device is due to the controlled laminar flow and the maintenance of a stable microenvironment. The spontaneous electrical activity of the retina, including spike recordings from the retinal ganglion layer, was monitored over a long duration. Specifically, the spiking activity in embryonic chick retinas at different days (E-14 to E-21) is studied, and the presence of light-stimulated firing along with a distinct electroretinogram for the E-21 mature retina provides evidence of a stable microenvironment over a sustained period.
Introduction
Microfluidic technology offers a platform for physiological studies with features superior to current in vitro methods. These features include a controlled cell culture microenvironment mimicking in vivo conditions, with sustained laminar flow, debris elimination and controlled introduction of biochemical agents, all of which are conducive to long-term studies. Microfluidic device (MD)-based cell culture systems are preferred over conventional perfusion chamber-based systems for long-term live cell studies. MDs can be designed to maintain the temperature, pH, osmolarity, gaseous environment and nutrient concentration [1][2][3][4][5][6]. More specifically, the possibility of obtaining electrophysiological time-series recordings from active cells and tissues at high spatial resolution, juxtaposed with microscopy imaging, has wide implications for developmental studies.
An example of a model system for such ex vivo and in vitro studies is the vertebrate retina. Studies of the retina are performed in various modes: as primary dissociated monolayer cultures, as secondary cell lines expressing light-sensitive G protein-coupled receptors (GPCRs), as retinal explants with and without the retinal pigment epithelium, and as a perfused whole eye. Dissociated primary culture of the retina has long been used as an in vitro platform for experimental retinal studies, primarily to study a single type of cell interaction in isolation from its environment. Isolated cell populations such as retinal pigment epithelial cells or cone photoreceptor cells, or cell lines such as retinoblastoma, are used for these studies [7]. Specific gene manipulation and its effect on the morphology of cell populations and the synapses between them are crucial for studying cues that affect retinal development [8,9]. The organotypic culture of the retina, in particular, has advantages as a model system as it maintains the architecture and cellular connections of the tissue in vitro. It therefore serves as an exemplary platform for studies pertaining to development, neurodegeneration, neuroprotection and pharmacological manipulation [10,11]. Retinal explants have been isolated and cultured since 1933, both as intact retina cultures and as neuroretinal explants (with only the neural retina) [12]. Most explants are maintained in the floating and rolling configurations. However, maintenance of organotypic explant culture is fraught with difficulties due to the soft, fragile nature of the tissue and the metabolic fuel required to sustain photoreceptor activity in the mature retina [13,14]. Significant pyknosis of retinal ganglion cells between 12-24 h of culturing and complete apoptosis within 3-4 d have been reported [15]. In those studies the retinas were kept afloat on a rocking platform atop custom-made boats inside a large media reservoir agitated with a magnetic stirrer, and the importance of continuous agitation of the media reservoir and its replacement at frequent intervals for maintaining high viability of ganglion and amacrine cells was emphasised [14]. Recently, a platform that could study both tissue explants and organoids under the same conditions was presented, to enable translation of therapeutics from animal model studies of retinitis pigmentosa to human organoids derived from human pluripotent stem cells [16]. A system to record from specific subcellular regions of an engineered network architecture of primary cortical cultures, for understanding the in vivo physiological environment, along with a demonstration of high-throughput manipulation/testing for therapeutics, was also recently presented [17]. Various in vitro microfluidic platforms have been introduced for tissue engineering studies [18,19]; however, integrated microfluidic microelectrode arrays generally have limitations arising from continuous oxygenation. The formation of bubbles during medium exchange perturbs the microenvironment and the electrophysiology recording of neuronal systems, and is an issue that needs to be addressed [20,21].
In this context, we introduce a versatile setup for controlled medium exchange that maintains the metabolites without any trace of fluid leakage or bubble formation. The auto-bubble-guidance mechanism incorporated in the custom-made design ensures the complete elimination of bubbles in the vicinity of the targeted area; the separated bubbles are guided along the helical periphery towards the outlet. The efficacy of this setup is demonstrated through long-term recordings of primary retina explants of chick embryos at different stages of development. The multi-electrode array (MEA)-integrated microfluidic culture system was implemented for studying the development of the retina with real-time data acquisition. The presence of distinct characteristics such as a light-induced electroretinogram (ERG) accompanied by the firing of stimulated retinal ganglion cells (RGCs), along with spontaneous electrical activity, provides an unambiguous tracking signature during the long-term recordings. The results of these studies are consistent with previous studies of the developing chick retina and offer additional insights. To the best of our knowledge, no long-term studies of chick retina explants using a microfluidic system, or indeed any studies on retina explants combined with electric field recordings, have previously been conducted outside the incubator under ambient conditions. The chick embryo retina is known to develop gradually, and the displacement and growth of the ganglion cells have been monitored previously. The chick embryo also offers a sizeable retina during the early developmental stages. The onset of RGC differentiation and the associated mechanisms and structures which develop into the final mature structure have been well documented [22]. Further, the cone-rich retina of the chick has also been used as a model for the developmental aspects of pattern formation for all photoreceptor subtypes. The embryonic chick retina has been reported to exhibit marked differences in the distribution of rods and cones, as well as cone subtypes, which define specialised regions similar to those found in other species [22,23]. The 3D fluidic device (3D-FD) platform can be utilised to explore and verify some of these features observed in retina development. From the translational point of view, the embryonic chick retina in the early stage of development, where photoreceptors are not yet functional, has been used as a suitable model system in our laboratory to demonstrate the utility of an extrinsic photoactive soft material layer for retinal stimulation. The 3D-FD platform is useful for studying and identifying mechanisms and pathways in the vision system with artificial retina prosthetic elements [24][25][26].
The key highlights of the 3D-FD presented in this paper are the following: incorporation of an auto-bubble-guidance geometry, uninterrupted live long-term electrophysiological recording, compatibility with inverted or upright microscopy for long-term imaging, suitability for measurements with commonly available MEAs [27], and maintenance of the microenvironment for primary explant tissue culture [28]. A variety of adherent and suspended secondary cells were also cultured for different periods of time, from several hours to several days. The cell morphology, migration and division of these cells were studied over a long period of time. Though the paper mainly focuses on the results of the studies on the primary explant tissues [29], a brief set of results from studies on secondary cells is also described in the results section. The results of the electrophysiology recordings, light-stimulated activities [30][31][32][33] and microscopy imaging of the developing tissues are also discussed. These results open up new avenues to study many problems related to development and long-term studies of controlled drug delivery.
System setup
The novel experimental setup consists of a 3D-FD integrated with a microelectrode array, mounted on an inverted phase contrast microscope (Nikon, Japan), with an in-house-developed perfusion system, a DSLR camera (Canon, Japan), a 60-pin extracellular multichannel recording system (MCS, Germany) and a customised digital light processor projector (DLP Light Crafter 4000, Texas Instruments, USA) as the light source for stimulation, as shown in figures 1(A) and S1 (available online at stacks.iop.org/BF/12/045019/mmedia). The tissues inside the 3D-FD were connected to the MEA. The tissue images were focussed by a 40X objective lens onto the DSLR camera and captured continuously. All the devices were synchronised to measure neural activity. The 3D-FD-integrated MEA is continuously perfused with buffered medium to avoid bubble formation, which changes the osmolality and causes pH drift, resulting in tissue degeneration. In addition, the system allows simultaneous optical and electrical measurements in the 3D-FD.
Fabrication and assembly of the 3D-FD
The 3D-FD platform (figures 1(A)-(C)) and the integrated O-ring washer were custom designed and fabricated. The integrated O-ring washer was fabricated with the standard polydimethylsiloxane (PDMS; Sylgard 184, Dow Chemical, USA) replica moulding technique. The metal moulding template was made from a brass rod of 40 mm diameter and 8 mm height, with a 33 mm outer diameter and a 32 mm inner groove for the O-ring washer. The PDMS was degassed to yield a bubble-free elastomeric medium prior to transfer into the template housing. The metal template with the clear PDMS O-ring was then baked for 3 h at 55 °C. The metal template was then removed to obtain the complete integrated O-ring washer. The pre-baked O-ring washer was exposed to plasma for 2 min to make it hydrophilic and then immediately embedded on top of the MEA using PDMS. The geometry of the bubble guidance assembly was designed in Autodesk Inventor software and fabricated using a five-axis computer numerical control (CNC) machine. The design file was converted into the STEP file format, which is compatible with the CNC machine. An acrylate polymer, which is biocompatible, non-flammable and optically transparent, was used for the fabrication. The 3D-FD used in this study had an oval trajectory within the inner cylindrical extrusion of the device, which served as a bubble guidance rail, as shown in figure 1(C). A micro washer groove with a 32 mm outer diameter and a 30 mm inner diameter, with a horizontal V-shaped cut from the inner to the outer diameter, was designed so that fluid leakage was prevented. The bubble guidance geometry had a 20 mm outer diameter with a 52° slope, and a step curved around the slope, designed to guide bubbles from inlet to outlet (figure S6). The two polymers used in the 3D-FD, PDMS and PMMA (acrylate), have good biocompatibility and gas permeability while ensuring that the system is free from leakages, creating a stable microenvironment for tissue or cell culture. The PDMS O-ring integrated washer (figure S6) was an in-house laboratory product fabricated using soft lithography techniques. It was then subjected to plasma treatment and bonded onto the MEA, again using the PDMS 1:10 mix (Sylgard 184, Dow Corning). This procedure is well established in the literature [34].
The confined MEA was then coated with a mixture of poly-L-ornithine (0.1 mg ml−1) and laminin (10 µg ml−1) and placed in a standard cell culture incubator (5% CO2, 37 °C, 95% humidity) overnight. The seeded MEA was then carefully placed in a square chamber (40 × 40 × 2 mm) to restrict cell/tissue culture movements during assembly.
The microfluidic platform (3D-FD) was gradually and carefully press-fitted onto the seeded MEA, with measures taken to restrict any perturbation or movement. The entire assembly was mounted inside a biosafety cabinet to avoid contaminants, as shown in figure 1(C). The assembled microfluidic platform with the MEA was carefully aligned on the MCS system for electrophysiology recording, as shown in figure 1(A). In summary, facile integration was enabled by the presence of the PDMS O-ring between the microfluidic platform and the MEA. The microfluidic platform can be reused multiple times after standard dry sterilisation.
Dissection and tissue alignment
The chick embryos were sacrificed by cervical dislocation. The eyes were enucleated by separating them from the central nervous system and were transferred into a cell culture petri dish containing bubbled artificial cerebrospinal fluid (ACSF); the vitreous humour was removed before the neural retina was isolated from the choroid (retinal pigment epithelium). The whole procedure was carried out in ACSF bubbled with carbogen gas (95% oxygen and 5% CO2). The isolated retina was placed into a freshly bubbled carbogen gas solution. Subsequently, the slice was dissected to the appropriate size. Using a polyvinylidene fluoride filter (thickness 100 µm), the tissue was lifted gently on the photoreceptor side to avoid tissue folding, with the ganglion side placed on top of the laminin matrix present on the surface of the MEA [35]. The MEA was immediately sealed with the 3D-FD. Beforehand, the 3D-FD had been soaked in a tetra-enzyme solution at room temperature for 24 h to remove contaminants, wet-autoclaved at 125 °C for 45 min and inspected for any structural changes. The internal circular extrusion of the 3D-FD geometry sat directly on top of the chick retina with a working distance of approximately 150-200 µm; this internal feature kept the retina intact on the electrode matrix. The assembled device was placed inside a standard CO2 incubator (5% CO2, 37 °C, 95% humidity) for an hour with a very low media volume. This process prevents any physiological movements and ensures good contact between the retinal tissue and the electrodes (figure 1(A)) [36]. All the sample preparation protocols followed national and institutional guidelines and were approved by the Animal Ethics Committee and the Institute Bio-Safety Committee at JNCASR.
Tissue recording and signal analysis
The retinal tissue signal was recorded using a commercially available inverted 60-channel MEA filter amplifier with variable gain and bandwidth, set to 1 Hz to 300 Hz for recording field potentials and 300 Hz to 3000 Hz for action potentials, with a software user interface (MC_Rack). The retinal ganglion cell (RGC) side was placed on the electrodes and the activity was continuously recorded at a 20 kHz sampling rate, typically for a duration of around 65 min after dissection. During acquisition, the temperature was maintained at 37 °C with the aid of an external temperature controller (TC-1, MCS). The raw signals were analysed offline using a high-pass filter (cut-off at 200 Hz). Spikes were detected in the filtered data using a negative threshold of three standard deviations of the peak-to-peak noise. Neuroexplorer analysis software was used to represent spike trains as time-stamps.
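As a rough illustration of this offline pipeline (a minimal sketch, not the MC_Rack/Neuroexplorer implementation used in the study), the following Python fragment high-pass filters a raw trace at 200 Hz and time-stamps negative threshold crossings at three standard deviations; the filter order and the use of the overall standard deviation of the filtered trace as the noise estimate are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 20_000  # sampling rate in Hz, matching the MEA recordings

def detect_spikes(raw, fs=FS, cutoff=200.0, n_sd=3.0):
    """Return spike time-stamps (in seconds) from one electrode's raw trace."""
    # Fourth-order Butterworth high-pass at 200 Hz, applied zero-phase
    b, a = butter(4, cutoff / (fs / 2), btype="highpass")
    x = filtfilt(b, a, raw)
    # Negative-going threshold at n_sd standard deviations of the
    # filtered trace (an assumed stand-in for the peak-to-peak noise)
    thr = -n_sd * np.std(x)
    below = x < thr
    # Time-stamp the first sample of each threshold crossing
    onsets = np.flatnonzero(below & ~np.roll(below, 1))
    return onsets / fs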
Immunofluorescence imaging
The selected tissue slices were fixed with 4% (w/v) paraformaldehyde, preserved in 4% (w/v) sucrose solution upon removal from the culture medium and washed with warm 1X phosphate-buffered saline (PBS). The tissue was permeabilised with 0.2% Triton X-100 in 1X PBS for 15 min and the slices were incubated with a blocking buffer (1% goat serum, 3% bovine serum albumin, 1X PBS (Thermo Fisher Scientific, USA)) for 4 h. The tissue slices were incubated with primary antibodies overnight at 4 °C. The primary antibodies used were anti-visinin (DSHB), XAP-2 (DSHB) and anti-Tau (Thermo Fisher, USA). The tissue was then washed with 1X PBS three times, followed by incubation with a secondary antibody for 2 h at room temperature. The secondary antibody used was goat anti-mouse Alexa Fluor 488. Finally, a mounting solution containing DAPI to stain the nuclei was added to each slice and the sample was covered with a glass coverslip. Fluorescence images were acquired using an inverted confocal microscope (LSM 700) with a 40X objective, using ZEN software (Carl Zeiss, Germany), and processed with ImageJ software.
Results
The efficacy of the 3D-FD was verified initially using a standard protocol by introducing the medium buffer solution into the flow system and imaging it with a video camera. It was possible to track the bubble guidance trajectory right from the inlet to the outlet of the flow system (figure S2, video SM1). Bubble formation is typically present in perfusion-based MDs as there is a continuous oxygen supply. Bubbles accumulating at the inlet are deflected onto the circular trajectory along the boundary (figures 2(B) I-IV), so that a clear media solution enters via the narrow pore leading to the cell culture area, while the guided bubbles are eventually flushed out through the outlet. The flow rates and volume of the media were adjusted and monitored to arrive at an optimum level suited for long-term cell culture maintenance [37,38]. The device design and flow parameters, including the flow velocity and pressure gradient, were simulated using COMSOL and the simulation results are presented in figure 2(C).
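As a back-of-the-envelope check on the laminar-flow claim, the Reynolds number at the working perfusion rate can be estimated as below; the 1 mm channel diameter is an assumed illustrative value, not a stated device dimension.

import math

rho = 1000.0     # density of aqueous medium, kg m^-3
mu = 0.7e-3      # dynamic viscosity near 37 °C, Pa s (approximate)
Q = 1e-6 / 60.0  # flow rate of 1 ml min^-1 expressed in m^3 s^-1
d = 1e-3         # assumed channel diameter of 1 mm

area = math.pi * d**2 / 4
v = Q / area              # mean flow velocity, about 21 mm s^-1
Re = rho * v * d / mu     # Reynolds number, about 30
print(f"v = {v*1e3:.1f} mm/s, Re = {Re:.0f}")

With Re of order tens, far below the transition value of roughly 2300 for pipe flow, the perfusion stays comfortably laminar under these assumptions.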
The rate of proliferation and the confluency of secondary SHSY5Y cells typically observed under standard cell culture conditions [39] were reproducible within this device (figure 3(C)). The morphology of day 3 cells cultured in the 3D-FD was found to be similar to that of day 3 cells cultured by standard means (figure S3). The equivalence of these features with standard in vitro cell culture observations, together with the ability to control and maintain the microenvironment over the long term, is a major attribute of the 3D-FD.
The effect of bubble removal on electrophysiology
The embryonic chick retina is a facile source of electrophysiological signals. The relatively large size of the developing eye offers a versatile model to demonstrate the utility of the 3D-FD. An unambiguous signature of a functioning developed retina is the characteristic ERG waveform in the MEA recording [35,40,41]. The profile of the light-induced response of the 21-d developed retina consists of contributions from the different layers of the retina, which are reflected in the different components of the ERG (a, b, c and d waves), and is accompanied by the characteristic spiking of the RGCs. The presence of the light-induced ERG is a clear indicator of a functioning retina in the in vitro environment and can be taken as a measure of long-term survivability of the explant in the 3D-FD.
The 3D-FD uses a syringe injection method that maintains the in vitro 37 °C microenvironment [42][43][44]. Electrophysiological studies were carried out on retinal explants from chick embryos of stages E-14 to E-21. Light-evoked responses of these tissue explants were observed starting from the late E-20 to early E-21 stages of development. The observation of a clear light-induced ERG from this device setup over a long duration demonstrates the sensitivity and reliability of the recording. Figure 4(A) shows the characteristic ERG signal of a mature post-natal day zero/E-21 chick retina obtained upon photoexcitation using 532 nm pulses of 0.5 ms duration repeated every 10 s. The ERG signal features an initial local minimum (40 ms post-trigger) corresponding to the 'a' component, followed by a local maximum (peaking at 100 ms post-trigger) signifying the onset of the 'b' wave. The gradual decrease to the baseline until the trigger (light) turns off corresponds to the 'c' wave, and the off-cycle generates the 'd' wave local maximum. These characteristic a, b, c and d wave components of the mature chick retina ERG are well documented, and they are accompanied by distinct spiking activity. The distribution of these spikes within the ERG is shown in figure 4(A), extracted with a high-pass filter (200 Hz). The response to the periodic light pulses was sustained over a long duration; a continuous recording over 600 s is shown in figure 4(A) (bottom, in red). The features of the ERG and spike distribution seen in figure 4(A) (top) on the 0 to 2 s scale were present over the entire long-term range from 0 to 600 s. Spike clusters were extracted from a representative electrode for a 1 s light pulse-evoked response recording of the mature chick retina, and they revealed the presence of at least three different types of responses: an ON response, an OFF response and a mixed ON-OFF response. Figure 4(B) shows the post-stimulus time histogram (PSTH) and the corresponding colour maps; the PSTH highlights distinct contributions from three different types of neurons (RGCs) [45]. The different types of RGCs developed in the E-21 retina have been documented. A complete analysis of all the contributing electrodes should be useful in identifying the different functional types of RGCs. This demonstration also reveals the possibility of using this device to study the emergence and development of different RGC types.
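A PSTH of the kind shown in figure 4(B) can be computed from spike time-stamps and stimulus trigger times as in the sketch below; the 10 ms bin width and the 2 s post-stimulus window are illustrative choices, not parameters taken from the study.

import numpy as np

def psth(spike_times, trigger_times, window=2.0, bin_width=0.010):
    """Post-stimulus time histogram in spikes per second."""
    edges = np.arange(0.0, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in trigger_times:
        rel = np.asarray(spike_times) - t0        # align spikes to the stimulus
        rel = rel[(rel >= 0.0) & (rel < window)]  # keep post-stimulus spikes
        counts += np.histogram(rel, bins=edges)[0]
    # Normalise by trial count and bin width to obtain a firing rate
    return counts / (len(trigger_times) * bin_width), edges

ON, OFF and mixed ON-OFF units then appear as rate peaks locked to the stimulus onset, the offset, or both.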
The progression from a blind retina to a mature retina is monitored by studying its light response and is corroborated with immunohistochemistry images of the different embryonic stages of the chick retina (figure 4(C)). The absence of mature photoreceptors with developed outer segments (anti-visinin) and the scarcity of axonal innervations (anti-Tau) across the plexiform layers explain the lack of a light response in E-14 retinas. Long-term recordings of chick embryonic explant tissues from E-14 to E-18 were carried out in the 3D-FD, maintaining the ACSF perfusion at an optimum rate [46].
A typical long-term recording result is shown in figure 5(C). The flow rate during the measurement was optimised by comparing recordings of the same tissue (E-14) at different flow rates. A flow rate of 1 ml min−1 was observed to be ideal for recording; higher rates (up to 10 ml min−1) do not significantly affect the background noise levels. Figure 5(C) depicts an artefact-free recording of a blank MEA (IR-ITO, MCS) perfused with bubbled ACSF at 1 ml min−1 over a period of more than an hour. Recordings were carried out on the E-14 retina in the 3D-FD after a sufficient (30 min) stabilisation period within the perfusion environment.
The key features of the long-term recording include burst activity observed as retinal waves moving across electrodes approximately every 150 s. A continuous recording from a representative electrode featuring spikes and bursts is shown at two different time scales in figures 5(B) and (C). The activity persists over the entire period of recording and is observed for most of the electrodes of the MEA. The burst rate and the intervals between bursts were analysed across different electrodes to extract the spatial correlation (see supplementary information). Besides the functional aspect, the structure and morphology of the retina can also be monitored over a long duration. The integrity of the live tissue cultured in the 3D-FD was validated by the bright-field image of an organotypic culture of an E-18 retina maintained for over 100 h, as shown in figure 5(A). The image captures some neurite growth at the edges of the tissue (figure 5(A2)). It should be noted that the spontaneous electrical activity persisted beyond this 100 h mark. The recording shown in figure 5(B) reveals the spontaneous electrical activity exhibited by the propagating signal across four different electrodes. The key features of these observations were reproducible for a variety of explants. Studies involving photoexcitation revealed similar trends and strongly indicated the viability of this approach.
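One simple way to quantify the roughly 150 s wave periodicity from a spike train is to group spikes into bursts by their inter-spike gaps and measure the intervals between burst onsets; the 1 s gap criterion below is an illustrative assumption, not the parameter used in the study.

import numpy as np

def burst_onset_intervals(spike_times, max_gap=1.0):
    """Group spikes into bursts (a gap > max_gap starts a new burst)
    and return the intervals between successive burst onsets (s)."""
    t = np.sort(np.asarray(spike_times))
    gaps = np.diff(t)
    onsets = np.r_[t[0], t[1:][gaps > max_gap]]
    return np.diff(onsets)

Intervals clustering around 150 s would then reflect the retinal waves sweeping across the electrode.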
The long-term capability of the 3D-FD is further emphasised by studies involving a mixed population of primary cells derived from the E-16 chick retina, including retinal ganglion cells, bipolar cells, amacrine cells, horizontal cells and other components. These cells were seeded on an ornithine-laminin-coated MEA and observed over 100 h using an inverted microscopy setup configured for time-lapse imaging and electrophysiological experiments; the results are provided in figures S4 and S5 and video M2.
Discussion
The introduction of air bubbles during perfusion of the medium into the chamber, or their formation within it, is severely detrimental to cells. The importance of molecular oxygen for the growth and proliferation of cells in culture is well known; hence, bubbling of O2 and CO2 has been a key factor in the viability of cultures and the survivability of explants. Thus, maintaining the concentration of dissolved O2 in the medium and enabling its diffusion to the cells in culture is imperative. A key problem in recording and culturing platforms that needs to be addressed is the retention of bubbles in these chambers, which affects dissolved O2 concentrations and leads to the accumulation of reactive oxygen species, causing oxidative stress and cell death. The 3D-FD provides an elegant solution to this problem with its helical bubble guidance path [47]. The dynamics of bubbles within the MD can be simulated using the volume-of-fluid method, which is geometry-based and works on the principle of conservation of the different fluid volumes. Ongoing efforts in this direction to arrive at a computational model for the flow and bubble guidance can be further utilised to justify and optimise the design.
The present design permits seamless switching between different reagents and media, such as from ACSF (chick) to Ca-free ACSF. The ability to introduce reagents and analytes and capture their effects may find applications in screening and identifying molecules. In the present studies with the 3D-FD, a decrease in the firing rate was observed upon introduction of toxins which act as channel blockers, delivered at a desired, controlled rate. It was also possible to observe the recovery and the corresponding increase in the signal upon subsequent washing and introduction of appropriate growth media [42,48].
The provision for simultaneous imaging facilitates the study of the effects of different morphogens, blockers, metabolites and toxins on the growth, morphology and electrical activity of different systems. The functioning of the chick retina [49], which manifests in electrophysiological activity, was monitored over 90 min; this duration of measurement can be extended to exceed a few hours, opening up the opportunity to conduct recording studies as a function of several experimental parameters on a single explant, thus avoiding studies across a large number of samples and minimising the associated statistical variance. The growth and migration of cells at the periphery of an explanted retinal tissue were also observed using the imaging platform over a period of 48 h. The effect of tetrodotoxin (TTX) in silencing spiking activity during an ERG, and its wash-off to restore prior activity levels, provides a proof of concept of the platform's multifaceted utility for monitoring the morphology and electrical activity of cell and tissue cultures during development (figure S1).
The 3D-FD is also geared to study substrate effects on the growth and proliferation of different cell populations in culture. The possibility of simultaneous imaging captures the growth and morphology trajectories of different cell types [49,50]; for instance, finite growth of the tissue at the periphery of the E-18 retina explant can be observed over a 100 h period (figure 5(A2)). The effect of substrates with varying stiffness on stem cell differentiation and on cultures of neuronal-type secondary cells is well known [51][52][53]; specifically, substrate parameters such as adhesive strength, texture and nanotopography, surface wettability and stiffness can be engineered and probed using this setup. The signalling mechanisms underlying the light-evoked response of a blind retina on semiconducting polymer substrates [25,54] can be probed through systematic long-term studies to ascertain the pathways involved in the transduction. We have embarked on extending the utility of the 3D-FD by incorporating compact CO2 and O2 containers and transforming it into a complete, portable, standalone assembly.
Conclusions
A bubble-free, leakage-proof 3D-FD for long-term measurement of electrophysiological signals of neuronal networks is demonstrated. This was achieved primarily through the incorporation of an auto-bubble-guidance trajectory in the form of a helical pathway in a system integrated with an MEA. The developing retina at different stages, extracted from chick embryos, is studied using this setup. A clear enhancement, by a factor of five, in the duration over which the retina remains functional in the 3D-FD is observed. The versatile 3D-FD design may find use in a host of other biomedical applications. | 2020-07-12T13:05:59.605Z | 2020-07-10T00:00:00.000 | {
"year": 2020,
"sha1": "f74f562e32c7bffe29cb04dcfe8014792f7a858b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1758-5090/aba500",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "f4b5f04f8068b15b520da734d207041687a01e73",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
252136542 | pes2o/s2orc | v3-fos-license | Evaluating the Reliability of Neurological Pupillary Index as a Prognostic Measurement of Neurological Function in Critical Care Patients
Background Neurological pupil index (NPi) is a novel method of assessing pupillary size and reactivity using pupillometry to reduce human subjectivity. This paper aims to evaluate the use of NPi as a potential prognostic tool in a broad population of neurocritical care patients by observing the correlation between NPi, the modified Rankin Scale (mRS), and the Glasgow Coma Scale (GCS). Methods Our data were collected from 194 patients in the neurosurgical intensive care unit (ICU) at Arrowhead Regional Medical Center (ARMC), as determined by the power calculation. We utilized the Kolmogorov-Smirnov and Shapiro-Wilk normality tests with Lilliefors significance correction. Pearson product-moment correlation was performed between average final NPi and final GCS. Multivariate linear regression and analysis of variance (ANOVA) were used to evaluate the association and predictive capabilities of NPi on GCS and discharge mRS. Finally, we evaluated whether age, ethnicity, sex, length of stay (LOS), or discharge location were significantly associated with NPi. Results We observed a significant correlation between final GCS and NPi (r=0.609, p<0.001). Our regression analysis revealed that NPi significantly predicted GCS and mRS scores; however, no associations were found between NPi and age, ethnicity, sex, LOS, or discharge location. Limitations of our study include its single-institution design, the lack of disease subtyping, and the inability to quantify the predictive ability of NPi. Conclusion The analysis revealed a strong correlation between final GCS and average final NPi. NPi was also able to significantly predict GCS and mRS scores. The correlation between NPi and established measures of neurological function, such as mRS and GCS, suggests that NPi can be a good prognostication tool for neurological diseases.
Introduction
Pupillary size and reactivity are among the major non-invasive methods of assessing neurological function. During the pupillary reflex exam, the pupil's size and symmetry are measured, and the rate of reactivity is classified as brisk, sluggish, or non-reactive. Abnormal measurements can indicate diseases such as stroke, tumors, and traumatic brain injuries [1][2][3]. The exam is thus a valuable prognostic tool for assessing the patient's neurological health [4]. Broader clinical assessment scales include the Glasgow Coma Scale (GCS) and the modified Rankin Scale (mRS). To date, many variations of these clinical tools have been developed to further refine the prognostication of patient outcomes. However, these measurements are predisposed to inaccuracy due to subjectivity, inexperience, language barriers, iatrogenic barriers (e.g., intubation, sedation), and lack of standardization among examiners [5][6][7][8][9]. The neurological pupil index (NPi) is a new measure established by NeurOptics, Inc. (Irvine, USA) that captures pupil size, latency, and velocity parameters and quantifies them on a scale from zero to five, with zero being non-reactive and a score of three or above indicating normal pupil behavior [6]. NPi utilizes automated pupillometry to decrease human subjectivity and minimize administration time, increasing the efficiency and accuracy of neurological assessments. Evaluating the change in NPi could potentially serve as a more robust prognostication tool to assess the recovery of the brain in neurocritical care.
With the development of the international Curing Coma Campaign (COME TOGETHER), there has been increasing interest in improving the assessment of patients with impaired neurological function [7][8][9][10]. Previous studies have investigated the utility of these prognostic measurements in combination, and even integrated into prognostic modeling calculators, such as the international mission for prognosis and clinical trials in traumatic brain injury (IMPACT) and corticoid randomization after significant head injury (CRASH) [8,[11][12][13]. These studies found that among a variety of these clinical measurements, GCS, NPi, and mRS were all significant predictors of patient outcome in the context of traumatic brain injury (TBI) [14][15][16][17]. Other studies have corroborated these findings, but many of these analyses hone in on specific clinical contexts, such as TBI and stroke [18][19][20]. More studies are needed to compare prognostic capabilities across a broad range of neurocritical diseases to increase the generalizability and feasibility of automated pupillometry.
If we can establish a correlation between NPi, mRS, and GCS, it could support the reliability of NPi as an alternative and potentially more robust prognostic tool in critical care patients. In this study, we used a pupillometer to measure the NPi in 194 subjects in the neurosurgical intensive care unit (ICU) at Arrowhead Regional Medical Center (ARMC) in San Bernardino, California. We hypothesized that NPi and GCS would be positively correlated, mRS and NPi would be negatively correlated, and NPi could significantly predict GCS and mRS scores. Therefore, NPi can be used similarly to or in conjunction with GCS as a tool to assess neurologic function across a varied neurocritical patient population.
Materials And Methods
We collected data from patients in the neuro-ICU at ARMC. The following demographic and clinical information was obtained from medical records to describe patient baseline characteristics: age, sex, ethnicity, length of stay (LOS), and discharge disposition/location. Clinical neurological assessments (i.e., GCS, mRS) were obtained using conventional established methods upon admission and at the time of discharge. NPi measurements were obtained at admission, every four hours (every hour for critically unstable patients), and at ICU discharge using the NeurOptics pupillometer (Irvine, CA, USA) version 2.00. NPi values greater than 3.0 were characterized as normal, and NPi values less than 3.0 were considered abnormal.
Initially, we conducted a pre-study power calculation using G*Power (version 3.1.9.7; Heinrich Heine University Düsseldorf, Germany) to compute the sample size needed for statistical significance. Our a priori analyses suggested a sample size of 191 patients to achieve a power of 0.80. Next, we evaluated the normality of our data distribution using the Kolmogorov-Smirnov and Shapiro-Wilk tests with Lilliefors significance correction. We then performed Pearson product-moment correlation analyses to identify statistically significant correlations between our continuous variables: average final NPi, final GCS, and LOS. The obtained Pearson correlation coefficients were categorized as weak (0.00-0.30), moderate (0.31-0.60), or strong (>0.60). The association of NPi, GCS, and mRS, and the ability of NPi to predict GCS and mRS, were determined using multiple and ordinal regression analyses. Finally, we determined whether several predictor variables, such as age, ethnicity, sex, LOS, and discharge location, could significantly predict a patient's NPi score using multivariate regression and ANOVA. All statistical analyses were performed using SPSS statistics software V28.0.1.0 (IBM Inc., Armonk, USA).
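The core correlation step, performed in SPSS by the authors, can be reproduced in outline with open-source tools; the snippet below is a minimal sketch assuming the per-patient scores are available as numeric arrays, and applies the strength categories defined above.

from scipy.stats import pearsonr

def correlate(npi_final, gcs_final):
    """Pearson correlation between average final NPi and final GCS,
    labelled with the strength categories used in this study."""
    r, p = pearsonr(npi_final, gcs_final)
    strength = ("weak" if abs(r) <= 0.30
                else "moderate" if abs(r) <= 0.60
                else "strong")
    return r, p, strength

Applied to this cohort, r = 0.609 with p < 0.001 falls in the strong category.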
We conducted this study in compliance with the principles of the Declaration of Helsinki. The study's protocol was reviewed and approved by the Institutional Review Board of Arrowhead Regional Medical Center (#22-21). Informed consent was waived.
Participants of the study were those who were admitted to the neuro-ICU at ARMC. Their sex was determined through medical records. This study did not involve an exclusive population. Ethnicity was self-determined by patients upon initial admission to the hospital and was included in this study to identify any patterns between NPi and prognosis in certain groups.
Discussion
As we hypothesized, our results identified a strong correlation between average final NPi and final GCS scores. The correlation between these two variables suggests that NPi measurements could be used similarly to GCS as an effective predictor of prognosis. Our subsequent regression model reveals that these three measurements are significantly correlated across the varying disease contexts present in our patient population. Since GCS and discharge mRS have been shown to effectively predict patient prognosis, NPi's ability to significantly predict these two measurements supports the possibility of using NPi as an alternative predictor of prognosis [21][22]. However, further multi-institutional studies are needed to evaluate the potential superiority of NPi as a predictor of prognosis within a variety of neurological disease contexts.
Interestingly, although NPi, GCS, and mRS displayed significant relationships, we found that only NPi was able to significantly predict discharge location. Although the majority of patients were discharged home, being able to identify relationships between a patient's measured neurological health and eventual discharge location could help improve hospital resource utilization and care management. This highlights the need to explore whether these trends can be seen at other institutions and whether better categorizations are needed to identify significant relationships and predictive capabilities among these variables.
Our study also examined whether demographic parameters such as age, sex, and ethnicity played a role in predicting our patients' prognoses, as measured by NPi. Overall, we failed to identify any significant associations between these variables, which could be attributed to our predominantly Hispanic patient population. Finding strong correlations between certain demographic groups and the prognostic capabilities of NPi, GCS, and mRS could help clinicians make better-informed decisions. However, such strong demographic patterns could also call into question the external validity of these measurement tools in a varied patient population. These results convey that NPi and GCS can be used similarly in predicting prognosis among a potentially diverse patient demographic, although more studies are needed to confirm these findings.
Our study does have some limitations. First, although our sample size provided appropriate statistical power, a majority of our patients were Hispanic and over the age of 50. Further analysis is needed within a multi-institutional study with more patients to ensure reproducibility and generalizability to a diverse patient population. Second, since our study aimed to evaluate the reliability of NPi in all neurocritical care patients, we did not stratify patients by clinical context. Hence, we are unable to take into account specific neurological diseases and their potential confounding effects on predicting patient prognosis. Finally, our correlation analyses cannot establish causation or quantify how well these tools predict long-term patient outcomes, and should be interpreted with these points in mind.
Overall, our study offers additional insight into how NPi, which could serve as a more objective and efficient method of evaluating patient prognosis, relates to conventional methodologies like GCS and mRS. Despite their widely accepted use, these conventional tools are inherently more subjective and more prone to differences in inter-observer reliability. Our future studies will aim to quantify how well NPi can predict long-term patient outcomes within specific disease subtypes. We plan to replicate this analysis as part of a multi-institutional study and corroborate NPi as a potentially superior prognostic tool for neurocritical care patients. | 2022-09-09T15:14:30.434Z | 2022-09-01T00:00:00.000 | {
"year": 2022,
"sha1": "8f8373017a71a96d15ab77f4aced5a516c7778d7",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/112848-evaluating-the-reliability-of-neurological-pupillary-index-as-a-prognostic-measurement-of-neurological-function-in-critical-care-patients.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "067b51a67c68670c7b073d3c46e18e5cacbda0ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56374414 | pes2o/s2orc | v3-fos-license | Adapting Budgetmaking to Inflation, Icelandic Municipalities in a Volatile Economic Environment
Inflation was a growing problem during the 1960's and the 1970's in Iceland. Inflation represents a problem for any entity that tries to write up a budget, whether an individual, a company or a governmental entity. Many governmental entities do not have the right to spend money beyond what has been prescribed in the budget. The purpose of this paper is to map out how inflation affected different parts of municipality budgets in Iceland during the 1960's and the 1970's, and furthermore to look at how well budget makers in the Icelandic municipality sector managed the task of forecasting inflation during the high inflation period of the 1970s. JEL: E31, E37, H68. Key concepts: Municipalities budgeting, Inflation budgeting, Expectation formation. 1 Thorolfur Matthiasson is a Professor in the Faculty of Economics at the University of Iceland. E-mail: totimatt@hi.is. Data was collected and prepared by Ólafur Hjálmarsson and published in Matthiasson (1987). The data collection was made possible by a grant from the Nordic Economic Council during the period 1982-1985. The Icelandic part of the project was the responsibility of Guðmundur Magnússon, Magnús Pétursson and Thorolfur Matthiasson. The paper was presented at the symposium in honour of Professor Gudmundur Magnusson, University of Iceland, Reykjavik, November 2007.
Introduction
Budgetmaking in public institutions is never an easy task. Policy changes as new political actors or new political majorities enter the scene. The entrance of new actors or new majorities will almost always result in projects or actions that have budgetary consequences; one politician may have promised to build a bridge, another a school or a kindergarten, a third may have promised to increase the salary of a given group of municipality employees, and so on and so forth. To complicate matters further for the budget maker: she has to base her estimates of the costs of operating the institution on information gathered from bureaucrats who do not necessarily have incentives to share their knowledge in a completely truthful way. To take an example: a bureaucrat in the office of education may choose to send only a low estimate of the cost of a new school if she believes that the probability of a go-ahead for that building will be increased by doing so. There are incentive mechanisms that can be tailored to induce bureaucrats to reveal correct information, but at a cost (see Hindriks and Myles, 2006, chapter 4). Inflation complicates the task of information gathering and information processing in a way that is considerably different from the problems caused by the bureaucrat who supplies information selectively. Note that inflation usually does not affect labour costs in the same manner as it does the price of asphalt or the price of electricity for streetlights. Inflation will also affect the income side of a public budget differently from the cost side. Distinct categories of costs will be affected in disparate ways, complicating the task of the budget maker still further. The implication is that no matter how good the budget maker's inflation forecasts are, there will be some discrepancies, sometimes of considerable magnitude. Prices of some categories of costs may develop differently from other categories. Hence, in times of high inflation the budget maker will have little difficulty in finding excuses for overspending. That opens an avenue for problems of agency between politicians and voters. Politicians can promise a high level of investment activity, low taxes and a budgetary surplus. An outcome with high investment activity and a deficit rather than a surplus could be blamed on the uncontrollable and unpredictable effects of inflation. A "smart" budget-making politician may find it opportune to forecast inflation in such a way that the budget gives room for an unrealistically high investment level, using inflation ex post as a scapegoat vis-à-vis the electorate.
The purpose of this paper is to map out how inflation affected different parts of municipality budgets in Iceland during the 1960's and the 1970's, and furthermore to look at how well budget makers in the Icelandic municipality sector managed the task of forecasting inflation during the high inflation period of the 1970s. The organization of the paper is as follows: an overview of the economic situation in Iceland in the 1960's and the 1970's is given in section 2. Section 3 discusses our calculation of budgeted inflation according to the budget documents of the municipality of Reykjavik. Section 4 examines whether the municipality budget makers formed their inflation forecasts along the lines of rational expectations or adaptive expectations. Section 5 concludes.
The Economic Situation in Iceland during the 1960's and the 1970's
The 1960's and the 1970's were a period of rapid and fundamental changes in the Icelandic economy. The regulation of economic activity introduced during the crisis of the 1930's and the war of the 1940's was still in effect towards the end of the 1950's and the beginning of the 1960's. In the early 60's external trade was liberalized to some extent and a system of multiple exchange rates was simplified (but not fully abolished). The economy responded positively and growth rates of up to 5% per capita per annum were registered. The process of rapid growth was abruptly brought to a halt in 1968 with the disappearance of Atlanto-Scandic herring from the fishing grounds around Iceland. Atlanto-Scandic herring had been the single biggest fish stock in the North Atlantic and had been fished during its summertime breeding migration in the waters around Iceland by fishers from Iceland, Norway and the Soviet Union, to name the most active. The intensity of the fishing increased dramatically during the 1960's as equipment improved and the number of boats participating in the fishery expanded.
The Icelandic economy recovered quickly, helped by exceptional increases in cod catches that filled the vacuum left by the vanished herring. Economic growth was also inflated by demand pressure created by the reconstruction boom following the Heimaey eruption and investment in equipment to harness geothermal energy as replacement for oil in home heating in the wake of the first oil crises in 1972-3.
The boom of the early 1970's soon proved to be a bit too much of a good thing for the small Icelandic economy. Prices, which formerly had increased faster than elsewhere, started to accelerate. Inflation was fuelled not only by demand pressures but also by institutions like wage indexation on the supply side. The practice of wage indexation had been introduced as early as 1941 (see Jóhannes Nordal, 1996, p. 176). With increasing inflation the wage indexation rules became more and more rigid. Wage indexation and demand-pull proved a dangerous combination, where demand-driven price increases fed into wage costs, inducing cost-push price increases. Rigid wage indexation ensured a second round of wage increases and cost-induced price increases. Thus, one of the jokes among Icelanders at the time was that frost during early fall in Brazil would increase wages, the reason being that the Brazilian frosty nights would reduce the supply of coffee on the world market, which in turn would induce higher prices of coffee. The coffee price would eventually feed into the price index used to index wages in Iceland! As a consequence of these feedback links, inflation in Iceland took an accelerating path during the 1970's and the 1980's. This can be seen in Figure 1, which shows the percentage increase in the Consumer Price Index (CPI) as well as the percentage increase in the price index for public expenditure (PEI). The theme of this paper is to discuss how well bureaucrats managed to forecast the development of the latter index. The construction of the two indexes differs: the CPI is a traditional Laspeyres price index while the PEI is a Paasche index. Furthermore, the CPI reflects the development of prices of consumer goods, both imported and domestically produced, while the PEI reflects the development of the costs of producing governmental services. Hence, the wage development of governmental employees weighs heavily in that index. Indexation of wages was based on the CPI, with a lag of 1-3 months. Compensation for increases in the consumer price index was 100% for long periods. Figure 1 shows that inflation in Iceland was highly volatile during the period, even when yearly averages are compared. Public expenditure inflation seems to lead CPI inflation in some years. That seems logical given the feedback mechanism described above: the PEI reflects wages. A general increase in wages will affect prices with a lag of a few months, starting the process of staggered increases in wages and prices due to indexation. It is not to be expected that the CPI exactly replicates the movement of the PEI, as price increase impulses could originate from sources other than general wage agreements. Frost in Brazil causing higher coffee prices would be an example of a price impulse that would affect the CPI ahead of the PEI.
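The feedback loop just described can be caricatured in a few lines of Python: with full wage indexation applied with a one-period lag, a one-off price shock propagates into wages, back into prices, and so on. The shock size and pass-through parameter are purely illustrative, not estimates for the Icelandic economy.

# Stylised wage-price spiral under 100% lagged wage indexation
periods = 12
pass_through = 0.8     # share of wage growth passed into prices next period
price_infl = [0.10]    # initial demand-pull price shock of 10%
wage_infl = [0.0]

for t in range(1, periods):
    wage_infl.append(price_infl[t - 1])             # full indexation, one-period lag
    price_infl.append(pass_through * wage_infl[t])  # cost-push response

print([round(p, 3) for p in price_infl])

With pass-through below one the shock decays geometrically; at one it persists indefinitely, and any fresh impulse, such as a coffee price rise, re-ignites the loop.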
Making of a Budget
Making a budget for a governmental entity is never an easy task. Politicians are subjected to pressures from special interest groups as well as from the public at large when deciding the size of the budget and its distribution between tasks and projects. Budget officials are subjected to pressure from fellow bureaucrats eager to have the funding they believe they need in order to do their assigned job properly. A multitude of interests is involved, partly parallel and partly conflicting. Adding inflation to this mix of interests does increase the degree of complication considerably.
Also adding to the complication was the fact that exposure to inflation differed across levels of government. An increase in inflation would increase state income and state expenditure alike, ceteris paribus, as state income was to a high degree based on indirect taxes (sales tax, import duties). Hence, both expenditure and income would move in the same direction when inflation took unexpected turns. The situation at the municipality level was different before the Pay-As-You-Earn system was introduced in 1988. Municipality income was based on the previous year's income of individuals and firms and on the valuation of fixed assets (homes/real estate) owned by individuals as of December 1st of the year prior to payment of the tax. Municipalities thus had their income fixed in nominal terms at the start of the budget year. Changes in inflation would thus hardly have any effect on the nominal income accruing to the municipalities, while having the potential to affect the expenditure side seriously.
Inflation was taken into account in various ways in the state budget during the period under consideration. Towards the end of the period we are investigating it became almost a rule that the Minister of Finance fixed and disclosed a "multiplication factor" for the state budget. This was the factor used to extrapolate from the average price level of the year prior to the budget year into the average price level of the budget year. Hence, if the multiplication factor was 30%, while the average CPI for the year 19X0 was 100, then the budget would be presented on the assumption that the average CPI for year 19X1 would be 130. Ministers of Finance were eager to explain that the multiplication factor was not a projection for the inflation one year ahead as they were not willing to risk political capital on such a risky gamble. The multiplication factor was usually considerably lower than inflation at the time and also considerably lower than inflation expectations of economic agents at the time. The announcement of multiplication factor that was obviously so far out of line with expectations can be interpreted as a vain attempt to nudge inflationary expectations downwards.
The municipalities were advised by the National Economic Institute on matters relating to general economic prospects, including inflation, when preparing their budgets (Snaevarr, 2008). As an autonomous governmental institution, the National Economic Institute prepared the National Budget based on current knowledge and governmental policy. The National Budget is a political document explicitly based on governmental policy and on prospects for parameters exogenous to the Icelandic economy. It could prove a hard task to project the development of the economy when the government proposed unrealistic changes of policy.
The balance of the budget, the size of the investment budget and increases in service charges were all matters of debate when municipalities presented their budgets. However, basic parameters such as expected inflation were not discussed at any length.
Calculating Unexpected Inflation
Presumably the municipalities projected the development of nominal costs based on known and expected wage increases and expected inflation as reported by the National Economic Institute. Expected wage increases would presumably weigh in heavily, as municipal services are labour intensive. We decided to back-calculate the assumptions by using the development of given posts in the budget and the accounts of the City of Reykjavik. We excluded investment and repair from our investigation and used the following formula to calculate unexpected inflation:

$$\hat{\pi}^{u}_{t} = \frac{C^{a}_{t}}{C^{b}_{t}} - 1. \qquad (1)$$

Here, $\hat{\pi}^{u}_{t}$ is the increase in the price level of municipality inputs (inflation) in year $t$ in excess of what was expected at time $t-1$. $C^{a}_{t}$ and $C^{b}_{t}$ are accrued costs and budgeted costs as recorded in the accounts and the budget, respectively, for year $t$, and $P_{t}$ stands for the price level in year $t$. The formula is based on the assumption that the realized real increase in accrued costs of the chosen budget posts was planned. Assume that the real costs do not change from year to year, so that $C^{a}_{t}/P_{t} = C^{a}_{t-1}/P_{t-1}$. Assume further that budgeted costs were assumed to increase by the expected inflation rate $\pi^{e}_{t}$, so that $C^{b}_{t} = C^{a}_{t-1}(1+\pi^{e}_{t})$. Assume also that the accrued costs increase by $\pi_{t}$ from year to year, so that $C^{a}_{t} = C^{a}_{t-1}(1+\pi_{t})$. Plugging these assumptions into equation (1) yields an estimate of unexpected inflation as $\hat{\pi}^{u}_{t} = (1+\pi_{t})/(1+\pi^{e}_{t}) - 1$. Unexpected inflation would be estimated to be zero if $\pi_{t} = \pi^{e}_{t}$. Data for the years 1959 to 1972 were collected and used to estimate unexpected inflation for the years 1960 to 1972. Figure 2 shows the development of budgeted versus realized (accrued) expenditure for the chosen budget items according to the budget and the annual profit and loss statements of the City of Reykjavik. The reduction in budgeted and real expenditure in 1972 is due to the transfer of funding for law enforcement (policing) from the City of Reykjavik, as well as from other municipalities, to the central government (see Matthiasson, 1983).
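As a minimal illustration, the back-calculation in the reconstructed equation (1) can be implemented directly; the figures below are invented for illustration and are not the Reykjavik accounts.

```python
def unexpected_inflation(accrued, budgeted):
    """Estimate unexpected inflation for a year from accrued and budgeted
    costs, following the reconstructed equation (1): pi_u = C_a / C_b - 1."""
    return accrued / budgeted - 1.0

# Illustrative figures only: budgeted 100, accrued 112 -> about 12% unexpected inflation.
print(unexpected_inflation(112.0, 100.0))
```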
Calculating Expected Inflation
Once unexpected inflation has been estimated, the calculation of expected inflation is straightforward:

$$\pi^{e}_{t} = \frac{1+\pi_{t}}{1+\hat{\pi}^{u}_{t}} - 1. \qquad (2)$$

Here $\pi_{t}$ stands for realized inflation while $\pi^{e}_{t}$ stands for expected inflation. Figure 3 shows the development of expected inflation as well as the development of realized inflation. Realized inflation is always higher than expected inflation as we have estimated it; expected (budgeted) inflation is systematically lower than realized inflation. The figure also indicates that the budget makers adjusted their expectations to reflect increased or decreased inflationary pressure in the economy.
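Equation (2) follows by inverting the reconstructed equation (1); a one-line sketch, again with invented numbers:

```python
def expected_inflation(realized, unexpected):
    """Recover expected inflation from realized and unexpected inflation,
    using the multiplicative form consistent with equations (1) and (2):
    1 + pi_e = (1 + pi) / (1 + pi_u)."""
    return (1.0 + realized) / (1.0 + unexpected) - 1.0

# Illustrative: realized inflation 30%, unexpected 12% -> roughly 16% expected.
print(expected_inflation(0.30, 0.12))
```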
Brands of Expectation Errors
Inflation-forecasting errors can have a multitude of origins. In governmental institutions, errors can be caused by formal or informal rules and principles. An example is the rule long adopted by the Icelandic Ministry of Finance not to include the assumed effects of unfinished wage negotiations in the inflation forecast used in the yearly governmental budget proposal. Such rules make the task of the budget maker easier, but the forecast is subject to a systematic downward bias. Yet it is hard to see how to include an assumption about the size of next year's wage increase in the budget proposal, as the budget maker will usually be on the employer's side in wage negotiations. In that case the employees would take any announcement of assumed results from unfinished wage negotiations as a starting point for their demands! Thus, no matter how realistically the budget maker forecasts the price and wage level, his or her adversaries at the negotiation table will render such efforts inaccurate! Similar arguments can be made for other types of costs where prices are a matter of negotiation. A governmental decision maker is at a disadvantage when making perfectly honest forecasts in budget documents prior to finishing important deals.
Inflation-forecasting errors can also be due to "mechanism" failure, i.e., the budget maker may be using bad models or wrong assumptions to make his or her forecasts.
Inflation-forecasting errors can also be intentional. Keep the setting of the municipalities in the 1960s and 1970s in mind: income was fixed in nominal terms a year ahead. A high estimate of inflation would inflate projected costs and leave a smaller amount available for investment than a low estimate would. Many politicians in local politics will agree that running the municipality is the dull part of the job; working on investment projects and seeing a new project take shape is the fun part. Hence, the bias is towards having as much money for investment as possible, taking all the usual side-conditions into consideration (re-electability etc.).
Rational Expectations
In reality, errors will be caused by all the effects mentioned above. Models are never perfect, people nudge forecasts in a direction that is "good" for them, and all bureaucrats have to follow some set of rules that is inflexible in some significant way. Thus, inflation forecasting is not an exact science. Economists have tried out two partially competing theories regarding the essence of the error. One line of theory suggests that economic agents do not err systematically, i.e. that their expectations are "rational". According to this line of argument we can write realized inflation as a function of predicted inflation in the following way:

$$\pi_{t} = \pi^{e}_{t} + \epsilon_{t}. \qquad (3)$$

Here, $\epsilon_{t}$ is the error made by the forecaster. According to the theory we have that $E[\epsilon_{t}] = 0$ and that the error is uncorrelated with information available at the time of forecasting. Keane and Runkle (1990) propose a test of whether expectations regarding the rate of inflation a period ahead have been formed according to the theory of rational expectations by estimating the parameters of the following equation:

$$\pi_{t} = \alpha + \beta\,\pi^{e}_{t} + \gamma' X_{t} + \epsilon_{t}. \qquad (4)$$

Here $X_{t}$ is a vector of variables that might be of relevance and known to the forecaster at the time of forecasting. If the theory of rational expectations holds we have that $\alpha = 0$, $\beta = 1$ and $\gamma = 0$. Variables that might be suggested as elements of the vector $X_{t}$ include various data on perceived macroeconomic balance (excess demand, external balance, prospects for the fisheries). Such information was known at a much lower level of precision during the 1960s and 1970s than now, and little quantified information from the period is obtainable today. We are thus left with estimating the parameters of a simplified version of equation (4):

$$\pi_{t} = \alpha + \beta\,\pi^{e}_{t} + \epsilon_{t}. \qquad (5)$$

Running the appropriate regression, we can reject the joint hypothesis that $\alpha = 0$ and $\beta = 1$ at the chosen level of significance. The evidence suggests that budget makers consistently predicted inflation at a level below the realized level. 2

2 The formulation of the model does not fare well in a Ramsey RESET test, indicating omitted variables. It fares better in a Breusch-Godfrey LM test for autocorrelation, where the null hypothesis of no serial correlation is not rejected (Prob = 0.4575).
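A sketch of the Keane-Runkle-style test of the restriction in equation (5), using Python's statsmodels; the series below are synthetic stand-ins with a deliberate downward forecast bias, not the 1960-1972 estimates, and the variable names are ours.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Synthetic stand-in for 13 yearly observations of expected and realized
# inflation; the constant 0.05 builds in the under-prediction the paper reports.
expected = rng.uniform(0.05, 0.25, size=13)                    # pi_e
realized = 0.05 + expected + rng.normal(0.0, 0.02, size=13)    # pi

X = sm.add_constant(expected)
fit = sm.OLS(realized, X).fit()
print(fit.params)
# Joint Wald test of the rational-expectations restriction alpha = 0, beta = 1.
print(fit.wald_test("const = 0, x1 = 1", use_f=True))
```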
Adaptive Expectations
The rational expectations hypothesis does not seem to fare well when confronted with how the budget makers in the Icelandic municipalities made their forecasts. An alternative way to model expectations is to assume that the budget maker observes past errors and adjusts the new forecast by accepting part of the error as a "mistake" that should be accounted for in the next forecast:

$$\pi^{e}_{t} = \pi^{e}_{t-1} + \lambda\,(\pi_{t-1} - \pi^{e}_{t-1}). \qquad (6)$$

Here it is assumed that $0 \le \lambda \le 1$. If $\lambda = 0$ the budget maker keeps his forecast fixed, irrespective of the error experienced in the past. If the opposite is true ($\lambda = 1$) the budget maker uses last period's experience as his forecast for the present period.
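The rule in the reconstructed equation (6) is easy to simulate; the inflation path below is invented. Note how the forecasts lag behind when inflation accelerates:

```python
def adaptive_forecasts(realized, lam, initial):
    """Forecasts under the adaptive rule of equation (6):
    pi_e[t] = pi_e[t-1] + lam * (pi[t-1] - pi_e[t-1]), with 0 <= lam <= 1."""
    forecasts = [initial]
    for pi_prev in realized[:-1]:
        e_prev = forecasts[-1]
        forecasts.append(e_prev + lam * (pi_prev - e_prev))
    return forecasts

# lam = 0 keeps the forecast fixed; lam = 1 uses last year's inflation.
print(adaptive_forecasts([0.10, 0.20, 0.40, 0.30], lam=0.5, initial=0.10))
```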
A forecaster who sticks to a rule like the one in equation (6) will make predictable errors. Agents making predictable errors are likely to lose money, as other agents will be able to use knowledge of the errors to their own advantage. A bank that systematically under-predicted inflation would underestimate the cost of deposits and charge too low an interest rate on non-indexed loans. Such a bank would soon go out of business in an environment of non-indexed assets. So economic agents can hardly survive in a competitive environment if they forecast variables of importance in the fashion of adaptive expectations.
There is no doubt that next year's inflation was a variable of significance for municipalities in the 1960s and 1970s. But the municipalities did not operate in a competitive environment, and they were not making leveraged bets based on their forecasts of the level of inflation, as many other players in the economy might have done. 3 Individuals could only take out loans from the municipalities by being slow in paying their taxes. Hence, while forecasting errors were potentially costly, they were not costly to the same degree as in the private sector.
Note also that decisions in a municipality are not made by a monolithic dictator. Bureaucrats prepare information that should help politicians decide, and the politicians have to consider whether they are being told the whole truth by the bureaucrats. Furthermore, the politicians may favour biased forecasts for political reasons, as already hinted at. Note that the lower the inflation forecast, the less of the nominal income already fixed will be needed for the day-to-day operation of the municipality, and the more can be channelled to investment. A politician in favour of investment in municipal infrastructure (roads, schools, other public buildings, libraries, kindergartens etc.) will thus also be in favour of not overstating the rate of expected inflation! Politicians with a distaste for investment projects might favour a high estimate of expected inflation. Assume that investment-loving municipality politicians outnumber investment-averse politicians. Will it not ruin their reputation as politicians to constantly underestimate the rate of inflation? Well, they can always blame the bureaucrats! The case for the survival of economic agents forming expectations about inflation according to the rule of adaptive expectations is thus much stronger in the public sector than in the private sector.

So let us put the data to the test, starting with the adaptive expectation rule in regression form:

$$\pi^{e}_{t} - \pi^{e}_{t-1} = \lambda\,(\pi_{t-1} - \pi^{e}_{t-1}) + \epsilon_{t}. \qquad (7)$$

The Prais-Winsten routine of STATA is used to estimate the coefficient; the estimate of $\lambda$ is significant at the 1% level, with $\rho = -0.31$. The regression was also run with a constant, which was not significantly different from zero at the 5% level. Assume that equation (7) is a correct description of the process used by budget makers to produce inflation forecasts. Then the budget makers will underestimate inflation when prices are accelerating and overestimate inflation when prices are decelerating. But our discussion so far suggests that politicians might react differently when the economy is overheated compared to when the economic temperature is more normal. This is not easily tested given the data available. One attempt, reported as equation (8), allows the adjustment coefficient in equation (7) to differ between periods of accelerating and decelerating inflation; the Prais-Winsten routine of STATA is again used to estimate the coefficients. The indication is that budget makers in the municipalities behave differently when predicting inflation in times of accelerating inflation compared to when inflation is falling. When the economy was overheated, in the sense that inflation was accelerating, budget makers in the municipalities tended to adjust their inflation estimate. When inflation was going down they were more prone to stick to last year's inflation estimate. Note that this behaviour is likely to add to pressure in the economy at inflationary times (by increasing the probability of a deficit due to underestimation of inflation) and to reduce pressure in times of contraction (by increasing the probability of a surplus as inflation may be overestimated).

3 A financial institution makes long-term commitments on the asset side but must usually rely on short-term deposits to finance these commitments. Such an institution conspicuously underestimating the going rate of interest on deposits some periods into the future would have a long line of customers willing to take loans: if the borrowers wait a bit they can redeposit the money in the bank at a profit! Needless to say, there is no limit to the demand that would be created in this way.
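A sketch of estimating the adaptive rule (7) with AR(1) errors in Python. statsmodels' GLSAR iterative fit is used here as a stand-in for STATA's Prais-Winsten routine (GLSAR is Cochrane-Orcutt-like and drops the first observation, so results will differ slightly), and the data are synthetic, not the Reykjavik series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
# Synthetic stand-in: a drifting inflation series and forecasts generated by
# an adaptive rule with lambda = 0.4 plus small noise.
realized = 0.10 + np.cumsum(rng.normal(0.02, 0.03, size=13))
expected = np.empty_like(realized)
expected[0] = realized[0]
for t in range(1, len(realized)):
    expected[t] = expected[t - 1] + 0.4 * (realized[t - 1] - expected[t - 1]) \
                  + rng.normal(0.0, 0.005)

# Regress the forecast revision on last period's error, as in equation (7),
# without a constant (the paper found the constant insignificant).
revision = expected[1:] - expected[:-1]
last_error = (realized[:-1] - expected[:-1]).reshape(-1, 1)
model = sm.GLSAR(revision, last_error, rho=1)
fit = model.iterative_fit(maxiter=10)
print(fit.params, model.rho)   # the estimate of lambda and the AR(1) rho
```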
Concluding Remarks
Economists have long assumed that economic agents form expectations in a rational manner, as not doing so would be harmful to their financial health. Some have wondered how frequently economic agents should bother to revise their expectations (Haltiwanger, 1985; Akerlof, 1985). The consensus from this literature seems to be that if some (even small) proportion of the public revises its price expectations infrequently, it will have macroeconomic consequences.
Agents in the public sector, whether in central government or in the municipal sector, may not have the opportunity to revise their spending plans as frequently as private actors. In addition, they may have politically motivated incentives to underestimate inflation rather than overestimate it.
Our results indicate that municipality budget makers are not likely to adjust their projection for next period's inflation in big steps. Our results also indicate that their behaviour is different when there is pressure in the economy as opposed to when there is less pressure.
The asymmetry is likely to induce policy makers to underestimate inflation at times of pressure and overestimate inflation at times of less pressure. The consequence of underestimating inflation is likely to be that "too much" funding is channelled to municipal investment. At the end of the day the municipalities are likely to overspend. The tendency to underestimate inflation would thus have contributed to further increasing the instability of the Icelandic economy in the period under consideration.
"year": 2008,
"sha1": "fbcfc38d0ff5a8f92e996678387a5a82f7836b38",
"oa_license": "CCBY",
"oa_url": "http://www.efnahagsmal.is/article/download/a.2008.6.2.8/pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fbcfc38d0ff5a8f92e996678387a5a82f7836b38",
"s2fieldsofstudy": [
"Economics",
"Political Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
Initial Experience and Assessment of Surgical Margins after Robotic-Assisted Radical Prostatectomy at a Mexican University General Hospital
Background: Prostate Cancer (PC) is the most common malignant neoplasm in men and the second cause of cancer-specific mortality. The finding of Positive Surgical Margins (PSMs) after Radical Prostatectomy (RP) is an adverse prognostic factor associated with an increased risk of biochemical recurrence.
Background
Prostate Cancer (PC) is the most common malignant neoplasm in elderly men. Radical Prostatectomy (RP) provides favorable oncologic control and prolonged survival for localized PC by reducing the risk of metastasis and local tumor progression [1].
Robotic-assisted surgery continues moving forward and promises to play a major role in the field of urology.
Advantages of this resource
Low blood loss, low postoperative pain, short hospital stays, and speedy patient recovery have made Robotic-Assisted Radical Prostatectomy (RARP) more common in the treatment armamentarium for PC [2].
After RP, pathologic assessment of the tumor's cellular differentiation (Gleason score) and pathologic stage, together with preoperative PSA, can be used to stratify patients into risk groups, predict outcomes (such as the risk of Biochemical Recurrence (BR)) and guide immediate treatment [3].
Avoiding Positive Surgical Margins (PSMs) is the most important oncologic objective of the surgical procedure of RP for PC. Despite debate about the influence of PSMs on long-term outcome, patients with PSMs have an increased risk of BR compared to patients with Negative Surgical Margins (NSMs) [4].
Only a few studies (multi-institutional or meta-analysis) have shown a benefit of RARP versus Open Radical Prostatectomy (ORP) in reducing the rate of PSMs [5].
A series of patients without adjuvant treatment showed that those with PSMs have a 57.5% disease-free survival at 5 years [6]. However, disease-free survival at 10 years differs significantly between focal PSMs and extensive PSMs, at 64% and 38%, respectively [7].
Materials and Methods
An observational, descriptive, cross-sectional, retrospective study was carried out. A review of the clinical records of patients with a diagnosis of PC who underwent RARP between December 2014 and December 2017 at our urology department was performed. The following variables were analyzed: age, PSA value, clinical T stage, biopsy Gleason score, perineural invasion, lymphovascular invasion, surgical specimen Gleason score, pathologic stage, and biochemical recurrence, with the aim of identifying the variables associated with PSMs. Continuous variables with normal distribution are expressed as mean and Standard Deviation (SD); otherwise they are expressed as median and range. Categorical variables are expressed as absolute values and percentages. For the statistical analysis we used the Chi-squared test and SPSS v.23.0. Results were considered statistically significant if the p value was <0.05.
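A hedged sketch of the chi-squared association test described above, using SciPy rather than SPSS; the 2x2 counts are invented placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = surgical margins (PSM, NSM),
# columns = perineural invasion (present, absent). Placeholder counts.
table = [[12, 8],
         [10, 25]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# Consistent with the paper's criterion, the association is called
# statistically significant when p < 0.05.
```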
Results
Patient demographics are shown in Tables 1 and 2. On statistical analysis, the PSA level in patients with PSMs was higher than in patients with NSMs, a statistically significant association (p=0.021). When assessing the most frequent regions of PSMs, it was found that the most common site was the apex, followed by the posterior part of the prostate (Graph 1).
Biopsy Gleason score underestimated disease in a number of patients, since an increase in the Gleason score of the surgical specimens was found. However, the most frequent Gleason score in patients with PSMs was 3+3=6 (27.9%). No association was found when correlating the surgical specimen's Gleason score with PSMs (p = 0.57) (Graph 2).
Discussion
PSMs after RP in prostate cancer patients are considered a significant predictive factor for BR and local recurrence, as well as for the need for adjuvant treatment.
In our study, the rate of PSMs is comparable with other reports. The PSMs rate was significantly associated with tumor pathologic stage, as reported globally [8]. Other series report PSMs rates that range from 11% to 37% after ORP, 11% to 30% after Laparoscopic Radical Prostatectomy (LRP), and 9.6% to 26% after RARP [9]. Coelho RF, et al. [10] remarked that clinical stage was the only preoperative independent variable associated with PSMs after RARP.
While most studies report similar PSMs rates for both surgical procedures, recent data have found that patients who undergo RARP are, in fact, more likely to have PSMs than those who undergo ORP [12].
In this study, pathologic stage, PSA and perineural invasion were significantly associated with PSMs rate.
Tewari A, et al. [13] concluded in an extensive systematic review that PSMs rates are equivalent for ORP and RARP. Despite the discussion about the true incidence of PSMs in these surgical procedures, it is not known whether the finding of PSMs predicts a greater or lesser risk depending on whether ORP or RARP is performed.
No consensus has been reached on the most frequent PSMs region after ORP, LRP, and RARP, or on which of these regions are associated with BR. According to multiple studies, the most common PSMs site after RARP is the prostate apex [14]. In our series, the most frequent region was the prostate apex (>40%), followed by the posterior region (25%). The finding of increased PSMs at the prostatic apex can be explained by at least three important surgical aspects: 1) there is no obvious anatomical boundary between the prostatic apex and the external urinary sphincter; therefore, to maximize urethral length, apical surgical margins are often compromised by the surgeon [15]; 2) there is a low content of periprostatic fat in this region, making PSMs easier to occur; and 3) surgical manipulation may cause ink to reach the tumor, leading to false PSMs [16]. On the contrary, it has also been reported that the posterior or postero-lateral region is the most common PSMs site after RARP [16]. Some series suggest that biochemical recurrence is independent of PSMs location [17,18]. Furthermore, it has been reported that PSMs located in the posterolateral region are associated with worse prognosis [19]. Beyond being able to establish an association with PSMs, we will have to carry out long-term follow-up to assess the behavior of BR in these patients.
Studies that directly compare the effect of PSMs with metastasis-free survival and mortality are less conclusive. One of the largest studies, from a registry of 65,633 patients, demonstrated a significant effect of PSMs on cancer-specific mortality (OR: 1.70 [1.32-2.18]) [21].
It seems that experience and careful attention to the surgical procedure also play an important role in decreasing the incidence of PSMs. Sooriakumaran P, et al. [22] reported a significant correlation between surgeon's experience and PSMs rate. Ahlering TE, et al. [23] also reported a significant improvement in the PSMs rate associated with extensive surgical experience.
Limitations of the present study include its retrospective nature and the relatively small sample size. Furthermore, clinical examinations and surgical samples assessment were not performed by the same clinicians or pathologists. However, the initial experience of our urology department is reported.
Conclusions
It is important to consider preoperative PSA as a predictive factor for PSMs and to correlate it with the pathologic result of the surgical specimen; these should guide treatment selection and the need for closer postoperative follow-up.
Prospective studies with larger sample sizes should be encouraged. Furthermore, because the RARP learning curve may differ by surgeon, studies involving multiple surgeons are still necessary.
"year": 2020,
"sha1": "a5c055607d3a5e4c5bbaf65edcba030e03e43fdc",
"oa_license": "CCBY",
"oa_url": "https://www.sciforschenonline.org/journals/surgery-open-access/article-data/JSOA218/JSOA218.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4b26c49e53b55bfab8b1b6c485ab74479e4d884a",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Elective lower limb orthopedic arthroplasty surgery in patients with pulmonary hypertension
Abstract Patients with pulmonary arterial hypertension and chronic thromboembolic pulmonary hypertension (PH) are at increased risk when undergoing anesthesia and major surgery. Data on outcomes for elective orthopedic surgery in patients with PH are limited. A patient pathway was established to provide access to elective lower limb arthroplasty. This included assessment of orthopedic needs, fitness for anesthesia, preoperative optimization, and intra‐ and postoperative management. Patient data were retrospectively retrieved using patients' hospital records. Between 2012 and 2020, 29 operations (21 total hip replacements [THRs], 7 total knee replacements [TKRs], 1 total hip revision) were performed in 25 patients (mean age: 67 years). Perioperatively, 72% were treated with low‐dose intravenous prostanoid. All had arterial lines and central access, and perioperative lithium dilution cardiac output monitoring was used in 86% of cases. Four patients underwent GA, 21 spinal anesthesia, and 4 CSE anesthesia. Supplemental nerve blocks were performed in all patients undergoing general, and 12 of 21 undergoing spinal, anesthesia. All were managed in a high dependency unit postoperatively. Hospital length of stay and complication rates were higher than reported in non‐PH patients. Perioperative complications included hypotension requiring vasopressors (n = 10), blood transfusion (n = 7), nonorthopedic infection (n = 4), and decompensated right heart failure (n = 1). There was no associated mortality. All implants were functioning well at 6 weeks and subsequent follow‐up. EmPHasis‐10 quality of life score decreased by 5.5 (±2.1) (p = 0.04). A dedicated multiprofessional pathway can be used to safely select and manage patients with PH through elective lower limb arthroplasty.
INTRODUCTION
Pulmonary hypertension (PH) comprises a heterogeneous group of conditions ranging from rare diseases such as pulmonary arterial hypertension (PAH) and chronic thromboembolic pulmonary hypertension (CTEPH) to more common, and usually milder, elevations in pulmonary artery pressure seen in cardiac and respiratory disease. 1 Advances in available therapies over the last two decades have resulted in improved survival of patients with PAH and CTEPH [2][3][4] and as such there has been an increasing focus on quality of life. 5

Osteoarthritis (OA) is an age-related degenerative joint disease, affecting 11% of people in England. 6 It causes progressive damage to articular cartilage and surrounding structures and most commonly affects the hip and knee. 6 It is the fastest-growing cause of disability worldwide and is often associated with constant severe pain, reduced quality of life, and economic burden. [6][7][8] The definitive treatment for severe hip and knee OA is arthroplasty surgery. 9 In patients without cardiorespiratory disease, elective hip and knee arthroplasty is a cost-effective, low-risk procedure with high success rates. [9][10][11]

Patients with PH, particularly those with PAH and CTEPH, are at increased risk when undergoing anesthesia and major surgery. 12 Major, prolonged and emergency surgery has been associated with increased morbidity, and better outcomes have been associated with regional anesthesia compared to general anesthesia. [13][14][15][16][17][18] Perioperative mortality rates have been reported to vary between 1% and 18%. [13][14][15][16][17]19,20 To our knowledge, only one study has exclusively evaluated the perioperative mortality rate in patients with PH undergoing total hip or knee replacement surgery compared to those without PH. In this study, Memtsoudis et al. demonstrated a 4 to 4.5-fold increase in the adjusted mortality risk compared to patients without PH in a US database of 670,515 patients undergoing total hip or knee arthroplasty. 20 Price et al. reported a 7% mortality in 28 patients with mild to moderate PH undergoing nonobstetric and noncardiac surgery, with no disease deterioration in surviving patients when assessed at 3-6 months after surgery. They concluded that nonemergency procedures may not be contraindicated in patients with PH if they are carefully selected and managed in a specialist PH center. 13,15

In this study, we report outcomes from a prospective pathway established by a multiprofessional team to enable access for patients with PAH and CTEPH to elective lower limb orthopedic arthroplasty.
Setting
We performed a single-center retrospective study of patients with PAH and CTEPH undergoing elective lower limb orthopedic surgery via a dedicated pathway, including detailed preoperative assessment. All patients were managed at the Sheffield Pulmonary Vascular Disease Unit (PVDU), a referral center for the assessment and management of patients with PH serving a referral population of more than 15 million. Patients underwent systematic evaluation as described in the ASPIRE Registry, including right heart catheterization (RHC), multimodality imaging, and exercise and lung function testing. 2
Data collection
All PH patients undergoing orthopedic surgery between December 2010 and January 2020 were identified. Patient characteristics, pulmonary hemodynamics, results of radiological investigations, therapies, and details of the referral process were obtained from hospital notes and databases. Anesthetic and operative data were obtained from the preoperative assessment and intraoperative anesthetic and operation notes. Perioperative outcomes were retrieved from inpatient records. Postoperative orthopedic and quality of life (QoL) outcomes were assessed from orthopedic clinic notes 6 weeks after surgery and by comparing the last preoperative and first postoperative emPHAsis-10 scores (a PH-specific QoL tool) documented in patients' notes. 21 The census date for mortality was August 18, 2021.
Statistical analysis
The Shapiro-Wilk test was used to determine whether the data were normally distributed. Normal distribution was assumed for all data that returned p > 0.05. Data were also inspected via graphical methods (Q-Q plots and histograms). Paired-sample t-tests were performed on diagnostic and preoperative data where two observations per patient had been collected. Data are presented as mean ± standard deviation or median (range). A p < 0.05 was considered statistically significant.
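For illustration, the normality check and the paired comparison can be reproduced with SciPy; the paired readings below are invented placeholders, not study data.

```python
import numpy as np
from scipy.stats import shapiro, ttest_rel

# Hypothetical paired mPAP readings (mmHg) at diagnostic vs. preoperative RHC.
diagnostic = np.array([48, 52, 41, 55, 47, 60, 44, 50], dtype=float)
preoperative = np.array([39, 44, 35, 47, 40, 52, 38, 42], dtype=float)

w, p_norm = shapiro(diagnostic - preoperative)   # normality of the differences
t, p = ttest_rel(diagnostic, preoperative)       # paired-sample t-test
print(f"Shapiro-Wilk p = {p_norm:.3f}; paired t = {t:.2f}, p = {p:.4f}")
```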
Preoperative assessment
During the study period, 31 patients with lower limb orthopedic problems were referred for orthopedic assessment (Figure 1). All patients were deemed suitable surgical candidates from the orthopedic perspective, pending further cardiopulmonary and anesthetic assessment. To be considered a suitable candidate for orthopedic surgery, one or more of the following was required: constant severe pain at rest and at night, pain not responding to conservative treatment, or a significant restriction in mobility negatively impacting quality of life.
The 31 patients were then electively admitted to the Sheffield PVDU for a detailed operative and anesthetic assessment with investigations including RHC, cardiac MRI, echocardiography, ECG, pulmonary function tests, and exercise testing using the incremental shuttle walk test (ISWT) (Figure 1). Twenty-nine patients were considered to have an acceptable medical risk, while two patients were deemed to be too high risk. In one case, this was due to the severity of their PH and estimated life expectancy, while in the second case the risks of surgery were felt to be prohibitive due to the presence of significant comorbidities. Three patients in the acceptable medical risk group decided against surgery following counseling regarding the risks and benefits of surgery. One patient decided not to proceed with surgery after their symptoms improved with further steroid injections between orthopedic and anesthetic assessment. Therefore, 25 patients decided to proceed with surgery and underwent final preoperative assessments to finalize a perioperative management plan with a consultant anesthetist and PH specialist following multidisciplinary assessment. Four patients had a subsequent second-sided operation meaning that 29 cases in total were performed.
Patient demographics, hemodynamics, and functional status
Twenty-nine elective lower limb operations were carried out on 25 patients; baseline characteristics are shown in Table 1. Twenty-four (96%) patients were female. All patients were in World Health Organisation Functional Class II or III at the time of their operation. All were categorized in American Society of Anesthesiologists class 3 or 4. 22 Forty-one percent of patients had been established on oral monotherapy, 48% on oral combination therapy, and 3% on combination therapy involving inhaled iloprost. After a mean interval of 3.3 ± 2.93 years between diagnostic and preoperative RHC, significant improvements in mean pulmonary artery pressure (mPAP), pulmonary vascular resistance, and mixed venous saturations were observed (Table 1). Patients had moderate PH at the time of surgery, with a mean mPAP of 37.2 ± 10.2 mmHg and cardiac output of 4.9 ± 1.5 L/min. The majority of patients had preserved or mildly impaired right ventricular (RV) function (Table 1).
Perioperative PH therapies
Twenty-one (72%) cases were admitted 48-72 h before surgery to commence a low-dose intravenous iloprost (Ilomedin) infusion (dose range: 1-3 μg/h), which was then continued intraoperatively and into the immediate postoperative period for a maximum of 5 days. Baseline oral PH therapy was continued in all cases. The patients who did not receive preoperative intravenous iloprost all had stable disease with preserved or mildly impaired RV function.
Operative details
Twenty total hip replacements (THRs), seven total knee replacements (TKRs), one THR with the removal of metalwork, and one total hip revision were carried out in 25 patients (Figure 2). Four patients had a second THR on the contralateral joint following a good outcome from the first operation. Uncemented implants were used for all THRs, whereas cemented implants were used for all TKRs. An intraoperative tourniquet was used for all TKR procedures. All operations were carried out by the same orthopedic surgeon (R. K.).
Anesthetic technique
Regional anesthesia was the preferred option for all procedures (Tables 2 and 3). For THR, spinal anesthesia with intrathecal diamorphine was the preferred regional anesthetic technique, being used in 18/22 hip procedures. Spinal anesthetics were supplemented with either local anesthetic infiltration by the surgeon, fascia iliaca block, or femoral nerve blocks to improve postoperative analgesia. General anesthesia (GA) was used in four THR operations, in two patients. One of these patients had a failed spinal for their first procedure, and so had a GA for their second operation, because of this previous failure. The second patient had a GA for both operations, due to their personal choice. All four GA cases were supplemented with either femoral nerve block or fascia iliaca block for postoperative analgesia. Regional anesthesia was performed for all TKR operations. For four patients, this was provided in the form of a spinal anesthetic with additional nerve block (either saphenous nerve/adductor canal block alone or in combination with a popliteal nerve block). In the remaining three TKR operations a combined spinal-epidural (CSE) was used to provide further postoperative analgesia. The initial two TKR operations were performed with spinal and nerve blocks, and both experienced severe postoperative pain and subsequent cardiorespiratory issues. Following multidisciplinary team discussion with the acute pain, critical care, and anesthetic teams, a CSE technique was chosen for TKR patients who were considered likely to experience more severe postoperative pain. For the patients who underwent general anesthesia, induction was performed with propofol (range: 60-80 mg) and fentanyl (100-150 μg); the patient who was given a GA because of the failed spinal was not given fentanyl.
A general principle was to avoid positive pressure ventilation where possible. Two GAs were managed with the patient breathing spontaneously via a supraglottic airway device (SAD), with minimal pressure support ventilation to maintain a normal carbon dioxide level. The other two patients were intubated and were given a nondepolarizing muscle relaxant.
Anesthesia was maintained with oxygen, air, and sevoflurane for all GA cases. Volume-controlled ventilation was used in one intubated patient; pressure-controlled ventilation was used in the other intubated patient, who had a background of bronchiectasis in addition to PH, to avoid high airway pressures.
Intraoperative monitoring and support
All patients were monitored with an arterial line and central venous access in addition to standard requirements. All patients were catheterized to measure urine output. In 25/29 cases, lithium dilution cardiac output monitoring (LiDCO) was used to enable real-time, continuous assessment of cardiac output and goal-directed fluid therapy. A metaraminol infusion was used in 76% of the cases in which intravenous iloprost had been started preoperatively (iloprost was started preoperatively in 72% of cases). One patient required intraoperative noradrenaline. Anesthetic management was provided by one of two anesthetists.
Postoperative care
All patients were extubated in theatre and received Level 2 postoperative care on a high dependency unit. In 26 out of 29 cases, patients were stepped down to a specialized PH ward. All patients received daily review from the PH, orthopedic, and physiotherapy teams. There was no perioperative mortality.
Physiotherapy
Patients were able to start physiotherapy a mean of 3 days after surgery. The mean time to complete physiotherapy was 6 days. Fifty-five percent of patients were deemed "slow to mobilise" by the orthopedic physiotherapists and required 6 or more days of in-patient physiotherapy. Twenty-three percent of patients required further intensive physiotherapy in the community after discharge.
Complications
In 21 out of 29 cases (72%), patients experienced one or more complications in the immediate postoperative period. The most common complication was hypotension requiring vasopressor support immediately after surgery (34%), which was weaned over a mean time of 3 days. Other complications included blood loss requiring transfusion (24%), significant pain requiring additional opiates or hindering physiotherapy (10%), and lower respiratory tract infection requiring antibiotics (14%). No patients required readmission to critical care after discharge to the ward. All patients survived and were discharged home after a mean hospital stay of 13 nights. One patient, receiving hydroxychloroquine for a connective tissue disease, was readmitted shortly after discharge with fever due to CMV viremia. One patient, with systemic sclerosis, presented 2 months after surgery with occlusion of the radial artery which had undergone arterial cannulation and required a finger amputation.
Orthopedic outcomes
All patients attended their 6-week postoperative orthopedic review. One patient had developed a noninfective wound leak. No other complications were noted. All joints were reported as "functioning well" or "making good progress", with all patients reporting an improvement in pain. All five patients who used walking aids before their operation (sticks, wheelchair, or electric scooter outside) reported a reduction in their use. Nineteen (76%) patients were alive at the census date of August 18, 2021. Median survival from the date of surgery of the six patients who subsequently died was 24 (range: 9-34) months.
Quality of life
We investigated the impact of surgery on QoL by comparing patients' last documented preoperative and first documented postoperative emPHAsis-10 scores. Paired data were available in 17 cases, with 14/17 patients reporting either no change or improved quality of life, with a mean (SD) score decrease of 5.5 (±2.1) (p = 0.04).

DISCUSSION

Using a dedicated patient pathway and a multiprofessional approach, we have shown that carefully selected patients with PH can undergo elective lower limb arthroplasty with good short-term outcomes and improved quality of life. To our knowledge, this is the first study to primarily focus on a systematic multiprofessional approach to the provision of lower limb arthroplasty surgery in patients with PAH or CTEPH, with a focus on preoperative evaluation and patient selection, optimization of PH treatment, perioperative monitoring, and outcome.
Short-term outcomes and comparison with other studies
Our pathway has demonstrated lower mortality rates (0%) than previous studies evaluating the outcomes of noncardiac, nonobstetric surgery in patients with PAH. This may reflect, in part, patient selection and the nature of the surgery performed. 20 Meyer et al. described overall emergency and nonemergency perioperative mortality rates of 3.5% and 2%, respectively, in an international prospective study evaluating the outcomes of patients with PAH undergoing either elective or emergency orthopedic, general, gynecological, or urology surgery. 14,15 In our series, 72% of patients experienced one or more complications in the period between surgery and discharge. These are high event rates in comparison with elective arthroplasty surgery in patients without PH, as well as in comparison with previous studies of operative interventions in patients with PH. [13][14][15]19,20,[23][24][25] Importantly, no patients developed refractory acute heart failure or respiratory failure. The mean length of stay was 13 days, approximately 7-10 days longer than for patients without PH undergoing total hip or knee replacement. [26][27][28][29][30][31]

Patient selection and perioperative management

Patients were carefully selected, having been established on PH therapy with hemodynamic improvement at repeat RHC and relatively well-preserved RV function. Regional anesthesia has previously been associated with better outcomes than general anesthesia in patients with PAH. 13,14,16 The transition from spontaneous breathing to intermittent positive pressure ventilation, the addition of positive end-expiratory pressure, hypoxemia, hypercapnia, and sympathetic stimulation from laryngoscopy can all increase pulmonary vascular resistance and therefore RV afterload. 12,16 Furthermore, the majority of induction and maintenance anesthetic agents cause systemic vasodilation, leading to a decreased mean arterial pressure. 16 This can have severe consequences in a patient population with an already reduced functional cardiovascular reserve. 16,[32][33][34][35] For these reasons, surgery was carried out under regional anesthesia where possible. Where GA was necessary, we aimed to avoid intubation and fixed positive pressure ventilation if possible and used a SAD with spontaneous ventilation and pressure support at low pressures to control arterial carbon dioxide. For one patient, the anesthetist felt a SAD would not be appropriate. The lowest possible dose of propofol was used to avoid hypotension, facilitated by coinduction with a fentanyl dose of 1-2 μg/kg; this also helped to attenuate the sympathetic response to laryngoscopy and intubation. 16 The vasodilating effects of both regional and general anesthesia were treated by giving metaraminol, either as intermittent boluses or as an infusion started before the anesthetic was given, with the aim of maintaining the mean systemic blood pressure within its normal, preoperative range. High airway pressures can lead to an increase in pulmonary vascular resistance and are a particular concern in patients with obstructive lung disease. 36 Pressure-controlled ventilation can reduce peak airway pressures and was therefore used in preference to volume-controlled ventilation for the one intubated patient who had a background of bronchiectasis in addition to PH.
Each patient requires an individualized approach to airway maintenance and ventilation, to maintain the best oxygenation and least hypercapnia while using the lowest airway pressures possible. We used low-dose intravenous iloprost in the majority of patients. Although chronic administration of intravenous prostanoids is associated with hemodynamic and prognostic improvements, 37,38 the role of short-term perioperative prostanoid therapy is not proven. It is possible that its use had a positive impact on outcomes; however, further study of its efficacy in this setting is required, especially with respect to its potential for reducing systemic vascular resistance and its antiplatelet effect. Of note, we did not observe any bleeding complications related to regional anesthesia in patients receiving iloprost.
All patients had an arterial line sited, which allowed beat-to-beat monitoring of blood pressure, and in 86% of cases LiDCO was used for stroke-volume optimization and goal-directed fluid therapy. This allowed precise optimization of preload, which is particularly important in patients with RV hypertrophy and impairment. All patients had central venous catheters inserted before anesthesia, used both to monitor central venous pressure and central venous saturation and to administer inotropic or vasoconstrictive medication in the case of deterioration.
Uncemented implants were used for all THR operations to eliminate the risk of bone cement implantation syndrome (BCIS), a rare but important cause of perioperative mortality and morbidity in patients undergoing cemented hip arthroplasty. Cemented implants inserted under an intraoperative tourniquet were used for all TKR operations, as BCIS appears to be a complication more commonly reported in THR than in TKR. [39][40][41][42] All patients received Level 2 care and, once our pathway was established, all patients were cared for by PH specialists with input from the orthopedic team. A key element was a multiprofessional team approach involving close cooperation between pulmonary vascular physicians, nurses and physiotherapists, with a dedicated anesthetist and orthopedic surgical team.
Orthopedic and patient reported outcomes
All patients were documented as making "good progress" from an orthopedic perspective 6 weeks after their operation at a postoperative orthopedic review. OA and PH both adversely affect QoL and financial status. 5,7,[43][44][45][46] We observed a significant improvement in QoL with a mean EmPHasis-10 score reduction of 5.5 points (p = 0.04), consistent with a meaningful change. 47,48 In conclusion, by using a dedicated patient pathway and a multiprofessional approach we have demonstrated that carefully selected patients with PH can undergo elective lower limb orthopedic surgery with excellent outcomes, although with a higher perioperative complication rate and longer length of stay than in patients without PH.
Limitation
This is a single-center retrospective analysis involving a relatively small number of cases. | 2021-12-17T16:27:10.364Z | 2021-12-09T00:00:00.000 | {
"year": 2022,
"sha1": "ba4e111afdf312c7712473f9fab1536908e16b0c",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/pul2.12019",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "05da21c3d65bda6fcec859973ab60c1b1c20f3e3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Fat in infants – Facts & implications
Commentary
Each year, around 41 million people die due to non-communicable diseases (NCDs), as per World Health Organization reports 1. Raised blood pressure, overweight/obesity, hyperglycaemia and hyperlipidaemia are the important metabolic risk factors which increase the risk of NCDs. The foetal origin hypothesis of Barker & Osmond 2 suggests that NCDs like coronary heart disease, type 2 diabetes mellitus and hypertension originate from the responses of a foetus to undernutrition, which can cause permanent changes in the structure and function of the body. According to this hypothesis, when the foetus is deprived of nutrition during crucial periods of development, it can resort to adaptive survival strategies, thus reorganizing the course of normal development. If the same individual is exposed to contrasting nutritional circumstances during his or her later life, these adaptations may become maladaptive. Intrauterine growth restriction or clinically abnormal thinness at birth strongly predicts the subsequent occurrence of hypertension, hyperlipidaemia, insulin resistance, type 2 diabetes and ischaemic heart disease 3. To decrease the incidence of NCDs, it is important to understand the intricacies of foetal nutrition and how malnutrition may alter physiology and metabolism. Based on the pathophysiologic findings, interventions can be initiated to decrease the damage.
It has been suggested that research related to the weight gain and catch-up growth of preterm and small for gestational age (SGA) infants will help in devising better nutritional strategies. Animal studies have shown rapid catch-up growth of adipose tissue with low prenatal protein and a high-fat, high-calorie postnatal diet. During the early postnatal days, SGA infants accumulate more fat than appropriate for gestational age (AGA) infants 4. SGA infants show a rapid increase in skinfold thickness before their catch-up growth in weight. They also show rapid increases in insulin-like growth factor-1 and lipoprotein lipase concentrations, indicating a rapid increase in neonatal fat deposition 5. Better management of intrauterine undernutrition and later neonatal growth is important for better future outcomes. Obesity is a key parameter for NCDs, and it would be prudent to assess parameters related to fat mass, especially during infancy.
In this context, the Chandigarh study by Kaur et al 6 on the growth pattern of skinfold thicknesses in term symmetric and asymmetric SGA infants gains importance. This study included a total of 200 full-term SGA (symmetric SGA: male 50, female 50; asymmetric SGA: male 50, female 50) and 100 AGA infants born consecutively. Infants with birth weight within the 10th to 90th percentile of intrauterine growth curves were considered AGA, while those weighing below the 10th percentile were considered SGA. The Ponderal Index (PI) was used to categorize infants into symmetric SGA (PI ≥2.2 g/cm3) and asymmetric SGA (PI <2.2 g/cm3). Triceps, subscapular, biceps, mid-axillary and anterior thigh skinfold thicknesses were measured with a Harpenden skinfold caliper at 1, 3, 6, 9 and 12 months. Care was taken to minimize observer bias and inter-observer variation. Mean and standard deviation were computed for the different skinfold thicknesses measured among male and female symmetric SGA, asymmetric SGA and AGA infants at each age level. Infants who dropped out were replaced with other age- and sex-matched infants. The attrition rate varied from two to 6.7 per cent. The authors observed rapid fat deposition during the first three months and a gradual reduction thereafter among SGA infants, whereas AGA infants continued to accumulate fat until six months of age, with a reduction subsequently.
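A minimal sketch of the PI classification used above. The formula 100 × weight (g) / length (cm)³ is the conventional definition and is assumed here; the paper states only the 2.2 g/cm³ cut-off, and the example figures are invented.

```python
def ponderal_index(weight_g, length_cm):
    """Ponderal Index in g/cm^3, assuming the conventional definition
    100 * weight (g) / length (cm)^3; the paper states only the cut-off."""
    return 100.0 * weight_g / length_cm ** 3

def classify_sga(weight_g, length_cm):
    """Apply the study's cut-off: PI >= 2.2 g/cm^3 -> symmetric SGA."""
    pi = ponderal_index(weight_g, length_cm)
    return "symmetric SGA" if pi >= 2.2 else "asymmetric SGA"

# Illustrative: a 2200 g infant of length 47 cm has PI ~ 2.12 -> asymmetric SGA.
print(classify_sga(2200.0, 47.0))
```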
Some of the drawbacks of this study were: (i) the infants were recruited during 2006-2008, and it would have been interesting if they had been followed up till adolescence, since obesity becomes a major issue during this period; (ii) feeding patterns and educational and socio-economic levels might also have changed during this period, although most of the infants were breastfed till five months of age; and (iii) the authors did not look at blood levels of insulin-like growth factor-1 and lipoprotein lipase, which are associated with fat metabolism. Simultaneous measurement of hormones, or total body fat estimation by magnetic resonance imaging or dual-energy X-ray absorptiometry, would have added value to the study. Future longitudinal studies may be required to confirm the hypothesis that these SGA infants are actually at risk for metabolic disorders later in life.
As one considers the quantity and distribution of fat among infants, it would be worthwhile to analyze the fat phenotypes and their physiological aspects. Adipose tissue is not a mere energy store; it is a potential target for intervention in several metabolic pathologies. Adipocytes and their precursors act as key metabolic regulators through their ability to integrate different systemic stimuli, responding with specific endocrine secretion and modulating the energy balance 7. White adipose tissue, characterized by the accumulation of triglycerides, is usually affected by malfunctions related to metabolic pathways. Brown adipose tissue (BAT) has a thermogenic function and is characterized by dense vascularity and sympathetic innervation. It consists of adipocytes filled with small lipid droplets and mitochondria specialized in dissipating energy derived from fatty acid oxidation. Apart from thermoregulation, BAT has also been demonstrated to act as an endocrine organ characterized by a specific brown adipokine secretion. Stimulating BAT content and activity may therefore represent an attractive target for the treatment of obesity and metabolic disorders [8][9][10]. Besides classical brown adipocytes, an additional type of uncoupling protein-1-expressing adipocytes with thermogenic properties has also been characterized [11][12][13]. These cells appear postnatally in white adipose depots through an adipogenic process called browning, following specific inductive stimuli, and have been named inducible beige/brite adipocytes 14. Classical BAT, developing from the same dermomyotome lineage as skeletal muscle, is confined to a discrete anatomic distribution (interscapular and perirenal) during neonatal life and regresses dramatically in adults. It is still not clear whether classical brown and beige adipose cells play different roles in controlling metabolic processes. Describing the molecular signatures and functional properties of these two adipocyte phenotypes, and documenting the differences in their adipogenic processes, are important areas of research, and the findings will provide useful inputs for the development of effective metabolic therapeutic strategies in the future 7.
"year": 2021,
"sha1": "170457a4f86fcd3f15edace32e17347fe48eab72",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "a69baa7d9522e335ebf3184f1b3045395807ef38",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Viscosity scaling of fingering instability in finite slices with Korteweg stress
We perform linear stability analyses (LSA) and direct numerical simulations (DNS) to investigate the influence of the dynamic viscosity on the viscous fingering (VF) instability in miscible slices. By selecting the characteristic scales appropriately, we show the importance of the magnitude of the dynamic viscosity of the individual fluids for VF in a miscible slice in the context of a transient interfacial tension. Further, we confirm this result for immiscible fluids and demonstrate the similarities between VF in immiscible slices and in miscible slices with transient interfacial tension. In a more general setting, the findings of this letter will be very useful for multiphase viscous flows in which the momentum balance equation contains an additional stress term independent of the dynamic viscosity.
Displacement processes through porous rocks and the mixing of two miscible fluids are active areas of research, with several industrial and environmental applications, such as enhanced oil recovery, hydrology and filtration, and carbon capture and storage. VF, a hydrodynamic instability that occurs in both immiscible and miscible fluids when a more viscous fluid is displaced by a less viscous one, is inherent in such flow configurations [1][2][3]. In immiscible fluids, the surface tension force at the interface acts against the instability [4]. On the other hand, in miscible fluids, where a thermodynamically stable interface does not exist, a transition zone relaxes with time due to diffusion and acts against the finger growth. Experiments [5,6] reveal that, when diffusion is slow, a steep gradient of density, concentration or temperature between the underlying fluids gives rise to a weak transient interfacial tension that mimics the surface tension effect. This was first discussed by Korteweg in 1901 [7], who introduced an additional stress term, known as the Korteweg stress, in the equation of motion. The existence of the Korteweg stress or transient surface tension is also observed in experiments on colloidal suspensions [8] and in the binary liquid system of isobutyric acid and water [9].
Chen and Wang [10] analyzed the influence of VF instability on the spreading of a localized fluid slice having higher mobility than the surrounding fluid. On the other hand, De Wit et al. [11] studied the same problem in the context of separation in a chromatographic column when the viscosity of the sample is higher than that of the solvent. Mishra et al. [12] have shown that the onset of VF instability and the subsequent finger pattern near the onset are identical for both the less and the more viscous slice. The influence of the Korteweg stresses on VF instability was investigated theoretically by Joseph and his co-workers [13,14] and by Chen et al. [15]. However, to the best of the authors' knowledge, the influence of such stresses on the nonlinear VF instability of more and less viscous miscible slices in a Hele-Shaw cell has never been addressed adequately. In particular, this letter addresses the question: what is the influence of the Korteweg stress, which describes volume forces arising from nonlocal molecular interactions, on the VF at the rear and frontal interfaces of a localized slice? This classical complex pattern dynamics has been investigated through highly accurate direct numerical simulations based on a Fourier-spectral method [11]. It is proved theoretically that an appropriate choice of the characteristic dynamic viscosity of the underlying fluids results in an identical onset of the fingering instability, and the subsequent finger patterns are also identical for both more and less viscous slices in the presence of the Korteweg stresses. The DNS results are found to be in excellent agreement with the corresponding LSA. Also, the similarities between immiscible slices and miscible ones with transient interfacial tension have been established through LSA, which affirms the classical nature of this study.
I. MATHEMATICAL FORMULATION AND NONLINEAR SIMULATIONS
Consider a uniform rectilinear displacement of a finite fluid slice of viscosity µ2 by another fluid of viscosity µ1 in a 2D porous medium or a Hele-Shaw cell, as shown in fig. 1. The frontal interface becomes unstable if µ1 < µ2; otherwise, for µ1 > µ2, it is the rear interface that features the fingering instability. The viscosity of the fluids depends on the solute concentration c, i.e. µ = µ(c). The fluids are assumed to be incompressible and neutrally buoyant.
With the additional condition of slow diffusion, the above-mentioned flow problem can be described in terms of the Darcy-Korteweg equations [15,17] coupled with a convection-diffusion equation for the mass conservation of the solute concentration. For the dimensionless formulation of the equations, the diffusive length and time scales, D/U and D/U², are used as the respective characteristic scales. The characteristic pressure, velocity, concentration and viscosity are taken to be µ1D/κ, U, c2 and µ1, respectively. Here κ is the constant permeability of the homogeneous porous medium. For simplicity we have assumed a constant isotropic diffusion of the solute concentration, characterized by the diffusion coefficient D. The dimensionless equations in a Lagrangian frame of reference moving with the speed U are written as

∇·u = 0,  ∇p = −µ(c)u + δ ∇·(∇c ⊗ ∇c),  (1)
∂c/∂t + u·∇c = ∇²c.  (2)

Here u is the gap-averaged velocity having longitudinal and transverse components u and v, respectively, p is the dynamic pressure, δ = δ̂κU²c2²/µ1D³ (< 0) is the dimensionless Korteweg stress constant [14,15] and the operator ∇ ≡ î ∂/∂x + ĵ ∂/∂y. The governing equations (1)-(2) are associated with the following boundary conditions: at the longitudinal boundaries, u = (0, 0), ∂c/∂x = 0, x → ±∞; and at the transverse boundaries, ∂v/∂y = 0 (representing constant pressure), ∂c/∂y = 0, ∀x, in the Lagrangian frame of reference (shown in fig. 1). The initial velocity is taken as u = (0, 0), while the initial distribution of the solute concentration is c = 1 inside the finite slice and c = 0 outside it.
The relationship between the dynamic viscosity of the underlying fluids and the solute concentration is assumed to be of Arrhenius type, µ(c) = e^{Rc}, where R = ln(µ2/µ1) is the log-mobility ratio [1,11]. Hence, the displacement of a less (more) viscous slice by a more (less) viscous ambient fluid is represented by R < 0 (R > 0).
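As a rough illustration of the Fourier pseudo-spectral machinery behind such DNS, the sketch below advances only the diffusive part of the convection-diffusion equation for a finite slice on a periodic domain and evaluates the Arrhenius viscosity on the relaxed profile. The grid size, time step, slice width and R are arbitrary illustrative choices, not the values used in the letter:

```python
import numpy as np

# Illustrative grid and parameters (not the values used in the letter)
nx, ny, lx, ly = 256, 128, 512.0, 256.0
dt, nsteps, R = 0.1, 1000, 3.0           # R = ln(mu2/mu1)

x = np.linspace(0.0, lx, nx, endpoint=False)
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=lx / nx)
ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=ly / ny)
K2 = kx[:, None] ** 2 + ky[None, :] ** 2

# Initial condition: c = 1 inside the finite slice, 0 outside
c = np.zeros((nx, ny))
c[(x > 0.4 * lx) & (x < 0.6 * lx), :] = 1.0

# Advance only the diffusive part, dc/dt = Lap(c), exactly in Fourier space
c_hat = np.fft.fft2(c) * np.exp(-K2 * dt * nsteps)
c = np.real(np.fft.ifft2(c_hat))

mu = np.exp(R * c)                       # Arrhenius-type viscosity profile
print("c range:", c.min(), c.max(), "| viscosity range:", mu.min(), mu.max())
```

A full DNS would additionally couple the velocity field through eq. (1) at each step; the exact exponential treatment of diffusion shown here is the standard building block of such spectral schemes.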
II. RESULTS AND DISCUSSION
In the absence of the Korteweg stress, the VF dynamics for both R > 0 and R < 0 are the same until the interaction between the two interfaces [12]. Our aim in this letter is to examine how this symmetry is affected by the Korteweg stress; with it, differences appear between R > 0 and R < 0. Moreover, the wavelength of the unstable modes is larger for R < 0 than for R > 0. These differences between R > 0 and R < 0 persist in the long-time behavior of the fingering dynamics of the slices.
It is natural to ask what causes such differences in the dynamics in the presence of the Korteweg stress. Conventionally, the µ-c relation µ = e^{Rc} keeps µ2/µ1 unchanged for both more and less viscous finite slices, but the dimensionless viscosity of the less (more) viscous fluid changes with the sign of R. We notice that in the absence of transient interfacial tension eq. (3) is free from µ(c) (see eq. (9) in [12]). Hence, the onset of fingering and the subsequent finger pattern were recomputed for R = −3 with the Korteweg stress constant rescaled accordingly, which depicts that the onset of instability and the finger pattern, until the interaction of the unstable and the stable interfaces, are identical to those for R = 3, δ = −10^4 (fig. 2(a)).
This suggests that, in order to compare the numerical results of two similar flow configurations, one must choose the dimensionless parameters in such a way that they correspond to the same dimensional values. Subsequently, we ask whether there exists a suitable characteristic scale for the dynamic viscosity which automatically takes care of this fact.
In particular, we choose the smaller viscosity µl as the characteristic viscosity and µlD/κ as the characteristic pressure (see table ??). Hence, the dimensionless form of the µ-c relation becomes

µ(c) = e^{Rc}, for the more viscous slice,  (5)
µ(c) = e^{R(1−c)}, for the less viscous slice,  (6)

where now R > 0 in both cases. With this modification, the initial dimensionless dynamic viscosity of the ambient fluid (finite slice) becomes 1 and e^R (e^R and 1), respectively (see table ??). The corresponding simulation depicts that the onset of VF and the finger pattern are the same as those for a more viscous slice (fig. 2(a)), until the interaction of the fingers with the respective stable interface.
The onset of the fingering instability and the interaction between the stable and unstable interfaces can be quantified from the temporal evolution of the interfacial length [12,18],

I(t) = ∫∫ √[(∂c/∂x)² + (∂c/∂y)²] dx dy.

As the fingers interact with the stable interface earlier for R = 3, δ = −10^4 than for R = −3, δ = −10^4, the degree of mixing, χ(t), is higher in the former than in the latter (see fig. 5(a)). In fig. 5(b), χ(t) is shown for three cases, among them (i) µ = e^{3c}, δ = −10^4, …
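Both diagnostics can be computed directly from a discretized concentration field. A minimal sketch, assuming a uniform grid, the |∇c|-integral definition of I(t) above, and a variance-based degree of mixing (the letter's exact normalization of χ may differ):

```python
import numpy as np

def interfacial_length(c, dx, dy):
    """I(t): area integral of |grad c| over the 2D domain."""
    cx, cy = np.gradient(c, dx, dy)          # derivatives along axis 0 and 1
    return np.sum(np.sqrt(cx**2 + cy**2)) * dx * dy

def degree_of_mixing(c):
    """chi(t) = 1 - var(c)/var_max, with var_max = cbar*(1 - cbar)
    for a field that is initially 0/1 (fully segregated)."""
    cbar = c.mean()
    var_max = cbar * (1.0 - cbar)
    return 1.0 - c.var() / var_max if var_max > 0 else 1.0

# usage: track both diagnostics over the frames of a simulation
# I_t   = [interfacial_length(frame, dx, dy) for frame in frames]
# chi_t = [degree_of_mixing(frame) for frame in frames]
```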
III. LINEAR STABILITY ANALYSIS
Here we discuss the LSA of a miscible slice in the presence of the Korteweg stresses with the viscosity relations given by eqs. (5)-(6), and compare the results with those of the DNS.
Finally, the obtained LSA results are compared with an LSA of an immiscible slice, which confirms that the Korteweg stress and the surface tension have identical effects on the instability.
A. Miscible slice
The initial-boundary value problem described above possesses a self-similar, diffusively decaying base-state solution in the similarity-transformed (ξ, t)-domain [16], with ξ = x/√t. Here, C0(ξ) corresponds to the self-similar decay of a rectangular function of width l* = l/√t0, where t0 is the frozen diffusive time. The perturbations are expanded as (u′, c′)(ξ, y, t) = (Φ(ξ), Ψ(ξ)) e^{iky + σ*(k,t0)t}. Here Ψ(ξ) and Φ(ξ) are the amplitudes of the perturbed concentration c′ and axial velocity u′, respectively, with k and σ*(k, t0) being the wave number and the growth rate of the perturbations [16]. Hence, the linear stability problem can be written as an eigenvalue problem (eqs. (7)-(8)), where Dⁿ ≡ dⁿ/dξⁿ, n ∈ ℕ. This system of coupled ordinary differential equations has been solved using a finite difference method [16] to determine the instantaneous growth rate of the perturbations in terms of the wave number k and the frozen diffusive time t0 (fig. 6(b)). Linearizing the stream-function form of eqs. (1)-(2) with the µ-c relation given by eqs. (5)-(6) and using a pseudo-spectral method, an initial value calculation (IVC) is performed. The growth rate of the perturbed concentration, measured through ∫0^{Ly}∫_A c′²(x, y, t) dx dy, obtained from the IVC is compared with that obtained from the DNS [20]. The temporal evolution of the growth rates from the IVC coincides with that obtained from the DNS for both µ = e^{3c} and µ = e^{3(1−c)} with δ = −10³ (fig. 7(a)). The numerical algorithm of the IVC will be discussed in detail elsewhere. (Fig. 7 legend: µl = µr = 1, µ = e^{−2}; µl = µr = 1, µ = e^{2}.)
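The numerical core of such a quasi-steady LSA is a generalized matrix eigenvalue problem: discretize the ξ-derivatives on a uniform grid and take the eigenvalue with the largest real part as the growth rate. A minimal sketch with a placeholder operator (the actual coefficients, which involve C0(ξ), µ(C0), R, δ and k as in eqs. (7)-(8), are omitted):

```python
import numpy as np
from scipy.linalg import eig

def d2_matrix(n, h):
    """Second-derivative matrix, central differences, Dirichlet ends."""
    main = -2.0 * np.ones(n)
    off = np.ones(n - 1)
    return (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

n, h, k = 200, 0.05, 0.5                 # grid size, spacing, wave number (illustrative)
D2 = d2_matrix(n, h)
I = np.eye(n)

# Placeholder linear operator L(k); in the actual LSA, L is assembled from
# the base state C0(xi), the viscosity mu(C0) and the Korteweg constant delta.
L = D2 - k**2 * I
sigma, modes = eig(L)                    # eigenvalues = instantaneous growth rates
print("largest growth rate:", sigma.real.max())
```

Sweeping k (and the frozen time t0) and recording the largest eigenvalue reproduces dispersion curves of the kind shown in fig. 6.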
B. Immiscible slice
In what follows, we present a comparison between an immiscible slice and a miscible slice with the Korteweg stresses. Chen et al. [21] have shown that in a rotating Hele-Shaw cell both the qualitative and quantitative features of the Korteweg stress in miscible fluids and of surface tension in immiscible fluids are identical. Not only was the stabilizing property of the Korteweg stresses confirmed, but the number of fingers was also measured to be the same for the miscible and immiscible fluids, both theoretically and experimentally. Our aim is to understand the relative importance of the viscous force to the surface tension force in a three-layer immiscible displacement, in which the growth rates of the perturbations at the two interfaces can be represented as in eq. (10) of [22]; here σ+ and σ− correspond to the larger and smaller growth rates, respectively. Furthermore, the onset of instability is identical for the two slices in the presence of the Korteweg stress, similar to the case when these stresses are absent [12]. Synergetic mixing with VF and alternating injection [23] will be the same with the present viscosity model irrespective of the choice of the viscosity of the fluid that fills the Hele-Shaw cell before the injection starts. A similar effect of the dynamic viscosity is also observed in immiscible fluids.
The findings of this letter will certainly help in understanding multiphase viscous flows with a different viscosity for each phase, or flows in which the viscosity depends non-monotonically on the solute concentration [24]; for instance, during the mixing of chemical components, pollutant contamination in aquifers, CO2 sequestration, etc. Miscible displacement of viscous fluids with Korteweg stress has important applications in chemistry [8,9]. In this letter we present a preliminary understanding of the relative importance of the Korteweg stress | 2015-02-27T10:11:46.000Z | 2015-02-27T00:00:00.000 | {
"year": 2015,
"sha1": "d25005d01a29a5b2a794bcba66c15430ee4b0e5b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1502.07851",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d25005d01a29a5b2a794bcba66c15430ee4b0e5b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
198194222 | pes2o/s2orc | v3-fos-license | Pathogen transmission risk by opportunistic gulls moving across human landscapes
Wildlife that exploits human-made habitats hosts and spreads bacterial pathogens. This shapes the epidemiology of infectious diseases and facilitates pathogen spill-over between wildlife and humans. This is a global problem, yet little is known about the dissemination potential of pathogen-infected animals. By combining molecular pathogen diagnosis with GPS tracking of pathogen-infected gulls, we show how this knowledge gap could be filled at regional scales. Specifically, we generated pathogen risk maps of Salmonella, Campylobacter and Chlamydia based on the spatial movements of pathogen-infected yellow-legged gulls (Larus michahellis) equipped with GPS recorders. Also, by crossing this spatial information with habitat information, we identified critical habitats for the potential transmission of these bacteria in southern Europe. The use of human-made habitats by infected gulls could increase the potential risk of direct and indirect bidirectional transmission of pathogens between humans and wildlife. Our findings show that pathogen-infected wildlife equipped with GPS recorders can provide accurate information on the spatial spread risk of zoonotic bacteria. Integration of GPS tracking with classical epidemiological approaches may help to improve zoonosis surveillance and control programs.
Results and Discussion
Cloacal swabs revealed that among the 19 GPS-tracked individuals, 37% (n = 5), 31% (n = 5) and 25% (n = 4) were positive for Salmonella, Campylobacter and Chlamydia, respectively, with no co-infections recorded. Previous studies found similar infection rates 18,22 . All movements of the infected gulls were recorded throughout their estimated infection period [30 days [23][24][25] ]. Pathogen risk maps and critical habitats were modeled by overlapping gull resting and foraging positions with accurate high-resolution land cover information 26,27 . The 27,798 recorded GPS locations revealed the greatest bacterial spread risk within 5 km of the breeding colony (Figs 1 and S1 in Supplementary Material), with no significant differences in the type of habitat used between Salmonella-infected, Campylobacter-infected and Chlamydia-infected individuals (Pseudo-F = 0.78, p = 0.67). The spatial extent of the risk varied between infected gulls (Fig. S1 in Supplementary Material), from areas close to the breeding colony to some infected gulls crossing over from Spain to Portugal, stressing the importance of international health regulations and cooperation in disease control 28 .
Spread-risk areas overlapped with human-related habitats such as water ponds, fishing ports and touristic beaches (Figs 2; S2 in Supplementary Material), increasing the risk of direct and indirect disease transmission to and from humans 10,14 . Notably, the use of water reservoirs (built for human use) by infected gulls is likely to lead to the contamination of drinking, recreational and irrigation water sources 29 . For this reason, it is important to ensure correct water treatment in these sensitive habitats to reduce any potential risk to public health. Similarly, the extensive use of fishing ports and fish farms as feeding areas by yellow-legged gulls could point to a serious infection risk for seafood 30 . Moreover, the use of beaches by infected gulls (Fig. 2) exposes tens of thousands of tourists using these recreational habitats to pathogen spillover 14 . The utilization of wetlands or estuaries by infected gulls also enhances the probability of pathogen transmission to other wildlife species 31 . Garbage dumps are also assumed to facilitate the infection of gulls by pathogens present in human organic garbage, as well as cross-species and cross-individual transmission 13,18 . Yet this habitat was seldom used by gulls in our study, due to its low availability in the area used by the tracked gulls (there are only two dumps in the area surrounding the breeding colony 27 ). If garbage dumps are not the main pathogen source, bacterial infection of GPS-tracked gulls may be associated with the use of other decomposing food sources, such as stranded marine animals (notably mammals) that could harbor pathogenic bacteria, human organic refuse found on recreational beaches or in urban parks, or urban prey such as pigeons and rats 32,33 . Our results strongly indicate the need for integrated waste and pest control at a landscape scale.
Overall, our study reveals that pathogen-infected gulls equipped with GPS recorders can provide accurate maps of zoonotic spread risk, from the local to the regional and international scales. In some circumstances, this approach could be scaled up to build an international network, using gulls and other potential vectors of animal pathogens 34 , to achieve large-scale zoonotic surveillance and to identify and implement prevention measures across potentially sensitive habitats. Because this may trigger public concern, we recommend that these measures be coupled with environmental mediation work, to ensure that wildlife is not perceived as generally harmful to humans 35 .
Material and Methods
Fieldwork and tracking procedures. Fieldwork was carried out at the natural Biosphere reserve of Marismas de Odiel (37°13′N, 6°59′W; southwestern Iberian Peninsula; Fig. 1) in a colony of 250-300 breeding pairs of yellow-legged gulls. We deployed high-resolution GPS trackers recording the positions of individuals at 5-minute intervals [UvA-BiTS loggers 36 ] on 19 breeding gulls more than 4 years of age during their breeding period (May 2015). UvA-BiTS loggers recharge themselves using solar energy, allowing the movements of birds to be tracked continuously over several years 36 . The age of each individual was determined from plumage characteristics. Incubating birds were caught at the nest using a walk-in wire mesh trap, and GPS trackers were attached using a wing harness fixed with a reef knot in the tracheal pit, an attachment method recommended for large gulls 37 . The GPS tracker and harness weighed less than 1.8% of the body mass of the birds [16 g for the GPS and harness; mean ± standard deviation = 1072 ± 110 g for the tracked gulls] 26 . GPS data were automatically downloaded remotely from the devices to a field-based laptop when the birds were present at the breeding colony, where a network of 3 antennas provided complete coverage of the breeding area 36 . GPS data were parsed into the central database and were immediately available in the UvA-BiTS Virtual Lab (www.UvA-BiTS.nl) for visualization and data exploration, therefore providing tracking information in real time 36 . To avoid potential biases associated with differences in the number of GPS fixes between individuals, tracking data were analyzed only for the period when all individuals were equipped. We focused our analyses on the 30 days following deployment (from 14 May to 15 June 2015) to cover the potential infection period of each tracked pathogen [23][24][25] .
All fieldwork was approved by the Ethics Committee of CSIC (Ref: 28-04-15-237), in accordance with Spanish and EU legislation on the protection of animals used for scientific purposes. Pathogen determination. Cloacal swabs from each GPS-tracked gull were collected, placed in PBS medium (Deltalab, Barcelona, Spain), and stored frozen at −80 °C. The detection of each pathogen was performed in the Ecophysiology Laboratory of the Estación Biológica de Doñana CSIC (http://ebd.csic.es/lef/web/) using real-time PCR assays for each bacterial genus (Salmonella, Campylobacter and Chlamydia) following established protocols [38][39][40] . Before each PCR assay, DNA was extracted from each cloacal swab using a commercial DNA purification kit (Promega Maxwell ® ). Ct values of 40 were used as cut-off points. As we used non-specific PCR primers, we only detected the genus of each pathogen. We selected these three bacteria because they are leading causes of zoonotic enteric diseases (Salmonella and Campylobacter) and respiratory diseases (Chlamydia) in developed and developing countries, affecting humans, wildlife and domestic animals 20,21 . The primers for Salmonella were able to detect 99.4% of 630 strains belonging to over 100 serovars 40 . The primers for Campylobacter successfully amplify C. jejuni and C. coli, but not other Campylobacter species. The primers for Chlamydia and Chlamydophila successfully detect the nine known species of these genera. However, as we only evaluated the presence of these bacteria at the genus level, we do not know whether all individuals infected with Salmonella, Campylobacter or Chlamydia were really infected with pathogens that can also infect humans. Potential pathogen risk maps and habitat use. We only considered locations recorded outside the gull breeding colony (using a radius of 500 m around each nest; see 26 ). Further, we assumed that gulls mainly shed pathogens into the environment through their feces. Consequently, a high risk of infection was assumed to occur within feeding and resting areas. Therefore, we removed all locations associated with gull travelling behavior [speed > 4 km·h −1 ] 26 and those locations at sea. Habitat use was assigned to each gull location by overlapping locations with land cover information. High-resolution information on land cover was obtained from the program SIOSE (Soil Information System of Spain, Junta de Andalucía, last update 2013) and geographical references of waste dumps from the Spatial Reference Databases of Andalucía (DERA, last update 21/02/2014). This habitat classification was subsequently reviewed visually using the most recent satellite images offered by Google Earth V 7.1.2.2041 at a 0.5 m spatial resolution. All GPS foraging locations were finally classified into eleven categories: estuary, wetland, touristic beach, natural beach, fishing port, salt mine, fish farm, water pond, agricultural area, urban area and garbage dump. Pathogen risk maps were constructed on the basis of the current distribution of GPS-tracked gulls infected by each pathogen. The transmission risk was estimated from the number of locations of infected gulls collected on a spatial grid of 750 × 750 m over the entire study area.
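Constructing the risk map amounts to filtering out travelling fixes and binning the remaining GPS locations of infected birds onto the 750 × 750 m grid. A minimal sketch, with hypothetical column names (x and y in projected metric coordinates, speed in km/h — the study's actual data schema is not specified):

```python
import numpy as np
import pandas as pd

def risk_grid(fixes: pd.DataFrame, cell: float = 750.0) -> pd.Series:
    """Count stationary (resting/foraging) fixes of infected gulls per grid cell.

    Travelling locations (speed > 4 km/h) are removed, as in the study;
    higher counts indicate higher potential transmission risk.
    """
    still = fixes[fixes["speed_kmh"] <= 4.0]
    ix = np.floor(still["x"] / cell).astype(int)   # grid column index
    iy = np.floor(still["y"] / cell).astype(int)   # grid row index
    return still.groupby([ix, iy]).size().rename("n_locations")

# usage (hypothetical data frame of one pathogen's infected individuals):
# counts = risk_grid(salmonella_fixes)
```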
Differences in habitat use (%) between Salmonella-infected, Campylobacter-infected and Chlamydia-infected yellow-legged gulls were tested using one-way semiparametric permutational multivariate analysis of variance (PERMANOVA) on the Euclidean distance matrix 41 . PERMANOVA allows the analysis of statistical designs without the constraints of multivariate normality and homoscedasticity, and with a greater number of variables than sampling units. The method calculates a pseudo-F statistic directly analogous to the traditional F statistic of ANOVA, using permutation procedures to obtain P-values for each term in the model 41 .
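The permutation logic behind PERMANOVA can be sketched compactly: compute a pseudo-F from the squared Euclidean distance matrix, then rebuild its null distribution by shuffling the group labels. A bare-bones illustration (dedicated implementations, e.g. in scikit-bio or vegan, handle the full machinery):

```python
import numpy as np

def pseudo_f(d2, labels):
    """Pseudo-F (Anderson 2001) from a squared-distance matrix d2 and labels."""
    labels = np.asarray(labels)
    n, groups = len(labels), np.unique(labels)
    ss_total = d2[np.triu_indices(n, 1)].sum() / n
    ss_within = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = d2[np.ix_(idx, idx)]
        ss_within += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    a = len(groups)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(d2, labels, n_perm=9999, seed=0):
    """Permutation P-value: shuffle labels, recompute pseudo-F each time."""
    rng = np.random.default_rng(seed)
    f_obs = pseudo_f(d2, labels)
    hits = sum(pseudo_f(d2, rng.permutation(labels)) >= f_obs
               for _ in range(n_perm))
    return f_obs, (hits + 1) / (n_perm + 1)
```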
Data Availability
All data are available in a central PostgreSQL database at UvA-BiTS (http://www.uva-bits.nl/virtual-lab). | 2019-07-25T13:03:56.122Z | 2019-07-23T00:00:00.000 | {
"year": 2019,
"sha1": "0fac603c1fdb3bdf1a322eb7c9b490bc738bc693",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-46326-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0fac603c1fdb3bdf1a322eb7c9b490bc738bc693",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
245791679 | pes2o/s2orc | v3-fos-license | An Evaluation Framework Combining Real-Time Transmission Electron Microscopy and Integrated Machine Learning-Particle Filter Estimation Enables Detection and Quantitative Tracking of Nanoscale Defects During Plastic Deformation Processes
Observation of dynamic processes by transmission electron microscopy (TEM) is an attractive technique for experimentally analyzing materials' nanoscale phenomena and understanding microstructure-property relationships at the nanoscale. Even though the spatial and temporal resolutions of real-time TEM have increased significantly, it is still difficult to say that researchers can quantitatively evaluate the dynamic behavior of defects. Images in a TEM video are a two-dimensional projection of three-dimensional phenomena, so missing information must exist that makes a uniquely accurate interpretation of the images challenging. Therefore, even though the images constitute high-dimensional data that can be clustered and compressed to two dimensions, conventional statistical methods for analyzing images may not be powerful enough to track nanoscale behavior while removing the various artifacts associated with the experiment; automated and unbiased processing tools for such big data are becoming mission-critical to discover knowledge about unforeseen behavior. We have developed a quantitative image analysis framework to resolve these problems, in which machine learning and particle filter estimation are uniquely combined. The quantitative and automated measurement of the dislocation velocity in an Fe-31Mn-3Al-3Si austenitic steel subjected to tensile deformation was performed to validate the framework, and an intermittent motion of the dislocations was quantitatively analyzed. The framework successfully classifies, identifies and tracks nanoscale objects; these tasks cannot be accurately implemented by conventional mean-path-based analysis.
Introduction
In recent years, researchers have been trying to implement machine learning (ML) based approaches in a wide range of scientific fields, and this has attracted considerable attention [1]. ML has demonstrated its capability to implement semantic segmentation, which classifies objects in an image pixel by pixel, and has been applied to practical applications, for example automated driving technology and the medical field.

Individual objects must be separately tracked throughout the TEM video while being distinguished from others. Especially in the case that the objects repeat unexpected behaviors, such as sudden move-and-stop events and irregular changes of their own shape, tracking the objects becomes highly challenging. The unexpected behaviors are often caused by the atomic- to nanoscale local environment, which is closely related to the inhomogeneity of the material. Thus, developing a model to predict such behaviors for data analysis would be nearly impossible. In this study, we developed an ML-based framework for quantitative analysis of the dynamic behavior of nanoscale objects, based on detecting the objects in a video using machine learning and tracking the detected objects with particle filters. We confirmed that if a video presents a single experiment, the number of data points is sufficient for machine learning to detect dislocations in that video. We then applied the developed ML-based framework to a video in which dislocation gliding under applied external tensile stresses in a metal was observed using TEM. By detecting and tracking dislocations in the TEM video, singly and as a whole, using the framework, we were able to calculate the time history of the dislocation velocity and quantitatively analyze its behavior. In particular, we employed the particle filter in the quantitative analysis part of the framework. Thanks to the probabilistic prediction of the particle filter, we successfully captured the unexpected behaviors of individual dislocations.
Results
When a metallic material is plastically deformed by an applied stress, slip deformation occurs along a specific crystal direction (the slip direction) on certain crystal planes (the slip planes). Slip deformation is carried by the movement of dislocations, indicated by the "⊥" symbol, on the slip plane, as schematically shown in the Supplemental Material. Differentiating the slip strain with respect to time t, the strain rate of the crystal γ̇ can be written in terms of the average migration velocity of the dislocations v̄:

γ̇ = ρ b v̄,  (1)

where ρ is the density of mobile dislocations and b is the magnitude of the Burgers vector.

For the dislocation velocity measurement by TEM observation, Johnston et al. reported one of the first successful cases of measuring the average dislocation velocity [25]. They measured the average velocity of the dislocations by dividing the displacement of the dislocations by the time during which the stress was applied. However, since actual dislocation motion is intermittent, a continuous velocity measurement providing the chronological changes is necessary to understand the intrinsic dislocation behavior. Therefore, the overarching goal of the framework development is to assess the traverse speed of nanoscale objects such as dislocations without compromising the original data's temporal and spatial resolutions. In this study, we attempt to achieve 10 nm/s-order resolution by applying a U-Net-based ML and particle-filter integrated method to in situ TEM deformation videos.

The actual validation of the framework proposed in this study was implemented by the steps described in the rest of this section. The experimental data, TEM videos, were taken during in situ TEM deformation experiments, in which a high-manganese austenitic steel (Fe-31Mn-3Al-3Si) was subjected to a forced displacement with a tensile rate of 100 nm/s, as shown in Fig. 1(a). In Fig. 1(b), a group of arc-like dislocation lines moved to the left. Since TEM images represent a 2D projection of a 3D object, the real-space geometry of the dislocations in the crystalline grain needs to be retrieved to evaluate the stress condition in the observed area. The crystal orientation of the material in the movie is shown in Fig. 1(b). In this particular case, the dislocations observed in the movie are moving on the ABC plane, and the incident electron beam is transmitted in the direction of CD in Fig. 1(b). Table 1 summarizes the Schmid factors for the ABC and ABD planes, which indicate the contribution fraction of the load stress to the resolved shear force acting on the slip system.

There are two advantages to using ML for this task. The first is that the detection process is efficient and objective. ML detects dislocations in every frame of the video after learning from training data, a set of correct images created by the operators. The detection of dislocations in the video is then conducted with the same criteria as those of the correct image set. The second is that ML is more robust than numerical filtering: ML is able to detect dislocations without being misled by non-dislocation lines in a TEM image. For these reasons, we thought it best to use an ML method for this task. We used the early frames as training data and frames 101-170 as test data. Fig. 2(b) shows the output from U-Net. We were able to obtain the same output as the correct image for the test data.

In the last step, in order to track the same dislocation throughout the video, we used a particle filter, which is one of the object tracking methods for videos. Other methods, such as optical flow, are commonly used for object tracking. Optical flow, however, cannot track dislocations accurately: it cannot track points that move quickly, and it is difficult to specify feature points on a line whose shape changes, whereas the movements of dislocations may be unpredictable and their shapes may change.
In this study, we thought that a particle filter approach [33][34] is more suitable 188 for tracking dislocations. Since particle filter tracks objects using probability 189 distributions, it can retake and keep tracking individual dislocations even if the exact 190 location of more than one dislocations was temporary lost due to a sudden and unforeseen 191 movement. Particle filter is a better fit for this case as the dislocations' shape change likely 192 occur and the movement of dislocations may be unpredictable. 193 For the use of particle filter, it is necessary to identify individual dislocations in each 194 of video frames. We adopted a method to identify dislocations based on the spatial 195 continuity of pixels belonging to the dislocations. tracking dislocations that meet these conditions. 206 In here, the results of successful tracking four targeted dislocations are shown. The 207 dislocations (i)-(iv) are shown in Fig.3 (a), and the tracking of dislocation (i) is shown in 208 Fig.3 (b). In Fig.3 (b), the blue dots represent the particles distributed on the field, the 209 red dots represent the center of gravity of the blue dots, and the green dots represent the 210 midpoints of the dislocations closest to the red dots, i.e., the coordinates of the 211 dislocations being tracked. We confirmed that the green point stayed on a single 212 dislocation across frames. 213 We will show the results of the dislocation velocities measured by the above tools. Schmid factor between that direction and [1 ̅ 2 ̅ 2] is 0.136, which is the largest. Then we 236 calculated the strain rate in the tensile direction to be 43.5 µ/s. The strain rate in the 237 tensile direction at the experimental conditions is 100 µ/s, which is a reasonable value 238 considering the wide range of dislocation density values. 239 In Fig.4, we can observe intermittent dislocation motion. The reason for this may 240 be that the dislocations are stationary due to localized crystal defects in the sample, 241 which inhibit their motion, and they move when they gain an energy to overcome the 242 obstacles and advance due to external stress. It is also possible that the elastic field from
Discussion
In this study, we developed a framework to detect dislocations in videos captured using TEM with U-Net and to measure their migration velocity using particle filters, taking their intermittent motion and shape changes into account. The dislocation velocities were measured and confirmed to be theoretically valid, and their intermittent motions could be quantitatively evaluated. This method has the potential to be applied not only to dislocation videos like the one used in this study, but also to videos of in situ TEM experiments (dynamic observation) on other phenomena. For example, immediate applications would be to dynamically measure the velocity and analyze the shape changes of dislocations in various dislocation reactions, including but not limited to the Orowan mechanism (particle dispersion strengthening), grain boundary migration, and deformation twinning behavior induced by external stimuli such as magnetic, thermal or stress fields. It is also possible to track the velocity, motion and shape change of nanoparticles during an oriented-attachment reaction, where the dynamics of the particles, i.e. their translational and rotational accelerations, is critical to gaining a mechanistic understanding (e.g., DOI:
Method
The configuration of the dislocation velocity measurement tool developed in this study is shown in Fig. 5. The developed tool is capable of automatically measuring the velocity of each dislocation in the TEM video.
Optical Flow
In experimental TEM videos, we cannot accurately measure the velocity of dislocation movement because the field of view (FOV) moves. Therefore, we create a static coordinate system by estimating the FOV motion with optical flow. Under the brightness constancy assumption, a point of intensity I(x, y, t) satisfies

I(x, y, t) = I(x + Δx, y + Δy, t + Δt).  (2)

Considering that the motion of the object is small, the Taylor expansion of the right-hand side yields

I(x + Δx, y + Δy, t + Δt) = I(x, y, t) + (∂I/∂x)Δx + (∂I/∂y)Δy + (∂I/∂t)Δt.  (3)
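A minimal sketch of this stabilization step, using the dense Farneback flow available in OpenCV (the parameter values and the median-based global drift estimate are illustrative assumptions, not the study's exact procedure):

```python
import cv2
import numpy as np

def global_drift(prev_gray, next_gray):
    """Estimate the FOV drift between two 8-bit grayscale frames.

    The median of the dense optical flow field is a robust estimate of the
    global (stage/FOV) motion; subtracting its cumulative sum from observed
    positions yields a static coordinate system.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 25, 3, 5, 1.2, 0)
    return np.median(flow[..., 0]), np.median(flow[..., 1])

# usage: accumulate per-frame drift, then shift dislocation coordinates
# dx, dy = global_drift(frame_t, frame_t1)
# x_static, y_static = x_obs - cum_dx, y_obs - cum_dy
```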
Identification of each dislocation
In the binarized image, the dislocation pixels are distinguished from the background pixels, but the dislocations are not distinguished from each other. The particle filter needs to identify the dislocations in a frame because it needs a target to track. Therefore, we developed a program that searches around the dislocation pixels and identifies them as the same dislocation if they are spatially continuous, as shown in Fig. 7.
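This continuity search can be expressed compactly with standard image-labeling tools. A minimal sketch (the 8-connectivity and the size threshold are illustrative assumptions, not the study's exact implementation):

```python
import numpy as np
from scipy import ndimage

def identify_dislocations(binary_mask, min_pixels=20):
    """Label spatially continuous dislocation pixels as individual dislocations.

    8-connectivity keeps the diagonal pixels of a thin line joined; components
    smaller than `min_pixels` are treated as noise (illustrative threshold).
    """
    labels, n = ndimage.label(binary_mask, structure=np.ones((3, 3)))
    sizes = ndimage.sum(binary_mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_pixels) + 1
    return labels, keep   # label image and the IDs of retained dislocations
```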
Particle filter to track dislocation
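Before the algorithm is spelled out below, a bare-bones version of one predict-weight-resample cycle may help fix ideas. This is a hypothetical sketch: the likelihood here is only the binarized-image intensity at each particle, whereas the study additionally uses patch similarity with the previous frame, and its prediction exploits the one-directional glide of dislocations (the drift value below is arbitrary):

```python
import numpy as np

def particle_filter_step(particles, frame, rng, drift=(-2.0, 0.0), noise=3.0):
    """One cycle over an (N, 2) array of particle positions (x, y).

    Prediction pushes particles in the assumed gliding direction plus Gaussian
    diffusion; the likelihood is the pixel value of the binarized frame at each
    particle, so particles sitting on a dislocation survive resampling.
    """
    n = len(particles)
    # predict (Procedure 2): deterministic drift plus random diffusion
    particles = particles + np.array(drift) + rng.normal(0.0, noise, (n, 2))
    # likelihood (Procedures 3-4): read the frame, clipped to the image
    xi = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, frame.shape[0] - 1)
    w = frame[yi, xi].astype(float) + 1e-6
    w /= w.sum()
    # weighted mean (Procedure 5) = current position estimate
    estimate = (particles * w[:, None]).sum(axis=0)
    # resample (Procedure 6) proportionally to the weights
    particles = particles[rng.choice(n, size=n, p=w)]
    return particles, estimate
```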
The particle filter [33][34] is a method for estimating the position of an object by distributing a large number of particles on the screen and using the prediction from the previous state together with the current observation information. The particle filter approximates the probability distribution of the tracked object over the entire state space by a large number of particles with state quantities and weights (likelihoods), which enables robust tracking against noise and environmental variations. The particle filter algorithm is as follows (see Fig. 8).

1. Scatter particles over the initial region of the target object.
2. Predict the next state of each particle from the previous state.
3. Obtain the information necessary for likelihood calculation for each particle.
4. Calculate the likelihood of each particle based on the particle information. The likelihood is computed from the brightness of the pixel where the particle is located, and the similarity between the image of the region around each particle and the image of the region around the dislocation in the previous frame.
5. Calculate the weighted average, with the likelihood of each particle as its weight.
6. Re-spread particles with a probability proportional to the likelihood of each particle.
7. Move to the next state, and repeat from procedure 2.

By performing the above processes in each frame, the particles are able to track the target object. When implementing a particle filter, it is important to design the prediction (Procedure 2) and the likelihood (Procedure 4) appropriately, based on information such as the motion and shape of the target object, in order to track it accurately. In this study, we used the information that dislocations moved only in one direction for the prediction. We also used the information that | 2022-01-07T16:09:01.292Z | 2022-01-05T00:00:00.000 | {
"year": 2022,
"sha1": "b8a5a76dd8d3efd9d258fcea3afe23d2d322b783",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1187114/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "ad1bf9f012fae5b241218661b5ec82d12ad16705",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": []
} |
244464910 | pes2o/s2orc | v3-fos-license | Wnt7b Inhibits Osteoclastogenesis via AKT Activation and Glucose Metabolic Rewiring
The imbalance between bone formation and bone resorption causes osteoporosis, which leads to severe bone fractures. It is known that increases in osteoclast numbers and activity are the main drivers of increased bone resorption. Although extensive studies have investigated the regulation of osteoclastogenesis of bone marrow macrophages (BMMs), new pharmacological avenues still need to be unveiled for clinical purposes. Wnt ligands have been widely demonstrated to stimulate bone formation; however, the inhibitory effect of the Wnt pathway on osteoclastogenesis is largely unknown. Here, we demonstrate that Wnt7b, a potent Wnt ligand that enhances bone formation and increases bone mass, also abolishes osteoclastogenesis in vitro. Importantly, enforced expression of Wnt7b in bone marrow macrophage lineage cells significantly disrupts osteoclast formation and activity, which leads to a dramatic increase in bone mass. Mechanistically, Wnt7b impacts the glucose metabolic process and AKT activation during osteoclastogenesis. Thus, we demonstrate that Wnt7b diminishes osteoclast formation, which will be beneficial for osteoporosis therapy in the future.
INTRODUCTION
Osteoporosis is an emerging global epidemic that severely increases the life burden of patients. It is well known that the imbalance between osteoblast-mediated bone formation and osteoclast-mediated bone resorption causes osteoporosis (Lei et al., 2006; Hernlund et al., 2013). Recently, increasing numbers of studies have focused on the regulation of osteoblasts and osteoclasts. Osteoclasts are the primary bone-resorbing cells and play an important role in bone homeostasis (Udagawa et al., 1990). Multinucleated giant cells are formed by the fusion of myeloid hematopoietic precursors and attach to bone surfaces to perform bone resorption (Yasuda et al., 1998). Multiple factors are involved in osteoclastogenesis and osteoclast activation. Indeed, hormones, cytokines, nutrients, and inflammatory factors can positively or negatively regulate osteoclast formation; most important is RANK ligand (RANKL), which drives osteoclast precursors to differentiate into TRAP-positive multinucleated cells via numerous signaling pathways (Teitelbaum and Ross, 2003). RANKL is secreted by surrounding osteoblast-lineage cells, which provide a suitable microenvironment for osteoclast differentiation (Yasuda et al., 1998; Teitelbaum and Ross, 2003; Liu and Zhang, 2015). Wnt signaling has emerged as a promising pathway in skeletal development (Richards et al., 2008; van Amerongen and Nusse, 2009; Maupin et al., 2013). Several Wnt ligands have been identified as potential drugs for osteogenesis imperfecta and fracture healing (Baron and Gori, 2018). The two well-known Wnt pathways are the β-catenin-dependent pathway (also known as the canonical Wnt pathway) and the β-catenin-independent pathway (also known as the non-canonical Wnt pathway) (Huybrechts et al., 2020). The canonical Wnt pathway transduces signals through LRP/Frz receptors on the cell surface. In contrast, non-canonical Wnt signaling modulates cell fate commitment and differentiation through binding to Ror1 and Ror2 (Hovanes et al., 2001; Kimelman and Xu, 2006; Tu et al., 2007). It has been reported that both canonical and non-canonical Wnt pathways are crucial during osteoblast formation (Day et al., 2005; Tu et al., 2007). Loss-of-function mutations of LRP5 have been shown to cause osteoporosis by reducing the number of osteoblasts. Moreover, mice with gain-of-function mutations of LRP5 exhibited a high bone mass phenotype (Cui et al., 2011). Wnt5a, a typical non-canonical Wnt ligand, has also been reported to be involved in bone formation. Specific Wnt5a deletion in mouse osteoblasts caused a low bone mass phenotype with decreased osteoblast numbers and a reduced bone formation rate (Maeda et al., 2019). Considering that osteoblasts express osteoprotegerin (OPG) to block the interaction between RANKL and RANK, Wnt signals play a pivotal role in osteoclast formation. Although current studies have also focused on the direct modulatory role of Wnt signaling in osteoclast differentiation, the results remain contradictory. Osteoclast precursors with constitutively active β-catenin barely form osteoclasts, whereas the β-catenin signal is required for the proliferation of osteoclast precursors (Sui et al., 2018). Furthermore, Wnt5a has been shown to activate Ror2-JNK signaling and enhance RANKL-induced osteoclastogenesis (Maeda et al., 2012). On the contrary, Wnt16 and Wnt4 impair osteoclast formation by blocking the RANKL-RANK axis (Movérare-Skrtic et al., 2014; Yu et al., 2014).
Among Wnt ligands, Wnt7b has a profound effect on bone formation. Recent evidence indicates that Wnt7b contributes to the commitment and differentiation of the osteoblast lineage through multiple signaling pathways and through rewiring of metabolic processes to enhance osteogenic function (Chen et al., 2019; Yu et al., 2020). Our group observed that forced activation of Wnt7b in mature osteoblasts in vivo (taking advantage of Wnt7b DMP1 mice) leads to a high bone mass phenotype. In detail, we demonstrated that Wnt7b enhanced the self-renewal and osteogenic differentiation of bone marrow stromal cells (BMSCs) through Sox11 (Yu et al., 2020). Surprisingly, we also observed that osteoclasts are decreased in Wnt7b DMP1 mice. These data led us to hypothesize that Wnt7b plays a negative role in osteoclast differentiation. However, the direct effect of Wnt7b on osteoclast formation remains underexplored.
In this study, we generated transgenic mice in which Wnt7b is specifically activated in BMMs to uncover an inhibitory role of Wnt7b during osteoclast formation and activation. We further demonstrate that Wnt7b inhibits the differentiation of osteoclast precursors by disrupting AKT phosphorylation and rewiring glucose metabolism.
Mouse Strain
ROSA26-Wnt7b flox/flox mice were provided by Prof. Fanxin Long from the University of Pennsylvania. LysM-Cre and RANK-Cre mice were purchased from Biocytogen Co. Ltd., Beijing, China. All animal procedures were approved by the Ethical Committees of the State Key Laboratory of Oral Diseases, Sichuan University. The research was carried out in accordance with accredited guidelines. Genotypes were identified by PCR analysis with primers for ROSA26-Wnt7b flox/flox , LysM-Cre, and RANK-Cre.
Bone Dynamic Analysis
Eight-week-old mice were labeled with calcein (10 mg/kg, Sigma) and alizarin red (30 mg/kg, Sigma) through intraperitoneal injection at 5 and 3 days before sacrifice. Undecalcified femurs were fixed with 4% paraformaldehyde and dehydrated with 30% sucrose dissolved in PBS. Twenty-micrometer sections were prepared for microscopy analysis. Images were captured under 510-550 nm and 450-480 nm fluorescent light to image calcein and alizarin, respectively. Image processing was performed with Image Pro Plus software.
Detection of Serum Biomarkers
Six- and eight-week-old mice were fasted for at least 6 h; blood was drawn, allowed to clot at room temperature for 30 min, and then centrifuged at 3,000 rpm for 10 min at 4 °C. Serum samples were stored at −80 °C. ELISA was employed to determine mouse serum PINP and CTX-I (Cloud-Clone, Wuhan, China) according to the manufacturer's protocol. Data processing was performed with CurveExpert software.
Cell Culture
Primary bone marrow macrophages (BMMs) were obtained from the femurs or tibias of 6-8-week-old C57BL6/J male mice, according to a published method. The cells were cultured and expanded in α-MEM (M0644, Sigma, United States) containing 10% FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, and 50 ng/mL macrophage colony-stimulating factor (M-CSF) for 4 days. For osteoclast differentiation, the adherent BMMs were seeded at 3.125 × 10 4 cells/cm 2 and induced with 20 ng/mL M-CSF and 50 ng/mL RANKL for 3-5 days. RAW 264.7 cells were cultured under the same conditions without M-CSF, and 50 ng/mL RANKL was sufficient to induce osteoclast differentiation. For TRAP staining, cells were fixed with 4% paraformaldehyde (PFA) for 5-10 min at room temperature, rinsed with water, and stained with the acid phosphatase, leukocyte (TRAP) kit (387A-1KT, Sigma, St. Louis, United States).
Intracellular Acidification by Acridine Orange Staining
Intracellular acidification was determined by the acridine orange (AO) fluorescence method. BMM-derived OCs or RAW264.7 cell-derived OCs were treated with Ad-GFP or Ad-Wnt7b for 12 h. Osteoclasts induced with 50 ng/mL RANKL for 5 days were incubated with 10 µg/mL AO for 15 min at 37 °C. Cells were washed twice with PBS and processed for fluorescence microscopy analysis on a NIKON Eclipse fluorescence microscope (Compix Inc., Sewickley, PA, United States) at an excitation of 485 nm and emission of 520 nm. Image analysis was performed using ImageJ (version 1.47).
Adenovirus-Mediated Overexpression
To overexpress Wnt7b, RAW264.7 cells were infected with an adenovirus encoding mouse Wnt7b or green fluorescent protein (GFP), purchased from Hanbio, Shanghai, China. For viral transfection, cells at 80% confluence were incubated with the virus overnight, and the medium was then changed to complete medium. Quantitative PCR (qPCR) and fluorescence imaging were performed to verify transfection efficiency 24 h after transfection. Cells were used for further experiments upon verification.
RNA Isolation and Quantitative PCR
Total RNA was extracted using TRIzol reagent (Invitrogen Inc., Carlsbad, CA, United States) and reverse transcribed to cDNA with HiScript® Q-RT SuperMix for qPCR (+ gDNA wiper) (Vazyme, Nanjing, China). Taq Pro Universal SYBR qPCR Master Mix was used for qPCR on a CFX96 real-time system. Gene-specific primers are listed in Supplementary Table 1. Relative expression levels were calculated by the 2^−ΔΔCt method and normalized to the β-actin gene.
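The 2^−ΔΔCt calculation reduces to two subtractions and an exponentiation; a minimal sketch with hypothetical Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-(ddCt): normalize the target Ct to the reference gene (beta-actin),
    then to the control sample."""
    d_ct = ct_target - ct_ref                 # dCt, treated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl  # dCt, control sample
    return 2.0 ** -(d_ct - d_ct_ctrl)

# hypothetical example: a target gene in Wnt7b-overexpressing vs control BMMs
print(relative_expression(ct_target=26.0, ct_ref=18.0,
                          ct_target_ctrl=24.0, ct_ref_ctrl=18.0))  # -> 0.25
```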
Western Blot
Cultured cells were washed with PBS and then lysed in radioimmunoprecipitation assay (RIPA) buffer containing a protease and phosphatase inhibitor cocktail (Thermo Fisher Scientific, Hudson, NH, United States). Cell extracts were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blotting. The primary antibodies used are listed in Supplementary Table 2. Horseradish peroxidase (HRP)-conjugated secondary antibodies were probed and developed with ECL solution. Signals were detected and analyzed with the Bio-Rad ChemiDoc system (Bio-Rad, Hercules, CA, United States).
Flow Cytometry Analysis and EdU Staining
Cells were cultured in complete medium to 80% confluence, trypsinized, washed with PBS, and finally fixed in ice-cold 70% ethanol for at least 24 h. Cell cycle analyses followed the manufacturer's protocol (KGA512, KeyGEN Biotech, Nanjing, China). Cells were washed with PBS twice, incubated with RNase buffer for 30 min at 37 °C, and stained with PI solution for 30 min on ice. Cells were analyzed on a Guava easyCyte HT (Millipore, Billerica, MA, United States) with InCyte 2.7 software (Millipore, Billerica, MA, United States).
EdU staining was performed according to the manufacturer's guidelines (C10310-1, RiboBio, Guangzhou, China). BMMs were cultured in complete medium at 50-70% confluence and incubated with complete medium containing 10 µM EdU for 1 h. The labeled cells were subjected to EdU staining and counterstained with 4′,6-diamidino-2-phenylindole (DAPI). Images were captured with a NIKON Eclipse fluorescence microscope (Compix Inc., Sewickley, PA, United States).
Intracellular ATP Assays
Cells were cultured in 96-well plates, and the medium was changed to 100 µL of fresh complete medium before the assays. The CellTiter-Glo assay (G9241, Promega Corporation, Madison, WI, United States) was performed according to the manufacturer's procedure, and the luminescence of each well was measured in opaque-walled 96-well plates. ATP levels were calculated according to a standard curve generated with 0.1, 1, and 10 µM ATP and normalized to the total DNA amount in each well.
Total cellular DNA was extracted as follows. Cultured cells were washed twice with Hank's balanced salt solution (HBSS) (with CaCl 2 and MgCl 2 , Gibco, Life Technologies Corporation, NY, United States). The plate was emptied before being frozen at −80 °C for at least 1 h. Then 100 µL or 1 mL of distilled water was added to each well, and the plates were placed on an orbital shaker for 1 h at room temperature. One hundred microliters of cell lysate and 100 µL of Hoechst 33342 (Thermo Fisher Scientific, Hudson, NH, United States) at 20 µg/mL in TNE buffer were mixed in a black 96-well plate, and fluorescence was measured. The DNA amount was calculated according to a standard curve generated beforehand.
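Putting the two standard curves together, the normalization reduces to a pair of interpolations. A minimal sketch with hypothetical readings (a log-linear interpolation for the decade-spanning ATP standards and a linear fit for the Hoechst/DNA standards — assumptions, since the exact fitting is not specified):

```python
import numpy as np

def atp_per_dna(lum, dna_fluor, atp_std, atp_lum, dna_std, dna_fluor_std):
    """Convert luminescence to ATP (via the 0.1/1/10 uM standards) and Hoechst
    fluorescence to DNA amount, then report ATP normalized to DNA.

    Standards are assumed sorted in ascending order of signal.
    """
    # ATP standards span decades, so interpolate in log space
    atp = 10 ** np.interp(np.log10(lum), np.log10(atp_lum), np.log10(atp_std))
    # DNA (Hoechst) standard curve treated as linear
    slope, intercept = np.polyfit(dna_fluor_std, dna_std, 1)
    dna = slope * dna_fluor + intercept
    return atp / dna
```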
OVX Mouse Model
Eight 8-week-old female wild-type mice and eight 8-week-old female Wnt7b LysM mice were randomly divided into two groups to receive sham surgery (four mice per genotype) or bilateral OVX surgery (four mice per genotype). After anesthesia with isoflurane and 100% oxygen for half an hour, the mice in the OVX group were subjected to bilateral ovariectomy, while the mice in the sham group underwent resection of some fat tissue close to the ovaries; all operations were performed under aseptic conditions. Four weeks later, the successful establishment of estrogen-deficiency-induced osteoporosis was confirmed by micro-CT scanning of the femurs of eight mice (four per group) after sacrifice.
Cell Proliferation Assay Analysis
Two groups of BMMs (wild-type and Wnt7b LysM ) were cultured in six-well plates (Corning, Tewksbury, MA, United States) at an initial density of 3.125 × 10 4 cells/cm 2 . Cell proliferation was assessed using a water-soluble tetrazolium salt (CCK-8) assay according to the manufacturer's instructions (DOJINDO, Tokyo, Japan). Absorbance at 450 nm was read, and the values for wild-type cells were compared with those for Wnt7b LysM cells. The cell cycle was quantified by flow cytometry. Cells were stained with propidium iodide (PI) (KGA1015; KeyGEN Bio-TECH, Nanjing, China) following the manufacturer's directions. Flow cytometry analysis was performed on more than 50,000 events. Data were analyzed with the system software (Millipore Guava easyCyte HT, Merck Millipore, Darmstadt, Germany).
Statistical Analysis
One-way ANOVA with post hoc Bonferroni correction was carried out for comparisons of multiple groups. Student's t-test was performed to determine statistical significance between two groups. The chi-square test was used to assess the proportion of cells in different cell-cycle phases. The level of acceptable statistical significance was set at p < 0.05. Numerical data and histograms are presented as mean ± SD (standard deviation). Results are based on at least three independent biological experiments, and for each individual experiment at least three technical replicates were carried out. All fluorescence microscopy images were processed with Image Pro Plus 6.0 (Media Cybernetics, Rockville, MD, United States).
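The tests listed here map directly onto standard library calls; a minimal sketch with hypothetical group arrays (the values are random placeholders, not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
wt, mut = rng.normal(1.0, 0.2, 6), rng.normal(0.6, 0.2, 6)   # hypothetical data

# two groups: Student's t-test
t, p = stats.ttest_ind(wt, mut)

# multiple groups: one-way ANOVA, then Bonferroni-corrected pairwise tests
g1, g2, g3 = rng.normal(1, .2, 6), rng.normal(.8, .2, 6), rng.normal(.5, .2, 6)
f, p_anova = stats.f_oneway(g1, g2, g3)
pairs = [(g1, g2), (g1, g3), (g2, g3)]
p_bonf = [min(stats.ttest_ind(a, b).pvalue * len(pairs), 1.0) for a, b in pairs]

# proportions of cells in cell-cycle phases: chi-square test on counts
table = np.array([[55, 30, 15], [50, 33, 17]])               # hypothetical counts
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
```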
Forced Expression of Wnt7b in Osteoclasts Enhances Bone Mass
We previously showed that mice with DMP1-driven Wnt7b overexpression exhibit a high bone mass phenotype. Moreover, the osteogenic differentiation of BMSCs was significantly promoted, as expected, indicating that Wnt7b is a potent activator of osteogenesis (Yu et al., 2020). The reduction in osteoclasts in the mutant mice suggested to us that Wnt7b might play an important role in osteoclastogenesis in either a cell-autonomous or non-autonomous manner.
To test this, we planned to specifically activate Wnt7b in osteoclasts. To this end, we generated RANK-Cre;Rosa26 Wnt7b/+ (hereafter referred to as Wnt7b RANK ) and LysM-Cre;Rosa26 Wnt7b/+ (hereafter referred to as Wnt7b LysM ) conditional overexpression mice by crossing Rosa26-Wnt7b with either RANK-Cre or LysM-Cre transgenic mice. At 12 weeks of age, micro-computed tomography (µCT) scanning of the proximal femora was performed. Interestingly, as with the overexpression of Wnt7b in osteoblasts, the three-dimensional reconstruction of the µCT scans showed a significant increase in bone mass in both trabecular and cortical bone in Wnt7b RANK and Wnt7b LysM mice compared to their littermates (Rosa26 Wnt7b , hereafter referred to as wild-type or WT) (Figure 1A). The µCT analysis revealed that the trabecular bone mass (BV/TV) was increased in Wnt7b RANK and Wnt7b LysM mice compared to WT. The increase in bone mass in the mutant mice was due to a higher trabecular number (Tb.N.), higher trabecular thickness (Tb.Th.), and decreased trabecular spacing (Tb.Sp.). The µCT analysis also showed thicker cortical bone in the mutant mice, indicated by elevated cortical bone thickness (Ct.Th.) (Figure 1B). Von Kossa staining and Goldner trichrome staining demonstrated deeper staining and thicker bone in the mutant mice compared to WT, suggesting more calcium deposition (Figure 1C). Consistent with the lack of change in new bone formation revealed by dynamic histomorphometry in the femur, the serum marker PINP was not altered (Figures 1D,E). However, we observed a reduced number of TRAP-positive cells in the trabeculae (Figure 1F) and a significantly decreased level of the bone resorption serum marker CTX-1 in the mutant mice compared to WT (Figure 1G). The number of TRAP-positive cells in the mandible was also diminished in the mutant mice (Figure 1H). These data suggest that the specific gain-of-function of Wnt7b in osteoclasts significantly increases bone mass due to a defect in osteoclast differentiation and activity in vivo.
Wnt7b Increases Bone Mass in Ovariectomized Mice
According to the above results, we reasoned that Wnt7b has the potential to relieve pathological bone resorption. To further examine this possibility, we generated an OVX mouse model, which exhibits intense bone resorption caused by estrogen deficiency. Six weeks after ovariectomy, µCT analysis revealed that Wnt7b overexpression increased bone mass in both the sham and OVX groups (Figure 1I). Although the BV/TV ratio showed an increasing trend with Wnt7b overexpression, the difference was not statistically significant. Given the unchanged magnitude of the OVX-induced reduction in BV/TV, we concluded that Wnt7b achieves a general beneficial effect on bone mass rather than a specific protective effect against OVX-induced osteoporosis in vivo (Figures 1I,J).
Wnt7b Impairs Osteoclast Formation in vitro
In vitro, BMMs from Wnt7b LysM mice formed fewer osteoclasts under RANKL induction than those from WT mice, as indicated by TRAP staining. In addition, Wnt7b overexpression in RAW264.7 cells and BMMs also abolished osteoclastogenesis (Figures 2A,B). We next performed acridine orange staining on osteoclasts to visualize intracellular acidification (Figure 2C). The osteoclasts differentiated from BMMs of Wnt7b LysM mice showed less acidification than those from WT mice in the presence of RANKL, indicating reduced osteoclast activity. Furthermore, qPCR experiments indicated that the mRNA expression of osteoclast-specific genes, including Nfatc1, C-fos, Trap, Ctsk, Dcstamp, and Calcr, was significantly repressed in the presence of Wnt7b under osteoclastogenic induction (Figure 2D). Of note, genes that negatively regulate osteoclast differentiation were also screened by qPCR. Among these genes, Irf8 and Itsn were increased upon Wnt7b overexpression in BMMs (Figure 2E). These results demonstrate that Wnt7b has inhibitory effects on osteoclast differentiation and maturation in vitro.
Wnt7b Suppresses β-Catenin-Dependent Signaling in Bone Marrow Macrophages
Previous studies have demonstrated that Wnt ligands such as Wnt3a can modulate the proliferation of BMMs through the canonical Wnt pathway and that β-catenin is essential for osteoclast precursors (Hamamura et al., 2014; Weivoda et al., 2016). Therefore, we next examined the viability of BMMs in the presence of Wnt7b using the CCK-8 cell counting assay and detected a reduction in the viability of BMMs derived from Wnt7b LysM mice (Figure 3A). Next, flow cytometry was used to analyze the phases of the cell cycle. Surprisingly, there was no remarkable difference between Wnt7b overexpression and control in terms of cell cycle (Figure 3B). EdU staining of cultured BMMs from either WT or Wnt7b LysM mice also confirmed the flow cytometry results (Figure 3C). Meanwhile, cell-cycle-related genes, such as Cdk2, Cdk4, P21, P27, and P53, did not change upon enforced Wnt7b expression in BMMs (Figure 3D). Next, the mRNA expression of Wnt receptors, such as Reck, Gpr124, and Lrp5/6, and of Wnt downstream genes, such as Axin2, Gsk3β, Lef1, Ror2, and Pcna, was examined by qPCR (Figure 3E). We found that the mRNA expression of the majority of Wnt receptors showed no significant changes between WT and Wnt7b LysM, and genes related to β-catenin-dependent Wnt signaling were not activated in response to Wnt7b in BMMs. Furthermore, the decreased transcriptional levels of Lrp6 and Axin2 in the presence of Wnt7b indicated that the canonical Wnt signal might be suppressed in BMMs with Wnt7b overexpression. We further explored the protein level of β-catenin in cell lysates by Western blotting and found that β-catenin was markedly decreased in BMMs derived from Wnt7b LysM mice compared to those from WT mice (Figure 3F). In addition, we treated BMMs with Wnt7b-conditioned medium and found that the protein level of β-catenin in BMMs was reduced in response to Wnt7b in a time-dependent manner (Figure 3G). These data indicate that the β-catenin-dependent pathway is suppressed in the presence of Wnt7b in BMMs.
Wnt7b Inhibits Osteoclastogenesis by Suppressing the AKT Signaling Pathway
Several signaling pathways, such as nuclear factor kappa B (NF-κB), mitogen-activated protein kinase (MAPK), and mammalian target of rapamycin (mTOR), are involved in osteoclast differentiation (Liu and Zhang, 2015). We screened whether Wnt7b inhibited the key activation steps of several signaling pathways induced by RANKL. In the canonical NF-κB pathway, phosphorylation and nuclear translocation of p65 play a critical role during RANKL-induced NF-κB activation. Western blotting analysis revealed that Wnt7b inhibited, or at least delayed, RANKL-induced p65 phosphorylation (Figure 4A) as well as non-canonical NF-κB activation, indicated by the IKKα protein level (Abu-Amer, 2013). RANKL-induced p38 and JNK phosphorylation was not affected by Wnt7b overexpression (Figure 4B). Since AKT, a critical serine/threonine kinase, reportedly mediates diverse signaling downstream of mTORC2, we tested the phosphorylation of AKT at Ser473. We observed that Wnt7b significantly abolished AKT phosphorylation at Ser473, whereas activation of PKC and S6K1 was not significantly impaired after RANKL treatment for 15 min (Figure 4C).
Next, we investigated whether SC79, an AKT activator, rescued the osteoclast formation suppressed by Wnt7b (Zhang et al., 2016). We pretreated BMMs with SC79 or dimethyl sulfoxide (DMSO) for 24 h before inducing them with RANKL for an additional 5 days. The protein expression of osteoclast-differentiation-related genes was upregulated after SC79 treatment (Figure 4D). Of note, BMMs derived from Wnt7b LysM mice failed to form TRAP-positive cells, whereas SC79 treatment increased osteoclast numbers in both the presence and absence of Wnt7b (Figure 4E). The mRNA expression of osteoclast-differentiation-related genes was likewise rescued after SC79 treatment (Figure 4F). Thus, we concluded that Wnt7b abolished osteoclast formation by inhibiting AKT phosphorylation.
Wnt7b Rewires the Metabolic Process in Bone Marrow Macrophages
Recent studies have shown that cellular metabolic status can regulate signaling pathway activation and contribute to the regulation of cell differentiation (Lecka-Czernik and Rosen, 2015). Moreover, ATP generated from anaerobic glycolysis has been shown to fuel the PI3K-AKT pathway to support the Th17 cell response (Xu et al., 2021a,b). Previous studies have revealed that glycolysis and OXPHOS are both enhanced during osteoclast differentiation. Meanwhile, glucose consumption and the expression of glucose transporters (GLUTs) are elevated as osteoclasts differentiate.

FIGURE 2 | (C) Acridine orange (AO) staining was applied to assess the acidification of RANKL-induced multinucleated cells at day 5. Green represents neutral pH, and red represents acidic pH. Intracellular pH at day 5 was decreased in the presence of Wnt7b compared to wild type. (D) The expression levels of genes including early osteoclastic differentiation markers and osteoclast functional markers were detected by real-time PCR analysis. (E) Negative regulators in BMMs without RANKL inducement were screened by quantitative PCR. N = 3. *p < 0.05; **p < 0.01; ***p < 0.001.
Therefore, we examined ATP levels, glucose consumption, and lactate production to determine the metabolic condition of BMMs derived from Wnt7b LysM mice compared to those from WT mice (Figures 5A-C). The ATP levels were unchanged; however, glucose consumption and lactate production dropped significantly.

FIGURE 3 | (E) Expression levels of genes including Wnt receptors (Reck, Gpr124, Lrp5/6) and Wnt-related genes (Gsk3β, Axin2, Ror2, Lef1, and Pcna) were determined by real-time PCR. (F) Protein level of β-catenin in BMMs isolated from WT or Wnt7b LysM mice. (G) Raw264.7 cells were treated with conditioned medium from 293FT cells transfected with adeno-GFP or adeno-Wnt7b for the indicated periods. Whole-cell lysates were collected for Western blot detection of β-catenin. The relative densities of the blots are shown on top. N = 3. *p < 0.05; NS: not significant.
Then, we screened the transcriptional levels of the GLUT family in BMMs. Glut1, the member of the glucose transporter family identified as the predominant GLUT in BMMs, showed no significant change in the presence of Wnt7b, while Glut3 presented a mild increase (Figure 5D). Thus, changes in GLUT expression were unlikely to account for the inhibition of glucose utilization caused by Wnt7b.
Furthermore, the transcriptional levels of metabolic enzymes were examined by qPCR. The data demonstrated that most glycolysis-related genes were stably expressed, whereas the expression of Pgk1 and Pdha was dramatically decreased in the presence of Wnt7b (Figure 5E). Published studies have revealed that Pgk1 is a central enzyme of glycolysis whose reaction generates a molecule of ATP. The decreased mRNA expression of Pgk1 therefore suggests that Wnt7b impaired glycolysis in BMMs with Wnt7b overexpression. Meanwhile, we found that forced Wnt7b expression led to a slight upregulation of Pdk2 and downregulation of Pdha, which control the turnover of the TCA cycle. These data suggest that the glucose metabolic process was rewired by Wnt7b in BMMs.
Next, we assayed the metabolites in BMMs with or without Wnt7b overexpression for 24 h before RANKL treatment (Figure 5F). The metabolites of the TCA cycle were elevated in RANKL-stimulated BMMs compared to unstimulated BMMs, which indicated increased glucose usage in the TCA cycle during osteoclastogenesis. Importantly, upon RANKL inducement, the content of citrate was significantly reduced in the presence of Wnt7b.

FIGURE 4 | Wnt7b inhibited osteoclast differentiation by suppressing AKT phosphorylation. (A) Raw264.7 cells transfected with adeno-GFP (control) or adeno-Wnt7b were treated with RANKL for the indicated periods. Cell lysates were subjected to Western blot (WB) analysis of the canonical NF-κB signaling pathway. The relative densities of the blots are shown on top of IKKα. (B,C) BMMs isolated from WT or Wnt7b LysM mice were stimulated with RANKL for increasing times. Whole-cell lysates were collected for Western blotting analysis of targets of the MAPK pathway (p-p38 and p-JNK) and targets of the mTOR pathway (p-AKT, AKT, p-PKC, PKC, p-S6K1, and S6K1). (D) BMMs obtained from WT or Wnt7b LysM mice were incubated with RANKL for 1 day together with vehicle (DMSO) or SC79. The expression levels of early osteoclast differentiation proteins in BMMs were analyzed by WB. (E) Osteoclastogenesis was assessed after 5 days of RANKL treatment with SC79 or DMSO; TRAP staining was performed to identify TRAP-positive multinucleated cells. (F) The expression levels of early osteoclast differentiation markers in BMMs were analyzed by quantitative PCR after treatment with SC79 or DMSO. N = 3. *p < 0.05.
Therefore, we investigated whether resupplementing citrate into the medium rescued the osteoclast formation suppressed by Wnt7b. To this end, we first tested the intracellular amount of citrate in BMMs after citrate compensation. After treatment with additional citrate in the medium, the intracellular citrate content was increased in BMMs (Figure 5G), as were the protein levels of AKT and the osteoclast differentiation markers PU.1 and C-fos (Figure 5H). Moreover, TRAP staining demonstrated that BMMs derived from Wnt7b LysM mice formed more osteoclasts in the presence of 10 mM citrate (Figure 5I), indicating that citrate compensation rescued osteoclastogenesis. Therefore, we concluded that Wnt7b impaired AKT phosphorylation and rewired the glucose metabolic process in BMMs during osteoclastogenesis.

FIGURE 5 | Wnt7b rewires the glucose metabolic process in BMMs through regulating AKT degradation. (A-C) Total ATP amount, glucose consumption, and lactate production of Raw264.7 cells infected with adeno-GFP or adeno-Wnt7b were determined. (D,E) The expression levels of GLUT family genes (Glut1, Glut2, Glut3, Glut4) and glycolysis-related enzymes were analyzed by real-time PCR in primary BMMs. (F) Metabolic profile of TCA cycle intermediates in Raw264.7 cells infected with adeno-GFP or adeno-Wnt7b in the presence or absence of RANKL. The fold changes of citrate are shown to the right. (G) Intracellular citrate amount in Raw264.7 cells after adeno-GFP or adeno-Wnt7b transfection in the presence or absence of exogenous citrate. (H) Raw264.7 cells after adeno-GFP or adeno-Wnt7b transfection were induced with RANKL plus citrate for 1 day. Western blot was used to determine the protein levels of β-catenin, p85, AKT, c-Fos, and PU.1. (I) TRAP staining was used to identify TRAP-positive multinucleated cells. BMMs isolated from WT or Wnt7b LysM mice were induced with M-CSF and RANKL for 5 days with or without 10 mM citrate; TRAP staining was then performed as described. Each dot represents one sample. NS: not significant; *p < 0.05; **p < 0.01; ***p < 0.001.
DISCUSSION
Over the past decades, the Wnt signaling pathway has been established as a key regulator of skeletal homeostasis (Baron and Kneissel, 2013; Huybrechts et al., 2020). Several reports have shown that Wnt ligands have a dual effect on bone mass, affecting both osteoblasts and osteoclasts (Glass et al., 2005). In recent years, emerging evidence has indicated that the Wnt pathway plays an indispensable role in osteoclastogenesis. For example, forced expression of the dominant active form of β-catenin in osteoblasts leads to the development of a high bone mass phenotype with impaired osteoclast differentiation (Glass et al., 2005). The biphasic regulatory effect of the Wnt ligand Wnt3a on osteoclastogenesis is dependent on β-catenin: Wnt3a enhances the proliferation of osteoclast precursors through the activation of β-catenin and impairs the differentiation of osteoclast precursors due to constitutive β-catenin activation (Sui et al., 2018). Moreover, stimulation of OPG in osteoblasts can also block osteoclastogenesis. Wnt16, another Wnt ligand, has been reported to inhibit osteoclast differentiation through the non-canonical Wnt pathway (Mizoguchi et al., 2009; Movérare-Skrtic et al., 2014). During this inhibition, Wnt16 fails to induce the accumulation of β-catenin or the expression of Axin2 in osteoclast precursors; however, it is able to decrease the expression of NFATc1 in these precursors (Mizoguchi et al., 2009). In addition, the NF-κB signal, a crucial signaling pathway for osteoclastogenesis, is also suppressed in osteoclast progenitors following Wnt16 treatment. Furthermore, in vitro, Wnt16 can induce OPG expression, which protects the skeleton from excessive bone resorption and blocks osteoclast differentiation by binding to RANKL (Movérare-Skrtic et al., 2014). Therefore, these findings suggest that Wnt16 negatively regulates osteoclast formation via both direct and indirect mechanisms. Wnt4, another inhibitory Wnt for osteoclastogenesis, impairs osteoclast formation in a β-catenin-independent manner (Yu et al., 2014). Osteoclast precursors fail to differentiate into osteoclasts following treatment with Wnt4 due to suppression of NF-κB activation; consequently, the expression of NFATc1 in these cells is inhibited by Wnt4. Wnt4 does not inhibit RANKL-induced osteoclast differentiation but blocks 1α,25(OH)2D3-induced osteoclast formation when osteoclast precursors are co-cultured with osteoblasts. These results demonstrate that Wnt4 inhibits osteoclast formation through interaction with osteoblasts. As a non-canonical Wnt ligand, Wnt5a binding to Ror1 or Ror2 regulates cell polarity and invasiveness through the β-catenin-independent pathway (Maeda et al., 2012). Previous studies have demonstrated that Wnt5a derived from osteoblasts promotes osteoclastogenesis. Mechanistically, Wnt5a enhances the expression of RANK by activating c-Jun in osteoclast precursors, thereby promoting RANKL-induced osteoclastogenesis (Ikeda et al., 2004). In addition, Wnt5a also facilitates Wnt/β-catenin signaling by promoting the expression of Lrp5/6, which further reduces osteoclast formation. Thus, Wnt ligands can regulate osteoclast differentiation in both direct and indirect manners.
Taken together with previous reports, these results indicate that Wnt7b has potent osteogenic activity (Chen et al., 2019; Yu et al., 2020). Specific overexpression of Wnt7b in osteoblasts markedly increased the number and function of osteoblasts. Further investigation of the molecular mechanisms suggested that Wnt7b promotes bone formation through multiple signals: mTORC1 was activated by Wnt7b in osteoblasts through the PI3K-AKT pathway rather than the canonical Wnt signaling pathway. Recent studies have uncovered the link between glucose metabolism and bone formation, and Wnt7b has been shown to enhance glycolysis in osteoblasts by increasing Glut1 expression. In addition to its potent bone anabolic effect, Wnt7b appears to reduce osteoclast numbers. Moreover, overexpression of Wnt7b in DMP1-Cre mice resulted in a high bone mass phenotype with suppressed osteoclast numbers.
Our results showed that Wnt7b directly acts on osteoclast precursors through a β-catenin-independent pathway. Wnt7b had no significant effect on the cell cycle of osteoclast precursors but did inhibit osteoclast differentiation by suppressing AKT phosphorylation during RANKL induction. AKT activation is crucial for NF-κB signaling and NFATc1 expression in response to RANKL in osteoclast progenitors (Tiedemann et al., 2017; Fu et al., 2020; Xin et al., 2020). We found that the inhibitory effect of Wnt7b was reversed by the AKT-specific activator SC79 and that, at the same time, the Wnt7b-mediated downregulation of NFATc1 expression was restored by SC79 treatment.
Recent studies have shown that the phosphorylation of AKT depends on glycolysis and is controlled by several metabolic intermediates (Lecka-Czernik and Rosen, 2015; Covarrubias et al., 2016; Martinez Calejman et al., 2020; Xu et al., 2021a). Wnt7b suppressed glucose consumption and lactate production in osteoclast precursors and decreased the amount of citrate in these cells. As previously reported, the addition of low concentrations of pyruvate or citrate augments osteoclast formation by enhancing cellular energy metabolism, and the amount of citrate rises significantly during RANKL-induced osteoclast differentiation (Fong et al., 2013). Citrate, a well-known TCA intermediate, can act as a signaling molecule to trigger physiological responses. Citrate induces the upregulation and phosphorylation of AKT in endothelial cells and some tumor cells, enhancing cancer cell invasion and metastasis; however, several studies have indicated that citrate suppresses tumor growth through activation of PTEN (Ren et al., 2017). We found that Wnt7b impaired AKT phosphorylation and decreased the citrate content of the TCA cycle, indicating a glucose metabolic switch in the presence of Wnt7b. We supplemented BMM culture medium with 10 mM citrate and found that the amount of AKT in Wnt7b-treated cells was rescued. Together, these results indicate that Wnt7b suppresses osteoclast formation via AKT- and citrate-dependent pathways. However, the precise interaction between citrate and AKT remains to be addressed.
CONCLUSION
In conclusion, our study provides evidence that Wnt7b increases bone mineral density and improves bone biochemical properties in mice by inhibiting bone resorption through β-catenin-independent pathways. The potential therapeutic use of Wnt7b as a novel dual regulator of bone formation and bone resorption in postmenopausal osteoporosis is worthy of further investigation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by the Ethical Committees of the State Key Laboratory of Oral Diseases, Sichuan University.
AUTHOR CONTRIBUTIONS
FW, BL, and XH conducted the experiments and acquired the data. FW and FY analyzed the data. YS and LY provided critical advice and discussion. LY designed and oversaw the project and revised the manuscript. FW and YS drafted the manuscript. All authors reviewed the manuscript. | 2021-11-22T14:09:30.329Z | 2021-11-22T00:00:00.000 | {
"year": 2021,
"sha1": "233dffbc5897a5559d5f2587bea2b2c40fc29cdb",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcell.2021.771336/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "233dffbc5897a5559d5f2587bea2b2c40fc29cdb",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119262140 | pes2o/s2orc | v3-fos-license | Islands in minor-closed classes. I. Bounded treewidth and separators
The clustered chromatic number of a graph class is the minimum integer $t$ such that for some $C$ the vertices of every graph in the class can be colored in $t$ colors so that every monochromatic component has size at most $C$. We show that the clustered chromatic number of the class of graphs embeddable on a given surface is four, proving the conjecture of Esperet and Ochem. Additionally, we study the list version of the concept and characterize the minor-closed classes of graphs of bounded treewidth with given clustered list chromatic number. We further strengthen the above results to solve some extremal problems on bootstrap percolation of minor-closed classes.
Introduction
Let G and H be graphs. A model of H in G is a function µ assigning to vertices of H pairwise vertex-disjoint non-empty connected subgraphs of G such that for every uv ∈ E(H), there exists an edge of G with one end in µ(u) and the other end in µ(v). If H has a model in G, we say that H is a minor of G, otherwise, we say that G is H-minor-free. A class G of graphs is a (proper) minor-closed class if G does not contain all graphs, and for every graph G ∈ G every minor of G also belongs to G. We define the chromatic number χ(G) of a graph class G as the minimum integer t such that every graph G ∈ G is properly t-colorable (and χ(G) = ∞ if no such t exists). The famous Hadwiger's conjecture can be considered as a characterization of minor-closed graph classes with given chromatic number.
Conjecture 1 (Hadwiger's conjecture [12]). Let t ≥ 1 be an integer, and let G be a minor-closed class of graphs. Then χ(G) ≤ t if and only if K t+1 ∈ G.
For integers C ≥ 1 and t ≥ 0, a t-coloring with clustering C of a graph G is a (not necessarily proper) coloring of vertices of G using t colors such that G contains no monochromatic connected subgraph with more than C vertices. We denote by χ C (G) the minimum t such that G admits a t-coloring with clustering C. The clustered chromatic number χ ⋆ (G) of a graph class G is the minimum t such that there exists C so that χ C (G) ≤ t for every graph G ∈ G (χ ⋆ (G) = ∞ if no such t exists). The clustered chromatic number of minor-closed classes of graphs has been recently actively investigated, motivated in part by Hadwiger's conjecture. Let X (H) denote the minor-closed class consisting of all H-minor-free graphs. Kawarabayashi and Mohar [15] proved the first linear bound on χ ⋆ (X (K t )) showing that χ ⋆ (X (K t )) ≤ ⌈31t/2⌉. The upper bound on χ ⋆ (X (K t )) has been successively improved in [25,10,18]. Most recently, using a beautiful self-contained argument, van den Heuvel and Wood [24] have shown that χ ⋆ (X (K t )) ≤ 2t − 2.
The above definitions can be straightforwardly extended from coloring to list coloring, and many of our techniques extend as well. A t-list assignment L for a graph G assigns a finite set L(v) of size at least t to every vertex v ∈ V (G). An L-coloring with clustering C of a graph G assigns to every vertex v a color from L(v) such that G contains no monochromatic connected subgraph with more than C vertices. Let χ l C (G) denote the minimum t such that G admits an L-coloring with clustering C for every t-list assignment L. Clearly, χ l C (G) ≥ χ C (G). The clustered list chromatic number χ l ⋆ (G) of a graph class G is the minimum t such that there exists C so that χ l C (G) ≤ t for every graph G ∈ G (and χ l ⋆ (G) = ∞ if no such t exists). In the sequel to this paper we show that χ ⋆ (X (K t )) = χ l ⋆ (X (K t )) = t − 1, proving the weakening of Hadwiger's conjecture for clustered chromatic number. In the current paper we introduce two less technical ingredients of the proof, which might be of independent interest. In particular, our results imply the above mentioned bound on χ ⋆ (X (K t )) of van den Heuvel and Wood, using very different techniques.
Rather than working with the clustered (list) chromatic number, we bound a different parameter which dominates it. The coloring number col(G) of a graph G is the smallest integer t such that every non-empty subgraph of G contains a vertex of degree less than t. A standard greedy argument shows that χ(G) ≤ col(G). Let us now introduce a similar notion for clustered coloring, first defined by Esperet and Ochem [11]. A t-island in a graph G is a non-empty subset S of vertices of G such that each vertex of S has less than t neighbors in V (G) \ S. Let col C (G) denote the smallest integer t such that each non-empty subgraph of G contains a t-island of size at most C. Hence, col 1 (G) = col(G). The following analogue of the bound χ(G) ≤ col(G) holds for the coloring with bounded clustering.
Lemma 2. For every graph G and every C ≥ 1, we have χ l C (G) ≤ col C (G).

Proof. Let L be a t-list assignment for G. Suppose that each non-empty subgraph of G contains a t-island of size at most C. By induction on the size of G, we show that G has an L-coloring with clustering at most C. This is trivial if G is empty, hence assume that V (G) ≠ ∅. Let S be a t-island in G of size at most C. By the induction hypothesis, G − S has an L-coloring with clustering at most C. Color each vertex v ∈ S by an arbitrary color from L(v) different from the colors of its neighbors in V (G) \ S. Clearly, each connected monochromatic subgraph of G is contained either in S or in V (G) \ S, and since |S| ≤ C, we conclude that we obtained an L-coloring of G with clustering at most C.
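The induction in this proof is effectively a greedy peeling algorithm: repeatedly remove a small t-island, color the remainder first, and then color the island while avoiding the colors of its fewer-than-t outside neighbors. The following Python sketch is purely illustrative; the island-finding oracle is a naive brute-force placeholder standing in for the hypothesis that small t-islands exist in every subgraph, and the path example is hypothetical.

```python
from itertools import combinations

def find_small_island(adj, vertices, t, C):
    """Naive placeholder oracle: brute-force search for a t-island of size
    at most C in the subgraph induced by `vertices` (exponential time;
    for illustration only)."""
    for size in range(1, C + 1):
        for S in combinations(vertices, size):
            S_set = set(S)
            if all(sum(1 for u in adj[v] if u in vertices and u not in S_set) < t
                   for v in S_set):
                return S_set
    return None

def clustered_list_coloring(adj, lists, t, C):
    """Greedy L-coloring with clustering at most C, following the proof:
    peel off small t-islands, then color them in reverse peeling order so
    that each vertex avoids the colors of its (< t) outside neighbors.
    Assumes every list has at least t colors and islands always exist."""
    remaining = set(adj)
    order = []
    while remaining:
        S = find_small_island(adj, remaining, t, C)
        assert S is not None, "hypothesis col_C(G) <= t violated"
        order.append(S)
        remaining -= S
    color = {}
    for S in reversed(order):
        for v in S:
            used = {color[u] for u in adj[v] if u in color and u not in S}
            color[v] = next(c for c in lists[v] if c not in used)
    return color

# Illustrative example: a 6-vertex path, 2-lists, islands of size 1.
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
lists = {v: ["red", "blue"] for v in path}
print(clustered_list_coloring(path, lists, t=2, C=1))  # alternating colors
```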
For a class G of graphs, let the clustered coloring number col ⋆ (G) of G denote the smallest integer t such that there exists C ≥ 1 such that col C (G) ≤ t for every graph G ∈ G (col ⋆ (G) = ∞ if no such t exists). By Lemma 2, any upper bound on col ⋆ (G) gives an upper bound on χ ⋆ (G). This observation motivates our investigation of the clustered coloring number in this paper.
It follows from Observation 3 below that col ⋆ (X (K t+1 )) ≥ t. On the other hand, for every n ≥ 1, there exist minor-closed graph classes G containing K n with col ⋆ (G) = 1, e.g., the class of all graphs with at most n vertices. Thus the complete minors present in the minor-closed class do not determine the clustered coloring number of the class. This motivates us to ask for an exact description of minor-closed classes G with col ⋆ (G) ≤ t.
Two kinds of graphs seem to play an important role in this context. One of them are the complete bipartite graphs K n,m . For the other one, let I n +P m denote the graph consisting of a path on m vertices and n additional vertices adjacent to all the vertices of the path. The following is straightforward, and also follows from a stronger Lemma 5 below.
Observation 3. Let t ≥ 1 be an integer. For every C ≥ 1, there exists m ≥ 1 such that all t-islands in K t,m and I t−1 + P m have more than C vertices.
Thus a class G with col ⋆ (G) ≤ t can contain only finitely many graphs of form K t,m or I t−1 + P m . We conjecture that for minor-closed classes, this condition is also sufficient.
Conjecture 4.
A minor-closed class of graphs G satisfies col ⋆ (G) ≤ t if and only if there exists m ≥ 1 such that K t,m ∉ G and I t−1 + P m ∉ G.
Conjecture 4 implies that col ⋆ (X (K t+1 )) = t (and hence χ ⋆ (X (K t+1 )) = t), since K t+1 is a minor of K t,m and I t−1 + P m for all m ≥ t. In the next section we show that if Conjecture 4 holds, then χ l ⋆ (G) = col ⋆ (G) for every minor-closed graph class G. On the other hand, the parameters χ ⋆ and χ l ⋆ are not tied, and thus, unfortunately, one cannot hope to extend the methods of this paper to characterize minor-closed graph classes with given clustered chromatic number. We refer the reader to [21, Conjectures 30 and 32] for a conjectured characterization.
In the next section we state our main results and present several applications, including proofs of two conjectures of Esperet and Ochem. We prove our two main results in Sections 3 and 4.
Our results
We start this section by presenting a strengthening of Observation 3 to clustered list chromatic number.
Lemma 5. For all positive integers C, t there exists a positive integer m such that χ l C (K t,m ) ≥ t + 1 and χ l C (I t−1 + P m ) ≥ t + 1. Proof. We present the proof for G = I t−1 + P m ; the proof for K t,m is similar. Since χ C (G) ≥ 2 for every connected graph G with more than C vertices, we can assume that t ≥ 2. Let S be a set of size (t − 1)t. Let L be a t-list assignment for G such that L(v) ⊆ S for all v ∈ V (G) and the vertices of I t−1 are assigned disjoint subsets of S. Suppose further that for every subset T ⊆ S with |T | = t there exists a set Q T of tC^2 consecutive vertices of P m such that L(v) = T for every v ∈ Q T . Clearly such a list assignment exists if m is sufficiently large.
Consider an L-coloring of G. It remains to show that there exists a monochromatic connected subgraph of size more than C. Let T ′ be the set of colors assigned to vertices of I t−1 . Then |T ′ | = t−1 by the choice of L. Let T = T ′ ∪{c} for some color c ∈ S \T ′ , and let Q T be as defined in the previous paragraph. If some color in T ′ is used on at least C vertices of Q T then G contains a monochromatic connected subgraph induced by these vertices and a vertex of I t−1 . Otherwise, G contains a monochromatic subpath of P m of length at least (C + 1) induced by the vertices in Q T which are colored using color c.
Lemma 5 immediately implies the following. Corollary 6. Let G be a class of graphs such that χ l ⋆ (G) ≤ t. Then there exists m ≥ 1 such that K t,m ∉ G and I t−1 + P m ∉ G.
Corollary 6 in particular implies that if Conjecture 4 holds then χ l ⋆ (G) = col ⋆ (G) for every minor-closed graph class G, as mentioned in the introduction.
Let tw(G) denote the treewidth of the graph G. We say that a class of graphs G is of bounded treewidth if there exists an integer w such that tw(G) ≤ w for every graph G ∈ G. Our first main result proves Conjecture 4 in the special case of minor-closed classes of graphs of bounded treewidth (or equivalently, according to a result of Robertson and Seymour [22], minor-closed classes that do not contain all planar graphs).
Theorem 7. Let G be a minor-closed class of graphs of bounded treewidth, and let t ≥ 1 be an integer. Then col ⋆ (G) ≤ t if and only if there exists m ≥ 1 such that K t,m ∉ G and I t−1 + P m ∉ G.
Theorem 7 and Corollary 6 imply the following.

Corollary 8. Let G be a minor-closed class of graphs of bounded treewidth. Then χ l ⋆ (G) = col ⋆ (G).

Theorem 7 can be applied to bound the clustered chromatic number of minor-closed classes of unbounded treewidth. The key tool which allows such an application is the following theorem of DeVos et al.
Theorem 9 (DeVos et al. [7]). For every minor-closed class G there exists an integer w such that for every graph G ∈ G there exists a partition V 1 , V 2 of V (G) with tw(G[V 1 ]) ≤ w and tw(G[V 2 ]) ≤ w.

For ordinary (not list) clustered coloring, one can use disjoint sets of colors on parts V 1 and V 2 of such a partition. Hence, combining Theorems 7 and 9 and Lemma 2 yields the following.
Corollary 10. Let G be a minor-closed class of graphs, and let t, m ≥ 1 be integers such that K t,m ∉ G and I t−1 + P m ∉ G. Then χ ⋆ (G) ≤ 2t.
In particular, χ ⋆ (X (K t,m )) ≤ 2t + 2 (1) and χ ⋆ (X (K t+1 )) ≤ 2t (2). As mentioned in the introduction, a different proof of the bound (2) is given by van den Heuvel and Wood [24], while (1) improves on the bound χ ⋆ (X (K t,m )) ≤ 3t established in [24].
Our second main result bounds the clustered coloring number of minorclosed classes of graphs in terms of the maximum density of the class.
Theorem 11. For every graph H, integer t ≥ 1 and real α > 0 there exists C > 0 satisfying the following. Let G be an H-minor-free graph such that |E(G)| < (t − α)|V (G)|. Then G contains a t-island of size at most C.
Theorem 11 immediately implies the following.

Corollary 12. Let t ≥ 1 be an integer, and let G be a minor-closed class of graphs such that for some α > 0 every graph G ∈ G satisfies |E(G)| < (t − α)|V (G)|. Then col ⋆ (G) ≤ t.
Corollary 12 can also be used to determine col ⋆ (X (K t )) for t ≤ 9. By the results of [8,13,19,23], if G is a K t -minor-free graph for some t ≤ 9, then |E(G)| ≤ (t − 2)|V (G)|. This implies the following.
Corollary 15. Let 1 ≤ t ≤ 9 be an integer. Then col ⋆ (X (K t )) = t − 1.

Finally, let us discuss a relationship to another concept, bootstrap percolation. Consider the following process on a graph G for some integer t ≥ 0. Let the vertices of some set A 0 ⊆ V (G) be marked active. If there exists an inactive vertex v in G with at least t active neighbors, v becomes active. We repeat this procedure until there are no more inactive vertices with at least t active neighbors. If, at the end, all vertices of G are active, we say that the set A 0 t-percolates. Bootstrap percolation was introduced by Chalupa, Leath and Reich [5] as a simplification of existing models of ferromagnetism. Extremal problems for bootstrap percolation, similar to the ones we consider in this paper, were studied for very structured graph families, e.g., in [4,20].
For ε > 0, we say that a graph G is ε-resistant to t-percolation if no set of at most ε|V (G)| vertices of G t-percolates. For a class of graphs G, let us define the percolation threshold p(G) of G to be the minimum integer t such that for some ε > 0, all non-null graphs in G are ε-resistant to t-percolation (p(G) = ∞ if no such t exists). Clearly, all graphs in G are also ε-resistant to t ′ -percolation for every t ′ ≥ t.
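The percolation process and the notion of ε-resistance are straightforward to simulate directly. The following Python sketch, with an illustrative 5-cycle example, is only meant to make the definitions concrete.

```python
from collections import deque

def percolates(adj, seed, t):
    """t-neighbor bootstrap percolation from the active set `seed`.
    Returns True iff every vertex of the graph ends up active."""
    active = set(seed)
    cnt = {v: sum(1 for u in adj[v] if u in active)
           for v in adj if v not in active}
    queue = deque(v for v, c in cnt.items() if c >= t)
    while queue:
        v = queue.popleft()
        if v in active:
            continue
        active.add(v)
        for u in adj[v]:
            if u not in active:
                cnt[u] += 1
                if cnt[u] == t:
                    queue.append(u)
    return len(active) == len(adj)

# A 5-cycle: no single vertex 2-percolates (so the cycle is 1/5-resistant
# to 2-percolation), but three suitably placed seeds do percolate.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(percolates(cycle, {0}, 2))        # False
print(percolates(cycle, {0, 2, 3}, 2))  # True
```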
How to show that a class is percolation resistant? Observe that a set t-percolates if and only if no t-island is contained in its complement. If a graph G contains more than ε|V (G)| pairwise disjoint t-islands, then each set of size at most ε|V (G)| is disjoint from at least one of them, and thus it does not t-percolate. Hence, the following notion gives an upper bound to the percolation threshold. Let the pervasive clustered coloring number pcol ⋆ (G) denote the minimum integer t such that for some ε > 0, every graph G ∈ G contains more than ε|V (G)| pairwise disjoint t-islands. If G contains linearly many pairwise disjoint t-islands, some of them must have constant size. Hence, for a subgraph-closed class G we have inequalities pcol ⋆ (G) ≥ p(G) and pcol ⋆ (G) ≥ col ⋆ (G). In general, the percolation threshold may be smaller than the clustered coloring number; e.g., for t ≥ 3 the class of graphs G with maximum degree at most t and girth Ω(log(|V (G)|)) has clustered coloring number t (since in t-regular graphs in the class, all (t − 1)-islands must contain cycles) but all the graphs in this class are (t−2)/(2t−2)-resistant to (t − 1)-percolation (since the complements of sets of size at most ((t−2)/(2t−2))|V (G)| contain a vertex of degree at most t − 2, or two vertices of degree t − 1 joined by a path, or a cycle, forming a (t − 1)-island).
However, in Section 3 we prove the following.
Theorem 16. Let G be a class of graphs closed under taking subgraphs, such that G ⊆ X (H) for some graph H. Then pcol ⋆ (G) = p(G).
Hence, for minor-closed classes, the percolation threshold bounds the clustered coloring number. Rather than proving Theorems 7 and 11 directly, we bound the percolation threshold and the pervasive clustered coloring number of the corresponding graph classes, proving the following strengthenings of Theorems 7 and 11, respectively. Theorem 17. Let G be a minor-closed class. If G has bounded treewidth, then pcol ⋆ (G) = p(G) = col ⋆ (G) is equal to the smallest integer t such that for some m ≥ 0 neither K t,m nor I t−1 + P m belongs to G.
Theorem 18. For every graph H, integer t > 0 and α > 0 there exists δ > 0 satisfying the following. Let G be an H-minor-free graph satisfying |E(G)| ≤ (t − α)|V (G)|. Then G contains at least δ|V (G)| pairwise disjoint t-islands.

Note that Theorem 18 implies the following strengthening of Corollary 12.
Corollary 19. Let G be a class of graphs closed under taking subgraphs, such that G ⊆ X (H) for some graph H, and let t ≥ 1 be an integer. If for some α > 0 every graph G ∈ G satisfies |E(G)| ≤ (t − α)|V (G)|, then pcol ⋆ (G) ≤ t.
We prove Theorems 16 and 18 (and thus Theorem 11) in Section 3, and we prove Theorem 17 (and thus Theorem 7) in Section 4.
Percolation and clustered coloring in classes with sublinear separators
In this section we prove Theorems 16 and 18. In fact we show that these results hold not just for minor-closed classes, but for a wider family of graph classes which admit "good" separators. We start by defining this family.
A separation of a graph G is a pair (L, R) of its subgraphs such that L ∪ R = G and E(L) ∩ E(R) = ∅; the order of the separation (L, R) is |V (L) ∩ V (R)|. For a non-decreasing function f : N → N, we say that a class G of graphs has f-separators if every graph G ∈ G on n ≥ 2 vertices has a separation (L, R) of order at most f (n) with |V (L)|, |V (R)| ≤ 2n/3, and we say that f is significantly sublinear if the sum Σ i≥0 f ((3/2) i+1 )/(3/2) i is finite. We use an argument of Lipton and Tarjan [17] to prove the following.
Lemma 20. Let f : N → N be a non-decreasing significantly sublinear function. Let G be a class of graphs closed under taking subgraphs that has f-separators. For every ε > 0 there exists C as follows. For every n-vertex graph G ∈ G there exists X ⊆ V (G) such that |X| ≤ εn and every component of G − X has at most C vertices.
Proof. Since f is significantly sublinear, there exists i 0 such that Σ i≥i 0 f ((3/2) i+1 )/(3/2) i ≤ ε; set C = ⌈(3/2) i 0 ⌉. Without loss of generality, we can assume that G is connected, as otherwise we can find the set X separately in each component. Let us define a rooted tree T and a mapping θ of its vertices to connected induced subgraphs of G as follows. For the root r of T , we set θ(r) = G. For every vertex v of T with |V (θ(v))| > C, we choose a separation (L v , R v ) of θ(v) of order at most f (|V (θ(v))|) with both sides of size at most (2/3)|V (θ(v))|, let X v = V (L v ) ∩ V (R v ), and let the children of v in T be the components of θ(v) − X v ; vertices v with |V (θ(v))| ≤ C are leaves of T .
We let X be the union of the sets X v over all non-leaf vertices v of T . By the construction of T , every component of G − X has at most C vertices, and thus it suffices to bound the size of X. For v ∈ V (T ), define rank(v) = ⌊log 3/2 |V (θ(v))|⌋. Observe that the rank is decreasing on each path in T starting in the root, and in particular, if rank(u) = rank(v) for distinct u, v ∈ V (T ), then θ(u) and θ(v) are vertex-disjoint. For any i ≥ ⌊log 3/2 C⌋, let V i be the set of non-leaf vertices v of T of rank i. The subgraphs θ(v) for v ∈ V i are pairwise vertex-disjoint, and each has at least (3/2) i and less than (3/2) i+1 vertices, so |V i | ≤ n/(3/2) i and |X v | ≤ f ((3/2) i+1 ) for every v ∈ V i . Hence |X| ≤ Σ i≥⌊log 3/2 C⌋ (n/(3/2) i ) · f ((3/2) i+1 ) ≤ εn by the choice of i 0 , as required.
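The recursive construction in this proof is easy to phrase algorithmically: given a balanced-separator oracle, repeatedly split the graph and collect the separators until every piece has at most C vertices. The Python sketch below assumes such an oracle `separate` is supplied (it encapsulates the f-separator hypothesis); the toy path oracle and the demo are illustrative only.

```python
def induced(adj, verts):
    """Adjacency of the subgraph induced by `verts`."""
    verts = set(verts)
    return {v: [u for u in adj[v] if u in verts] for v in verts}

def decompose(adj, separate, C):
    """Return a set X such that every component of G - X has at most C
    vertices, by recursive balanced separation (the tree T of the proof).
    `separate(adj)` is an assumed oracle returning (A, B, S): disjoint
    vertex sets covering the graph, with no edges between A and B and
    |A|, |B| <= 2n/3; its existence is the f-separator hypothesis."""
    if len(adj) <= C:
        return set()
    A, B, S = separate(adj)
    X = set(S)
    for part in (A, B):
        if part:
            X |= decompose(induced(adj, part), separate, C)
    return X

def path_separator(adj):
    """Toy oracle for path graphs labeled 0..n-1: split at a middle vertex."""
    order = sorted(adj)
    mid = len(order) // 2
    return set(order[:mid]), set(order[mid + 1:]), {order[mid]}

path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 20] for i in range(20)}
X = decompose(path, path_separator, C=3)
print(sorted(X))  # every component of the path minus X has <= 3 vertices
```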
In order to prove a strengthening of Theorem 18, we need one more definition. For a subset A ⊆ V (G), let e G (A) (or e(A) for ease of notation when the graph G is clear from the context) denote the number of edges of G with at least one end in A. We say that A ⊆ V (G) is a t-enclave if e G (A) < t|A|. The following observation is easy, but useful.
Lemma 21. Let G be a graph, and let A be a t-enclave in G. Then there exists a t-island S ⊆ A in G.
Proof. Choose a minimal t-enclave S ⊆ A. Note that S ≠ ∅, since 0 ≤ e(S) < t|S|. We claim that S is a t-island, as desired. Suppose for a contradiction that there exists v ∈ S with at least t neighbors in V (G) − S. Then e(S \ {v}) ≤ e(S) − t < t|S| − t = t|S \ {v}|. Thus S \ {v} is a t-enclave, contradicting the choice of S.
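The minimal-enclave argument translates directly into a peeling procedure: starting from a t-enclave, repeatedly delete any vertex with at least t neighbors outside the current set; each deletion removes at least t incident edges, so the enclave inequality is preserved and the procedure terminates at a non-empty t-island. A minimal Python sketch, with an illustrative toy graph:

```python
def island_from_enclave(adj, A, t):
    """Shrink a t-enclave A (e(A) < t*|A|) to a t-island by repeatedly
    deleting a vertex with >= t neighbors outside the current set."""
    S = set(A)
    while True:
        bad = next((v for v in S
                    if sum(1 for u in adj[v] if u not in S) >= t), None)
        if bad is None:
            return S  # every vertex has < t outside neighbors: a t-island
        S.remove(bad)

# Example: vertex 1 has two neighbors (0 and 4) outside A = {1, 2, 3},
# so it is peeled off, leaving the 2-island {2, 3}.
g = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(island_from_enclave(g, {1, 2, 3}, 2))  # {2, 3}
```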
Alon, Seymour and Thomas [2] proved that any minor-closed class of graphs G has f -separators for f (n) = O(√n), which is significantly sublinear. Therefore the next result implies Theorem 18.
Theorem 22. Let f : N → N be a non-decreasing significantly sublinear function. Let G be a class of graphs closed under taking subgraphs that has f -separators. For every integer t > 0 and α > 0 there exists δ > 0 so that every G ∈ G satisfying |E(G)| ≤ (t − α)|V (G)| contains at least δ|V (G)| disjoint t-islands.
Proof. Let ε = α/(2t). Let C be chosen to satisfy the conclusion of Lemma 20 for this ε and G, and let δ = α/(2tC). Consider an n-vertex graph G ∈ G satisfying |E(G)| ≤ (t − α)n. By the choice of C there exists a set X ⊆ V (G) of size at most εn such that every component of G − X has at most C vertices. Let K be the collection of vertex sets of the components of G − X, let K ′ be the collection of the sets in K which are t-enclaves, and let K ′′ = K − K ′ . By Lemma 21 it suffices to show that |K ′ | ≥ δn. If not, then Σ K∈K ′ |K| ≤ C|K ′ | < Cδn = (α/(2t))n. Moreover, we have e(K) ≥ t|K| for every K ∈ K ′′ , and thus |E(G)| ≥ Σ K∈K ′′ e(K) ≥ t Σ K∈K ′′ |K| ≥ t(n − |X| − Σ K∈K ′ |K|) > t(1 − α/(2t) − α/(2t))n = (t − α)n, which contradicts the assumption of the theorem.
Let us remark that the argument of Theorem 22 also directly implies Theorem 11, since all the t-enclaves we obtain have bounded size (it is possible to simplify the proof a bit in this weakened case, since we only need to prove the existence of one such t-enclave).
The following result with an analogous proof implies Theorem 16.
Theorem 23. Let f : N → N be a non-decreasing significantly sublinear function. Let G be a class of graphs closed under taking subgraphs that has f -separators. Then p(G) = pcol ⋆ (G).
Proof. Since pcol ⋆ (G) ≥ p(G) in general, it suffices to show that pcol ⋆ (G) ≤ p(G). Letting t = p(G), there exists ε > 0 such that every non-null graph in G is 2ε-resistant to t-percolation. Let C be chosen to satisfy the conclusion of Lemma 20 for ε and G. We claim that every graph G ∈ G contains at least (ε/C)|V (G)| pairwise disjoint t-islands. The claim implies that pcol ⋆ (G) ≤ t, and hence the theorem.
It remains to establish the claim. Let G ∈ G be an n-vertex graph. By the choice of C, there exists X ⊆ V (G) such that |X| ≤ εn and every component of G − X has at most C vertices. Let K be the set of all components K of G − X such that there exists a t-island S ⊆ V (K) in G. It suffices to show that |K| ≥ (ε/C)n. If not, then the set Z = X ∪ ⋃ K∈K V (K) has size at most εn + C|K| ≤ 2εn. By the choice of ε there exists a t-island S ⊆ V (G) − Z. Choose such S to be minimal; then G[S] is connected, and so S ⊆ V (K) for some component K of G − X. But then K ∈ K, and thus V (K) ⊆ Z, contradicting the fact that S is disjoint from Z. This contradiction finishes the proof.
Clustered chromatic number of classes of bounded treewidth
We start by introducing the necessary concepts. For sets A and B of vertices of a graph with |A| = |B| = k, an A − B linkage is a set of k pairwise vertex-disjoint paths, each with one end in A and the other end in B.
Let G and H be graphs. An H-decomposition of G is a function β that to each vertex z of H assigns a subset of vertices of G (called the bag of z), such that for every uv ∈ E(G), there exists z ∈ V (H) with {u, v} ⊆ β(z), and for every v ∈ V (G), the set {z : v ∈ β(z)} induces a non-empty connected subgraph of H. When H is a path or a tree, we say that (H, β) is a path or tree decomposition of G, respectively. The width of an H-decomposition is defined as max{|β(x)| : x ∈ V (H)} − 1, and the adhesion of an H-decomposition is max{|β(x) ∩ β(y)| : xy ∈ E(H)}. The treewidth tw(G) of a graph G is the minimum width of a tree decomposition of G.
A path decomposition (H, β) of G is appearance-universal if every vertex either appears in all bags of the decomposition, or in at most two (consecutive) bags. A vertex v is internal if it appears in only one bag. A path decomposition (H, β) has large interiors if for every z ∈ V (H) with two neighbors x and y, there exists an internal vertex contained in β(z) and no internal vertex has a neighbor both in β(x) \ β(y) and in β(y) \ β(x).
We now show how to transform a tree decomposition of large order and bounded width into a path decomposition of large order and bounded adhesion, and then to clean it up, making it linked, appearance-universal, and with large interiors. The techniques to do so are standard and appear e.g. in [14]; we give brief arguments here to adjust for minor technical details and notational differences.
Observation 24. A coarsening of a proper decomposition is proper. A coarsening of a linked decomposition is linked. A coarsening of a decomposition of adhesion p has adhesion at most p. A coarsening of an appearance-universal decomposition is appearance-universal; furthermore, if an appearance-universal decomposition has large interiors, then its coarsening has large interiors.
Lemma 25. Let k, n ≥ 1 be integers. If a graph G has a proper tree decomposition (T, β) of order at least n^n with bags of size at most k, then G has a proper path decomposition of adhesion at most k and order at least n.
Proof. If T has a vertex z of degree m ≥ n, then let T 1 , . . . , T m be the components of T − z, let H be a path z 1 z 2 . . . z m and let γ(z i ) = β(z) ∪ ⋃ x∈V (T i ) β(x). Otherwise, T contains a subpath H with at least n vertices; for z ∈ V (H), let T z be the component of T − (V (H) \ {z}) containing z, and let γ(z) = ⋃ x∈V (T z ) β(x). In both cases, (H, γ) is a proper path decomposition of G of adhesion at most k.
Lemma 26.
There exists a function f link : Z + 0 × Z + → Z + as follows. Let p and n be integers. If G has a proper path decomposition (H, β) with adhesion at most p of order at least f link (p, n), then G has a proper linked path decomposition of adhesion at most p and order n.
Proof. Set f link (0, n) = n and choose f link (p, n) for p ≥ 1 sufficiently large in terms of n and f link (p − 1, n); we proceed by induction on p. If p = 0, then (H, β) is linked and the claim holds trivially. If there exist at least f link (p − 1, n) − 1 edges z 1 z 2 ∈ E(H) such that |β(z 1 ) ∩ β(z 2 )| ≤ p − 1, then (H, β) has a coarsening with adhesion at most p − 1 and order at least f link (p − 1, n), and the claim follows by the induction hypothesis. Hence, assume that there are at most f link (p − 1, n) − 2 such edges, and thus there exists a coarsening (H 1 , β 1 ) of (H, β) of order f link (p, n) − f link (p − 1, n) + 2 such that any two adjacent bags intersect in exactly p vertices. If there exist at least f link (p − 1, n) − 1 vertices z of H 1 with broken bags, then G has a proper path decomposition with adhesion at most p − 1 of order at least f link (p − 1, n), and the claim follows by induction. Otherwise, since (H 1 , β 1 ) has order greater than (f link (p − 1, n) − 2)(n − 1), it contains n consecutive vertices with unbroken bags, and thus it has a coarsening of order n in which no bag is broken. This coarsening is a proper linked path decomposition of G.
Lemma 27. Let p ≥ 0 and n ≥ 1 be integers. Every path decomposition (H, β) of a graph G with adhesion at most p of order at least n^(p+1) has an appearance-universal coarsening of order n.
Proof. We prove the claim by induction on p. If p = 0, then every vertex appears in exactly one bag and (H, β) is appearance-universal. If a vertex v appears in at least n^p bags of the decomposition, then there exists a coarsening (H 1 , β 1 ) of order n^p such that v appears in all the bags. Let β ′ 1 (z) = β 1 (z) \ {v} for all z ∈ V (H 1 ). Then (H 1 , β ′ 1 ) is a path decomposition of G − v of adhesion at most p − 1, and by the induction hypothesis, it has a coarsening (H 2 , β ′ 2 ) of order n that is appearance-universal. Let β 2 (z) = β ′ 2 (z) ∪ {v} for all z ∈ V (H 2 ). Then (H 2 , β 2 ) is an appearance-universal coarsening of (H, β) of order n.
Hence, we can assume that every vertex appears in at most n^p − 1 bags of the decomposition. Let (H ′ , β ′ ) be the coarsening obtained by dividing H into subpaths with n^p vertices (plus possibly one shorter path at the end) and merging the bags in the subpaths. Then every vertex appears in at most two consecutive bags, and thus (H ′ , β ′ ) is appearance-universal.
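The coarsening operation used in Lemmas 27 and 28 (merging blocks of consecutive bags of a path decomposition) is simple enough to state as code; the following Python sketch is illustrative.

```python
def coarsen(bags, block):
    """Coarsening of a path decomposition: merge consecutive bags in
    groups of `block` (the operation used in Lemmas 27 and 28)."""
    return [set().union(*bags[i:i + block]) for i in range(0, len(bags), block)]

# Example: merging in blocks of three, as in the proof of Lemma 28.
print(coarsen([{1, 2}, {2, 3}, {3, 4}, {4, 5}, {5, 6}, {6, 7}], 3))
# [{1, 2, 3, 4}, {4, 5, 6, 7}]
```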
Lemma 28. Let n ≥ 1 be an integer. Every proper appearance-universal path decomposition (H, β) of order at least 3n has a coarsening of order n with large interiors.
Proof. Divide H into subpaths with three vertices and merge the bags in each subpath, obtaining a coarsening (H 1 , β 1 ) of order n. We claim that this coarsening has large interiors.
Consider a vertex z of H 1 with two neighbors z ′ and z ′′ , obtained by merging the bags of a subpath z 1 z 2 z 3 of H, and let z 4 be the vertex of H following z 3 (so that β(z 4 ) ⊆ β 1 (z ′′ )). Consider now an internal vertex v ∈ β 1 (z). Since v is internal in (H 1 , β 1 ), it does not appear in all bags of (H, β), and by appearance-universality, v appears in at most two consecutive bags of (H, β). Consequently, v does not appear in the bags of both z 1 and z 3 , and by symmetry, we can assume that v ∉ β(z 3 ). Hence, any neighbor of v in β(z 4 ) must belong to β(z 2 ), and by appearance-universality it must belong to all bags. It follows that v does not have a neighbor both in β 1 (z ′ ) \ β 1 (z ′′ ) and in β 1 (z ′′ ) \ β 1 (z ′ ).
Let (H, β) be a proper linked path decomposition of a graph G with adhesion p, and for every vertex z of H with two neighbors, let L z be a linkage in G[β(z)] between the two adhesion sets of z, which exists since the decomposition is linked. Let H 0 be the path obtained from H by removing its endpoints. Let L be the union of the linkages L z over all z ∈ V (H 0 ). Note that L is the disjoint union of paths L 1 , . . . , L p . Order the path H arbitrarily, and let z be a vertex of H 0 whose predecessor in H is x and whose successor in H is y. Let l z : [p] → β(z) and r z : [p] → β(z) be functions defined so that l z (i) is the vertex of L i belonging to β(z) ∩ β(x) and r z (i) is the vertex of L i belonging to β(z) ∩ β(y). The 4-tuple E z = (G[β(z)], L z , l z , r z ) is the extended bag of z. A finite extended bag property is a function that to each extended bag of adhesion p assigns an element of a finite set X p . By the pigeonhole principle, we have the following.
Observation 29. For all integers p ≥ 0 and n ≥ 1 and for any finite extended bag property π, there exists an integer N as follows. For any linked decomposition (H, β) of adhesion at most p and any Z ⊆ V (H) such that |Z| ≥ N, there exists a set U ⊆ Z of size n such that π(E x ) = π(E y ) for all x, y ∈ U.
We are now ready to start working towards the proof of Theorem 17. Let us start with a lemma that enables us to obtain many t-islands separated from the rest of the graph by small cuts.
Lemma 30. For all integers p ≥ 0 and t, m, l ≥ 1, there exists an integer N ≥ 1 as follows. Let (H, β) be a linked path decomposition of a graph G, with large interiors, adhesion at most p and order at least N. Then either G contains K t,m or I t−1 + P m as a minor, or there exists a subpath H ′ of H of length l such that the internal vertices of the bag of z form a t-island in G for every z ∈ V (H ′ ).
Proof. Suppose that there does not exist a subpath H ′ of H as above. Let Z be the set of vertices z of H of degree two such that the internal vertices of β(z) do not form a t-island in G. Then |Z| ≥ (N − 2)/l. Consider any vertex z ∈ Z, and let (G[β(z)], L z , l z , r z ) be its extended bag. Since the internal vertices of the bag of z do not form a t-island, there exists an internal vertex v z ∈ β(z) with at least t non-internal neighbors. Choose t of the non-internal neighbors v 1 , . . . , v t , and let π(z) = (σ 1 , . . . , σ t , σ z ), where for i = 1, . . . , t, we have σ i = j if v i = l z (j) or v i = r z (j), and σ z = j if v z lies on the path in L z with ends l z (j) and r z (j), and σ z = 0 otherwise. Observe that σ 1 , . . . , σ t are distinct, since the decomposition has large interiors. Note that π is a finite extended bag property, and by Observation 29, we can assume that there exist distinct vertices z 1 , . . . , z m in Z such that π(z 1 ) = π(z 2 ) = . . . = π(z m ). Without loss of generality, we can assume that π(z 1 ) = (1, 2, . . . , t, a), where a ∈ {0, 1, t + 1}. Let L be the union of the linkages L z over all vertices z ∈ V (H) of degree two, and let L 1 , . . . , L p be the paths of L.
If a ∈ {0, t + 1}, then contract each of the paths L 1 , . . . , L t to a single vertex x 1 , . . . , x t ; since v z 1 , . . . , v zm are adjacent to all of x 1 , . . . , x t , we obtain K t,m as a minor of G.
If a = 1, then contract each of the paths L 2 , . . . , L t to a single vertex x 2 , . . . , x t , and contract the parts of the path L 1 between vertices v z 1 , . . . , v zm so that they form a path on m vertices. We obtain I t−1 + P m as a minor of G.
We utilize Lemma 30 in an inductive argument as follows.
Lemma 31. For all integers k ≥ 0 and m, t ≥ 1, there exist real ε, C > 0 as follows. Let G be a graph with at least C vertices, and let S ⊆ V (G) be such that |S| ≤ ε|V (G)| + 2k + 3. If G has tree-width at most k and contains neither K t,m nor I t−1 + P m as a minor, then G contains a t-island disjoint from S.
Proof. Let l = 2(2k + 3), let n 4 be the constant N obtained by applying Lemma 30 with p = k + 1 and the given t, m and l, and let n 3 = 3n 4 , n 2 = n 3 ^(k+2) and n 1 = f link (k + 1, n 2 ). Set C = (k + 1) · n 1 ^ n 1 and ε = 1/((2k + 3)C). We proceed by induction on |V (G)|. The graph G has a proper tree decomposition (T, β) with bags of size at most k + 1. Clearly, the decomposition has order at least |V (G)|/(k + 1) ≥ C/(k + 1) = n 1 ^ n 1 . Hence, by Lemma 25, G has a proper path decomposition of adhesion at most k + 1 and order n 1 . By Lemma 26, G has a proper linked path decomposition of adhesion at most k + 1 and order n 2 . By Lemma 27, this decomposition has an appearance-universal coarsening of order n 3 , and by Lemma 28, this decomposition can be further coarsened to a decomposition (H, β) of order n 4 with large interiors. Since G contains neither K t,m nor I t−1 + P m as a minor, Lemma 30 yields a subpath H ′ of H of length l such that the set I z of internal vertices of the bag of z forms a t-island in G for every z ∈ V (H ′ ). If I z ∩ S = ∅ for some z ∈ V (H ′ ), then I z is the required t-island, and so we assume I z ∩ S ≠ ∅ for every such z. In particular, |S| ≥ 4k + 6, and thus |V (G)| ≥ (|S| − 2k − 3)/ε ≥ (2k + 3)/ε ≥ (2k + 3)^2 C.
Note that by our earlier assumption on the existence of certain separations, we have that |β(z)| ≥ C for at most one z ∈ V (H ′ ), and therefore we can select a subpath H ′′ of H ′ of length l/2 = 2k + 3 such that |β(z)| < C for every z ∈ V (H ′′ ). Let X = ⋃ z∈V (H ′′ ) β(z) and Y = ⋃ z∈V (H)−V (H ′′ ) β(z). Then |X ∩ Y | ≤ 2k + 2, |X| ≤ (2k + 3)C, and |S ∩ (X \ Y )| ≥ 2k + 3 by our assumptions. Let G ′ = G − (X \ Y ) and S ′ = (S ∩ V (G ′ )) ∪ (X ∩ Y ); then |S ′ | ≤ |S| − (2k + 3) + (2k + 2) ≤ ε|V (G)| + 2k + 2 ≤ ε|V (G ′ )| + ε|X| + 2k + 2 ≤ ε|V (G ′ )| + 2k + 3. By the induction hypothesis, G ′ contains a t-island I disjoint from S ′ . Clearly, I is a t-island in G disjoint from S, as desired.
The main result now readily follows.
Proof of Theorem 17. By Theorem 16 we have pcol ⋆ (G) = p(G). By Observation 3 we have pcol ⋆ (G) ≥ col ⋆ (G) ≥ t. Finally, by Lemma 31 we have p(G) ≤ t (if G ∈ G has fewer than C vertices, then G is 1/C-resistant to t-percolation for every t ≥ 1).
Concluding remarks
In this paper we studied improper colorings of minor-closed graph classes, where we used the size of the maximum monochromatic component as a measure of impropriety. One can attempt to build a similar theory for other impropriety measures as follows. We say that a graph parameter f with values in R + ∪ {+∞} is connected if for a disconnected graph G with components G 1 , G 2 , . . . , G k we have f (G) = max 1≤i≤k f (G i ). Let f be a connected monotone graph parameter. The f -chromatic number χ f (G) of a graph class G is defined as the minimum t such that there exists a real C satisfying the following: for every G ∈ G there exists a partition V 1 , . . . , V t of V (G) so that f (G[V i ]) ≤ C for every 1 ≤ i ≤ t. For example, Theorem 9 implies that χ tw (G) ≤ 2 for every minor-closed class G.
One can also define the f -list chromatic number χ l f (G) and the f -coloring number col f (G) of a graph class G analogously to the clustered list chromatic and coloring numbers. The inequalities col f (G) ≥ χ l f (G) ≥ χ f (G) hold for any choice of f .
Note that by considering f to be the connected graph parameter defined by f (G) = 1 if G is edgeless and f (G) = +∞ otherwise, one recovers from the definitions above the ordinary chromatic number, list chromatic number, and coloring number.
The graph parameter ∆ (maximum degree) is particularly well-studied in this context. Edwards et al. [10] have shown that χ ∆ (X (K t )) = t−1. Ossona de Mendez, Oum and Wood [6] characterized minor closed graph classes with given ∆-list chromatic number as follows.
Theorem 32. A minor-closed class of graphs G satisfies χ l ∆ (G) ≤ t if and only if there exists m ≥ 1 such that K t,m ∉ G.
As we mentioned before, Conjecture 4 implies that col ⋆ (G) = χ l ⋆ (G) for every minor-closed class G. Analogously, Theorem 32 and Conjecture 4 imply that χ l ∆ (G) ≤ col ∆ (G) ≤ col ⋆ (G) ≤ χ l ∆ (G) + 1. Unfortunately, the left inequality does not always hold with equality, as shown in the next lemma.

Lemma 33. Let O denote the class of outerplanar graphs. Then χ l ∆ (O) = 2, while col ∆ (O) ≥ 3.

Proof. Since K 2,3 is not outerplanar, K 2,m ∉ O for every m ≥ 3, and thus χ l ∆ (O) ≤ 2 by Theorem 32; the lower bound χ l ∆ (O) ≥ 2 is clear, since outerplanar graphs (e.g., stars) have unbounded maximum degree. It remains to show that col ∆ (O) ≥ 3, i.e., for every C > 0 there exists an outerplanar graph G such that G contains no 2-island S with ∆(G[S]) ≤ C. Indeed, consider an outerplanar graph constructed as follows. Let P 1 , P 2 and P 3 be three vertex-disjoint paths on C + 1 vertices. Let v i be an end of P i for i = 1, 2, 3. The graph G is obtained from P 1 ∪ P 2 ∪ P 3 by joining v i by an edge to every vertex of P i+1 for i = 1, 2, 3, where P 4 = P 1 by convention. Let S be a 2-island in G. Suppose that v 1 , v 2 , v 3 ∉ S. Then without loss of generality there exists a vertex v ∈ V (P 2 ) ∩ S. Choose such a vertex v closest to v 2 along P 2 . Then v has two neighbors not in S: one along P 2 and v 1 . This contradiction implies that v i ∈ S for some i ∈ {1, 2, 3}. As v i has degree at least C + 2, at least C + 1 of its neighbors lie in S, implying ∆(G[S]) > C, as desired.
Following this line of inquiry one might ask how closely col f (G) and χ l f (G) are related for other (natural) connected graph parameters f . Dvořák, Pekárek, and Sereni [9] study this question in more detail. | 2017-10-07T19:23:01.000Z | 2017-10-07T00:00:00.000 | {
"year": 2017,
"sha1": "4d419abb737f5b64ffe3d325b88091b655f3aaaf",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4d419abb737f5b64ffe3d325b88091b655f3aaaf",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
270490999 | pes2o/s2orc | v3-fos-license | Kinetic Landscape of Single Virus-like Particles Highlights the Efficacy of SARS-CoV-2 Internalization
The efficiency of virus internalization into target cells is a major determinant of infectivity. SARS-CoV-2 internalization occurs via S-protein-mediated cell binding followed either by direct fusion with the plasma membrane or endocytosis and subsequent fusion with the endosomal membrane. Despite the crucial role of virus internalization, the precise kinetics of the processes involved remain elusive. We developed a pipeline, which combines live-cell microscopy and advanced image analysis, for measuring the rates of multiple internalization-associated molecular events of single SARS-CoV-2 virus-like particles (VLPs), including endosome ingression and pH change. Our live-cell imaging experiments demonstrate that only a few minutes after binding to the plasma membrane, VLPs ingress into RAB5-negative endosomes via dynamin-dependent scission. Less than two minutes later, VLP speed increases in parallel with a pH drop below 5, yet these two events are not interrelated. By co-imaging fluorescently labeled nucleocapsid proteins, we show that nucleocapsid release occurs with similar kinetics to VLP acidification. Neither Omicron mutations nor abrogation of the S protein polybasic cleavage site affected the rate of VLP internalization, indicating that they do not confer any significant advantages or disadvantages during this process. Finally, we observe that VLP internalization occurs two to three times faster in VeroE6 than in A549 cells, which may contribute to the greater susceptibility of the former cell line to SARS-CoV-2 infection. Taken together, our precise measurements of the kinetics of VLP internalization-associated processes shed light on their contribution to the effectiveness of SARS-CoV-2 propagation in cells.
Introduction
COVID-19, caused by the SARS-CoV-2 virus, led to a major disruption of everyday life, demonstrating how zoonotic events can occur unexpectedly and have a catastrophic impact on the general population [1,2]. At present, more than 774 million cases have been reported, of which more than 7 million have been reported to be fatal (WHO). Part of the β-coronavirus family [3,4], SARS-CoV-2 is a positive-sense single-stranded RNA virus harboring 14 ORFs that encode 27 proteins [5]. These include the structural proteins S (spike), M (membrane), N (nucleocapsid), and E (envelope), which play major roles in viral survival, propagation, infectivity, and virulence [6,7]. Different systems have been developed to recreate the SARS-CoV-2 infection process under low-biosafety lab conditions. A prominent approach is the use of virus-like particles (VLPs) [8-13], which self-assemble in cells after expression of the SARS-CoV-2 structural proteins. VLPs faithfully recapitulate the step of viral entry while being unable to replicate, which renders them non-infectious [8]. SARS-CoV-2 VLP formation is driven primarily by the M protein, while the E protein plays a potentiating role [10,12]. While these two proteins are sufficient to form a particle, the S protein is required for entry into host cells. Interestingly, the addition of S to SARS-CoV-2 VLPs leads to a decrease in the amount of M and E. The optimal proportion of the structural proteins M, N, E, and S for the production of stable SARS-CoV-2 VLPs is 3:12:2:5 [10,12]. Further, addition of a specific cis-acting RNA element derived from SARS-CoV-2 to the VLPs increased packaging efficiency [8].
Two mechanisms, namely, membrane fusion and endocytosis, have been demonstrated to play a role in SARS-CoV-2 entry into cells [14]. SARS-CoV-2 binds to target cell membranes through the interaction of S with cell membrane proteins (SR-B1, AXL, KIM1/TIM1, CD147, Neuropilin-1,2, DC-SIGN, L-SIGN, and others), the most prominent of which is angiotensin-converting enzyme 2 (ACE2) [14][15][16][17][18][19][20][21][22][23][24][25]. The considerable number of proteins participating in the recognition of SARS-CoV-2 accounts for its wide tissue and cell tropism. The ACE2-bound S protein is recognized by the membrane-bound transmembrane serine protease 2 (TMPRSS2), which cleaves the S protein at the S2′ site, leading to a dramatic conformational change that allows fusion between the virus and host cell membranes [14]. However, it was reported that fusion takes place in the absence of ACE2 when tethering between the VLP and liposome membranes occurs, with the presence of ACE2 only stimulating the process [26]. Membrane fusion results in the release of viral genetic information into the cell, setting the stage for viral replication [27]. If the S protein-receptor complex is not engaged by this protease due to a low membrane concentration or absence of the latter, ACE2-bound SARS-CoV-2 is internalized via clathrin-mediated endocytosis [28]. The pH of the virus-containing endosomes then drops, leading to activation of the cathepsin L protease [29]. Cathepsin L-mediated cleavage of SARS-CoV-2 at the S2′ site enables fusion of the viral and endosome membranes as well as nucleocapsid release into the host cell [30].
Recent real-time imaging studies provided valuable insights into the process of SARS-CoV-2 internalization. Tracking of single vesicular stomatitis virus (VSV) chimeras containing the SARS-CoV-2 S protein revealed that SARS-CoV-2 entry requires an acidic environment [31,32]. A detailed understanding of viral entry, however, requires a comparison of the kinetic parameters of VLP internalization to those of acidification and endosomal ingress, which in turn requires VLPs possessing all four SARS-CoV-2 structural proteins.
Herein, we employed live-cell imaging and a dedicated image analysis pipeline (Single-Particle Tracking Analysis in Cells Using Software Solutions, SPARTACUSS 1.0) to precisely follow and quantify the timing of VLP internalization, VLP-containing endosome ingression, acidification, active microtubular transport, and nucleocapsid release. Our temporal characterization reveals the sequence and interdependence of the above-described processes. Rapid VLP acidification, which coincides with dynamin-mediated endosome scission and nucleocapsid release, occurs 4 and 12 min after plasma membrane binding in VeroE6 and A549 cells, respectively, quickly followed by the initiation of active microtubule-dependent VLP motion. Our results suggest that VLP fusion (nucleocapsid release) occurs in parallel to or shortly after endosome formation. Surprisingly, the VLPs do not co-localize with early endosomes during VLP internalization. The more rapid internalization observed in VeroE6 cells may contribute to the greater infectivity of SARS-CoV-2 observed in these cells relative to A549. Further, neither Omicron nor del-1 mutations influenced the kinetics of SARS-CoV-2 VLP internalization.
Transfection
For expression of ACE2-NeonGreen and mNeonGreen, we performed transient transfection via baculovirus-mediated gene transduction of mammalian cells (BacMam) using the Montana Molecular ACE2 green kit (C110G, Bozeman, MT, USA) as per the manufacturer's protocol. Expression of the fluorescent proteins was evaluated via fluorescence microscopy two days after transfection, whereafter the cells were treated with VLPs.
To observe endosome trafficking, we used CellLight™ Early Endosomes-GFP, BacMam 2.0 (Catalog number: C10586, Thermo Fisher Scientific, Waltham, MA, USA), which allowed us to introduce GFP-tagged Rab5a into cells and thus visualize early endosome vesicles. We plated 10,000 cells in 35 mm glass-bottom culture dishes (MatTek Corporation, Ashland, MA, USA) and incubated these with 2 µL of CellLight™ Early Endosomes-GFP overnight. On the following day, we added VLPs and imaged the cells at 30 s intervals in 11 Z-planes with a Z-step size of 0.2 µm.
To observe late endosomes, we used LysoTracker (ThermoFisher Scientific, Waltham, MA, USA). Cells were incubated with the VLPs for 30 min, treated with 50 mM LysoTracker for 1 min, and imaged every 30 s in 11 Z-planes with a Z-step size of 0.2 µm.
To image tubulin, we used abberior LIVE 610 conjugated to cabazitaxel. Cells were inoculated with VLPs for 30 min and incubated with abberior LIVE 610 conjugated to cabazitaxel for 15 min, followed by washing with fresh FluoroBrite™ DMEM (ThermoFisher Scientific, Waltham, MA, USA). The cells were imaged every 30 s in a single Z-plane.
Time-Lapse Live-Cell Imaging
Forty-eight hours before imaging, all cells were transferred to MatTek glass-bottom dishes (MatTek Corporation, Ashland, MA, USA) at 20% confluence. Live-cell imaging was performed on an Andor Dragonfly spinning-disk confocal system with a Nikon Eclipse Ti2-E inverted microscope equipped with the Nikon Perfect Focus System (PFS), a Nikon CFI Plan Apo VC 60× (NA 1.2) water immersion objective, a Nikon Apo 60× (NA 1.4) oil objective, or a Nikon HP Plan Apo 100× (NA 1.35) silicone λS objective, and a high-sensitivity iXon 888 Ultra Electron Multiplying Charge-Coupled Device (EMCCD) camera. Time intervals between consecutive frames varied between 15 and 30 s depending on the type of experiment, cell line, and the labeled protein. Images were acquired with variable z-stacks of between 1 and 36 steps depending on the type of experiment and a z-step size of 0.2 µm. Prior to imaging, Petri dishes mounted on the microscope were left to thermally equilibrate for at least 30 min. All cells were incubated in FluoroBrite™ DMEM (ThermoFisher Scientific, Waltham, MA, USA) for imaging and maintained at 37 °C and 5% CO2 during imaging. To visualize cells as transparent and opaque, we used the Imaris 9.6.1 imaging software tool (Oxford Instruments, Abingdon, UK).
Electron Microscopy
MLE-12 cells (a gift from Dr Kristi Warren) were grown on ACLAR disks and incubated with SARS-CoV-2 VLP Wu for 5 min. Cells were fixed in 2.5% glutaraldehyde plus 1% paraformaldehyde in 0.1 M cacodylate buffer for 30 min and then embedded in resin using an Embed 812 kit (Electron Microscopy Sciences, Hatfield, PA, USA) and sectioned at 80 nm with a diamond knife (Diatome, Nidau, Switzerland) using a Leica EM UC6 (Leica Microsystems, Wetzlar, Germany). Sections were visualized using a JEM 1400 Plus electron microscope (JEOL, Tokyo, Japan) at 120 kV.
Virus-like Particle Preparation
VLPs were prepared as previously described [11,12]. Once generated, these were kept on ice until use, 1 to 6 days after production.
VLP Tracking
The channel containing the VLP fluorescence was isolated in Fiji and smoothed with 3D Gaussian filtering (1.5-pixel radius in XY and Z). A maximum intensity projection (MIP) was then performed.
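A minimal Python sketch of this preprocessing step is given below. It is our own illustration, not SPARTACUSS code: we assume the channel is held as a NumPy array of shape (T, Z, Y, X) and read the quoted 1.5-pixel radius as the Gaussian sigma, which is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_vlp_channel(movie):
    """Smooth a (T, Z, Y, X) VLP-channel movie and compute its 2D MIP.

    The 1.5-pixel radius quoted in the text is used here as the Gaussian
    sigma in Z, Y, and X; this is our interpretation, not a documented value.
    """
    smoothed = np.stack([gaussian_filter(frame, sigma=1.5) for frame in movie])
    mip = smoothed.max(axis=1)  # maximum intensity projection along Z
    return smoothed, mip
```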
The particles were tracked using the MTrackJ plugin by clicking on the particle location in each time frame, using the option "Apply local cursor snapping during tracking" with a range of 5 × 5 pixels. Completed tracks were exported as .mdf files. Multiple particles were tracked in the same session. To continue previous tracking sessions, we imported the previous .mdf file and continued, so that one .mdf file contained all the tracks of one movie.
The subsequent processing steps outlined below were performed with a set of Python scripts and Fiji macros.
To convert the 2D + t tracks to 3D + t, the script adds the Z coordinate by going back to the 3D movie before MIP and finding the plane that contains the maximum signal along a cylinder with a 2-pixel radius, centered at the location of the particle.
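The sketch below illustrates one way this Z-recovery could look in Python. The function name is ours, and we read "maximum signal along a cylinder" as the plane-wise summed intensity inside a 2-pixel disk, which is an assumption about the exact criterion used.

```python
import numpy as np

def recover_z(volume, x, y, radius=2):
    """Pick the Z plane with the strongest signal inside a cylinder of the
    given pixel radius centered on the particle's 2D track position.

    volume: (Z, Y, X) image for one time point, taken before the MIP.
    """
    z_dim, y_dim, x_dim = volume.shape
    yy, xx = np.mgrid[0:y_dim, 0:x_dim]
    disk = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2  # cylinder cross-section
    profile = np.array([plane[disk].sum() for plane in volume])
    return int(profile.argmax())
```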
Instantaneous VLP speed v(t) at each time t was measured by subtracting the positions of a particle at times (t + 1) and t and dividing by the time interval between successive frames. The positions were converted to real physical units (nanometers) using the pixel sizes in XY and Z.
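In Python this calculation could look as follows; the pixel sizes are passed as parameters because only the 0.2 µm Z step is stated in the text, so the XY pixel size is left as a placeholder.

```python
import numpy as np

def instantaneous_speed(track_px, dt_s, px_xy_nm, px_z_nm):
    """3D instantaneous speed v(t) in nm/s.

    track_px: (T, 3) array of (x, y, z) positions in pixels.
    dt_s:     time between consecutive frames, in seconds.
    """
    track_nm = track_px * np.array([px_xy_nm, px_xy_nm, px_z_nm])
    steps = np.diff(track_nm, axis=0)            # displacement per frame
    return np.linalg.norm(steps, axis=1) / dt_s  # one value per interval
```

For the 30 s acquisitions described above, dt_s would be 30.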
To visualize single particles over time, a cuboid with size dx, dy, and dz was cropped out of the 3D image stack for each time point, such that the particle was located at the center of the square (dx, dy), while dz was equal to the full depth of the stack. Thus, the motion of the particle in Z could be visualized. The cuboids were maximum-projected along either the X or Y axis, resulting in ZY or ZX rectangles, respectively. For completeness, we also generated the XY squares by performing max-Z projections. The stacking of these three ZY, ZX, and XY crops is referred to as a kymograph and is discussed throughout the paper.
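As a rough Python illustration of the cuboid cropping and triple projection for one time point (the crop half-width is an arbitrary choice of ours, and we assume the window fits inside the image):

```python
import numpy as np

def kymograph_crops(stack, x, y, half_xy=8):
    """Crop a cuboid around a particle (full Z depth, 2*half_xy wide in XY)
    and return its ZY, ZX, and XY maximum projections."""
    cube = stack[:, y - half_xy:y + half_xy, x - half_xy:x + half_xy]
    zy = cube.max(axis=2)  # project along X: particle position in Z and Y
    zx = cube.max(axis=1)  # project along Y: particle position in Z and X
    xy = cube.max(axis=0)  # project along Z: particle position in X and Y
    return zy, zx, xy
```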
VLP Analysis
With the position, speed, and intensity of each particle in hand, we determined the precise moment when the intensity started to decrease and the speed started to increase, the specific hallmarks of the VLP internalization process. Every particle was then aligned to these positions, and an average for the speed and intensity was obtained. For every intensity value, a measured background value was subtracted. The curves were then plotted and compared, showing the standard deviation for each point as error bars.
When determining the pH of each particle, we transformed the measured intensity values to pH values using the following formula: pH = 7.11 − log10(1/(X × 0.886) − 1) [33], where X stands for the corresponding background-corrected intensity value.
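Translated directly into Python, with the caveat that the conversion is only defined for normalized intensities satisfying 0 < 0.886·X < 1:

```python
import numpy as np

def intensity_to_ph(x):
    """pH from a normalized pHluorin intensity X, per the calibration of
    Ref. [33]; valid only for 0 < 0.886 * x < 1."""
    return 7.11 - np.log10(1.0 / (x * 0.886) - 1.0)
```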
Statistical Analysis
Between-group comparisons were performed using Student's t-test. The significance threshold was set at p < 0.01. Data are presented as the mean ± standard deviation.
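An equivalent comparison in Python could be run as sketched below; the interval values are hypothetical placeholders, since the per-VLP measurements are not reproduced in the text.

```python
from scipy import stats

# Hypothetical binding-to-acidification intervals (minutes), for illustration only
vero_e6 = [3.1, 4.4, 2.9, 5.0, 4.2, 3.8]
a549 = [11.8, 13.0, 12.4, 10.9, 14.1, 12.2]

t_stat, p_value = stats.ttest_ind(vero_e6, a549)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, significant: {p_value < 0.01}")
```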
Visualization of VLP Internalization
To study the kinetics of SARS-CoV-2 entry into cells, we employed VLPs derived from HEK293 cells overexpressing the SARS-CoV-2 structural proteins M, E, and S of the Wuhan variant [11]. These VLPs successfully recapitulate SARS-CoV-2 internalization [10,12], binding to the host cell surface via the S protein, whereafter they are internalized, as shown by a TEM micrograph of thin-sliced MLE-12 cells in the process of endocytosing a SARS-CoV-2 VLP Wu (Figure S1). To visualize internalization, we used VLPs containing M-mCherry (as well as unlabeled M) [12]. Hereafter, VLP Wu :M Ch is short for VLP Wuhan :(E, S, M&M-mCherry). We treated U2OS cells, which overexpress NeonGreen-tagged ACE2, non-labeled ACE2, and TMPRSS2, with these VLPs and visualized the movement of particles in 3D at 30 s intervals via multi-point spinning-disk live-cell microscopy. We observed adherence of the VLPs to the cell membrane, often on the filopodia, as previously suggested [34] (Figure 1A, Videos S1-S3). We manually tracked each particle after membrane binding in order to acquire its speed and position in 3D (see Section 2). Interpreting these multidimensional data requires clear and intuitive visualization, as well as precise measurement. To this end, we coupled the tracking procedure to a post-processing pipeline for extracting and visualizing the multidimensional tracking results for each single VLP. We named this procedure SPARTACUSS (Single-PARTicle Tracking Analysis in Cells Using Software Solutions).
For the visualization of a single particle via SPARTACUSS, we first cropped the immediate space around the particle's x and y coordinates while keeping the entire z-stack; additionally, we applied a Gaussian blur to make the VLPs more distinct (Figure S2A). Next, we performed maximum intensity projections as follows: maximum intensity along x, to show the particle's position in z and y; maximum intensity along y, to visualize the particle's position in z and x; and maximum intensity along z, to depict the particle's position in x and y. We repeated this for each time point and combined the crops into a one-block kymograph (Figure S2B,C). Finally, we created such a block for the 488 (green) channel and the 591 mCherry (red) channel, and a merged block for both channels (Figure S2C). SPARTACUSS allows us to measure and plot the speed in 3D (x,y,z), as well as the changes in the intensity of the labeled structural proteins in individual VLPs (Figure S2C).
Using the SPARTACUSS workflow, we observed that, after binding, the VLPs initially moved slowly (10-20 nm/s), whereafter their speed increased. This speed increase frequently coincided with downward motion in z, suggestive of particle internalization (Figure 1A,B,F,G, Videos S1-S7).
To confirm that this downward motion indeed marks VLP entry into cells, we expressed mNeonGreen, which freely diffuses throughout the whole cell volume, and used it to reconstruct the cells in 3D (Videos S4-S6). To see both the membrane-bound and internalized VLPs, we used a transparent visualization of the mNeonGreen cell volume (Figure 1C, Video S7). In order to distinguish between the two VLP populations, we made the volume opaque (Figure 1E, Video S7), which renders most VLPs invisible once inside the cell. Internalized VLPs can be subsequently visualized using a side view of the transparent cell volume image (Figure 1D, Video S7). Visualization of a single VLP confirms that its speed increase in 3D coincides with cell surface penetration, as observed with the ACE2 tagging discussed above (Figure 1B,G). These results clearly demonstrate the capability of SPARTACUSS to capture the exact moment of VLP entry into cells and thus measure its 3D dynamics in real time. To understand if active transport along microtubules underpins the increased movement of particles once inside the cell, we pre-incubated cells with VLP Wu :M Ch , treated them with abberior LIVE 610 tubulin dye (cabazitaxel conjugated with LIVE 610 dye) for 10 min, and immediately proceeded with live-cell imaging. We observed continuous co-localization between VLPs and microtubules (Figure 1H), indicating that fast particle movement occurs along the microtubule network (Video S8).
Next, we sought to assess how anti-SARS-CoV-2 S protein antibodies neutralize the SARS-CoV-2 virus. To this end, we incubated cells with VLP Wu :M Ch pre-treated with an Anti-SARS-CoV-2 Spike S1 (CR3022 clone) antibody. Antibody pre-incubation precipitated most of the VLPs, preventing cell entry by effectively reducing their free concentration (Figure 1I and Video S9).
Dynamin-Dependent VLP Entry
Endocytosis has been suggested to play a role in VLP entry [35,36]. To determine the duration of time between VLP binding and ingress, we expressed GFP-tagged dynamin in Vero E6 cells, which are highly susceptible to SARS-CoV-2 infection [37]. Dynamin binds to the invaginated clathrin-coated vesicles for 10 to 15 s and is responsible for vesicle scission during endocytosis [38][39][40]. The transient dynamin foci formed during this process are a standard marker of endosome formation [38,[41][42][43]. Time-lapse imaging of dynamin-1-GFP-expressing cells treated with VLP Wu :M Ch allowed us to visualize VLP movement across the cells as well as the "blinking" of short-lived dynamin foci, indicative of endosome ingress (Videos S10-S12). Overall, 49% of the VLPs co-localized with dynamin foci during or immediately after the above-described speed increase and downward movement (Figure 2A,B). This transient co-localization strongly suggests that endocytosis is involved in VLP internalization.
Dynamin accumulation does not always reflect successful scission of the endocytic vesicles, and several abortive cycles are often observed before a productive scission occurs [44]. If a single VLP co-localizes transiently with dynamin more than once, it would be indicative of such abortive events. Indeed, we observed that 64% of the particles co-localized with a dynamin focus only once, 17% co-localized twice, and 19% experienced three or more co-localization events. These results suggest that in 36% of VLP entries, there is at least one abortive dynamin binding (Figure 2C). In addition, dynamin foci formation allowed us to measure the time between VLP binding to the membrane and endosome vesicle scission, which was 5.24 ± 6.8 min. SPARTACUSS also allowed us to measure the time between dynamin foci formation and VLP speed increase. Particles increased their speed within 45 s of dynamin binding, and, in 30% of cases, this increase occurred in parallel with the dynamin scission event (Figure 2D,E). To further evaluate the role of endocytosis in VLP internalization, we used Dynole 34-2, a potent inhibitor of dynamin 1, which prevents receptor-mediated endocytosis [43]. After Dynole 34-2 treatment, VLPs did bind to the cell surface but did not enter (Figure 2F, Video S13). However, it should be noted that Dynole 34-2 also greatly altered cell morphology, inducing a round phenotype, indicative of an effect on the cell cortex. Such considerable changes in morphology could affect membrane-related processes, including endocytosis. To check whether VLPs localize within early endosomes, we used cells expressing Rab5a-GFP. Surprisingly, we did not detect co-localization of the VLPs with early endosomes (Figure 2G and Video S14). As endosome maturation is paralleled by a drop in pH, we asked whether the VLPs co-localize with acidic vesicles [45]. To this end, we stained cells with LysoTracker, which marks acidic vesicles, after incubation of the cells with VLPs. Some of the VLPs co-localized with the stained acidic vesicles (Figure 2H and Video S15).
Taken together, our results indicate that dynamin-mediated endocytosis is involved in VLP internalization. Furthermore, VLPs are not internalized via Rab5a-positive early endosomes but rather localize within acidic vesicles.
Dynamics of VLP Acidification
The pH of the medium was previously shown to influence the internalization of a SARS-CoV-2 S protein-containing VSV chimera, with a more acidic environment promoting internalization preferentially via fusion [46][47][48]. To evaluate the role of pH in VLP internalization, we sought to precisely measure the pH dynamics of VLPs in which a fraction of the M protein is tagged with superecliptic pHluorin at its C-terminus [33]. This protein emits bright fluorescent light at pH ≥ 8, but its intensity sharply decreases at pH < 7.5, completely disappearing at pH < 5 [49]. As the M protein C-terminal domain lies inside the VLPs, the fused pHluorin serves as a real-time indicator of the intra-VLP pH. The pHluorin-labeled VLPs in the medium (pH 8) emitted a bright green signal; however, shortly after binding to the membrane of Vero E6 cells, the intensity rapidly disappeared (Video S16), suggesting that the pH of VLPs decreases sharply. To understand when this happens relative to VLP internalization, we used VLPs which contain unlabeled E, S, N, and M proteins but also two labeled fractions of the M protein, one with mCherry and another with pHluorin, in addition to the cis-acting RNA element [8]. We will refer to these VLPs as VLP Wu :M Ch M pH (short for VLP Wuhan :(E, S, N, M&M-mCherry&M-pHluorin)). The dual VLP labeling enabled us to follow the VLP and analyze its dynamics even when complete disappearance of the pHluorin signal occurred due to a sharp drop in pH. We observed that in 63% of the VLPs, the pHluorin signal disappeared without any change in mCherry intensity, while in 34% of the VLPs both fluorescent signals disappeared simultaneously, and in 3% of the particles there was no change in the fluorescent signal during the time course of our experiment in Vero E6 cells (Figure 3A). These results indicate that in two-thirds of the cases, the disappearance of the pHluorin fluorescent signal is a result of pH decrease without VLP disassembly.
Next, we studied the influence of concurrent ACE2 and TMPRSS2 protease overexpression on VLP internalization in Vero E6 cells. To this end, we used VLPs containing RNA (VLP Wu :M Ch M pH , R), which included the cis-acting T20 element reported to enhance packaging [8]. The inclusion of this RNA element did not affect the percentage of VLPs in which the pHluorin signal disappeared without any change in mCherry intensity (Figure 3A). Meanwhile, ACE2 and TMPRSS2 overexpression in cells led to a significant increase in the number of such VLPs (up to 100%) (Figure 3A). Next, we performed the same experiments with human lung adenocarcinoma A549 cells, which are the standard pulmonary epithelial cell model for SARS-CoV-2 infection (Video S17). Without ACE2 and TMPRSS2 overexpression, 28% of the VLPs exhibited a decrease in pHluorin fluorescence without any change in mCherry intensity, 15% exhibited simultaneous disappearance of both signals, and 57% did not show any change in either signal throughout the experiment (Figure 3B). These results suggest that A549 cells are less susceptible to SARS-CoV-2 VLP internalization than Vero E6 cells. Overexpression of ACE2 and TMPRSS2 considerably increased the fraction of VLPs exhibiting a decrease in pHluorin intensity without changes in mCherry intensity (89% versus 28%) (Figure 3B). Overall, ACE2 and TMPRSS2 overexpression increased the number of VLPs exhibiting a decrease in pH without VLP disassembly in both cell lines (Figure 3A,B).
To evaluate the speed with which the pHluorin signal decreases, we aligned all VLP tracks to the start of the pHluorin intensity decrease (Figure 3C). We thus measured the half-time of pHluorin signal disappearance, which was 1.4 and 1.6 min in Vero E6 and A549 cells, respectively. Conversion of pHluorin intensity to pH values [33] showed that the VLP pH in Vero E6 cells decreased from 8 to 6.3, while that in A549 cells decreased from 8 to 6.9 over a period of 1.5 min (Figure 3C,D), attesting to the rapid acidification of VLPs. Our results also demonstrated that the speed of VLPs tends to increase following the start of the pH decrease (Figure 3E).
Dual labeling with both M-mCherry and M-pHluorin allowed us to infer the temporal order of the three stages of VLP internalization at the single VLP level: VLP binding to the cell membrane, VLP pH decrease, and the increase in its speed. We thus measured the time intervals between each possible pair of the above-described internalization steps (Figure 3F-H, and Table 1). In VeroE6 cells, VLP pH began to decrease 4.1 ± 3.6 min after plasma membrane binding, and 2.4 ± 3.7 min later the VLP speed started to increase (Table 1). In A549 cells, VLP pH began to decrease 12.5 ± 8.4 min after binding, and 1.4 ± 3.7 min later the VLP speed started to increase. These results indicate that, on average, the pH change occurs during or immediately before the VLP speed increases (Figure 3J,K). At a single VLP level, however, we have examples where the speed increase occurs after the pH decreases, but also examples where the speed increases simultaneously with or before the start of the pH decrease (Figures S3 and S4, and Videos S18-S21), indicating that there is no direct causal relationship between the two events. This uncoupling is further supported by the significant variation in the pH at which the VLP speed increase begins (Figure 3I). As we demonstrate above, the speed increase is a hallmark of active microtubule movement of the vesicles containing VLPs. Taken together, for the majority of VLPs, acidification starts before or during microtubule attachment (Figure S3C). As acidification takes less than 2 min, we observe cases in which movement via the microtubules occurs after the pH is already <5. Direct comparison of VLP internalization kinetics between the two cell lines (Figure 3H) revealed no statistically significant difference in the interval between VLP pH decrease and speed increase (start of the microtubule movement). However, the intervals between VLP binding and both pH decrease and speed increase are 2-3 times shorter in VeroE6 cells than in A549 cells (Figure 3, Table 1). This difference in internalization efficiency may contribute to the greater SARS-CoV-2 susceptibility of Vero E6 relative to A549 cells, generally attributed to the lack of interferon signaling in the former [50][51][52][53]. Close examination of the distribution of the intervals between VLP binding and both pH decrease and speed increase in A549 cells revealed a small population of VLPs for which these intervals are significantly longer (Figure 3F,G). These longer intervals may be attributed to cycles of abortive dynamin-mediated endosome scission, as previously discussed. Importantly, ACE2 and TMPRSS2 overexpression had no effect on the dynamics of VLP speed increase and pH decrease in both Vero E6 and A549 cells (Figures S5 and S6), despite enhancing VLP uptake efficiency (Figure 3A,B). In summary, our findings highlight the notable speed and efficacy of VLP internalization, which are more pronounced in VeroE6 relative to A549 cells. Furthermore, the dynamic profiles of VLP internalization-related processes, namely, membrane binding, acidification, and initiation of microtubule transport, show that the latter two processes, while temporally proximal, are not interdependent.
Kinetics of VLPs Lacking the Furin Cleavage Site
In contrast to SARS-CoV, SARS-CoV-2 harbors a PRRA furin cleavage site (FCS) at the S1/S2 junction of the S protein [53]. Cleavage by the cellular protease furin, followed by cleavage at the S2′ site by TMPRSS2 or cathepsins, results in the separation of the two S sub-domains. Thus, we sought to determine how the absence of the FCS would affect internalization at the single VLP level. To this end, we used VLPs containing M-mCherry, M-pHluorin, and an S protein lacking the FCS (del-1) [54][55][56][57][58]. We refer to these as VLP Wu(del-1) :M Ch M pH ,R, short for VLP Wuhan(del-1) :(N, E, S, M&M-mCherry&M-pHluorin, T20 RNA). Treatment of A549 cells overexpressing ACE2 and TMPRSS2 with VLP Wu(del-1) :M Ch M pH ,R revealed a decrease in the percentage of VLPs for which the pHluorin signal decrease occurred without a change in mCherry intensity, from 89% for VLP Wu :M Ch M pH ,R to 73% for VLP Wu(del-1) :M Ch M pH ,R (Figure 4A). Further, the rate of pHluorin decrease was similar between the two (Figure 4B). The distribution of the time intervals between VLP binding and pHluorin decrease/VLP speed increase was also comparable (Figure 4C-G, Table 1). As observed for VLP Wu , the pH decrease of VLP Wu(del-1) initiated either a little before, in parallel to, or after the speed of the same VLPs began to increase (Figure S7; Videos S22 and S23). Similar results were obtained in VeroE6 cells (Figures S8 and S9; Videos S24 and S25). Taken together, our results indicate no significant role for the FCS in the internalization of SARS-CoV-2 VLPs.
Kinetics of Omicron VLPs
Emergence of the SARS-CoV-2 Omicron variant in the autumn of 2021 was followed by its rapid spread, overtaking previous variants in global prevalence. Omicron harbors more than 50 amino acid substitutions, 37 of which are in the S protein, with 15 affecting the receptor-binding domain. In light of its considerable transmissibility and rapid replication in human bronchi (70-fold greater than that of previous variants), we sought to measure the internalization kinetics of Omicron [59]. To this end, we employed VLPs composed of unlabeled N, E, S, and M proteins harboring the Omicron substitutions. The VLPs also included mCherry-tagged and pHluorin-tagged M protein and are referred to as VLP Omi :M Ch M pH R, short for VLP Omicron :(N, E, S, M, M-mCherry & M-pHluorin, T20 RNA). Treatment of A549 cells overexpressing ACE2 and TMPRSS2 (Videos S26-S28) revealed a decrease in the proportion of VLPs in which the pHluorin signal decay occurred without a change in mCherry signal, that is, from 89% in VLP Wu :M Ch M pH R to 65% for VLP Omi :M Ch M pH R (Figure 4A). However, the rate of decrease in pHluorin intensity was largely identical between the two VLP types as well as the Wuhan del-1 mutant. The average and the distribution of intervals between VLP binding and pHluorin decrease/VLP speed increase were also comparable among the three VLP types (Figures 4E and S10). Taken together, no considerable differences in internalization kinetics were noted for the Omicron variant.
Rate of VLP Nucleocapsid Release
Nucleocapsid release occurs via VLP fusion either to the plasma or the endosomal membrane, representing an essential step in the internalization process. Thus, we set out to measure the dynamics of VLP nucleocapsid release, using VLPs in which a fraction of the nucleocapsid-forming N protein is tagged with EGFP and a fraction of M is tagged with mCherry (VLP Wu :N eG M Ch R, short for SARS-CoV-2:(E, S, N&N-eGFP, M&M-mCherry, T20 RNA) VLPs). We observed that, while initially greater than 90% of VLPs emitted both green and red fluorescence, less than 1% of VLPs emitted green fluorescence at 2 days after production. Although we have previously shown that fluorescently tagging M or N with GFP results in stable VLPs, as imaged by atomic force microscopy [12], our current observations suggest that tagging both M and N proteins simultaneously reduces VLP stability [13]. In 57% of these double-labeled VLPs, the EGFP signal disappeared before the mCherry one, which may reflect N protein release (Figure 5A). The rate of N-EGFP signal decay in VLP Wu :N eG M Ch R was slower than that of the M-pHluorin intensity decrease in VLP Wu :M Ch M pH R (Figure 5C). The observed slow decay suggests that nucleocapsid release may not be a single-step rapid event, but rather a gradual process of continuous N-EGFP release, which cannot be followed thereafter. Only in a single case (of 18 VLPs) were we able to follow the N-eGFP signal after rapid release of the nucleocapsid. We tracked both fluorescent signals (M-mCherry and N-eGFP) for this VLP and observed that the M-mCherry signal did not move in z during signal separation, while the eGFP did. This could suggest that, in this case, fusion occurs either at the cell plasma membrane or during VLP ingression, prior to active movement via the microtubular network (Figure 6 and Video S29). On average, there was no statistically significant difference between the interval from VLP binding to the start of the pH decrease of VLP Wu :M Ch M pH R (Table 1) and the interval from VLP binding to the start of nucleocapsid release of VLP Wu :N eG M Ch R (Figure 5D-F). This result suggests that, on average, VLP acidification coincides with nucleocapsid release.
Next, we sought to determine nucleocapsid release kinetics in Omicron VLP Omi :M Ch N eG R, short for SARS-CoV-2 Omicron :(E, S, M&M-mCherry, N&N-eGFP, T20 RNA) VLPs. Surprisingly, the percentage of intact, double-labeled particles was considerably higher among VLP Omi :M Ch N eG R (2-3%) as opposed to their VLP Wu :N eG M Ch R counterparts (<1%), while remaining much lower than that among VLP Omi :M Ch M pH R (>90%). In approximately 80% of VLP Omi :M Ch N eG R, the N-eGFP fluorescence disappeared before mCherry, reflecting release that was 23 percentage points more frequent than observed for VLP Wu :N eG M Ch R (Figure 5A). However, the rates of N-eGFP signal decrease (Figure 5B) were similar between the Wuhan and Omicron VLPs, which suggests that nucleocapsid release occurred at a comparable rate. In addition, there was no statistically significant difference between the intervals from VLP binding to nucleocapsid release for the Wuhan and Omicron VLPs (Figure 5D, Table 1). Thus, Omicron mutations had no effect on the nucleocapsid release step of internalization. In the Omicron VLP, the speed increase occurred shortly after nucleocapsid release, as observed for the Wuhan VLP (Figure 5E,F, Table 1), indicating that VLP membrane fusion occurs, on average, before the start of microtubule-mediated transport. However, at the single-VLP level, we observed cases of the speed increase occurring after, in parallel to, or before nucleocapsid release (Figures 5H,I, S11 and S12, and Videos S30-S33), indicating no causality between these two events, as was also observed above for the pH decrease.
Discussion
In this work, we developed a customized software pipeline (SPARTACUSS) to reconstruct the timeline of SARS-CoV-2 internalization through comprehensive kinetic characterization of five key events during viral internalization at the single-VLP level: VLP binding, start of pH decrease, start of nucleocapsid release, dynamin-VLP co-localization, and start of active microtubule-dependent VLP movement.
Comparison between the timing of these events demonstrates that about 4 min after VLP binding to Vero E6 cells and 12 min after VLP binding to A549 cells, the pH starts to rapidly decrease, which coincides with dynamin binding and nucleocapsid release, quickly followed by the initiation of active microtubule-dependent VLP motion. Our results suggest that VLP fusion (nucleocapsid release) occurs simultaneously with or shortly after endosome formation. Surprisingly, the VLPs do not co-localize with Rab5a-positive early endosomes during VLP internalization. The two to three times shorter time required for VLP internalization in VeroE6 cells compared to A549 cells could explain the higher susceptibility of VeroE6 cells to SARS-CoV-2 infection. We also evaluated the effects of ACE2 and TMPRSS2 overexpression on VLP internalization dynamics and efficiency. Interestingly, while we observed a considerable increase in VLP uptake efficiency (i.e., a greater proportion of VLPs were internalized), there was no effect on the dynamics of specific steps in the internalization process, namely, the pH decrease and speed increase. We consider it plausible that an abundance of ACE2 and TMPRSS2 facilitates greater particle binding to the cell membrane, accounting for the highly effective internalization observed. However, once bound to the host cell membrane, the subsequent steps occur with similar kinetics, as the overexpressed proteins do not affect the events downstream of membrane binding (e.g., endocytosis and related events, such as the pH decrease and speed increase via microtubule transport). It should be noted that an increasing number of factors beyond the well-established ACE2 and TMPRSS2 have been shown to affect uptake/internalization efficiency, including heparan-sulfate proteoglycans and syndecans. Their relevance to internalization dynamics, however, is unclear and thus remains to be addressed in future research [60,61].
In the present work, neither Omicron nor del-1 mutations influenced the internalization steps of SARS-CoV-2 VLPs. It is known that passage of SARS-CoV-2 in VeroE6 cells leads to mutations in the S1/S2 junction, suggesting that the FCS is dispensable for virus propagation, in line with our results [58,62]. The lack of a significant difference in internalization dynamics between VLPs with and without the FCS deletion observed herein may be attributed to non-effective cleavage by furin during maturation, even in VLPs with an intact FCS. Our Omicron data suggest that the considerable number of S mutations and the greater transmissibility observed in humans are not related to pronounced differences in internalization kinetics. Thus, these results point away from enhanced internalization as a basis for superior transmissibility, suggesting greater stability or replication efficiency as potential underlying mechanisms.
It is our vision that the comprehensive measurement of the SARS-CoV-2 VLP internalization steps can be utilized in the evaluation of novel antiviral therapeutics, in addition to providing novel insights into the molecular mechanisms driving the viral life cycle.
Conclusions
In summary, the current work provides a novel live-cell imaging-based pipeline for studying VLP internalization kinetics at a single-VLP level. Utilizing this approach, we meticulously characterized the timing of key events during the internalization process in two widely used SARS-CoV-2 infection model cell lines. Furthermore, we examined whether the differences in infectivity observed for the del-1 and Omicron variants can be attributed to differences in their internalization dynamics.
Figure 1. Visualization of SARS-CoV-2 VLPs' binding and internalization into host cells. (A) Binding and internalization of SARS-CoV-2 VLP Wu :M Ch into U2OS cells overexpressing ACE2-NeonGreen and TMPRSS2. The montage shows VLPs binding to filopodia first and then migrating into the cell.
Figure 2. Dynamics of dynamin recruitment to the site of SARS-CoV-2 VLP Wu :M Ch binding in Vero E6 cells. (A) A representative image of SARS-CoV-2 VLP Wu :M Ch added to Vero E6 cells and dynamin recruitment. The montage shows consecutive images from the area shown by the white square; recruitment of dynamin is observed at 5:45 min (white arrow). (B) The top graph represents the speed
Figure 3. Dynamics of SARS-CoV-2 VLP Wu :M Ch M pH R binding, pH decrease, and speed increase in A549 and Vero E6 cells. (A) Percentage of VLPs in which only M-pHluorin intensity decreases (blue), the intensities of M-pHluorin and M-mCherry decrease simultaneously (orange), or neither decreases (gray) for 100 min after addition of VLP Wu :M Ch M pH R in VeroE6 cells with or without ACE2 and TMPRSS2
Figure 4. Comparison of VLP binding, acidification, and speed increase dynamics between VLP Wu :M Ch M pH R, VLP Omi :M Ch M pH R, and VLP del-1 :M Ch M pH R during internalization in A549 cells.
Figure 5. Comparison of VLP binding, acidification, and speed increase dynamics between VLP Omi :M Ch N E R, VLP Wu :M Ch N E R, and VLP Wu :M Ch M pH R during internalization in A549 cells. (A) Percentage of VLPs in which only the signal intensity of N-EGFP (middle and right) or M-pHluorin (left)
Figure 6. Tracking of nucleocapsid release following VLP internalization. (A) Time-lapse images of N-EGFP release during internalization of VLP Wu :M Ch N E R in A549 cells where the separation of the nucleocapsid (N-EGFP) from the VLP membrane (M-mCherry) can be observed. (B) Changes in N-EGFP and M-mCherry during nucleocapsid release for the above VLP (tracked based on the N-EGFP signal). (C) VLP speed profile during nucleocapsid release measured based on M-mCherry movement for the same VLP. (D) VLP speed profile during nucleocapsid release measured based on N-EGFP movement for the same VLP. Note the speed increase during nucleocapsid release, which is missing in (C). (E) Kymogram in all dimensions measured based on M-mCherry tracking. (F) Kymogram in all dimensions based on N-EGFP tracking. Note the change in movement along the Z-axis during nucleocapsid release, which is missing in (E).
Figure S2: Schematic representation of the SPARTACUSS pipeline for measurement and analysis of VLP internalization dynamics.
Figure S3: Example of single VLP Wu :M Ch M pH R entries into A549 cells.
Figure S4: Example of single VLP Wu :M Ch M pH R entries into Vero E6 cells.
Figure S5: Comparison of VLP Wu :M Ch M pH R binding, acidification, and speed increase dynamics in A549 cells with and without overexpression of ACE2 and TMPRSS2.
Figure S6: Comparison of VLP Wu :M Ch M pH R binding, acidification, and speed increase dynamics in VeroE6 cells with and without overexpression of ACE2 and TMPRSS2.
Figure S7: Example of single VLP del-1 :M Ch M pH R entry into A549 cells.
Figure S8: Dynamics of VLP binding and speed increase for VLP Wu :M Ch M pH and VLP del-1 :M Ch M pH during internalization in VeroE6 cells.
Figure S9: Examples of a single VLP del-1 :M Ch M pH entry into Vero E6 cells.
Figure S10: Examples of single VLP Omi :M Ch M pH R entries into A549 cells.
Figure S11: Examples of single VLP Omi :M Ch N E R entries into A549 cells.
Figure S12: Examples of single VLP Wu :M Ch N E R entries into A549 cells.
Video S1: U2OS ACE2 Neon Green No Tracks.
Video S2: U2OS ACE2
Table 1.
Measured intervals between different steps of VLP internalization. | 2024-06-15T13:10:38.129Z | 2024-06-11T00:00:00.000 | {
"year": 2024,
"sha1": "9a0001c93f70fd7a49a32739e2d543d4ca00940a",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "d9fea13d7bfc88b0c23156d61500b8a4511ecaec",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
16307918 | pes2o/s2orc | v3-fos-license | Specialization and Bet Hedging in Heterogeneous Populations
Phenotypic heterogeneity is a strategy commonly used by bacteria to rapidly adapt to changing environmental conditions. Here, we study the interplay between phenotypic heterogeneity and genetic diversity in spatially extended populations. By analyzing the spatio-temporal dynamics, we show that the level of mobility and the type of competition qualitatively influence the persistence of phenotypic heterogeneity. While direct competition generally promotes persistence of phenotypic heterogeneity, specialization dominates in models with indirect competition irrespective of the degree of mobility.
Genetic diversity and phenotypic heterogeneity are both commonly found in microbial and viral populations [1][2][3][4][5][6][7]. However, in a homogeneous environment without differentiated niches, genetic diversity is difficult to maintain [8]. Cyclic dominance has been identified as a factor promoting biodiversity in spatially extended systems [9][10][11][12][13][14][15][16]. For example, bacterial model systems comprised of three genetically distinct strains of E. coli exhibit three-strain coexistence in spatially extended homogeneous environments [12,14]. In this system, a toxin-releasing strain kills a sensitive but not a resistant strain. The sensitive strain grows faster than the resistant strain, which in turn grows faster than the toxin-producing strain. Recent theoretical studies have explored how demographic noise [15][16][17][18][19][20][21][22] or variability [23,24], mobility of individuals [15,16,25], as well as the topology of the food web [26] and the interaction network [27] affect the maintenance of genotypic diversity. All of these studies assume that genotypes are linked to a single phenotype. However, some bacteria use a bet-hedging strategy, stochastically switching between different phenotypic states to minimize the risk of population extinction, e.g. during exposure to antibiotics [2,28]. Switching between cyclically dominating phenotypes in E. coli can be experimentally realized using synthetic genetic switches, which lead to stochastic switching between toxin production, immunity, and sensitivity [29]. Is phenotypic heterogeneity maintained under these conditions or does specialization prevail, and what is the role of mobility and the interaction between individuals?
We address these questions by studying the dynamics of spatially extended populations which initially contain N individuals of G different genotypes. Each of these genotypes α ∈ {1, . . . , G} is defined by its degree of phenotypic heterogeneity, i.e. a set of probabilities p_α = (p_α^1, . . . , p_α^M), with p_α^m signifying the probability that a genotype α is in a particular phenotypic state s_m ∈ {s_1, . . . , s_M} (e.g. capable of producing immunity proteins) at the moment of interaction with another genotype β, cf. Fig. 1. For specificity, we will focus on systems with M = 3 phenotypic states and defer a discussion of a larger number of states to the Supplementary Material (SM) [30]. Then, the phenotypes s_m may, for example, refer to one of the three traits of E. coli discussed above. We consider two distinct ecological scenarios, where, as in the E. coli model system, phenotype s_m outcompetes phenotype s_{m+1} cyclically. In the first class of models, termed Lotka-Volterra (LV) models [35,36], selection and reproduction occur simultaneously, in that competition is combined into a single event where the competition between two individuals leads to the immediate replacement of the weaker by the stronger individual: I + J → I + I. LV models mimic predator-prey interactions and they are applicable to situations in which competition is not limited by the availability of resources, such as nutrients on an agar plate. They have, for example, been used to study beneficial mutations in growing bacterial colonies [37] or spatial competition in strains of budding yeast [38]. In the second class of models, originally proposed by May and Leonard (ML) [39], selection and reproduction are two separate processes. An interaction between two individuals with different phenotypes leads to the death of the weaker phenotype and makes resources available: I + J → I + ∅. Reproduction then follows as a second process which recolonizes this empty space: I + ∅ → I + I. In an ecological context, these empty sites effectively introduce the factor 'carrying capacity' and thus mimic the effects of resource limitation. ML models have been employed to model synthetic E. coli systems [12,14]. In both models, the genotype α_i of individual I is transmitted to its offspring.
In the well-mixed case both models possess a fixed point given by an equal abundance of each genotype, as well as N absorbing states, corresponding to the extinction of all but one genotype. However, the nonlinear dynamics in the two models is vastly different: The LV model shows a maximum number of conserved quantities, corresponding to neutrally stable, closed orbits in the space of genotype abundances [26]. By contrast, the ML model shows heteroclinic orbits emerging from trajectories spiraling out from an unstable reactive fixed point [30].
In this Letter we show that the degree of mobility and the type of competition qualitatively influence the loss of genetic diversity, and that each of these factors has a major impact on the persistence of phenotypic heterogeneity. For direct competition, as in LV models, the evolutionary outcome strongly depends on the mobility. We find that in well-mixed populations phenotypic heterogeneity is favored, whereas spatial correlations promote unique phenotypes at low mobility levels. By contrast, if competition is mediated by the limited availability of resources as in the ML model, phenotypic heterogeneity is lost irrespective of the degree of mobility.
Specifically, we study a lattice gas model where at a given time t the state C of the population is characterized by a set of genotypes p_{α_i} and lattice positions r_i(t) for each individual i ∈ {1, . . . , N}: Each lattice site on a two-dimensional square lattice with L^2 sites is occupied by at most one individual. The linear dimension of the lattice is taken as the basic length unit. When two neighboring individuals interact, each randomly chooses a phenotype according to its respective probability vector. The outcome of these pairwise competitions is described in terms of an interaction matrix A, whose entries A_{ss'} denote the rate at which phenotype s outcompetes phenotype s'. For simplicity, we choose a symmetric model, where all finite rates are the same, and equal to 1 to fix the time scale [40]. Mobility of individuals is implemented as a nearest-neighbor exchange process at a rate ε, I + J → J + I, where I and J denote individuals or empty spaces ∅. Macroscopically this exchange process leads to diffusion with an effective diffusion constant D = ε/(2L^2) [15]. In dimensionless units D gives the mean-square displacement of a particle between two reactions.
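A minimal Python sketch of a single stochastic update of this lattice gas is given below. It is our own illustration rather than the authors' code: the split between exchange and reaction events via the ratio ε/(ε + 1) and all names are our choices, and only the cyclic dominance rule and the LV/ML replacement rules come from the text.

```python
import random

L, M = 100, 3        # linear lattice size, number of phenotypic states
EPS = 1.0            # nearest-neighbor exchange rate (epsilon in the text)
EMPTY = None

random.seed(0)

def random_genotype():
    """Draw a genotype uniformly from the simplex (M = 3)."""
    cuts = sorted(random.random() for _ in range(M - 1))
    return [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]

lattice = [[random_genotype() for _ in range(L)] for _ in range(L)]

def draw_phenotype(p):
    """Sample the phenotype an individual shows at an interaction."""
    return random.choices(range(M), weights=p)[0]

def update(lattice, lv=True):
    """One stochastic event on a random nearest-neighbor pair."""
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    ni, nj = (i + di) % L, (j + dj) % L           # periodic boundaries
    if random.random() < EPS / (EPS + 1.0):       # exchange vs. reaction
        lattice[i][j], lattice[ni][nj] = lattice[ni][nj], lattice[i][j]
        return
    a, b = lattice[i][j], lattice[ni][nj]
    if a is EMPTY and b is not EMPTY:             # ML reproduction: I + 0 -> I + I
        lattice[i][j] = b
    elif b is EMPTY and a is not EMPTY:
        lattice[ni][nj] = a
    elif a is not EMPTY and b is not EMPTY:
        sa, sb = draw_phenotype(a), draw_phenotype(b)
        if (sa + 1) % M == sb:                    # s_m beats s_{m+1}, cyclically
            lattice[ni][nj] = a if lv else EMPTY  # LV: replace; ML: leave a gap
        elif (sb + 1) % M == sa:
            lattice[i][j] = b if lv else EMPTY
```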
We performed stochastic simulations of both classes of ecological models employing periodic boundary conditions and a sequential updating algorithm. All simulations were started from an initial state comprising G genotypes chosen randomly according to a uniform distribution on the unit simplex ∆_2, and then distributed randomly over the lattice. As time progresses, competition between these genotypes reduces genetic diversity in the population: Figures 2(a,c) show the ensemble-averaged number of distinct genotypes ⟨H(t)⟩_C. Initially, quite independent of the value for D and the class of ecological model, we observe ⟨H(t)⟩_C ∝ t^{-1}. This is because genetic diversity is high and, therefore, selection occurs irrespective of the genotype: Loss of genetic diversity is then described by a neutral coalescence process; the rate is given by the probability that the two competing individuals are in distinct phenotypic states, k = 2/3. Fluctuations can be neglected and the dynamics of this process can be described in terms of mean-field kinetics, with ∂_t H = −kH^2/N, and integration yields H(t) = N/(1 + kt), in good agreement with our numerical results [Figs. 2(a,c)]; a numerical sketch of this mean-field solution is given below. As time proceeds and genetic diversity decreases, spatio-temporal patterns form and correlations emerge [Figs. 2(b,d)]. As a consequence, the neutral regime ends at some characteristic time t_1, and thereafter the genealogical dynamics is driven by evolutionary forces, i.e. success in reproduction depends on how each genotype interacts with its neighbors. We observe that while for the ML model t_1 scales logarithmically with the population size, t_1 ∝ ln N, it scales linearly for the LV model, t_1 ∝ N; see Supplemental Material [30]. This is due to the nature of the respective orbits in phase space [41]: In the ML model heteroclinic orbits generate a drift towards the phase space boundary, such that the ensuing extinction process is exponentially accelerated, which results in logarithmic scaling. In contrast, the phase portrait of the LV model exhibits neutrally stable orbits, and the stochastic dynamics performs an unbiased random walk [26]. This implies t_1 ∝ N, and thereby fixation occurs on a larger time scale. We also find that the rate of decrease of genetic diversity changes with the diffusion constant, most prominently in the ML model: The smaller D, the slower the extinction of genetic diversity. Hence, spatial structures not only stabilize systems of cyclically interacting species [15,20,22,25,42], but also promote genetic diversity therein. The reason for this remarkable behavior is that spatial structures consist of genetically identical individuals. Reactions between different genotypes, therefore, only occur at domain boundaries and thereby globally at a lower rate.
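To make the mean-field result concrete, here is a small self-check (our own illustration, with arbitrary parameter choices): forward-Euler integration of ∂_t H = −kH²/N reproduces the closed-form solution N/(1 + kt).

```python
import numpy as np

N, k, dt = 10_000, 2.0 / 3.0, 1e-3   # population size, reaction probability, step
t = np.arange(0.0, 10.0, dt)
H = np.empty_like(t)
H[0] = N                              # initially every individual is distinct
for n in range(1, len(t)):
    H[n] = H[n - 1] - dt * (k / N) * H[n - 1] ** 2   # forward Euler step
assert np.allclose(H, N / (1.0 + k * t), rtol=1e-2)  # matches N / (1 + k t)
```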
As time progresses, spatial structures become more pronounced and genetic heterogeneity reaches a stationary level [Fig. 2]. We find two qualitatively different regimes: For low D, we observe a metastable state comprised of three distinct genotypes. This transient biodiversity is maintained by spatial alliances of individuals with identical genotypes, resulting in spiral waves (ML) or strong spatial correlations (LV), as previously studied for competing species with pure strategies [15, 16, 18-20, 22, 25, 42, 43]. By contrast, for large D, the population ends up in one of the absorbing states corresponding to the extinction of all but one genotype. We refer to those states as the asymptotic genotype π. Which genotype becomes dominant under what conditions, and how is this affected by the kind of competition between individuals? To answer these questions we consider many realizations C of the population dynamics and determine the probability density P_∞(π) of asymptotic genotypes on the simplex π ∈ ∆_2 [Figs. 3(a,b)]. ∆_2 is also called a Pareto front, i.e. the set of all Pareto-optimal strategies in response to three conflicting objectives given by the environment. While previous work was mainly concerned with the distribution of strategies in stationary environments [44], we here study how these strategies dynamically distribute in response to objectives given by the local composition of the population. Maxima of P_∞ identify the evolutionarily most successful genotypes [45].
We start the discussion with the LV model, cf. Fig. 3. Our simulations show that which genotype is evolutionarily most successful depends strongly on the mobility, and one can identify three distinct regimes: If diffusion is slow, it is evolutionarily most advantageous to specialize, i.e. to adopt and retain any one of the three phenotypes; P_∞ is largest in the corners of the simplex. In contrast, for large D, the most successful individuals are bet-hedgers, i.e. genotypes with nearly equal probabilities for each of the three phenotypes. For intermediate values of D, the most successful individuals adopt a bet-hedging strategy that is biased towards one of the three phenotypes. The boundaries between these three qualitatively different regimes, D_1 and D_2, are clearly visible in Fig. 3(b), which shows the marginal probability distribution for each of the three components of π [46]. Beyond that, the threshold D_2 also separates neutrally stable from metastable dynamics and therefore marks a sharp transition in the first passage times to any of the absorbing states.
For fast diffusion, D > D_2, the characteristic length scale of spatial patterns is larger than the system size, and, therefore, the dynamics is effectively that of a well-mixed system [15,42]. Then the interaction between individuals with two different genotypes is well described by a mean-field approximation: the probability that an individual of genotype α outcompetes one of genotype β is given by w_αβ = p_α · A p_β. This implies a net transition rate between genotypes, W_αβ = w_αβ − w_βα, such that the fraction x_α of individuals with genotype α obeys the rate equation ∂_t x_α = x_α Σ_β W_αβ x_β. (1) Since W_αβ is a skew-symmetric matrix, this corresponds to the replicator equation of a G-species conservative LV model, whose dynamics has recently been classified [26]. Obviously, a strictly bet-hedging strategy with p_B = (1/3, 1/3, 1/3) is a stationary solution of Eq. (1); since W_Bβ = 0 it cannot be outcompeted by any other genotype β. Moreover, the particular form of W_αβ implies that all orbits are neutrally stable and periodic [47]. Since the bet-hedging genotype, p_B, is furthest away from the boundaries of the simplex, the corresponding mean first passage time into the absorbing states is the longest [18,48,49]. Hence, for large times, bet-hedging genotypes are the most abundant [Fig. 3(a), bottom right].
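The structure of Eq. (1) can be probed with a small numerical sketch (our own illustration; the genotype set is an arbitrary choice). It verifies that W is skew-symmetric, that the row of the bet-hedging genotype vanishes, and that, under the replicator dynamics, the bet-hedger's fraction is strictly conserved while the remaining fractions oscillate without relaxing, as expected for neutrally stable orbits.

```python
import numpy as np

A = np.array([[0., 1., 0.],           # cyclic dominance: phenotype 1 beats 2, 2 beats 3, 3 beats 1
              [0., 0., 1.],
              [1., 0., 0.]])

# an arbitrary small set of genotypes (rows = points on the simplex), incl. the bet-hedger
P = np.array([[1.0, 0.0, 0.0],
              [0.1, 0.6, 0.3],
              [0.3, 0.2, 0.5],
              [1/3, 1/3, 1/3]])

w = P @ A @ P.T                       # w[a, b]: probability that genotype a outcompetes b
W = w - w.T                           # net transition rates, skew-symmetric by construction
assert np.allclose(W, -W.T) and np.allclose(W[-1], 0.0)   # bet-hedger row vanishes

def rhs(x):                           # replicator equation of the conservative LV model, Eq. (1)
    return x * (W @ x)

x = np.array([0.30, 0.25, 0.25, 0.20])
x0 = x.copy()
dt = 1e-3
for _ in range(400_000):              # fixed-step RK4 up to t = 400
    k1 = rhs(x); k2 = rhs(x + dt/2*k1); k3 = rhs(x + dt/2*k2); k4 = rhs(x + dt*k3)
    x += dt/6 * (k1 + 2*k2 + 2*k3 + k4)

print("bet-hedger fraction at t=0 and t=400:", x0[-1], x[-1])  # conserved, since W_B,beta = 0
print("all genotypes still coexist:", bool(x.min() > 1e-6))    # neutrally stable oscillations
```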
With decreasing diffusion constant D the hopping rate between neighboring lattice sites eventually becomes much smaller than the reaction rates, DN ≪ 1. This defines a threshold for D which should scale as D_1 ∼ 1/N, as confirmed by our simulations [Fig. 3(c)]. For D < D_1, the dynamics is reaction-dominated and, therefore, a domain boundary between two different genotypes advances mainly due to competitive takeover and not due to hopping between neighboring lattice sites. This leads to rather smooth domain boundaries, which move at a speed proportional to the net transition rate W_αβ. This invasion speed is highest if either genotype is a specialist. Hence, while specialists invade other genotypes fastest, they are also most susceptible to displacement by other genotypes. This makes it difficult to see who will eventually win the race. The decisive factor is that the initial coarsening process leads to spatial domains consisting of selectively neutral genotypes which, in addition, are spatially organized such that fast advancing specialists form a strategic alliance with generalists, who are able to defend the territory because they are intrinsically more resistant to invasion [30]. Those profiting the most from this alliance are the specialists since it enables them to invade new territory fast. Hence, by a 'first come first served' principle, specialized genotypes outcompete their bet-hedging counterparts, and, for large times, the dynamics shows (transient) cyclic competition between three specialized genotypes [Fig. 3(b)].
Interestingly, we also find an intermediate parameter regime where the dynamics shows prolonged metastable states. Unlike the specialists observed for D < D_1, the surviving genotypes now partly favor one particular phenotype, but retain a non-negligible propensity to adopt the other phenotypic states [propeller-like structure in Fig. 3(a), bottom left]. Since nearest-neighbour exchange processes now occur on the same time scale as competitive interactions, the domain boundaries are fuzzy. Moreover, due to an increasing mean path length associated with D, domains are frequently intruded by particles with a distinct genotype. As a result, the surviving genotypes are characterized by a trade-off between invasion speed, given by W_αβ, and robustness against hostile invasion, given by a broad distribution of phenotypic states. A more detailed discussion is given in the SM [30]. For the ML model, we find a remarkably different behavior. There, independent of the value of the diffusion constant D, the population is asymptotically dominated by specialists [30]. Phenotypic heterogeneity does not provide an evolutionary advantage in a setting where limited resources lead to indirect competition. The dynamics asymptotically approaches the classical ML model [15,20,22,25], as is demonstrated in the SM [30].
In conclusion, we have investigated the spatio-temporal dynamics of heterogeneous populations with an initially high degree of genetic diversity where individuals show a varying degree of phenotypic heterogeneity. We have found that the degree of mobility, as well as the type of competition, qualitatively affect both the loss of genetic diversity and the maintenance of phenotypic heterogeneity. In the LV model, the degree of phenotypic heterogeneity changes qualitatively at certain threshold values of the diffusion constant. In contrast to this behavior, in the ML model specialists always dominate the population in the long run. For heterogeneous bacterial populations this means that the survival of phenotypic heterogeneity depends both on the degree of mixing and the relative availability of nutrients. The impact of mobility and the type of competition on the survival of phenotypic heterogeneity is not restricted to these models. In fact, we think that the mechanisms behind these phenomena are generic, in the sense that they only rely on basic properties of the underlying nonlinear dynamics, namely neutrally stable orbits as in LV models or heteroclinic cycles as in ML models. This view is supported by the fact that we observed the same behavior in a more complex model with four species [30,50]. We therefore believe that our findings apply to a broad class of ecological contexts. While we have reported results for one [30], two and infinite spatial dimensions, the dynamics in three dimensions remains an open question for future research.
This research was supported by the German Excellence Initiative via the program 'NanoSystems Initiative Munich' and the Deutsche Forschungsgemeinschaft via the Priority Programme "Phenotypic heterogeneity and sociobiology of bacterial populations" (SPP 1617). S.R. gratefully acknowledges support of the Wellcome Trust (grant number 098357/Z/12/Z). We thank Alejandro Zielinski, Johannes Knebel and Markus Weber for fruitful and stimulating discussions.
In the Supplementary Material we provide calculations and numerical results supporting the arguments presented in the main text.
SCALING OF THE CROSSOVER TIME t1 WITH THE SYSTEM SIZE
The time t_1 marks the crossover from neutral evolution to selection-driven evolution. To determine how it scales with the system size N we try, motivated by the phase portraits of the classical LV and ML models, a scaling ansatz for the average number of distinct genotypes in which time is rescaled with N for the heterogeneous LV model and with ln N for the heterogeneous ML model, h_LV and h_ML denoting the respective scaling functions. As can be inferred from Fig. 1, this scaling ansatz works very well in the relevant time window, and, therefore, the crossover time t_1 scales as t_1 ∝ N and t_1 ∝ ln N for the models of LV and ML type, respectively.
A GEOMETRIC INTERPRETATION OF THE NET TRANSITION RATES
In this section we give a geometric interpretation of the net transition rates W_αβ on the simplex ∆², which signify the rates by which genotype α defeats another genotype β. The net transition rates W_αβ are bilinear forms W(p_α, p_β) := W_αβ in the vectors p_α and p_β characterizing the genotypes α and β, respectively. For a given genotype α, W(p_α, x) = s (with s some constant and x ∈ R³) defines a plane whose intersection with the simplex (p_β ∈ ∆²) is a line. Hence, the isolines W(p_α, p_β) = s are those genotypes β which compete with α at the same rate s. We signify the corresponding set of parallel lines by {I^s_α}. A particular representative of this set of lines are those where the competing genotypes are selectively neutral: W(p_α, p_β) = 0. Since W_αα = 0 and W_αB = 0, the corresponding line I^0_α runs through p_α and the center of the simplex, p_B = (1/3, 1/3, 1/3) [Fig. 2(a)]. This neutral isoline I^0_α divides the simplex into two regimes, ∆⁺ and ∆⁻, to the right (+) and left (−) with respect to the direction pointing from α to the center B of the simplex ∆². As W is linear in both of its arguments, in particular in p_β, the net transition rate W_αβ = s is a monotonically increasing/decreasing function of the distance d to I^0_α in ∆^{+/−}; recall that the isolines I^s_α are parallel to I^0_α. Note also that the sign of the slope of s(d) is simply defined by the rules of cyclic dominance. Hence, for a genotype α′ on the opposite side of B, the roles of ∆⁺ and ∆⁻ are interchanged.

[Fig. 2: (c) reproduces Fig. 3(a), top left, of the main text with isolines for the three specialists; color denotes the probability density of asymptotic genotypes, red signifying successful and blue unsuccessful genotypes. One can see that, for each specialist, there is a surviving bet-hedging genotype on the same isoline. (d) Solid lines indicate genotypes which are neutral with respect to one of the specialist genotypes; genotypes in shaded regions (e.g. α, closed circle) outcompete a larger number of genotypes than genotypes in the white regions (e.g. α′, open circle). The arrow indicates the isoline of genotype α (closed circle): all genotypes to the right of this line are prey and all genotypes to the left are predators with respect to α.]
Taken together this implies that the best response (largest value of W_αβ) to α is a genotype β with the largest distance from I^0_α, i.e. a genotype on the boundary of the simplex in regime ∆⁻. For the example in Fig. 2(a) the best response to α is the specialist p_1, and the best response to α′ is p_3. In short, the best response to any genotype p_α ∈ ∆² is a genotype on the boundary of the simplex ∆². If the line I^0_α is not parallel to one of the borders, the best response against the genotype α is unique and given by one of the three specialist genotypes. In conclusion, the best and worst responses to almost all genotypes in ∆² are specialists.
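Since W(p_α, ·) is linear on the simplex, its maximizer over ∆² necessarily sits at a vertex; the brute-force scan below (our illustration, with arbitrary grid resolution) confirms both this and the neutrality of the bet-hedger.

```python
import numpy as np

A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])   # cyclic dominance
K = A - A.T                                                # W(p, q) = p . K q

rng = np.random.default_rng(0)
alpha = rng.dirichlet(np.ones(3))                          # a random genotype on the simplex

n = 400                                                    # dense grid over Delta^2
grid = np.array([(i/n, j/n, 1 - i/n - j/n)
                 for i in range(n + 1) for j in range(n + 1 - i)])

rates = grid @ (K.T @ alpha)                               # W(alpha, beta) for every grid beta
print("genotype alpha        :", np.round(alpha, 3))
print("best response beta    :", np.round(grid[np.argmax(rates)], 3))  # a simplex vertex
print("W(alpha, bet-hedger)  :", alpha @ K @ np.full(3, 1/3))          # = 0, neutral vs. p_B
```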
From the monotonicity of W we can draw some further conclusions: Since W_αB = 0 for any genotype α ∈ ∆², the slope of s(d) must decrease as α approaches the center B of the simplex. Hence the best response to a genotype α is weakest for bet-hedgers (those close to the center of the simplex) and strongest for specialists (those closest to the boundary of the simplex). This implies that the expected net invasion rate W⁺_α of a genotype α ('selective power') [1] is a monotonically increasing function of the distance from the center of the simplex [Fig. 2(b)]. Figure 2(b) also shows that W⁺_α exhibits a rotational symmetry, i.e. all genotypes α with the same distance from the center B have identical selective power. This symmetry is simply due to the fact that we have cyclic dominance with all equal rates. Note that the above arguments do not rely on a specific choice of the interaction matrix A. Rather, we only assumed that the net interactions of genotypes with themselves and with the bet-hedging genotype are zero. Then, since specialists define the outer hull of the set of genotypes, the monotonicity of W and the above line of argumentation should hold for a broad class of models satisfying these conditions.
In the following we will employ the above geometric interpretation of the net reaction rate in order to understand the struggle for survival in spatially extended systems.
SURVIVAL OF SPECIALISTS FOR SMALL MOBILITIES
How can we understand the dominance of specialists for very small values of the diffusion constant D? In a spatially extended system the key quantity to consider is the magnitude of W_αβ. It determines the speed and direction of propagation of domain borders between genotypes, and thereby the outcome of pairwise competition.
Starting from an initial state with a high degree of genetic diversity the system first coarsens into spatial domains containing pairwise selectively neutral genotypes [Fig. 3]. Consider now a spatial cluster comprised of such selectively equivalent genotypes α [e.g. all genotypes on the dashed red line in Fig. 2(a)]. As we have learned in the previous section, the closer a genotype α is to the boundary of the simplex, the larger is its expected rate of being invaded (W⁻_α) or of invading (W⁺_α) domains of any other genotype not lying on the isoline I^0_α. Hence, specialists within such a cluster have a higher net reproduction rate in boundary regions adjoining domains of their respective prey (∆⁺ region), and, therefore, are able to invade such regions fast. Pictorially speaking, specialists are "good in offense". In contrast, generalists have a lower overall rate of being invaded, and, therefore, they will dominate in boundary regions facing their respective predators (∆⁻ region); they are "good defenders". This leads to an internal organisation of the selectively neutral clusters, where generalists and specialists form strategic alliances [2-5]. Those profiting the most from this alliance are the specialists since it enables them to invade new territory fast. Taken together, this leads to a "first come first served" principle in the sense that those with the fastest expected invasion rate W⁺_α will dominate the population in the long run.
As a final aside we note that while specialists are the most dominant genotypes in the population for large times, they are actually not the only surviving genotypes. In addition to each specialist genotype there is an associated bet-hedging genotype located on the same isoline I^0_α as the specialist, cf. the yellow areas in Fig. 2(c) with the isoline indicated as a white dashed line. Recall that the initial coarsening process leads to spatial domains containing pairwise neutral genotypes and, geometrically, those are located along the neutral isoline I^0_α on the simplex. For specificity let us pick the specialist p_1. Then, the associated bet-hedger is given by p_bet-hedge ≈ (1/3 − 2ε, 1/3 + ε, 1/3 + ε), with ε some small positive number. Together they form a strategic alliance where p_bet-hedge protects p_1 from its predator p_3 due to the enhanced probability to be in phenotypic state s_2.
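The neutrality and protective role of this pairing can be checked with one line of algebra per claim; the snippet below (our illustration, with an arbitrary ε) evaluates the relevant net rates for the cyclic-dominance matrix with all equal rates.

```python
import numpy as np

K = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])   # K = A - A^T for cyclic dominance

def W(p, q):                     # net rate at which genotype p displaces genotype q
    return p @ K @ q

eps = 0.05                       # small positive bias (arbitrary illustrative value)
p1 = np.array([1., 0., 0.])      # specialist
p3 = np.array([0., 0., 1.])      # predator of p1
p_bh = np.array([1/3 - 2*eps, 1/3 + eps, 1/3 + eps])          # associated bet-hedger

print("W(p1, p_bh) =", W(p1, p_bh))    # 0 for any eps: both lie on the same isoline I^0
print("W(p_bh, p3) =", W(p_bh, p3))    # 3*eps > 0: the bet-hedger repels the predator of p1
print("W(p1, p2)   =", W(p1, np.array([0., 1., 0.])))  # 1: the specialist invades its prey fastest
```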
BIASED BET HEDGING FOR INTERMEDIATE MOBILITIES
The regime with intermediate values of the diffusion constant D is characterised by the survival of biased bet-hedgers, as mentioned in the main text. In order to survive they need to balance invasion speed with robustness against hostile invasion. Hence, the most successful genotypes can be neither full specialists nor full bet-hedging genotypes. Surprisingly, our numerical simulations show that their degree of bet-hedging is not only determined by the radial distance from the center of the simplex. Actually, we find that the survival probability is highest in three of the six triangles defined by the perpendicular bisectors of the simplex [shaded regions in Fig. 2(d)]. In the following we will explain, on the basis of the geometric interpretation of the net rates W_αβ, why these particular biased bet-hedging genotypes are evolutionarily most successful. Similarly to the case of low values of D there is an initial coarsening process, but with the domain boundaries less well defined [Fig. 4]. Then, for a given genotype α, its evolutionary success is given by how well it is able to invade territories of its prey while still being able to successfully defend against its predators.
We first observe that genotypes in the gray triangles have more prey than predators, and, therefore, outcompete more genotypes than they are outcompeted by: Consider a genotype α in one of the gray triangles [Fig. 2(d)]. As we have learned from the previous discussions, the prey and predators of a given genotype are located in the areas ∆⁺ and ∆⁻, respectively. Basic geometry tells us that 1 < Z⁺_α/Z⁻_α < 5/4, where Z^{+/−}_α are the numbers of genotypes in the areas ∆^{+/−}, respectively, i.e. the numbers of prey and predators of α. In contrast, for genotypes α′ in the white triangles we find 4/5 < Z⁺_α′/Z⁻_α′ < 1. Second, we recall that due to cyclic symmetry the expected overall invasion rate W⁺_α is constant on circles surrounding the center of the simplex; for a pair of genotypes, α and α′, this implies a corresponding inequality for the expected invasion rate against a particular prey. Therefore, as the areas of all triangles are the same, the average invasion rate for a randomly picked individual in a gray area is lower than in the white areas. We conclude that the surviving genotypes achieve robustness against random invaders by outcompeting a larger set of genotypes at the cost of a lower average invasion rate. As a final aside, we observe that the most successful genotypes obey a hierarchy in the components of the probability distributions p_α: They have a relatively high probability to be in one phenotypic state, the genotype's bias. The phenotype with the highest rate of invasion takes the second largest value. Last, the component which is dominated by the genotype's bias has the lowest probability. Mathematically speaking, the components obey p¹_α > p³_α > p²_α, or cyclically. This hierarchy of phenotypic states ensures that domains are less susceptible to invasion by the most aggressive genotypes.
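The counting argument can be tested by Monte Carlo sampling over the simplex. In the sketch below (our illustration), "gray" triangles are identified with the ordering p1 > p3 > p2 and its cyclic permutations, our reading of the hierarchy quoted above; the prey-to-predator ratio Z+/Z− then falls above one for gray and below one for white regions.

```python
import numpy as np

rng = np.random.default_rng(2)
K = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])

betas = rng.dirichlet(np.ones(3), size=200_000)   # uniform sample of opponents on Delta^2

def prey_predator_ratio(alpha):
    s = betas @ (K.T @ alpha)                     # sign of W(alpha, beta) for each opponent
    return (s > 0).sum() / (s < 0).sum()          # Z+ / Z-

for _ in range(6):
    a = rng.dirichlet(np.ones(3))
    gray = (a[0] > a[2] > a[1]) or (a[1] > a[0] > a[2]) or (a[2] > a[1] > a[0])
    print(f"alpha={np.round(a, 3)}  gray={gray}  Z+/Z- = {prey_predator_ratio(a):.3f}")
```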
THE HETEROGENEOUS MAY-LEONARD MODEL
To investigate the evolution of phenotypic heterogeneity under indirect competition we investigate situations where competition is mediated by the limited availability of resources. In this case, for the ML model, we find a remarkably different behavior. There, independent of the value of the diffusion constant D, the population is asymptotically dominated by specialists (Fig. 5). This can be understood as follows: Self-interactions between genetically identical individuals are potentially disadvantageous, as they may lead to the creation of empty sites which in turn may then be colonized by individuals of a different genotype. Since self-interactions are impossible for specialists (w_αα = 1 − p_α · p_α, which vanishes for a specialist), they are typically better off than their bet-hedging competitors. This advantage is most pronounced at low mobilities, where the formation of stable spatial structures is inhibited by reactions between identical genotypes, which promotes the breakup of compact spatial domains. For cyclic competition between three specialists the mean-field dynamics is described by the May-Leonard equations. In this section, we study the nonlinear dynamics of the heterogeneous version of the May-Leonard model and argue why specialists dominate the population in the long term. For large particle numbers, and for well-mixed systems, the dynamics of the heterogeneous May-Leonard model is aptly described by G coupled differential equations for the concentrations of the genotypes α, ∂_t x_α = x_α (1 − Σ_β x_β) − x_α Σ_β w_βα x_β. (4) The first term on the right hand side does not depend on the specific genotype p_α and it gives the rate at which empty sites are populated. The second term is the rate at which individuals are replaced by empty sites. In the following we will show that these equations show qualitatively similar behavior as the May-Leonard equations, which leads to the dominance of specialists in the heterogeneous model. To study the nonlinear dynamics in more detail we rewrite Eq. (4) in terms of the k-th "moments" σ_k of the genotype distribution. With this definition, σ_0 ≡ Σ_α x_α gives the total concentration of individuals. The first moment gives the mean genotype, σ_1 ≡ Σ_α p_α x_α, and the second moment, σ_2 ≡ Σ_α x_α p_α · p_α, can be interpreted as the degree of specialization in the population. To obtain the time evolution of the k-th moment we multiply Eqs. (4) by (p_α)^k and sum over all genotypes α; this yields evolution equations for the moments. For simplicity, we restrict the following analysis to cyclic competition between M = 3 phenotypes: One immediately sees that in this case σ_0 = 3/4, σ_{2k−1} = (1/4, . . . , 1/4), σ_{2k} = 1/4 for k > 0, is a (reactive) fixed point of the dynamics. In fact, the initial conditions studied in the main text correspond to this fixed point. Furthermore, the model exhibits G absorbing states corresponding to the extinction of all but one genotype.
We are now interested in the evolutionary dynamics in the vicinity of the reactive fixed point. To this end, we neglect moments of order three and higher and linearize the moment equations around the fixed point; the eigenvalues of the corresponding Jacobian then determine the exponential dynamics in this region. The linear stability analysis reveals a stable and an unstable eigendirection. In addition, there is a pair of complex conjugate eigenvalues with positive real parts indicating an oscillatory escape out of the reactive fixed point.

[Fig. 7: Marginal probability distribution to be in any of the components of π for the asymmetric four-species model with direct competition. We identify two threshold values separating distinct outcomes of the evolutionary dynamics (L = 80). The histograms for each diffusion constant were calculated over 10⁵ trajectories.]
Interestingly, this behaviour reminds us of the homogeneous May-Leonard model, where the dynamics spirals outward of an unstable fixed point, approaching states where only a single species survives. Indeed, numerical simulations of the full dynamics in a well-mixed system show a strikingly similar behaviour for the average genotype σ_1: σ_1 spirals outward of the reactive fixed point, approaching the boundary of the simplex and the absorbing states. For the evolution of phenotypic heterogeneity this means that specialists dominate the population at large times. Indeed, in striking contrast to the Lotka-Volterra dynamics, we find the survival of specialists for any value of the diffusion constant D, see Fig. 5.
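The oscillatory escape can be reproduced by integrating the well-mixed equations for a random set of genotypes. The sketch below (our own illustration, built on the reconstructed form of Eq. (4); genotype sample, time step and duration are arbitrary) tracks the normalized mean genotype σ_1/σ_0 and its distance from the barycenter, which grows as the trajectory drifts towards the boundary of the simplex.

```python
import numpy as np

rng = np.random.default_rng(3)
G = 60
P = rng.dirichlet(np.ones(3), size=G)        # random initial genotypes on the simplex
A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
w = P @ A @ P.T                              # w[b, a]: probability that genotype b kills a

x = np.full(G, 0.75 / G)                     # start near the reactive fixed point, sigma_0 = 3/4

def rhs(x):                                  # heterogeneous May-Leonard model, Eq. (4)
    birth = x * (1.0 - x.sum())              # colonisation of empty sites (genotype-blind)
    death = x * (w.T @ x)                    # removal through competition, incl. self-interaction
    return birth - death

dt = 0.01
for step in range(300_001):                  # explicit Euler, clipped at extinction
    x = np.maximum(x + dt * rhs(x), 0.0)
    if step % 60_000 == 0:
        m = (P.T @ x) / x.sum()              # normalized mean genotype sigma_1 / sigma_0
        print(f"t={step*dt:7.0f}  sigma_0={x.sum():.3f}  |sigma_1/sigma_0 - p_B|={np.linalg.norm(m - 1/3):.3f}")

print("most abundant genotype:", np.round(P[np.argmax(x)], 3))  # biased towards a corner
```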
As an aside, we note that even though the heterogeneous May-Leonard model converges to three cyclically competing pure species, the asymptotic dynamics is nevertheless slightly different from the homogeneous May-Leonard model. As noted above, the rate of net competition increases monotonically with an increasing degree of specialization. Furthermore, in a finite heterogeneous system, the surviving genotypes are rarely ideal specialists. Rather, the asymptotic genotypes follow a distribution as shown in Fig. 3(a) in the main text. Consequently, the rate of interaction between the remaining three genotypes is lower in the heterogeneous model, which translates, for a given diffusion constant, into an increase in the length scale of spatio-temporal patterns. Therefore, the previously observed threshold in the diffusion constant [6,7] is shifted towards lower values of the diffusion constant. In other words, the range of parameters allowing for the coexistence of species is reduced, such that in this sense phenotypic heterogeneity ultimately decreases genetic diversity for very large times.
THE HETEROGENEOUS ASYMMETRIC FOUR SPECIES MODEL
As an example for more complex interactions we consider a heterogeneous model comprising asymmetric interactions between four phenotypic states P1, P2, S and R [Fig. 6]. The asymmetric interactions are encoded in a corresponding interaction matrix A. For large particle numbers and in the well-mixed limit the dynamics is described by Eqs. (1) in the main text and Eqs. (4), for direct competition and indirect competition, respectively. We performed extensive stochastic simulations for the heterogeneous, asymmetric four-species model. While the specific choice of A fixes the time scale, the reproduction rate for indirect competition was set to 1. Figure 7 shows the marginal distribution of each component of the genotypes for the model comprising direct competition at large times. Remarkably, as in the cyclic Lotka-Volterra model we find a striking influence of mobility on the evolution of phenotypic heterogeneity. This is most clearly seen in the P1 and S components. Interestingly, the transitions occur at the same values of the diffusion constant as in the heterogeneous, cyclic Lotka-Volterra model. By contrast, for indirect competition we find that phenotypic heterogeneity does not evolve in the asymmetric four-species model. For large times, the population is dominated by specialists, focussing on one of the four phenotypic states. This is again in agreement with our findings for cyclic competition of three species.
UNIVERSALITY OF SIMULATION RESULTS
Having found qualitatively the same results for two different ecological networks, it seems reasonable to ask whether our findings are a universal property of a whole class of dynamical systems. Indeed, as outlined further above, the arguments provided in the previous sections are independent of the specific choice of the interaction matrix A. Specifically, we expect to find the survival of bet-hedging in the well-mixed limit for any model comprising neutrally stable orbits in the homogeneous dynamics, and thereby a maximum number of conserved quantities (closed orbits) in the heterogeneous system [8]. To understand the different phases in the cyclic LV model we made use of a geometric interpretation of the net transition rates, which argued for the monotonicity of W⁺_α: specialists interact on faster time scales than generalists. In fact, we expect that these general arguments hold true for any model comprising neutrally stable orbits and a vanishing net interaction rate of genotypes with themselves and with the bet-hedging genotype. We, therefore, believe that our findings on the evolution of phenotypic heterogeneity are not restricted to the models we studied here. Rather, we think that our results only depend on the basic properties of the underlying nonlinear dynamics.
The mechanisms responsible for the emergence of transitions in the degree of mobility suggest that these transitions cannot arise in fewer than two spatial dimensions. Indeed, simulations for one spatial dimension confirm that bet-hedgers dominate for all values of D under direct competition [Fig. 8]. For indirect competition we again find the survival of specialists, regardless of the value of the diffusion constant.
"year": 2014,
"sha1": "5b98eea91b8d0dd486782b368a2dfe486b7bd61b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1408.4912",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5b98eea91b8d0dd486782b368a2dfe486b7bd61b",
"s2fieldsofstudy": [
"Biology",
"Physics"
],
"extfieldsofstudy": [
"Biology",
"Physics",
"Medicine"
]
} |
Causal Excitation in Antenna Simulations
The critical relevance of ensuring the excitation's causality in electromagnetic (EM) simulations is validated via theoretical arguments and simulation results. Two families of model pulses with an implicitly causal behavior, namely the windowed-power (WP) and the power-exponential (PE) ones, are elaborately discussed. After introducing their unipolar prototypes, the relevant families are supplemented with monocycle and ringing variants, and are used for building signatures with almost rectangular spectral contents. Their utility is evidenced by contrasting their performance with that of other types of excitations that are habitually employed in antenna simulations. The WP pulse is also shown to be an almost exact replica of signatures generated by physical circuitry and to be singularly expedient for improving the effectiveness of EM computational packages.
Introduction
Causality is a fundamental property of macroscopic electromagnetic (EM) fields, stating that the electric field strength E(r, t) and the magnetic flux density B(r, t) at any instant are the effect of causes that acted before [1]. Causality, as accounted for throughout this study, clearly contrasts locality (or microcausality), which is a quantum theoretical concept. From a special relativity perspective, causality is a distinctive property of time-like phenomena (with classical EM falling decidedly under this category), as opposed to the space-like phenomena encountered within quantum mechanics [2]. From a theoretical viewpoint, causality is crucial to demonstrating the uniqueness of the EM initial value problem. In this respect, [3] has indisputably shown that uniqueness hinges upon the one-to-one correspondence between the causal time-domain (TD) EM field components and constitutive relations, on the one hand, and their time Laplace transforms, on the other hand (this argument will be reiterated later). Moreover, applying the reciprocity theorem (another fundamental EM result) to the case of unbounded domains requires, again, the use of causal sources with a bounded spatial support [4].
The macroscopic EM field theory finds its prime practical application in the wireless (digital) transfer and, in particular, in antenna engineering (AE). The role played by causality in this case was stated in [5], which insisted on the detrimental effect of non-causal excitations in studies concerning antennas radiating in unbounded domains. Furthermore, [6] stressed the necessity to enforce causality for ensuring the physical realizability of any pulse.
Upon zooming-in on AE, it is noted that the ultra-wideband (UWB) technology is decisive to pursuing the advance in digital wireless applications [7], a trend that gained momentum after the Federal Communications Commission (FCC) released in 2002 the 3.1–10.6 GHz band for low-level, unlicensed use in UWB applications [8]. Due to the intrinsic technological intricacies in producing UWB (antenna) systems, their design and performance prediction depends critically on increasingly sophisticated software simulation tools. Many authors resort to this end to frequency-domain (FD) instruments; this choice is justified by compatibility with measurement equipment capabilities (network analyzers operate in FD) and by the use of traditional concepts in AE, such as the operational bandwidth. However, digital communication occurs essentially in TD, and overlooking this aspect may result in the well-known detrimental effect of intersymbol interference [9], which can render the communication ineffective in channels in which the radiated power levels are well above the expected signal-to-noise-and-interference (SNRI) thresholds. As a result, the EM simulation tools of choice within the realm of UWB antennas should be of the TD variety, with [10-12] offering relevant such examples. Surprisingly, these tools employed manifestly non-causal pulses as excitation: Gaussian pulses [10], or the square root raised cosine (SRRC) pulse [11,12]. In fact, although the intrinsic perils entailed by violating causality were already signaled in [13], non-causal excitations are still to this day the norm in TD EM simulations.
With this in mind, the present study will advocate the use of strictly causal excitations in TD EM simulations for (UWB) antenna design. This work relies on the author's quest for developing various classes of causal model pulses. Moreover, some of these pulses were cogently shown to (almost exactly) replicate pulses produced by (solid-state) pulse generators. For brevity, except for a small number of elements that were insufficiently covered in previous publications, the discussion will be confined to conceptual foundations, with technical details being intentionally left out; all these aspects are covered in great detail in the cited references.
After introducing some definitions, the account will proceed by introducing the two basic unipolar prototypes of the pulses to be discussed and their immediate monocycle descendants. Two uniquely opportune classes of pulses will be subsequently inferred from these prototypes. Some important implications of the use of these causal excitations will then be catalogued. Conclusions will be drawn at the end.
General Definitions
Throughout this study, position is specified by the coordinates {x, y, z} with respect to a background Cartesian reference frame with origin O and three mutually orthogonal unit vectors {i_x, i_y, i_z} that, in this order, form a right-handed system. The position vector is r = x i_x + y i_y + z i_z, with |r| = r, and the time coordinate is t. (Partial) differentiation is denoted by ∂. The one-sided Laplace transform of a causal function h(t) is

L[h](s) = ∫_{t=0}^{∞} exp(−st) h(t) dt, (1)

with s ∈ R and s > 0. This choice ensures via Lerch's theorem [14], [15] that only one causal time-domain original corresponds to its related transform and is at the core of the uniqueness proof of the EM initial value problem [3]. The Fourier transform of a function h(t) that satisfies the needed conditions is

F[h](jω) = ∫_{t=−∞}^{∞} exp(−jωt) h(t) dt, (2)

in which ω = 2πf (with f ∈ R being the frequency). Whenever applicable, F[h](jω) will be inferred from (1) by taking s = jω. For compactness, the alternative notation ĥ will also be used for denoting the Fourier transform of h(t). U(•) will denote the Heaviside unit step function.
Causal Pulse Definitions
As stated in the Introduction, at the core of this study are some pulse prototypes that, via suitable transformations, allow deriving classes of pulses of concrete practical relevance. The design of these model pulses relies on the general guidelines delineated in [16], namely:
• start from a strictly causal unipolar prototype;
• define that prototype by using an as small as possible number of parameters that should have a clear technical interpretation and, preferably, be easily put in correspondence with standardized definitions;
• ensure a controlled time-differentiability of the prototype; infinite time-differentiability is undesirable since it cannot be replicated via physical circuitry.
Starting from these precepts, this section will discuss two basic unipolar prototypes, namely the windowed-power (WP) and the power-exponential (PE) ones. Since antenna systems are practically always fed by means of signals with no DC component in their spectral diagram, immediate descendants of the prototypes will then be derived by time differentiation, this yielding the corresponding WP and PE monocycle signatures, respectively. This section is supplemented with a quick review of other causal pulses encountered in EM (numerical) analyses.
Model Pulses with Finite Temporal Support
Temporal boundedness is an important feature of pulsed feeding. For example, timed array antennas [17] use purposefully designed modulations of a single-tone feeding signal for shaping their radiation patterns. Inspired by a (mechanical) on/off switching, such arrays standardly use rectangular time-windowed sinusoidal excitations. However, one must observe that the far-field EM field radiated by antennas is at least the time-derivative of the feeding signal's signature [18] (the received signal in a loop-to-loop transfer being the third-order time-derivative of the feeding current [19]). As a result, an on/off switched sine feeding renders the radiated EM field at least discontinuous, thus non-physical.
To remedy this situation, [21] introduced the windowed-power (WP) unipolar prototype having the expression

WP(τ, ν) = [τ(2 − τ)]^ν [U(τ) − U(τ − 2)], (3)

in which ν = 2, 3, 4, . . . is the pulse rising power and τ = t/t_r, with t_r > 0 being the pulse rise-time, namely the interval between the onset and the instant when the pulse peaks. This pulse is causal and has a finite temporal support 2t_r. The support of its first time-derivatives is also 2t_r. The pulse and its first ν − 1 time-derivatives are continuous at both onset and end, with the choice ν ≥ 2 ensuring this type of continuity at least for the monocycle derived from it. The WP prototype is normalized to unity. Another interesting feature of this model pulse is that its Fourier transform is analytical, namely

ŴP(jω) = √π Γ(ν + 1) t_r exp(−jωt_r) [2/(ωt_r)]^{ν+½} J_{ν+½}(ωt_r), (4)

in which J_{ν+½} is the Bessel function of the first kind and fractional order [20, Section 10.1] (see the full proof of this result in Appendix 1). Note that |ŴP(jω)| decays as ω^{−(ν+1)} when ω → ∞, the rising power thus directly controlling the spectral roll-off. Examples of both TD signatures and their corresponding spectra are given in [21]. Moreover, that publication also evidenced this pulse's exceptionally low spectral leakage (SpL), an essential figure of merit of any apodization function [22].
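As a consistency check on the reconstructed expressions (3) and (4), the following sketch (illustrative parameter values; not part of the original study) compares a brute-force evaluation of the Fourier integral over the pulse's finite support with the closed-form Bessel expression.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, jv

nu, t_r = 4, 1.0e-9                      # rising power and rise time (illustrative values)

def wp(t):                               # WP prototype, Eq. (3)
    tau = t / t_r
    return (tau * (2 - tau)) ** nu if 0 < tau < 2 else 0.0

def spectrum_numeric(omega):             # brute-force Fourier integral over [0, 2 t_r]
    re = quad(lambda t: wp(t) * np.cos(omega * t), 0, 2 * t_r, limit=400)[0]
    im = quad(lambda t: -wp(t) * np.sin(omega * t), 0, 2 * t_r, limit=400)[0]
    return re + 1j * im

def spectrum_closed(omega):              # Eq. (4), Bessel function of order nu + 1/2
    a = omega * t_r
    return (np.sqrt(np.pi) * gamma(nu + 1) * t_r * np.exp(-1j * a)
            * (2 / a) ** (nu + 0.5) * jv(nu + 0.5, a))

for f in (0.5e9, 2.0e9, 8.0e9):
    w0 = 2 * np.pi * f
    print(f"f = {f/1e9:4.1f} GHz   numeric = {spectrum_numeric(w0):.4e}   closed = {spectrum_closed(w0):.4e}")
```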
A monocycle signature can be easily obtained from (3) by taking its time-differential. Its normalized expression is proportional to (1 − τ)[τ(2 − τ)]^{ν−1} [U(τ) − U(τ − 2)], scaled to unit amplitude. The WP monocycle has a zero-crossing at t = t_r and extrema at t_{ex;±} = t_r [1 ± (2ν − 1)^{−½}], respectively. Its Fourier transform follows from (4), by multiplication by jω. Examples of both TD signatures and their corresponding spectra can also be found in [21]. The WP pulse was shown in [23] to almost perfectly replicate the signature generated by the solid-state pulse generator described in [24].
Higher-order time-differentiated versions of the WP pulse, as required, for instance, by evaluating the expressions in [19] or for implementing the algorithm in [25], can be easily derived. Here, the controlled differentiability of the prototype turns out to be extremely beneficial.
Model Pulses with Infinite Temporal Support
New classes of causal pulses can be constructed by relaxing the temporal finiteness requirement. Two such classes were introduced in [16], out of which the power-exponential (PE) one is highly relevant for AE applications. The unipolar prototype reads

PE(τ, ν) = [τ exp(1 − τ)]^ν U(τ), (6)

in which ν and τ have the same significance as in (3). This pulse is also causal but has an infinite tail. Its conventional pulse width t_w defined according to [16, Eq. (23)] is interrelated with ν and t_r as

t_w = t_r Γ(ν + 1) exp(ν) / ν^{ν+1}, (7)

with Γ(•) denoting the Euler gamma function. This prototype, too, has controlled differentiability at its onset. Its Laplace transform is

L[PE](s) = t_r Γ(ν + 1) exp(ν) / (ν + st_r)^{ν+1}, (8)

with its Fourier transform following by taking s = jω in (8); the condition Re(s) > −ν/t_r guarantees the validity of this choice. Examples of both TD signatures and their corresponding spectra are given in [16].
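A quick numerical verification of (7) and (8), under the reconstructed prototype (6) (illustrative values; the infinite tail is truncated far beyond the pulse width):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

nu, t_r = 3, 1.0                          # illustrative values

pe = lambda t: (t / t_r * np.exp(1 - t / t_r)) ** nu   # PE prototype, Eq. (6), unit peak at t_r

area = quad(pe, 0, 60 * t_r)[0]           # pulse area; equals t_w since the peak is unity
t_w = t_r * gamma(nu + 1) * np.exp(nu) / nu ** (nu + 1)     # Eq. (7)
print("numeric area:", area, "   t_w from Eq. (7):", t_w)

s = 2.0                                   # any s with Re(s) > -nu / t_r
numeric = quad(lambda t: pe(t) * np.exp(-s * t), 0, 60 * t_r)[0]
closed = t_r * gamma(nu + 1) * np.exp(nu) / (nu + s * t_r) ** (nu + 1)   # Eq. (8)
print("L[PE](s) numeric:", numeric, "   closed form:", closed)
```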
As with the WP, a monocycle is derived from (6) by taking the first time-differential, the relevant expression being proportional to (1 − τ) τ^{ν−1} exp[ν(1 − τ)] U(τ), in which the normalization ensures a unit amplitude. The Laplace transform of the PE monocycle and, implicitly, its Fourier transform, follow by multiplying L[PE](s) in (8) by s. Examples of both TD signatures and their spectra are also given in [16].
Despite their infinite tail, the very simple expressions of these pulses made them attractive for several EM formulations. For example, the PE excitation was used in [26] and a PE variant was used in [25,27].
Other Model Pulses Employed in Analytical EM Frameworks
Apart from causality, the WP and PE pulses offer some additional beneficial features, such as the finite temporal support of the WP, and their controlled differentiability at the onset (and at the endpoint, for the WP). However, their expressions may be, occasionally, excessively complicated and other pulses may prove more appropriate.
A popular such choice is the triangular pulse, a superior alternative to the rectangular pulse with its jump discontinuities at endpoints. As with the WP and PE classes, the triangular pulse comes in unipolar and monocycle (bipolar) variants. While having an evidently simpler expression, the triangular signatures are discontinuous already in their first time-derivative, this making them unsuitable for formulations such as that in [19]. However, its convolution with a large class of functions (for example testing functions in a Method of Moments context or a Green's function) can be carried out analytically, which is undoubtedly advantageous. As a result, the unipolar triangular excitation was used in [28,29] while its bipolar variant was used in [30,31].
Another option is offered by the unipolar bell-shaped excitation. An appealing property of this type of excitation is that it can be obtained by convolving rectangular and triangular pulses [32,33] or by convolving two triangular pulses [34]. This procedure ensures a sufficient degree of smoothness at the pulse's endpoints. Another interesting observation is the remarkable similarity between the WP unipolar and monocycle signatures, on the one hand, and (combinations of) suitably chosen unipolar bell-shaped excitations, on the other hand.
Special Features of Pulsed Excitations
In this section, two additional special features that can be provided by pulsed excitations will be examined.
Pulses with Rectangular Spectral Content
Electronic circuitry operates over a limited bandwidth, only. All pulses discussed in Sec. 3 have infinite spectra and, while expedient for numerical analyses, they cannot be exactly generated and manipulated by physical circuits. The intrinsic bandwidth limitation inherently affects wireless transmissions, where it induces unwanted artifacts in the employed modulation schemes [9], with intersymbol interference as one of the most detrimental effects in digital communications. To control these artifacts, signals are filtered prior to modulation via filters with an as flat as possible transfer in the passband.
Upon acknowledging the consequences of band limitation, antenna systems are often simulated via excitations having a spectral content that closely mimics a rectangular one. A customary choice in UWB antenna simulation is the square root raised cosine (SRRC) pulse [12] (that, in turn, is based on [9])

SRRC(τ, β) = {sin[π(1 − β)τ] + 4βτ cos[π(1 + β)τ]} / {πτ[1 − (4βτ)²]},

where T_s = 1/f_s, with f_s being the symbol rate, τ = t/T_s, and β is a dimensionless roll-off factor for bandwidth control. An example of the relevant TD signature and its spectrum is given in [12]. The SRRC is not causal and it needs to be truncated. This situation was highlighted in [9], which also commented that the truncation effect is acceptable within the realm of wireless communications due to the pulse's sharp decay. However, when used as an excitation in a TD EM simulation, turning on an SRRC via a Heaviside unit step function will result in a jump discontinuity (as will be documented below) and this can severely impact the validity of the obtained results.
Another quite widely employed band-limited pulse is the approximate prolate wave function [35]; this type of signature is more frequently used for windowing purposes. Nonetheless, this pulse is also non-causal and suffers from the same disadvantages as an SRRC excitation.
The need for causal pulses with an almost rectangular spectral content was resolved in [5,21], which advocated a simple but effective strategy for equipping the PE and WP families with this property. By denoting as B an intended frequency bandwidth with upper and lower limits f_h and f_l, respectively, and center frequency f_c = (f_l + f_h)/2, applying that strategy yielded the PE modulated-sinc-cosine (PE−sc) pulse [5]

PE−sc(τ, κ_sc, f_c, ν) = sinc[κ_sc(τ − 1)] cos[2πf_c t_r(τ − 1)] PE(τ, ν) (12)

and the WP modulated-sinc-cosine (WP−sc) pulse [21]

WP−sc(τ, κ_sc, f_c, ν) = sinc[κ_sc(τ − 1)] cos[2πf_c t_r(τ − 1)] WP(τ, ν). (13)

κ_sc in (12) and (13) is a scaling coefficient interrelating B and t_r as B = κ_sc/t_r (κ_sc ≥ 3 for practical applications). Examples of both TD signatures and their corresponding spectra are available in [5,21]. The therein provided examples concerned raising powers ν ≥ 2, such values being recommendable for ensuring sufficient smoothness at onset (and the endpoint). The spectral behavior of both pulses was shown (i) to approximate increasingly well a rectangular shape as κ_sc increases, while the influence of ν on its shape is minimal, and (ii) to have an approximately −6 dB attenuation at both f_l and f_h. These properties immediately entailed easy design rules: by taking κ_sc ≥ 3, t_r and f_c follow from the intended f_l and f_h, with ν being chosen more or less arbitrarily for ensuring a certain pulse 'smoothness'. The superiority of the WP−sc or PE−sc pulses over the SRRC one for (TD) EM numerical formulation ends is compellingly demonstrated in Fig. 1, which juxtaposes the excitation in [12] with WP−sc and PE−sc pulses tailored to provide an approximately flat spectral content over a frequency range between f_l = 3.1 GHz and f_h = 10.6 GHz (corresponding to the −41.3 dBm part of the spectral mask for indoor UWB systems, as specified in [36]). Both the WP−sc and the PE−sc pulses have t_r = 1.5 ns (tuned to the SRRC signature in [12]), ν = 4 and κ_sc = 10.9, the last parameter entailing a near-perfect match of the −6 dB points in the spectral diagrams of all 3 pulses. The plots demonstrate the remarkable concurrence between the in-band spectral contents of all three pulses. Nonetheless, the spectral leakage is largely improved, from −44 dB for SRRC, to −52.2 dB for PE−sc and to an exceptionally low −64.7 dB for WP−sc. The benefits are most apparent in the TD signatures in Fig. 1(a). While the overall behavior is largely similar, the non-causality of the SRRC is evident in the zoom-in around t = 0. Although one may argue that the SRRC value at 0 is small, its patent deviation in even a minute vicinity of 0 implies that any inaccuracy in its turning on may lead to significant step discontinuities. Here, one must recall that marching-on-in-time numerical schemes demand staggered time sampling and, thus, some numerically evaluated values are bound to be affected by the relevant step discontinuities. A similar situation is apparent at t = 2t_r that, apart from illustrating the inferiority of an SRRC excitation, also highlights the benefit of the time-windowed WP−sc pulse.
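The design rules lend themselves to a compact numerical illustration. The sketch below (our own reconstruction; it assumes sinc(x) = sin(πx)/(πx), as implemented by numpy.sinc) builds a WP−sc pulse for the 3.1–10.6 GHz band and reads the attenuation of its zero-padded FFT spectrum at the band edges, where roughly −6 dB is expected.

```python
import numpy as np

nu, kappa_sc = 4, 10.9                     # raising power and scaling coefficient
f_l, f_h = 3.1e9, 10.6e9                   # intended band edges
t_r = kappa_sc / (f_h - f_l)               # design rule B = kappa_sc / t_r  (~1.45 ns)
f_c = 0.5 * (f_l + f_h)                    # 6.85 GHz center frequency

def wp_sc(t):                              # WP-sc pulse, Eq. (13)
    tau = t / t_r
    wp = np.where((tau > 0) & (tau < 2), (tau * (2 - tau)) ** nu, 0.0)
    return np.sinc(kappa_sc * (tau - 1)) * np.cos(2 * np.pi * f_c * t_r * (tau - 1)) * wp

dt = 1.0e-12
t = np.arange(0.0, 2 * t_r, dt)
spec = np.fft.rfft(wp_sc(t), n=1 << 17) * dt          # zero-padded for fine resolution
f = np.fft.rfftfreq(1 << 17, dt)
mag = 20 * np.log10(np.abs(spec) / np.abs(spec).max())

for f0 in (f_l, f_c, f_h):
    print(f"f = {f0/1e9:5.2f} GHz : {mag[np.argmin(np.abs(f - f0))]:6.1f} dB")
```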
Ringing Pulses
Ringing is frequently manifest in feeding circuits and antennas; such occurrences are common in pulsed radar applications but are also likely to surface in baseband, digital transfer (see [37]). For facilitating the study of this phenomenon in antenna (numerical) experiments, [16] endowed the PE family with a ringing pulse defined as an amplitude-modulated cosine or sine function of carrier frequency f_0, its envelope being provided by the PE unipolar signature in (6). For increased flexibility, a normalized time-derivative variant of this pulse was also introduced. Examples of both TD signatures and their spectra are available in [16].
Although the mathematical expression of these pulses is rather intricate and, thus, their use in analytical formulations may be cumbersome, they prove to be extremely convenient for purely numerical formulations. Moreover, these pulses were shown in [16] to present a high degree of similarity with pulses effectively generated by physical circuits.
A ringing pulse based on the WP unipolar prototype has not yet been presented in the literature. For maintaining this account's focus, this topic is deferred to future publications.
Feature Practical Benefits
Sections 3 and 4 focused on the conceptual benefits of using the causal WP and PE families of pulses and highlighted their propitious effect when used in certain computational EM frameworks. This section will concentrate on practical situations that can particularly leverage the characteristic properties of these families.
Time-Windowed EM Simulations
Present-day antenna designs critically depend on numerical studies of increasing complexity, with commercial EM computational tools becoming an omnipresent element of any design methodology. Such approaches push the available hardware resources to their limits and drastic simplifications must often be accepted for making those simulations tractable. One of the additional, significant complications induced by antenna simulations is the necessity to ensure their unimpeded radiation into an unbounded embedding. This is particularly testing in the case of the finite-difference or finite-element (type) methods that can only be applied to bounded domains of computation. Several strategies were proposed for precluding reflections from the boundary, the most popular being the absorbing boundary conditions introduced in [38] (and complemented in [39]) and the perfectly matched layers (PML) introduced in [40-42] (with its coordinate stretching alternative [43]). Of the two, the latter has become the norm in most of the prevalent (commercial) software packages. However, as shown in [41], these domain termination methods are plagued by spurious reflections, primarily from waves reaching the boundary at slanting angles. Moreover, the specific transformations required by implementing these methods are only compatible with one single type of embedding, this all but ruling out the direct incorporation of multi-layer configurations, which can only be reliably examined by introducing a homogeneous buffer between a truncated version of the layered arrangement and the PML boundary. Apart from the evident perturbation of the investigated structure, this choice places an additional penalty on the computational resources that need to also calculate field values in the essentially useless buffer zone.
An elegant remedy to this deficiency was advocated in [21], which put forward the use of time-windowed EM simulations. That approach relies on performing the analysis within a time window that is sufficient for allowing the EM perturbation to propagate through the region of interest D, but ends prior to any reflection propagating back from the boundary reaching D. Admittedly, this strategy does require extending the domain of computation for ensuring the needed safe margin, but it (i) allows using elementary boundary conditions (PML-type boundaries are superfluous) and (ii) makes use of a limited time window, whereas traditional TD analyses require long simulations for ensuring that the entire input energy effectively leaves the domain of computation. Moreover, the approach is well-suited for examining layered configurations, provided the diameter of D is reasonably small. The essential ingredient for its application is the use of the causal, time-limited WP excitation. Time-windowed EM simulations were shown in [21] to provide excellent results, including for frequency-domain (FD) studies, while requiring up to 30 times shorter computation times and comparable memory resources. This approach was also used for the numerical modeling of CMOS embedded radiating loops of the kind appearing in [25], a typical multi-layered configuration example.
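A back-of-the-envelope sketch of the time-window sizing (our illustration, assuming a homogeneous free-space embedding and a crude one-dimensional round-trip estimate; all distances are arbitrary):

```python
c0 = 299_792_458.0                        # free-space wave speed, m/s

def window_budget(d_region, d_margin, t_excitation):
    """Crude 1-D estimate: the useful window covers the excitation plus one
    transit of the region of interest; the boundary reflection re-enters the
    region after additionally crossing the safety margin twice."""
    t_useful = t_excitation + d_region / c0
    t_return = (d_region + 2.0 * d_margin) / c0
    return t_useful, t_return

# region of interest 0.15 m across, 0.50 m margin to the boundary,
# WP excitation of duration 2 * t_r = 3 ns (illustrative values)
t_useful, t_return = window_budget(0.15, 0.50, 3.0e-9)
print(f"useful window            : {t_useful * 1e9:5.2f} ns")   # ~3.50 ns
print(f"earliest boundary return : {t_return * 1e9:5.2f} ns")   # ~3.84 ns
print(f"margin must exceed c0 * t_excitation / 2 = {c0 * 3.0e-9 / 2:.3f} m")
```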
Replication of Effectively Generated Pulses
Both the WP and the PE families of causal pulses were introduced purely mathematically. While their suitability for analytical and numerical explorations has been aptly argued, the question remains to what extent such pulses resemble pulses that can be generated by physical circuitry. This question was affirmatively answered already in [16], which evidenced the similarity between suitably constructed PE and time-differentiated, ringing PE pulses, on the one hand, and the TD signatures generated by the circuits presented in [44] and [45], respectively, on the other hand.
The WP has an even higher similitude with physically generated pulses. In this respect, [21] has evidenced its likeness to the monocycle generated by the circuit introduced in [46], whereas [23] has shown its practical identity with the one generated by the solid-state implementation discussed in [24]. Based on the latter demonstrated congruence, the WP represents the monocycle of choice for modeling UWB antenna systems; any computer-simulated prediction has very high chances of being amenable to a physical implementation since the feeding pulse generator is readily available.
Conclusions
The crucial role played in EM simulations by the excitation's causality was attested via theoretical arguments and numerical examples. Upon establishing the need for causal excitations, two families of pulses were constructed by starting from windowed-power (WP) and power-exponential (PE) unipolar prototypes. These prototypes were then used for deriving monocycles and pulses with almost rectangular spectra, and as envelopes for constructing ringing pulses. The account constantly stressed the conceptual and practical benefits entailed by using these pulses and cogently demonstrated their superiority with respect to other excitations that are regularly used in antenna systems simulations. Two practical applications that particularly illustrate the opportunity of the time-windowed WP family were singled out. Firstly, the study promoted TD, time-windowed computational schemes as a means for eliminating the effect of the spurious reflections from the boundaries of computational domains and/or for drastically reducing runtimes. Secondly, upon noting the near-coincidence of this mathematical instrument with signatures that are generated by readily-available, solid-state pulse generators, the WP monocycle was put forward as an ideal excitation type in any design framework within the scope of UWB antenna systems.
Fig. 1. Comparison between the SRRC pulse shown in [12, Figs. 1 and 2], and the PE−sc and WP−sc pulses mimicking its characteristics. (a) Time-domain signatures; (b) spectral content. The solid vertical lines in (b) indicate the limits of the −41.3 dBm mask for indoor UWB systems, as specified in [36], and the dashed vertical line marks the targeted 6.85 GHz center frequency. The insets in (a) are zoom-ins around t = 0 ns and t = 2t_r = 3 ns.
"year": 2021,
"sha1": "636acd80d853ee1d478c0e07ea94a5b74dc42d64",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.13164/re.2021.0001",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "636acd80d853ee1d478c0e07ea94a5b74dc42d64",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
On the joint spectra of the two dimensional Lie algebra of operators in Hilbert spaces
We consider the complex solvable non-commutative two dimensional Lie algebra $L$, $L = \langle y\rangle \oplus \langle x\rangle$, with Lie bracket $[x,y]=y$, as linear bounded operators acting on a complex Hilbert space $H$. Under the assumption $R(y)$ closed, we reduce the computation of the joint spectra $Sp(L,E)$, $\sigma_{\delta ,k}(L,E)$ and $\sigma_{\pi ,k}(L,E)$, $k= 0,1,2$, to the computation of the spectrum, the approximate point spectrum, and the approximate compression spectrum of a single operator. Besides, we also study the case $y^2=0$, and we apply our results to the case $H$ finite dimensional.
Introduction
In [1] we introduced a joint spectrum for complex solvable finite dimensional Lie algebras of operators acting on a Banach space E. If L is such an algebra, and Sp(L, E) denotes its joint spectrum, Sp(L, E) is a compact non empty subset of L*, which also satisfies the projection property for ideals, i.e., if I is an ideal of L and Π: L* → I* denotes the restriction map, then Sp(I, E) = Π(Sp(L, E)). In addition, when L is a commutative algebra, Sp(L, E) reduces to the Taylor joint spectrum, see [5]. Moreover, in [2] we extended the Słodkowski joint spectra σ_{δ,k} and σ_{π,k} to the case under consideration and we proved the usual spectral properties: they are compact non empty subsets of L* and the projection property for ideals still holds.
In this paper we consider the complex solvable non-commutative two dimensional Lie algebra L, L = ⟨y⟩ ⊕ ⟨x⟩, with Lie bracket [x, y] = y, as bounded linear operators acting on a complex Hilbert space H, and we compute the joint spectra Sp(L, H), σ_{δ,k}(L, H) and σ_{π,k}(L, H), for k = 0, 1, 2, when R(y) is a closed subspace of H. Besides, by means of a homological argument, we reduce the computation of these spectra to the one dimensional case. We prove that these joint spectra are determined by the spectrum, the approximate point spectrum, and the approximate compression spectrum of x in Ker(y) and of x̄ in H/R(y), where x̄ is the quotient map associated to x (R(y) and Ker(y) are invariant subspaces for the operator x).
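In L(H)^op the bracket condition [x, y] = y amounts, for matrices, to yx − xy = y. The following finite dimensional sketch (our own illustrative construction, not taken from the paper) builds such a pair and computes the two spectra that, by the results announced above, control the joint spectra: that of x − 1 restricted to Ker(y) and that of the operator induced by x on H/R(y).

```python
import numpy as np

# x diagonal and y a shift: for y = E_ij one has y x - x y = (d_j - d_i) E_ij,
# so choosing d_j = d_i + 1 along the shift enforces y x - x y = y.
x = np.diag([0.0, 1.0, 2.0, 1.0])
y = np.zeros((4, 4)); y[0, 1] = 1.0; y[1, 2] = 1.0
assert np.allclose(y @ x - x @ y, y)          # [x, y]_op = y

u, s, vt = np.linalg.svd(y)
r = int((s > 1e-12).sum())
ker = vt[r:].T                                # orthonormal basis of Ker(y), x-invariant
quot = u[:, r:]                               # basis of R(y)-perp, a model of H / R(y)

x_ker = ker.T @ x @ ker                       # restriction of x to Ker(y)
x_bar = quot.T @ x @ quot                     # induced operator on the quotient
print("Sp(x - 1, Ker(y)) :", np.sort(np.linalg.eigvals(x_ker).real) - 1)   # {-1, 0}
print("Sp(x_bar, H/R(y)) :", np.sort(np.linalg.eigvals(x_bar).real))       # {1, 2}
```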
In addition, we consider the case y² = 0 (it is easy to see that y is then a nilpotent operator), and we obtain a relation between the spectrum of x in R(y) and a subset of the spectrum of x̄ in H/R(y), which gives us a more precise characterization of the joint spectrum Sp(L, E). Finally, we apply our computation to the case H finite dimensional.
The paper is organized as follows. In Section 2 we review several definitions and results of [1] and [2]. In Section 3 we prove our main theorems and, in Section 4, we consider the case y² = 0 and the finite dimensional case.
Preliminaries
In this section we briefly recall the definitions of the joint spectra Sp(L, H), σ_{δ,k}(L, H) and σ_{π,k}(L, H), k = 0, 1, 2. We restrict ourselves to the case under consideration. For a complete account of the definitions and main properties of these joint spectra, see [1] and [2].
From now on, let L be the complex solvable two dimensional Lie algebra, L =< y > ⊕ < x >, with Lie bracket [x, y] = y, which acts on a Hilbert space H as continuous linear operators on the right, i.e., L is a Lie subalgebra of L(H) op , where L(H) is the algebra of all bounded linear operators defined on H, and where L(H) op means that we consider L(H) with its opposite product. We observe that any complex solvable non-commutative two dimensional Lie algebra may be presented in the above form.
If f is a character of L, we consider the chain complex (H ⊗ ∧L, d(f )), where ∧L denotes the exterior algebra of L and d(f ) is the corresponding boundary map. Let H * (H ⊗ ∧L, d(f )) denote the homology of the complex (H ⊗ ∧L, d(f )). We now state our first definition.
In addition, the complex (H ⊗ ∧L, d(f )) may be rewritten in terms of the pair (y, x), where λ = f (x); we denote this chain complex by (C, d(λ)). Thus, as (0, λ) ∈ Sp((y, x), H) if and only if f ∈ Sp(L, H), where λ = f (x), computing the latter is equivalent to computing the former, and studying the exactness of the chain complex (H ⊗ ∧L, d(f )) is equivalent to studying the exactness of (C, d(λ)).
With regard to the joint spectra σ δ,k (L, H) and σ π,k (L, H), k = 0, 1, 2, we review, for the case under consideration, their definition given in [2]. We now state our second definition.
Definition 2.2. With H, L and f as above, the joint spectra σ δ,k (L, H) and σ π,k (L, H), k = 0, 1, 2, are defined as in [2]. We observe that Sp(L, H) = σ δ,2 (L, H) = σ π,0 (L, H). Besides, as we have said, these joint spectra are compact non empty subsets of L * . In addition, as in the case of the joint spectrum Sp(L, H), we consider the joint spectra σ δ,k (L, H) and σ π,k (L, H) in terms of the bases A and B. Moreover, as in the case of the joint spectrum Sp(L, H), computing σ δ,k (L, H) and σ π,k (L, H), 0 ≤ k ≤ 2, is equivalent to computing these joint spectra in terms of the bases A and B. Finally, to compute the latter joint spectra it is enough to study the complex (C, d(λ)) and to consider, for it, the corresponding properties involved in the definition of σ δ,k (L, H) and σ π,k (L, H), 0 ≤ k ≤ 2.
The Main Result
We begin with the characterization of Sp(L, H). Indeed, we consider Sp((y, x), H), and by means of a homological argument we reduce its computation to the case of a single operator.
Let us consider the chain complex (C, d). Then an easy calculation shows that we have a short exact sequence of chain complexes with maps (i j ) (0≤j≤2) and (p j ) (0≤j≤2) . Thus, by [4, Chapter II, Section 4, Theorem 4.1] and the fact that p is a map of degree −1, we have a long exact sequence of homology spaces. We observe that H 1 (C, d) = Ker(y) and that H 0 (C, d) = H/R(y). Moreover, as [x, y] op = y, we have that x(R(y)) ⊆ R(y) and that x(Ker(y)) ⊆ Ker(y). In addition, we have: Proof. It is a consequence of the long exact sequence of homology spaces and the form of the maps ∂ j , j = 0, 1.
In order to characterize the joint spectra σ π,k (L, H), we recall the notion of approximate point spectrum of an operator T : λ is in the approximate point spectrum of T , which we denote by Π(T ), if there exists a sequence of unit vectors (x n ) n∈N , x n ∈ H, ‖x n ‖ = 1, such that (T − λ)(x n ) → 0 (n → ∞). An easy calculation shows that λ ∉ Π(T ) if and only if Ker(T − λ) = 0 and R(T − λ) is closed in H.
Indeed, one direction follows by considering sequences (a n ) n∈N in Ker(y). On the other hand, if R((x − 1 − λ) | Ker(y) ) is closed, let us consider a sequence (z n ) n∈N , z n ∈ H, such that d 1 (λ)(z n ) → (w 1 , w 2 ) ∈ H ⊕ H (n → ∞). We decompose H as the orthogonal direct sum of Ker(y) and Ker(y) ⊥ , H = Ker(y) ⊕ Ker(y) ⊥ . Let (a n ) n∈N and (b n ) n∈N be sequences in Ker(y) and Ker(y) ⊥ , respectively, such that z n = a n + b n , and let y : Ker(y) ⊥ → R(y) denote the restriction of y to Ker(y) ⊥ . We observe that, as R(y) is a closed subspace of H, y is a topological homeomorphism. Besides, as y(b n ) → w 2 (n → ∞), there exists a z 2 ∈ Ker(y) ⊥ such that b n → z 2 (n → ∞) and y(z 2 ) = w 2 . Then, as (a n ) n∈N is a sequence in Ker(y) and R((x − 1 − λ) | Ker(y) ) is closed, there is a z 1 ∈ Ker(y) such that (w 1 , w 2 ) belongs to R(d 1 (λ)); hence R(d 1 (λ)) is closed. With regard to σ π,1 ((y, x), H), we have, by Definition 2.2, a characterization which, by Proposition 1, is equivalent to three conditions (i)-(iii) on λ. We shall see that σ π,1 ((y, x), H) = Sp(x − 1, Ker(y)) ∪ Π(x, H/R(y)). Indeed, it is clear that condition (i) is equivalent to λ ∉ Sp(x − 1, Ker(y)). Then, it is enough to see that conditions (ii)-(iii) are equivalent to λ ∉ Π(x, H/R(y)). However, by (ii), it suffices to verify that R(d 0 (λ)) is closed if and only if R(x − λ) is closed. Now, as the quotient map Π : H → H/R(y) is an identification, this follows from [3, Chapter II, Section 6, Lemma 6.1]. In order to study the joint spectra σ δ,k (L, H), k = 0, 1, 2, we recall the definition of the approximate compression spectrum of an operator T in H: λ is in the approximate compression spectrum of T , which we denote by ΠC(T ), if there exists a sequence of unit vectors in H, (x n ) n∈N , x n ∈ H, ‖x n ‖ = 1, such that (T − λ) * (x n ) → 0 (n → ∞), i.e., ΠC(T ) = Π(T * ). Besides, an easy calculation shows that λ does not belong to ΠC(T ) if and only if (T − λ) is a surjective map.
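For orientation, a standard example (an added illustration, not part of the original text) separating the two notions: let S be the unilateral shift on ℓ²(ℕ),
\[
S(x_1, x_2, \dots) = (0, x_1, x_2, \dots), \qquad \|(S-\lambda)x\| \ \ge\ \|Sx\| - |\lambda|\,\|x\| \;=\; (1-|\lambda|)\,\|x\|.
\]
Hence, for |λ| < 1, the operator S − λ is injective with closed range, so λ ∉ Π(S); however, S − λ is not surjective, since (1, λ̄, λ̄², …) ∈ Ker(S * − λ̄), so λ ∈ ΠC(S). Consequently Π(S) is the unit circle, ΠC(S) contains the open unit disc, and Sp(S) is the closed unit disc, which shows that the approximate point spectrum may be strictly smaller than the spectrum.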
A Special Case
As we have seen, y is a nilpotent operator. In this section we study the case y 2 = 0 and we obtain a more precise characterization of the joint spectrum Sp(L, H).
We decompose H in the following way: H = Ker(y) ⊕ Ker(y) ⊥ . Besides, as R(y) is contained in Ker(y), let us consider M, the closed subspace of H defined by M = Ker(y) ∩ R(y) ⊥ . Then we have another orthogonal direct sum decomposition of H, H = R(y) ⊕ M ⊕ Ker(y) ⊥ . Moreover, if we recall that x(R(y)) ⊆ R(y) and x(Ker(y)) ⊆ Ker(y), then x and y have a block form with respect to this decomposition: x is block upper triangular, y has its only nonzero block mapping Ker(y) ⊥ onto R(y) (y as in Section 3), and the maps x ij , 1 ≤ i ≤ j ≤ 3, are the components of x. We now see that, in the case under consideration, Sp(L, H) reduces essentially to the spectrum of x in Ker(y). Proof. An easy calculation shows that the relation [x, y] op = y is equivalent to yx 33 − x 11 y = y. Then, as y is a topological homeomorphism, x 33 = I Ker(y) ⊥ + y −1 x 11 y. In particular, Sp(x 33 , Ker(y) ⊥ ) = Sp(x 11 , R(y)) + 1. Then, as Sp(x, H/R(y)) = Sp(x 22 , M) ∪ Sp(x 33 , Ker(y) ⊥ ), where M = R(y) ⊥ ∩ Ker(y), we have that Sp(x, H/R(y)) = (S 1 + 2) ∪ S 2 .
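A worked finite dimensional check of the shift by 1 (an added illustration, not part of the original text): in H = ℂ², take
\[
y = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad x = \begin{pmatrix} a & 0 \\ 0 & a+1 \end{pmatrix},
\]
so that yx − xy = y, i.e. [x, y] op = y, and y 2 = 0 with R(y) = Ker(y) = span{e 1 } (hence M = 0). Here x 11 = a on R(y) and x 33 = a + 1 on Ker(y) ⊥ = span{e 2 }, in agreement with x 33 = I + y −1 x 11 y and Sp(x 33 , Ker(y) ⊥ ) = Sp(x 11 , R(y)) + 1.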
Finally, we consider the case R(y) closed, y 2 = 0, and H finite dimensional. If r = dim(R(y)) and k = dim(Ker(y)), let us choose a basis of Ker(y) such that the first r vectors of it form a basis of R(y); in this basis, x has an upper triangular form, with diagonal entries λ ii , 1 ≤ i ≤ k. Then we have the following corollary. | 2016-03-09T11:47:40.000Z | 2016-03-09T00:00:00.000 | {
"year": 2016,
"sha1": "eb35c78a580c8f3c1bb4507f1fd3e67f643ad854",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "eb35c78a580c8f3c1bb4507f1fd3e67f643ad854",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
3780142 | pes2o/s2orc | v3-fos-license | Distamycin A Inhibits HMGA1-Binding to the P-Selectin Promoter and Attenuates Lung and Liver Inflammation during Murine Endotoxemia
Background The architectural transcription factor High Mobility Group-A1 (HMGA1) binds to the minor groove of AT-rich DNA and forms transcription factor complexes (“enhanceosomes”) that upregulate expression of select genes within the inflammatory cascade during critical illness syndromes such as acute lung injury (ALI). AT-rich regions of DNA surround transcription factor binding sites in genes critical for the inflammatory response. Minor groove binding drugs (MGBs), such as Distamycin A (Dist A), interfere with AT-rich region DNA binding in a sequence and conformation-specific manner, and HMGA1 is one of the few transcription factors whose binding is inhibited by MGBs. Objectives To determine whether MGBs exert beneficial effects during endotoxemia through attenuating tissue inflammation via interfering with HMGA1-DNA binding and modulating expression of adhesion molecules. Methodology/Principal Findings Administration of Dist A significantly decreased lung and liver inflammation during murine endotoxemia. In intravital microscopy studies, Dist A attenuated neutrophil-endothelial interactions in vivo following an inflammatory stimulus. Endotoxin induction of P-selectin expression in lung and liver tissue and promoter activity in endothelial cells was significantly reduced by Dist A, while E-selectin induction was not significantly affected. Moreover, Dist A disrupted formation of an inducible complex containing NF-κB that binds an AT-rich region of the P-selectin promoter. Transfection studies demonstrated a critical role for HMGA1 in facilitating cytokine and NF-κB induction of P-selectin promoter activity, and Dist A inhibited binding of HMGA1 to this AT-rich region of the P-selectin promoter in vivo. Conclusions/Significance We describe a novel targeted approach in modulating lung and liver inflammation in vivo during murine endotoxemia through decreasing binding of HMGA1 to a distinct AT-rich region of the P-selectin promoter. These studies highlight the ability of MGBs to function as molecular tools for dissecting transcriptional mechanisms in vivo and suggest alternative treatment approaches for critical illness.
Introduction
Acute lung injury (ALI) represents a devastating clinical syndrome with increasing incidence that is initiated by an injurious stimulus, followed by the development of lung inflammation, increased alveolar-capillary barrier permeability, and influx of protein-rich edema fluid with resultant impairment in gas exchange due to alveolar flooding. Injury to the lung can be incurred through direct means (e.g., aspiration pneumonia), or, more commonly through indirect means (e.g., abdominal sepsis and resultant bacteremia often from gram negative rods that elaborate endotoxin). Despite the similar disruption of the alveolar-capillary membrane as an endpoint of both indirect and direct lung injury, the underlying mechanisms of injury are likely quite different, with direct injury initially targeting the lung alveolar epithelial cell and indirect injury activating the endothelium in the early stages [1]. Irrespective of the mechanism of lung injury, there exist no targeted treatment strategies for ALI, with current standard of care focusing on supportive approaches [2,3]. Thus novel molecular strategies applied toward improving outcomes from ALI are desperately needed.
Transmigration of neutrophils into the lung represents a critical early pathophysiologic step in the development of ALI, as evidenced by ameliorated lung injury in some animal models in which neutrophils are eliminated [4,5]. However, application of anti-inflammatory strategies (e.g., high-dose steroids, cyclooxygenase (COX-2) inhibitors) in ALI treatment has not proven universally effective [6,7]. Possible reasons for failure of these approaches are multifactorial [8][9][10], including the inability to easily titrate a drug's effect, i.e., an ''all or none'' treatment effect. Therefore, targeted treatment approaches that modulate neutrophil migration in a titratable fashion during an inflammatory stimulus represent important avenues of investigation in ALI. One potential approach, which we examine herein, is to interfere predictably with transcription factor-DNA binding to target genes critical for neutrophil recruitment.
ALI frequently is triggered by bacterial infection, and there exists an important subset of patients who suffer infections by gram-negative bacteria. These pathogens elaborate endotoxin (lipopolysaccharide, LPS) that triggers a complex inflammatory cascade, including release of cytokines (e.g., tumor necrosis factor) and transcriptional up-regulation of numerous genes critical for the inflammatory response. A number of these cytokine-inducible genes share common regulatory elements in their promoter sequences and therefore are up-regulated by common transcription factors (TF). TFs such as NF-κB, IRF-1, and Stat-1 have been implicated as important regulators of a number of inducible genes in the inflammatory pathway, including P-selectin, E-selectin, vascular cell adhesion molecule (VCAM)-1, and nitric oxide synthase (NOS)-2 [11][12][13][14][15][16]. Specifically, P-selectin is upregulated by both LPS and TNF-α via similar transcriptional mechanisms [12,[17][18][19]. Beyond the simplified paradigm of TF-DNA binding leading to gene activation, elegant molecular studies have demonstrated that larger complexes of multiple transcription factors interacting with each other, as well as with the conserved DNA promoter motifs, are important for gene regulation and have been termed "enhanceosomes" [20]. Enhanceosome formation is facilitated by a group of proteins known as architectural transcription factors that are critical for gene regulation due to their ability to modify DNA conformation and to recruit DNA-binding of other TFs [21,22]. High mobility group A1 (HMGA1, formerly known as HMG-I/Y) is an architectural transcription factor that binds to AT-rich DNA in the minor groove via three "AT-hook" DNA-binding motifs. HMGA1 binding sites are often adjacent to or overlap with consensus binding sites for conventional TFs. The role of HMGA1 in enhanceosome formation has been studied most extensively in regulating expression of the virus-inducible interferon (IFN)-β gene [20,23,24]. Similarly, HMGA1 plays a critical role in facilitating binding of nuclear factor (NF)-κB to the human E-selectin promoter [14,15], and as demonstrated by our laboratory, to the NOS2 promoter [25,26], to enable transcriptional up-regulation of these genes following inflammatory cytokine induction. Thus AT-rich regions surrounding TF consensus binding sites within promoters of a number of genes within the inflammatory cascade play critical roles in the inflammatory response. The ability to interfere with AT-rich region DNA binding in a predictable fashion therefore has significant potential to regulate a subset of genes and modulate the inflammatory response.
Minor groove binding drugs (MGBs), including the antibiotic Distamycin A (Dist A), constitute a class of drugs that bind AT-rich sequences within the minor groove of DNA in a sequence-and conformation-specific fashion, thereby interfering with TF-DNA binding to AT-rich sequences [27,28]. HMGA1 is one of the few transcription factors known to bind exclusively to AT-rich DNA in the minor groove [29,30]. Therefore, our group and other investigators have used MGBs to study the effect of interfering with HMGA1-DNA binding in vitro [26,31,32]. In order to determine whether MGBs might modulate gene expression in a similar fashion in vivo, we examined the effect of MGBs on mortality and hypotension during murine endotoxemia [33]. Dist A conferred a significant survival benefit following intraperitoneal LPS and attenuated the hypotensive response during murine endotoxemia. This beneficial effect in vivo correlated with attenuation of NOS2 induction in tissues and in murine macrophages. Furthermore MGBs interfered specifically with TF-DNA binding in a selective fashion to a distinct AT-rich region of the NOS2 enhancer. Thus, the ability to regulate transcription of targeted genes during an inflammatory state represents a novel and powerful tool toward development of potential therapeutics.
Given the presence of similar regulatory regions in the promoters of the inducible genes E-selectin and P-selectin and their roles in neutrophil recruitment to the tissues, we now hypothesize that MGBs might likewise affect transcriptional regulation of these genes. Thereby, attenuated neutrophil recruitment to the tissue might account for the beneficial effect of MGBs in vivo, and MGBs may therefore represent a new class of anti-inflammatory molecules. To test this hypothesis, we examined the effect of MGBs on neutrophil recruitment during murine endotoxemia. We analyzed the effect of MGBs on neutrophil-endothelial interactions in vivo, followed by testing of the effects of MGBs on expression and promoter trans-activation of candidate genes (P-selectin, E-selectin) involved in the distinct steps of the inflammatory cascade. Furthermore, the effects of MGBs on DNA-protein interactions were characterized and revealed that HMGA1-DNA binding is critical for full induction of the P-selectin promoter, and, moreover, that inhibition of HMGA1-DNA binding in vivo at a novel AT-rich DNA site within the P-selectin promoter correlates with attenuated inflammation during murine endotoxemia.
Murine endotoxemia
Male C57BL/6 wild-type (WT) mice (Charles River Laboratories, 6-8 weeks of age) were injected with lipopolysaccharide (LPS) 40 mg/kg (Escherichia coli serotype O26:B6 endotoxin, Sigma) or vehicle (saline) intraperitoneally (i.p.). Mice also received Distamycin A (Dist A, Sigma; 25 mg/kg) i.p. or Vehicle (dimethylsulfoxide, DMSO, mixed with PBS, Sigma) 30 minutes prior to LPS administration, as described previously [33]. RNA was extracted from lung tissue 2 hours following LPS treatment [33]. In separate experiments, lung and liver tissue was processed for immunohistochemistry between 4 and 24 hours after LPS treatment and stained for Gr-1 (neutrophils) (Pharmingen), or P-selectin (Santa Cruz Biotechnology) [34]. All research involving animals was conducted according to the recommendations for the "Guide for the Care and Use of Laboratory Animals", and all animal studies were approved by the Harvard Medical Area Institutional Animal Care and Use Committee (IACUC). Animals were housed in pathogen-free barrier facilities and regularly monitored by the veterinary staff.
Cell culture and reagents
Bovine aortic endothelial cells (BAEC) and primary murine lung endothelial cells (MLEC) (generous gift of Dr. Augustine Choi) were isolated and cultured as described previously [35,36]. Murine bEnd.3 endothelial cells (American Type Culture Collection) were cultured as recommended. Human and murine recombinant tumor necrosis factor (TNF)-α were obtained from PeproTech Inc. (Rocky Hill, NJ).
Plasmid constructs
The mouse [−1379/−13] P-selectin luciferase reporter plasmid (mp1379LUC, cloned into p0LUC) was a generous gift of Rodger P. McEver [11]. The human [−578/+35] E-selectin pCAT3 reporter plasmid was a generous gift of Tucker Collins [37]. The E-selectin promoter sequence was subcloned into the Acc65I/XhoI sites of pGL2-Basic (Promega) [34], resulting in generation of a plasmid construct termed (Esel-luc). The human dominant-negative HMGA1 cDNA construct (mutant HMGI(mII,mIII)) lacks the ability to bind AT-rich DNA sequences in vitro but retains capacity for specific protein-protein interactions with other transcription factors [38]. This mutant construct was subcloned into the HindIII/KpnI sites of the pCMVFlag expression vector (Sigma-Aldrich Co., St. Louis, MO) with optimization of the Kozak consensus sequence [39] (CTTATG to GCCATG), resulting in generation of a plasmid construct termed (DNHMGA1-pCMVFlag) [40]. The p50, p65, and HMGA1 expression vectors were generated through cloning full-length cDNA sequences [25] into pcDNA3 (Invitrogen). Constructs were confirmed by sequencing, and where appropriate, expression was tested using the TNT T7 Quick Coupled Transcription/Translation System (Promega Corporation, Madison, WI).
Transient transfections of BAEC cells and reporter assays
P-selectin and E-selectin plasmids (1.0 μg) were transiently transfected into BAEC cells using FuGENE 6 transfection reagent (Roche Applied Science), as described previously [33]. Twelve hours following transfection of the reporter construct and a β-galactosidase expression vector (to normalize for luciferase activity), cells were conditioned in standard media containing 2% fetal bovine serum (FBS), then pre-treated with Dist A (25 μM) or Vehicle (ethanol, less than 1% final volume), followed by addition of LPS (1 μg/ml) or human TNF-α (10 ng/ml) 30 minutes later. Following treatment, cells were harvested 4 hours after LPS treatment and 12 hours after TNF-α treatment and assayed for luciferase activity (Promega Luciferase Assay System) and β-galactosidase [33]. In separate experiments, transient transfections were undertaken with the P-selectin promoter (0.5 μg), increasing concentrations of the DNHMGA1-pCMVFlag vector (0.5-1.0 μg, or an empty vector as a control), p50/p65 and HMGA1 expression vectors (0.25 μg each), and a β-galactosidase expression vector (to normalize for luciferase activity).
RNA isolation and Northern blot analysis
RNeasy Mini RNA isolation kit (Qiagen) was used to extract total RNA from mouse tissues according to the manufacturer's instructions. Northern blot analysis using a radiolabeled murine P-selectin probe (generous gift of Rodger P. McEver [12]), E-selectin probe (generous gift of Mukesh Jain [41]), or HMGA1 probe [26,42] was performed as previously described [33]. A radiolabeled rRNA 18S probe [33] was used to confirm equal loading. Quantitation of message for each gene relative to 18S was undertaken using ImageQuant software (GE Healthcare).
Electrophoretic Mobility Shift Assay (EMSA)
EMSAs were performed as described previously [33] with double-stranded oligonucleotide probes encoding an AT-rich sequence within a region previously demonstrated to be critical for induction of the murine P-selectin promoter [(−542 to −521): 5′-AGAAATTCTCCCTGGATTTTCC-3′] [12]. Nuclear extracts were harvested from BAEC cells or primary murine lung endothelial cells with or without 1 hour of exposure to human TNF-α or murine TNF-α (10 ng/ml), and nuclear protein was quantified by the Bradford dye-binding method (Bio-Rad). HMGA1 peptide (43 amino acids) was synthesized by Tufts Physiology Dept Core Facility (Boston, MA) and encompassed the AT-hook DNA binding domains (DBD)-2 and DBD-3 [43]. The radiolabeled probes were incubated with 10-20 μM Dist A (or ethanol as a vehicle control) for two hours prior to electrophoresis.
In separate experiments to test for presence of specific proteins within the TNF-α-inducible complex, the nuclear protein mixture was incubated for 30 min at room temperature with antibodies against NF-κB family members (p50 and p65, Santa Cruz), Ets-1 (unrelated antibody, Santa Cruz), or an isotype control IgG (Santa Cruz).
Chromatin Immunoprecipitation (ChIP)
ChIP analysis was performed as described previously [44] using the Chromatin Immunoprecipitation Assay Kit (Millipore) on bEnd.3 cells. The protocol was carried out according to the manufacturer's instructions, using approximately 2×10^6 cells harvested 3 hours after treatment with 10 ng/ml murine TNF-α and/or Dist A (50 μM, or the appropriate Vehicle control). Following formaldehyde crosslinking, cell lysates were sonicated 25 times for 15 sec each time to shear the genomic DNA to 200-1000 bp lengths. Immunoprecipitation was subsequently carried out with either an HMGA1 affinity-purified antibody [45] or an equivalent amount of rabbit IgG control. Following reversal of formaldehyde crosslinking, genomic DNA was purified using QIAquick PCR Purification Kit (Qiagen). For the positive control sample ("input"), a 1% volume of sample was removed before the immunoprecipitation step, followed by subsequent reversal of crosslinks and DNA purification as described. Precipitated (and "input" control) DNA was subjected to 35 cycles of PCR using primers to amplify a 246-basepair region of the murine P-selectin promoter (encompassing the AT-rich region at basepairs −542 to −521).
Intravital Microscopy (IVM) and Image Analysis
IVM and analysis was performed as described previously [46]. Briefly, mice were treated with i.p. Dist A (25 mg/kg, or Vehicle control (DMSO/PBS)) followed 30 min later by intrascrotal murine TNF-α injection (500 ng). Mice were then anesthetized with i.p. ketamine/xylazine, and the right cremaster muscle was prepared as previously described [46] and overlaid with sterile, bicarbonate-buffered Ringer's injection solution (pH 7.4). Fluorescently labeled endogenous circulating leukocytes (achieved through injection of 2 mg/ml rhodamine 6G via internal jugular vein catheter insertion) were visualized by a ×40 water-immersion objective (Zeiss Acroplan NA 0.75; Oberkochen, Germany) by video-triggered stroboscopic epi-illumination on an intravital microscope (IV-500; Mikron Instruments, San Marcos, CA). At least three venule trees per mouse were chosen, and 1-min recordings were collected of sub-segments of postcapillary and small collecting venules at 5-min intervals to assess baseline rolling. Measurements were taken hourly for four hours following TNF-α injection. The rolling fraction for each individual venule was calculated as percent leukocytes interacting with the vessel wall amongst total number of detected fluorescent cells passing the vessel during the observed period. Sticking efficiency is defined as the percentage of cells that engage in firm arrest (≥30 sec) among all leukocytes passing a microvessel during the analysis interval. Vessel cross-sectional diameters, velocities of individual rolling and non-interacting leukocytes, wall shear rate, and wall shear stress were determined as previously described [46].
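The two per-venule metrics just defined lend themselves to a direct computation. Below is a minimal sketch (an added illustration, not from the paper; the per-cell record layout is a hypothetical choice made here for clarity):

```python
def rolling_fraction(cells):
    """Percent of detected cells that interact with the vessel wall."""
    interacting = sum(1 for c in cells if c["interacts"])
    return 100.0 * interacting / len(cells)

def sticking_efficiency(cells, arrest_threshold_sec=30.0):
    """Percent of all passing cells that firmly arrest for >= 30 sec."""
    arrested = sum(1 for c in cells if c["arrest_sec"] >= arrest_threshold_sec)
    return 100.0 * arrested / len(cells)

# Example: three tracked leukocytes in one 1-min venule recording.
cells = [
    {"interacts": True, "arrest_sec": 45.0},   # rolls, then firmly arrests
    {"interacts": True, "arrest_sec": 0.0},    # rolls only
    {"interacts": False, "arrest_sec": 0.0},   # passes without interacting
]
print(rolling_fraction(cells))     # ~66.7: two of three cells interact
print(sticking_efficiency(cells))  # ~33.3: one of three firmly arrests
```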
Statistical analysis
Results for each treatment group are summarized as mean values ± standard error (SE). Comparison of results among multiple groups at different time points was performed by two-way analysis of variance (StatMost Software, Salt Lake City, UT). Comparisons between means of two groups were performed using an unpaired t-test. Statistical significance was defined as a p value < 0.05.
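For reference, the two-group comparison described here corresponds to a standard unpaired t-test; a minimal sketch follows (the numbers are illustrative placeholders, not the study's measurements):

```python
import numpy as np
from scipy import stats

# Hypothetical %Area staining values for two treatment groups.
lps_vehicle = np.array([0.29, 0.31, 0.27, 0.33, 0.28])
lps_dist_a = np.array([0.11, 0.13, 0.09, 0.12, 0.10])

# Summarize each group as mean +/- standard error of the mean.
for name, group in [("LPS/Vehicle", lps_vehicle), ("LPS/Dist A", lps_dist_a)]:
    print(f"{name}: {group.mean():.3f} +/- {stats.sem(group):.3f}")

# Unpaired t-test; significance defined as p < 0.05.
t_stat, p_value = stats.ttest_ind(lps_vehicle, lps_dist_a)
print("significant:", p_value < 0.05)
```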
Distamycin A Attenuates Endotoxin-Induced Lung and Liver Inflammation
To examine the effect of Dist A on inflammation during systemic endotoxemia, C57BL/6 male adult mice were treated intraperitoneally (i.p.) with vehicle, LPS/Vehicle or LPS/Dist A (n = 9 per treatment group). Lung (Fig. 1A) and liver (Fig. 1B) tissue were harvested following treatment, processed for immunohistochemistry, and subjected to Gr-1 (neutrophil) staining. The number of positively stained cells in the Vehicle, LPS/Vehicle and LPS/Dist A groups was quantified using determination of brown pixelated area by NIH Image Software. As reported by our group and others, systemic endotoxin (LPS/Vehicle) results in recruitment of inflammatory cells to the lung interstitium [47,48] and liver parenchyma [49][50][51] (p<0.05 compared with vehicle-treated mice in Fig. 1A-B). Interestingly, treatment with Dist A decreased endotoxin-induced lung inflammation at 4 hours following treatment, and this reduction remained at 24 hours after treatment.
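The staining quantification amounts to computing the fraction of positively stained (brown) pixels per field. A rough sketch of such a measurement is given below (an added illustration; the RGB thresholds are assumptions made here, not the NIH Image settings used in the study):

```python
import numpy as np

def percent_stained_area(rgb):
    """Percent of pixels classified as brown (DAB-like) staining.

    rgb: HxWx3 uint8 array. A pixel counts as brown when red clearly
    dominates blue and the pixel is not close to white.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    brown = (r > b + 30) & (g > b) & (r < 220)
    return 100.0 * brown.mean()

# Toy 2x2 field: one brown pixel among four (the rest near-white).
field = np.array([[[150, 100, 60], [250, 250, 250]],
                  [[240, 240, 245], [245, 245, 240]]], dtype=np.uint8)
print(percent_stained_area(field))  # 25.0
```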
Distamycin A Attenuates Inflammatory Cytokine-Induced Neutrophil-Endothelial Interactions
We hypothesized that the effect of Dist A in attenuating endotoxin-induced lung and liver inflammatory cell recruitment would correlate with reduced interactions of circulating neutrophils with the endothelial surface. To test this hypothesis, we subjected mice to intravital microscopy of the cremasteric muscle at 3-4 hours following treatment with systemic TNF-α/Dist A or TNF-α/Vehicle (Representative Still Photos, n = 3 mice per treatment group, Figure 2A). TNF-α was selected for these experiments as representing a key pro-inflammatory cytokine in the LPS pathway. Preliminary experiments and intravital microscopy analysis demonstrated that total circulating cell counts were not altered by administration of Dist A alone (data not shown). However, the number of adherent cells to the endothelial surface was visibly reduced with the addition of Dist A during a systemic inflammatory response, and formal analysis of the distinct phases of neutrophil-endothelial interaction [52] revealed a significant reduction in the rolling fraction (39.4±2.8% vs 23.2±2.5%, p = 0.0001) and sticking efficiency (21.3±3.5% vs 8.5±1.5%, p = 0.0004) in the TNF-α/Dist A-treated mice when compared with the TNF-α/Vehicle-treated mice (Fig. 2B).
Distamycin A Selectively Decreases Induction of P-selectin Expression and Promoter Activity
Given the role of the minor groove binder Dist A in decreasing rolling fraction and sticking efficiency of leukocytes in TNF-α-treated mice, we hypothesized that inducible expression and promoter activity of cytokine-induced genes critical for this effect would be reduced in the presence of Dist A. In considering the molecules critical for early neutrophil-endothelial interactions [52], we selected two adhesion molecules demonstrated previously to be inducible by cytokines in endothelial cells: P-selectin [12] and E-selectin [53]. We hypothesized that reduction of P-selectin message would be mediated at the transcriptional level and, therefore, that induction of P-selectin promoter activity would be selectively attenuated by Dist A. To test the effect of Dist A on cytokine induction of P- and E-selectin promoter activities, we performed transient transfections in bovine aortic endothelial cells (BAEC) using promoter-reporter constructs for each of these genes. Transfected cells were then treated with Vehicle, TNF-α/Vehicle, or TNF-α/Dist A; or with Vehicle, LPS/Vehicle, or LPS/Dist A and assessed for luciferase activity (with normalization for β-galactosidase activity) twelve hours or four hours after treatment, respectively (Figure 3A). As anticipated [12,53], both genes exhibited inducible promoter activity following treatment with TNF-α and LPS. Interestingly, while inducible promoter activity of E-selectin was not significantly altered with the addition of Dist A to TNF-α or LPS (p = NS), the induction of P-selectin promoter activity was markedly attenuated when transfected cells were treated with TNF-α/Dist A or LPS/Dist A as compared with TNF-α/Vehicle (78% reduction, p = 0.006) or LPS/Vehicle (52% reduction, p = 0.004). Given the critical importance of P-selectin in the rolling fraction and sticking efficiency phases of the adhesion cascade [52,54], these findings raise the important possibility that Dist A mediates reduced neutrophil-endothelial interaction through transcriptional down-regulation of P-selectin promoter activity. Moreover, in contrast to P-selectin, and similar to the findings of other investigators [55], cytokine-induced E-selectin promoter activity is not significantly altered in the presence of Dist A. These results lend further support to the importance of P-selectin in mediating the observed effects of Dist A on attenuated neutrophil recruitment in our model.
To assess the effect of Dist A on inducible expression of these genes, lung tissues were harvested from mice at two hours following treatment with Vehicle, LPS/Vehicle, or LPS/Dist A. Tissues were then subjected to RNA extraction and Northern blotting using radiolabeled P-selectin or E-selectin probes (and an 18S probe as a loading control) (Fig. 3B). While both P-selectin and E-selectin expression increased following LPS treatment (21.5±1.5-fold for P-selectin and 2.0±1.5-fold for E-selectin), only P-selectin expression was substantially attenuated following Dist A treatment (26.0±3.2% reduction for P-selectin vs 4.0±0.005% reduction for E-selectin).
Distamycin A Attenuates P-selectin Tissue Expression in Lung and Liver During Endotoxemia
Given the presence of P-selectin expression within vascular endothelial cells as well as within platelets, we examined lung and liver tissue sections for the effect of Dist A on inducible P-selectin expression within the lung and liver parenchyma that has been described by others during endotoxemia [49][50][51]. Lung and liver tissue was harvested from mice following treatment with Vehicle, LPS/Vehicle, or LPS/Dist A. Lung (Fig. 4A) and liver (Fig. 4B) tissues were then subjected to immunohistochemistry using a P-selectin antibody. Analysis of lung sections revealed a significant LPS-induced increase in P-selectin staining within the lung vasculature that was reduced in the presence of Dist A at 4 hours after treatment, and this reduction remained at 24 hours after treatment (p<0.05 compared with LPS/Vehicle slides). Similarly, significant reduction in P-selectin staining in the liver was seen with LPS/Dist A compared with LPS alone at 4 hours after treatment (0.29±0.02 vs 0.11±0.02 %Area per 200× field for LPS/Vehicle vs LPS/Dist A, respectively, p<0.05). By 24 hours after treatment, this trend persisted (0.07±0.04 vs 0.01±0.005 %Area per 200× field for LPS/Vehicle vs LPS/Dist A, respectively, p = NS). Notably, there was a significant overall reduction of P-selectin staining in all groups at 24 hours after treatment, compared with the four-hour timepoint.
Distamycin A disrupts binding of an inducible protein-DNA complex containing NF-κB to an AT-rich region of the P-selectin promoter
Given the effects of the minor groove binder Dist A in inhibiting inducible P-selectin expression in tissues and promoter activity in endothelial cells, we next tested the hypothesis that Dist A decreases protein-DNA binding to the P-selectin promoter. We first examined HMGA1 expression in lung tissue of mice following vehicle, LPS/vehicle, or LPS/Dist A to determine whether Dist A directly affected HMGA1 message levels (Fig. 5A). We hybridized the same blot as in Fig. 3B for HMGA1, and we found no significant effect of LPS or Dist A on HMGA1 expression (1.25±0.075-fold change for LPS/veh vs vehicle and 1.31±0.25-fold change for LPS/Dist A vs vehicle, respectively; p = NS). Next, to test DNA-protein binding activity, we examined an AT-rich DNA region within the P-selectin promoter that has previously been demonstrated to be critical for induction of promoter activity (basepairs −542 to −521). Pan et al. demonstrated that members of the NF-κB family (p50/p65 subunits) are part of an inducible binding complex that forms within this promoter region [12]. Moreover, these authors speculated that HMGA1 might also bind this AT-rich region and facilitate NF-κB-binding through a mechanism similar to that described for upregulation of a number of other genes in the inflammatory cascade, including NOS2 and E-selectin [13,14,20,24,25]. We therefore set out to determine whether Dist A would disrupt a previously described inducible binding complex that forms at an AT-rich region of the P-selectin promoter and, moreover, whether HMGA1 binds with NF-κB family members to this AT-rich region.
Nuclear extracts harvested from BAEC cells or primary murine lung endothelial cells (MLEC) two hours following treatment with vehicle or TNF-α (human TNF-α for BAECs and murine TNF-α for MLECs) were electrophoresed with radiolabeled probes spanning the AT-rich region (basepairs −542 to −521) from the P-selectin promoter (Figure 5). As described previously [12], an inducible "doublet" complex was observed in BAEC and MLEC cells following treatment with TNF-α (Fig. 5B lanes 3 and 9-10 and Fig. 5C, lane 3). Next, Dist A (or Vehicle control, V) was incubated with the radiolabeled probe prior to gel electrophoresis. Dist A decreased binding within the TNF-α-inducible complex, particularly the upper band of the "doublet" (Fig. 5B, lanes 4-5 and 11-12), and use of identical and non-identical cold competitors (IC, NIC) confirmed the specificity of this inducible complex, with elimination of the binding complex with the IC (Fig. 5B, lane 6) and retention of the inducible complex with the NIC (Fig. 5B, lane 7).
Incubation of the nuclear extracts from TNF-α-treated cells with NF-κB family member antibodies (p50 and p65) revealed presence of these proteins in the complex as indicated by supershifted and/or disrupted bands (Fig. 5C, lanes 4-5), while no supershift or disruption of the binding complex was observed with an unrelated antibody (Ets-1, Fig. 5C, lane 6) or an isotype control antibody (IgG, Fig. 5C, lane 7).
HMGA1 is critical for induction of P-selectin promoter activity
Given that Dist A selectively decreases P-selectin expression and promoter activity (Fig. 3-4) and, moreover, that Dist A disrupts formation of an inducible binding complex at an AT-rich region of the P-selectin promoter containing HMGA1 and NF-κB (Fig. 5), we hypothesized that HMGA1 binds to the P-selectin promoter in this region and plays a critical role in P-selectin induction. Conversely, inhibition of HMGA1 binding would therefore be expected to attenuate induction of P-selectin promoter activity. To test this hypothesis, we examined whether HMGA1 would facilitate NF-κB induction of the P-selectin promoter (Fig. 6A) and, conversely, whether a dominant-negative form of HMGA1 that does not bind DNA (DN-HMGA1) [38] would decrease TNF-α-induced P-selectin promoter activity (Fig. 6B).

Figure 3. Distamycin A selectively decreases induction of P-selectin promoter activity and expression. A. BAEC cells were transiently transfected with promoter-reporter constructs for P-selectin and E-selectin. Transfected cells were treated with Vehicle, TNF-α/Vehicle, or TNF-α/Dist A, then harvested twelve hours after treatment and assessed for luciferase activity (with normalization for β-galactosidase levels). Similar experiments were performed in which transfected cells were treated with Vehicle, LPS/Vehicle, or LPS/Dist A and harvested four hours after treatment. Fold change was assessed relative to normalized values of "1" for the Vehicle-treated condition for each construct. These experiments were repeated 3 separate times with duplicate wells for each condition. (*p<0.05 compared with Vehicle for each construct; **p<0.05 compared with TNF-α/Vehicle or LPS/Vehicle for P-selectin). B. Lung tissue was harvested from wild type mice two hours following treatment with Vehicle (Veh), LPS/Vehicle, or LPS/Dist A (Dist), then subjected to RNA extraction and Northern blotting using a radiolabeled probe for P-selectin or E-selectin (and an 18S probe as loading control). This experiment was repeated two separate times. doi:10.1371/journal.pone.0010656.g003
BAEC cells were first transiently transfected with a P-selectin promoter-reporter construct with addition of expression vectors for NF-κB family members (p50/p65) and HMGA1 (Fig. 6A). No significant change in basal P-selectin promoter activity was seen with addition of HMGA1 alone, as has been described previously with other genes [25]. Addition of p50/p65 resulted in significant upregulation of the P-selectin promoter (p<0.05 vs. P-selectin promoter alone). Interestingly, the addition of HMGA1 along with p50/p65 resulted in synergistic upregulation of the P-selectin promoter, beyond that of p50/p65 alone (p<0.05 vs. P-selectin promoter alone). Thus, HMGA1 facilitates upregulation of P-selectin promoter activity by p50/p65.
To investigate the role of HMGA1 in upregulating P-selectin promoter activity, BAEC cells were transiently transfected with a P-selectin promoter-reporter construct and increasing concentrations of DN-HMGA1 (or empty vector control) (Fig. 6B). Transfected cells were treated with TNF-α, then harvested to assess for P-selectin promoter activity. We observed a significant reduction in TNF-α-induced P-selectin promoter activity in a dose-dependent fashion with increasing concentrations of DN-HMGA1 (p<0.05 for 0.5 μg and 1.0 μg DN-HMGA1 when compared with empty vector control). Of note, the DN-HMGA1 construct had no significant effect on the promoter activity of another unrelated gene (heme oxygenase-1, data not shown). Thus, HMGA1 is critical for TNF-α-induced P-selectin promoter activity.
Distamycin A blocks in vitro and in vivo HMGA1 binding to a distinct AT-rich region of the P-selectin promoter
We next set out to further test the hypothesis that Dist A attenuates induction of P-selectin promoter activity through inhibiting binding of HMGA1 to an AT-rich DNA region. To do this, we carried out electrophoretic mobility shift assays (EMSAs) using a synthesized protein (to assess in vitro binding, Fig. 6C) and chromatin immunoprecipitation experiments using murine endothelial cells (to assess in vivo binding, Fig. 6D). First, EMSAs were performed using HMGA1 peptide (containing the AT-hook DNA binding domains, see Methods) and a radiolabeled probe spanning the AT-rich region of the P-selectin promoter (basepairs −542 to −521, as above). Significant binding of the HMGA1 peptide to this site was observed (Fig. 6C, lane 2), while addition of Dist A significantly reduced in vitro binding of the HMGA1 peptide (Fig. 6C, lane 3).
Next, to examine in vivo binding of HMGA1 to the AT-rich region of the P-selectin promoter, chromatin immunoprecipitation (ChIP) was performed using murine endothelial cells (bEnd.3) and an affinity-purified HMGA1 antibody [45]. We and others have described that HMGA1 can function as an architectural transcription factor that binds constitutively to AT-rich DNA. With cytokine treatment, HMGA1 then facilitates binding of other transcription factors to form an inducible transcription factor complex, or enhanceosome [16]. Therefore, we hypothesized that HMGA1 would bind the P-selectin promoter pre- and post-TNF-α treatment, while addition of Dist A would inhibit HMGA1 binding in vivo. To test this hypothesis, ChIP was performed on cells treated with vehicle, TNF-α/Vehicle, and TNF-α/Dist A (Fig. 6D). Immunoprecipitated DNA (as well as "input" control DNA) was amplified by PCR with primers spanning the AT-rich region of the P-selectin promoter described above, with presence of a band indicating in vivo binding of the designated protein to the amplified DNA segment. No significant in vivo DNA-protein binding was detected with use of the IgG control antibody (Fig. 6D, lanes 1
Discussion
This study reports three important new findings. First, we present data supporting a novel anti-inflammatory strategy in vivo through using a minor groove binding drug to specifically interfere with DNA-protein binding in a targeted manner. Second, we demonstrate an important role for the architectural transcription factor HMGA1 in facilitating full induction of P-selectin promoter transactivation and inflammatory-induced gene expression. Third, we demonstrate that using transcriptional regulation to target select, similarly regulated inducible genes with common promoter motifs can be effective in improving outcomes in murine models of critical illness. Furthermore, our data supports the intriguing concept that minor groove binders can serve as an important in vivo tool to dissect molecular mechanisms of inflammatory disease processes.
Our previous work employed MGBs in vitro [25,26,33,42] and in vivo [33] to confirm an important role for TF-binding to AT-rich DNA regions in transactivation of the NOS2 promoter and in attenuating mortality and hypotension during murine endotoxemia. Numerous genes critical for the inflammatory cascade share similar promoter regulatory regions with NOS2, and we therefore hypothesized that MGBs would attenuate the inflammatory response through similarly interfering with TF-binding to ATrich DNA regions of promoters of genes critical for the inflammatory response. Recruitment of inflammatory cells to the tissue occurs via a complicated cascade of events, each step of which is highly regulated [52]. Leukocytes traversing the vasculature at rapid speed are lured to the activated endothelium through initial tethering and rolling mediated predominantly by members of the selectin family of adhesion molecules (P-selectin, E-selectin, and L-selectin). Leukocytes are then activated with subsequent firm adhesion to the endothelial surface facilitated in large part through interaction of immunoglobulin family members (on endothelial cell surface) with integrins (on leukocytes). Transmigration of leukocytes across the endothelial surface and into the tissue is likely mediated by a gradient of chemoattractants.
Given the role of MGBs in attenuating inducible gene expression [33] and the observed reduction of rolling fraction and sticking efficiency of leukocytes to the endothelial surface (Figure 2), we focused our studies on inducibly expressed genes previously demonstrated to play a role in the early tethering and rolling phases of leukocyte-endothelial surfaces. Therefore, we examined the effects of MGBs on P-selectin and E-selectin, both of which are inducibly expressed in endothelial cells ( [52], Figure 3). Of note, L-selectin is constitutively expressed on leukocytes [56]. While numerous members of the adhesion molecule families provide a contributory role toward leukocyte sticking and rolling [54,[56][57][58][59][60][61], studies of knockout animals have revealed that Pselectin plays the most critical and pronounced role in early leukocyte tethering and rolling [54,62]. Therefore, the fact that induction of P-selectin can be selectively attenuated in this model ( Figure 3) represents an important example of the potential in vivo benefits of exquisite transcriptional control. Similar to the observed effects of MGBs on NOS2 in this model [33], inflammatory cell recruitment and P-selectin induction was decreased but not entirely abolished with MGB treatment (Figure 3). Interestingly, other investigators have reported that NOS2 does not play a role in regulating expression of endothelial adhesion molecules, suggesting that our findings regarding Pselectin are independent of the regulation of NOS2 expression by Dist A in this model [63]. Furthermore, P-selectin exhibits constitutive as well as inducible expression within endothelial cells [11,12] such that the effect of leukocyte-endothelial interactions attributable to P-selectin is reduced but not eliminated with MGBs. Thus, the ability to selectively control gene transcription through interfering with TF-DNA binding presents the potential for a ''titratable'' anti-inflammatory effect, versus the ''all or none'' effect of more traditional anti-inflammatory approaches.
Interestingly, P-selectin is fairly unique as an adhesion molecule, as it is expressed not only in endothelial cells, but also in platelets, and roles of P-selectin in different locations has been a matter of recent debate. While numerous studies reported an important role for endothelial P-selectin expression in lung and liver inflammation and physiologic injury during endotoxemia [49][50][51], more recently, interest has arisen in the importance of platelet P-selectin expression in development of acid-induced lung injury [64]. While we cannot fully exclude the role of platelet P-selectin in our studies, examination of histologic sections ( Figure 4) and our studies in endothelial cells in vitro (Figures 5-6) support a significant contribution of endothelial P-selectin to our observations. Moreover, a recent study reported that P-selectin glycoproteinligand-1 regulates lung neutrophil recruitment independently of circulating platelets in a murine model of abdominal sepsis (cecal ligation and puncture model [65]). In aggregate, our findings in conjunction with those in the literature support the intriguing possibility that mechanisms of tissue neutrophil recruitment during indirect lung injury (e.g., systemic endotoxin, abdominal sepsis) are distinct from those that predominate during direct injury. Tissue neutrophil recruitment during indirect lung injury might rely more probe as loading control). This experiment was repeated two separate times. B. Nuclear extracts from BAEC cells (Lanes 1-7) and primary murine lung endothelial cells (MLEC, Lanes 8-12) (''Nuc Ext'') with (Lanes 3-7, [9][10][11][12] or without (Lanes 2,8) TNF-a stimulation were subjected to electrophoretic mobility shift assays (EMSA) using a radiolabeled probe spanning the AT-rich region of the P-selectin promoter (basepairs 2542 to 2521). Lane 1 represents the radiolabeled probe without addition of nuclear extract. TNF-a-treated nuclear extract was additionally incubated and electrophoresed with the radiolabeled probe and vehicle (V, lanes 3 and 10 (or in the presence of TNF-a without vehicle, Lane 9)) or increasing concentrations of Dist A (D 1 = 10 mM, D 2 = 20 mM, Lanes 4-5, 11-12) as well as with an identical competitor (IC, Lane 6) and a non-identical competitor (NIC, Lane 7). (* represents the inducible, specific complex seen following TNF-a treatment; ''R'' represents disruption of the TNF-a-inducible complex following addition of Dist A (Lanes 4-5 compared with Lane 3 and Lanes 11-12 compared with Lane 9-10). All of the binding studies were repeated at least two separate times. C. Nuclear extracts from BAEC cells (''Nuc Ext'') with (Lanes 3-7) or without (Lane 2) TNF-a stimulation were subjected to electrophoretic mobility shift assays (EMSA) using a radiolabeled probe spanning the AT-rich region of the P-selectin promoter (basepairs 2542 to 2521). Lane 1 represents the radiolabeled probe without addition of nuclear extract. TNF-a-treated nuclear extract was additionally incubated and electrophoresed with the radiolabeled probe and antibodies to the NF-kB family members p50 and p65 (lanes 4-5), or unrelated and control antibodies (Ets-1 and IgG control respectively, lanes 6-7). (* represents the inducible, specific complex seen following TNF-a treatment; ''r'' represents supershifted band/disruption of the TNF-a-inducible complex following addition of p50 and p65 antibodies (Lanes 4-5 compared with Lanes 6-7). All of the binding studies were repeated at least two separate times. 
doi:10.1371/journal.pone.0010656.g005 Figure 6. HMGA1 binds to the P-selectin promoter and is critical for full induction of P-selectin promoter activity. A. BAEC cells were transiently transfected with a P-selectin promoter-reporter construct with the addition of a blank expression vector, an expression vector for HMGA1, and/or expression vectors for NF-kB family members (p50/p65). Transfected cells were harvested and assessed for luciferase activity (normalized for b-galactosidase content). Results are expressed as fold-change in luciferase activity relative to transfection with the P-selectin promoter and a blank expression vector. B. BAEC cells were transiently transfected with a P-selectin promoter-reporter construct and increasing concentrations of a vector expressing a dominant-negative form of HMGA1 (DN-HMGA1). Transfected cells were stimulated with TNF-a, then harvested and assessed for luciferase activity (normalized for b-galactosidase content). Results are expressed for each transfection condition as fold-change in luciferase activity as a result of TNF-a stimulation. (*p,0.05 for 0.5 mg of DN-HMGA1 as compared with empty vector control; **p,0.05 for 1.0 mg of DN-HMGA1 as compared with empty vector control). This experiment was repeated three separate times, with each condition performed in triplicate. C. An electrophoretic mobility shift assay (EMSA) was performed using the HMGA1 peptide and a radiolabeled probe spanning the AT-rich region of the Pselectin promoter (basepairs 2542 to 2521) without (Lane 2) or with Dist A (10 mM, Lane 3). Lane 1 represents the radiolabeled probe in the absence of incubation with protein. (* represents the HMGA1-DNA complex in Lane 2 which is diminished in intensity following addition of Dist A in Lane 3).
heavily on endothelial P-selectin expression, while platelet Pselectin expression may play a more prominent role during direct lung injury.
Our results indicate that MGBs can serve as a useful tool to probe the functional effects of targeted DNA sequences in complicated biological systems in vivo. Through observing the effects of MGBs in inhibiting HMGA1-binding to a targeted AT-rich region of the P-selectin promoter and in demonstrating the effects of the dominant-negative HMGA1 construct in attenuating induction of the P-selectin promoter by TNF-α, we were able to derive an important role for HMGA1 in regulating P-selectin promoter induction (Figures 5-6). We acknowledge that examination of the effect of MGBs on P-selectin induction in the presence of HMGA1 knockdown would provide more direct evidence that HMGA1 expression is essential for the observed effects of Dist A. However, these experiments were not technically feasible, given the interference of the siRNA reagents with the MGB agents.
Other investigators have elegantly demonstrated in Drosophila that targeted minor-groove binding drugs can be fed to flies that interfere with binding of the Drosophila HMGA1 orthologue (termed D1) to specific AT-rich DNA sequences and result in specific gain- and loss-of-function phenotypes [66][67][68][69]. In these studies, small cell-permeable molecules were synthesized based upon the existing known structure of Distamycin A and were designed with the goal of developing improved tools to elucidate the role of architectural DNA regions in biology in model systems [66]. In recent years, there has been increasing interest in development of derivatives of MGBs as human chemotherapeutics to allow targeted delivery of DNA-modifying agents [69,70]. Furthermore, improvements in techniques to elucidate molecular structure have led to a growing literature on detailed characterization of MGB-DNA binding as well as on development of novel compounds with optimized DNA-binding and functional properties [71][72][73][74][75]. Our data supports the premise that novel small molecules interfering in a targeted way with sequence- and conformation-specific DNA binding can be studied at the molecular and physiologic level in higher order organisms subjected to models of human disease. Such approaches hold promise for development of novel treatment strategies for critical illness. To our knowledge, the present study and our prior work [33,43] represent the first in vivo applications of MGBs to murine models of critical illness.
In summary, we now demonstrate that MGBs can interfere in a targeted manner with HMGA1 binding to the P-selectin promoter in vivo, resulting in attenuated P-selectin induction and decreased lung and liver inflammation during murine endotoxemia. These findings, in combination with our previous data showing improvement in mortality and hypotension during murine endotoxemia attributed to attenuated NOS2 induction [33], supports the interesting possibility that there exist select genes regulated by common promoter motifs that can be advantageously regulated to improve outcomes from critical illness. With a growing appreciation of conserved regulatory motifs throughout the human genome and with increasing ability to catalogue this data [76], there exists a real possibility of molecularly targeted treatment strategies that can be applied in individualized ways to complex human disease. We acknowledge that MGBs may have other effects on an organism that remain to be characterized. However, our work represents implementation of MGBs as a molecular tool to derive in vivo biological characterization of critical illness that ultimately can be applied to the development of novel therapeutics in a field where effective treatment approaches are desperately needed. | 2014-10-01T00:00:00.000Z | 2010-05-14T00:00:00.000 | {
"year": 2010,
"sha1": "e670893c80c3b052035767d241ac815c4a0296e5",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0010656&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7423c8c300a1d0fd8d1df25d8578e44124f14f5c",
"s2fieldsofstudy": [
"Biology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
221970153 | pes2o/s2orc | v3-fos-license | Ordinal Bayesian incentive compatibility in random assignment model
We explore the consequences of weakening the notion of incentive compatibility from strategy-proofness to ordinal Bayesian incentive compatibility (OBIC) in the random assignment model. If the common prior of the agents is a uniform prior, then a large class of random mechanisms are OBIC with respect to this prior -- this includes the probabilistic serial mechanism. We then introduce a robust version of OBIC: a mechanism is locally robust OBIC if it is OBIC with respect to all independent priors in some neighborhood of a given independent prior. We show that every locally robust OBIC mechanism satisfying a mild property called elementary monotonicity is strategy-proof. This leads to a strengthening of the impossibility result in Bogomolnaia and Moulin (2001): if there are at least four agents, there is no locally robust OBIC and ordinally efficient mechanism satisfying equal treatment of equals.
Introduction
This paper explores the consequences of weakening incentive compatibility from strategy-proofness to ordinal Bayesian incentive compatibility in the random assignment model (one-sided matching model). Ordinal Bayesian incentive compatibility (OBIC) requires that the truth-telling expected share vector of an agent first-order stochastically dominates the expected share vector from reporting any other preference. It is the natural analogue of Bayesian incentive compatibility in an ordinal mechanism. This weakening of strategy-proofness was proposed by d'Aspremont and Peleg (1988). We study OBIC by considering mechanisms that allow for randomization in the assignment model.
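To make the dominance requirement concrete, the following is a minimal sketch (an added illustration, not taken from the paper; the data layout is a hypothetical choice): the truthful expected share vector must place at least as much cumulative probability on every top-k set of the true preference as the share vector induced by any misreport.

```python
def fosd(true_pref, p, q, tol=1e-12):
    """True iff share vector p first-order stochastically dominates q
    with respect to the strict ranking true_pref (best object first).
    p and q map each object to its expected probability share."""
    cum_p = cum_q = 0.0
    for obj in true_pref:  # walk down the ranking, best object first
        cum_p += p[obj]
        cum_q += q[obj]
        if cum_p < cum_q - tol:  # q puts strictly more mass on some top-k set
            return False
    return True

# Under OBIC, truthful shares must dominate the shares from any misreport.
true_pref = ["a", "b", "c"]
truthful = {"a": 0.5, "b": 0.3, "c": 0.2}
misreport = {"a": 0.4, "b": 0.3, "c": 0.3}
print(fosd(true_pref, truthful, misreport))  # True: 0.5 >= 0.4 and 0.8 >= 0.7
```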
In the random assignment model, the set of mechanisms satisfying ex-post efficiency and strategy-proofness is quite rich. 1 Despite satisfying such strong incentive properties, all of them either fail to satisfy equal treatment of equals, a weak notion of fairness, or ordinal efficiency. Indeed, Bogomolnaia and Moulin (2001) propose a new mechanism, called the probabilistic serial mechanism, which satisfies equal treatment of equals and ordinal efficiency.
However, they show that it fails strategy-proofness, and no mechanism can satisfy all these three properties simultaneously if there are at least four agents. A primary motivation for weakening the notion of incentive compatibility to OBIC is to investigate if we can escape this impossibility result.
We show two types of results. First, if the (common) prior is a uniform probability distribution over the set of possible preferences, then every neutral mechanism satisfying a mild property called elementary monotonicity is OBIC. 2 An example of such a mechanism is the probabilistic serial mechanism. This is a positive result and provides a strategic foundation for the probabilistic serial mechanism. In particular, it shows that there exist ordinally efficient mechanisms satisfying equal treatment of equals which are OBIC with respect to the uniform prior.
Second, we explore the implications of strengthening OBIC as follows. A mechanism is locally robust OBIC (LROBIC) with respect to an independent and identical prior if it 1 Pycia and Ünver (2017) characterize the set of deterministic, strategy-proof, Pareto efficient, and nonbossy mechanisms in this model. This includes generalizations of the top-trading-cycle mechanism.
2 Neutrality is a standard axiom in social choice theory which requires that objects are treated symmetrically. Elementary monotonicity is a monotonicity requirement of a mechanism. We define it formally in Section 4.
is OBIC with respect to every independent and identical prior in its "neighborhood". The motivation for such a requirement of robustness in the mechanism design literature is now well-known, and referred to as the Wilson doctrine (Wilson, 1987). We show that every LROBIC mechanism satisfying elementary monotonicity is strategy-proof. An immediate corollary of this result is that the probabilistic serial mechanism is not LROBIC (though it is OBIC with respect to the uniform prior). As a corollary, we can show that when there are at least four agents, there is no LROBIC and ordinally efficient mechanism satisfying equal treatment of equals. This strengthens the seminal impossibility result of Bogomolnaia and Moulin (2001) by replacing strategy-proofness with LROBIC.
Both our results point to very different implications of OBIC in the presence of elementary monotonicity: if the prior is uniform, this notion of incentive compatibility is very permissive; but if we require OBIC with respect to a set of independent and identical priors in any neighborhood of a given prior, this notion of incentive compatibility is very restrictive.
Related literature
There is a fairly large literature on random assignment problems. We summarize it below.
The notion of incentive compatibility that we use, OBIC, has been used in voting models by Majumdar and Sen (2004); Bhargava et al. (2015); Mishra (2016); Hong and Kim (2018) to escape the dictatorship results in (Gibbard, 1973; Satterthwaite, 1975; Gibbard, 1977). All these papers use deterministic mechanisms in voting models, whereas we apply OBIC to the random assignment model. Majumdar and Sen (2004) show that every deterministic neutral voting mechanism satisfying elementary monotonicity is OBIC with respect to uniform priors. Our Theorem 1 shows that this result generalizes to the random assignment model. Mishra (2016) generalizes this result to some restricted domains of voting (like the single-peaked domain). He shows that in the deterministic voting model, elementary monotonicity and OBIC with respect to a "generic" prior is equivalent to strategy-proofness in a variety of restricted domains -- see also Hong and Kim (2018) for a strengthening of this result.
Though these results are similar to our Theorem 2, there are significant differences. First, we consider randomization while these results are only for deterministic mechanisms. Our notion of locally robust OBIC is incomparable to OBIC with respect to generic priors used in these papers. Second, ours is a model of private good allocation (random assignment), while these papers deal with the voting model. Bogomolnaia and Moulin (2001) introduce a family of mechanisms in the random assignment model. They call these the simultaneous eating algorithms, which generate ordinally efficient random assignments, a stronger notion of efficiency than ex-post efficiency. 3 The probabilistic serial mechanism belongs to this family and it is anonymous. However, it is not strategy-proof. In fact, Bogomolnaia and Moulin (2001) show that there is no ordinally efficient and strategy-proof mechanism satisfying equal treatment of equals when there are at least four agents. 4 There is a large literature that provides strategic foundations to the probabilistic serial (PS) mechanism. Bogomolnaia and Moulin (2001) show that the PS mechanism satisfies weak strategy-proofness. Their notion of weak strategy-proofness requires that the manipulation share vector cannot first-order-stochastically-dominate the truth-telling share vector. Bogomolnaia and Moulin (2002) study a problem where agents have an outside option.
When agents have the same ordinal ranking over objects but the position of the outside option in the ranking of objects is the only private information, they show that the PS mechanism is strategy-proof. Other contributions in this direction include Liu (2019); Liu and Zeng (2019), who identify domains where the probabilistic serial mechanism is strategy-proof. Che and Kojima (2010) show that the PS mechanism and the random priority mechanism (which is strategy-proof) are asymptotically equivalent. Similarly, Kojima and Manea (2010) show that when sufficiently many copies of an object are present, then the PS mechanism is strategy-proof. Thus, in large economies, the PS mechanism is strategy-proof. Balbuzanov (2016) introduces a notion of strategy-proofness which is stronger than weak strategy-proofness and shows that the PS mechanism satisfies it. His notion of strategy-proofness is based on the "convex" domination of lotteries, and hence, called convex strategy-proofness. Mennle and Seuken (2021) define a notion called partial strategy-proofness, which is weaker than strategy-proofness and show that the PS mechanism satisfies it. They show that strategy-proofness is equivalent to upper invariance, lower invariance, and elementary monotonicity (they call it swap monotonicity). Their notion of partial strategy-proofness is equivalent to upper invariance and elementary monotonicity, and hence, it is weaker than strategy-proofness.
3 Katta and Sethuraman (2006) extend the simultaneous eating algorithm to allow for ties in preferences. 4 With three agents, the random priority mechanism satisfies these properties.
The main difference between these weakenings of strategy-proofness and ours is that OBIC is a prior-based notion of incentive compatibility. It is the natural analogue of Bayesian incentive compatibility in an ordinal environment. Ehlers and Massó (2007) study OBIC in a two-sided matching problem. Their main focus is on OBIC mechanisms that select a stable matching. They characterize the beliefs for which such a mechanism exists. There is a literature in computer science studying computational aspects of manipulation of the PS rule -- see Aziz et al. (2014, 2015) and references therein.
Model
Assignments. There are n agents and n objects. 5 Let N := {1, . . . , n} be the set of agents and A be the set of objects. We define the notion of a feasible assignment first.
Definition 1 An assignment is an n × n matrix L such that L_ia ≥ 0 for every i ∈ N and every a ∈ A, ∑_{a ∈ A} L_ia = 1 for every i ∈ N, and ∑_{i ∈ N} L_ia = 1 for every a ∈ A.
Hence, an assignment is a bistochastic matrix. For any assignment L, we write L_i as the share vector of agent i. 6 Formally, a share vector is a probability distribution over the set of objects. For any i ∈ N and any a ∈ A, L_ia denotes the "share" of agent i of object a. The second constraint of the assignment definition requires that the total share of every agent is 1. The third constraint requires that every object is completely assigned.
Let L be the set of all assignments.
An assignment L is deterministic if L_ia ∈ {0, 1} for all i ∈ N and for all a ∈ A. Let L^d be the set of all deterministic assignments. By the Birkhoff-von Neumann theorem, for every L ∈ L, there exists a set of deterministic assignments in L^d whose convex combination equals L. 5 All our results extend even if the number of objects is not the same as the number of agents. We assume this only to compare our results with the random assignment literature, where this assumption is common. 6 Whenever we say an assignment, we mean a random assignment from now on.
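To make the decomposition concrete, here is a small worked example of our own (not taken from the paper): the bistochastic matrix below is a convex combination of two deterministic assignments,

```latex
L = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 0 & 1/2 \\ 0 & 1/2 & 1/2 \end{pmatrix}
  = \frac{1}{2}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}
  + \frac{1}{2}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},
```

so L can be implemented by drawing one of the two permutation matrices with probability 1/2 each.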
Preferences.
A preference is a strict ordering of A. The preference of an agent i will be denoted by P i . The set of all preferences over A is denoted by P. A preference profile is P ≡ (P 1 , . . . , P n ), and we will denote by P −i the preference profile P excluding the preference P i of agent i. We write aP i b to denote that a is strictly preferred over b in preference P i .
Prior. We assume that the preference of each agent is independently and identically drawn using a common prior µ, which is a probability distribution over P. From now on, whenever we say a prior, we refer to such an independent and identical prior. We will denote by µ(P_i) the probability with which agent i has preference P_i. With some abuse of notation, we will denote the probability with which agents in N \ {i} have preference profile P_-i as µ(P_-i) := ∏_{j ≠ i} µ(P_j).
Ordinal Bayesian incentive compatibility
Our solution concept is Bayes-Nash equilibrium but we restrict attention to ordinal mechanisms, i.e., mechanisms where we only elicit ranking over objects from each agent. Hence, whenever we say mechanism, we refer to such ordinal mechanisms. 7 Formally, a mechanism is a map Q : P n → L. A mechanism Q assigns a share vector Q i (P) to agent i at every preference profile P.
Before discussing the notions of incentive compatibility, it is useful to think how agents compare share vectors in our model. Fix agent i with a preference P_i over the set of objects A. Denote the k-th ranked object in P_i as P_i(k). Consider two share vectors π, π′. For every a ∈ A, we will denote by π_a and π′_a the share assigned to object a in π and π′ respectively. We will say π first-order-stochastically-dominates (FOSD) π′ according to P_i if, for every k ∈ {1, . . . , n},

∑_{j=1}^{k} π_{P_i(j)} ≥ ∑_{j=1}^{k} π′_{P_i(j)}.

In this case, we will write π ≻_{P_i} π′. Notice that ≻_{P_i} is not a complete relation over the outcomes. An equivalent (and well known) definition of the ≻_{P_i} relation is that for every von Neumann-Morgenstern utility representation of P_i, the expected utility from π is at least as much as from π′. 7

7 The restriction to not consider cardinal mechanisms is arguably arbitrary. It is usually done to simplify the process of elicitation. Such a restriction is also consistent with the literature on random assignment models. The set of incentive compatible mechanisms expands if we consider cardinal mechanisms (Miralles, 2012; Abebe et al., 2020).
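Since this cumulative-sum check recurs throughout the paper, a minimal sketch may help; the function name, the representation of preferences as ordered tuples, and the example vectors are our own choices, not the paper's:

```python
from fractions import Fraction

def fosd(pref, pi, pi_prime):
    """Return True iff share vector pi first-order-stochastically-dominates
    pi_prime with respect to the strict ordering pref (best object first):
    for every k, the total share pi puts on the top-k objects of pref is
    at least the total share pi_prime puts there."""
    cum = cum_prime = Fraction(0)
    for obj in pref:
        cum += pi[obj]
        cum_prime += pi_prime[obj]
        if cum < cum_prime:
            return False
    return True

# With preference a > b > c, (1/2, 1/4, 1/4) dominates (1/4, 1/2, 1/4) but not conversely.
pref = ("a", "b", "c")
pi = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
pi2 = {"a": Fraction(1, 4), "b": Fraction(1, 2), "c": Fraction(1, 4)}
assert fosd(pref, pi, pi2) and not fosd(pref, pi2, pi)
```

Exact rational arithmetic avoids the floating-point ties that would otherwise make the weak inequalities unreliable.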
The most standard notion of incentive compatibility is strategy-proofness (dominant strategy incentive compatibility), which uses the FOSD relation to compare share vectors.
Definition 2 A mechanism Q is strategy-proof if for every i ∈ N, every P_-i ∈ P^{n-1}, and every P_i, P′_i ∈ P, we have Q_i(P_i, P_-i) ≻_{P_i} Q_i(P′_i, P_-i). The interpretation of this definition is that fixing the preferences of other agents, the truth-telling share vector must FOSD other share vectors that can be obtained by deviation. This definition of strategy-proofness appeared in Gibbard (1977) for voting problems, and has been the standard notion in the literature on random voting and random assignment problems.
The ordinal Bayesian incentive compatibility notion is an adaptation of this by changing the solution concept to Bayes-Nash equilibrium. It was first introduced and studied in a voting committee model in d'Aspremont and Peleg (1988), and was later used in many voting models (Majumdar and Sen, 2004). To define it formally, we introduce the notion of an interim share vector. Fix an agent i with preference P_i. Given a mechanism Q, the interim share of object a for agent i by reporting P′_i is:

q_ia(P′_i) := ∑_{P_-i ∈ P^{n-1}} µ(P_-i) Q_ia(P′_i, P_-i).

The interim share vector of agent i by reporting P′_i will be denoted as q_i(P′_i).
Definition 3 A mechanism Q is ordinally Bayesian incentive compatible (OBIC) (with respect to prior µ) if for every i ∈ N and every P_i, P′_i ∈ P, we have q_i(P_i) ≻_{P_i} q_i(P′_i). It is immediate that if a mechanism Q is strategy-proof it is OBIC with respect to every (including correlated and non-identical) prior. Conversely, if a mechanism is OBIC with respect to all priors (including correlated and non-identical priors), then it is strategy-proof.
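For small n, Definition 3 can be verified by brute force under the uniform prior. The sketch below is ours (it reuses the `fosd` helper from the earlier sketch; `mechanism` is any function mapping a preference profile to a list of share-vector dicts):

```python
from fractions import Fraction
from itertools import permutations, product

def interim_share_vector(mechanism, objects, n, i, report):
    """Agent i's expected share vector under the uniform prior, when i
    reports `report` and the other n-1 preferences are drawn uniformly."""
    all_prefs = list(permutations(objects))
    total = {x: Fraction(0) for x in objects}
    count = 0
    for p_minus_i in product(all_prefs, repeat=n - 1):
        profile = p_minus_i[:i] + (report,) + p_minus_i[i:]
        for x, share in mechanism(profile)[i].items():
            total[x] += share
        count += 1
    return {x: total[x] / count for x in objects}

def is_u_obic(mechanism, objects, n):
    """Definition 3 under the uniform prior: for every agent, the truthful
    interim share vector must FOSD the interim vector of every misreport."""
    all_prefs = list(permutations(objects))
    for i in range(n):
        q = {p: interim_share_vector(mechanism, objects, n, i, p) for p in all_prefs}
        for truth in all_prefs:
            for report in all_prefs:
                if not fosd(truth, q[truth], q[report]):
                    return False
    return True
```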
A motivating example
We investigate a simple example to understand the implications of strategy-proofness and OBIC for the probabilistic serial mechanism. Suppose n = 3 with three objects {a, b, c}. Consider the preference profiles (P_1, P_2, P_3) and (P′_1, P_2, P_3) shown in Table 3.1; the table also shows the share vector of each agent in the probabilistic serial mechanism of Bogomolnaia and Moulin (2001). In the probabilistic serial mechanism, each agent starts "eating" her favorite object simultaneously till the object is finished. Then, she moves to the best available object according to her preference and so on. Each agent has the same eating speed. Table 3.1 shows the output of the probabilistic serial mechanism for preference profiles (P_1, P_2, P_3) and (P′_1, P_2, P_3). Since Q_1a(P′_1, P_2, P_3) + Q_1c(P′_1, P_2, P_3) > Q_1a(P_1, P_2, P_3) + Q_1c(P_1, P_2, P_3), we conclude that Q_1(P_1, P_2, P_3) ⊁_{P′_1} Q_1(P′_1, P_2, P_3). Hence, agent 1 can manipulate from P_1 to P′_1, when agents 2 and 3 have preferences (P_2, P_3). When can such a manipulation be prevented by OBIC? Note that P_1 is generated from P′_1 by permuting a and c. Suppose we permute P_2 and P_3 also to get P′_2 and P′_3 respectively: c P′_2 b P′_2 a and a P′_3 c P′_3 b.
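Table 3.1 itself did not survive extraction, so the profile below is our reconstruction: P_2 and P_3 are recovered from P′_2 and P′_3 via the a↔c permutation described above, and taking P_1 : c ≻ a ≻ b (so that P′_1 : a ≻ c ≻ b) reproduces the stated inequality. A sketch of the eating algorithm with equal unit speeds:

```python
from fractions import Fraction

def probabilistic_serial(profile):
    """Simultaneous eating with equal unit speeds (the PS mechanism).
    `profile` is a sequence of preference tuples, best object first."""
    remaining = {x: Fraction(1) for x in profile[0]}
    shares = [{x: Fraction(0) for x in profile[0]} for _ in profile]
    while remaining:
        # Each agent eats her best still-available object.
        eating = [next(x for x in pref if x in remaining) for pref in profile]
        eaters = {}
        for x in eating:
            eaters[x] = eaters.get(x, 0) + 1
        # Advance time until the first object is exhausted.
        dt = min(remaining[x] / k for x, k in eaters.items())
        for i, x in enumerate(eating):
            shares[i][x] += dt
        for x, k in eaters.items():
            remaining[x] -= k * dt
            if remaining[x] == 0:
                del remaining[x]
    return shares

P1, P1_dev = ("c", "a", "b"), ("a", "c", "b")  # P1_dev permutes a and c in P1
P2, P3 = ("a", "b", "c"), ("c", "a", "b")
truth = probabilistic_serial([P1, P2, P3])[0]          # {c: 1/2, a: 1/6, b: 1/3}
deviation = probabilistic_serial([P1_dev, P2, P3])[0]  # {a: 1/2, c: 1/4, b: 1/4}
# The inequality reported in the text: the a- and c-shares sum higher after deviating.
assert deviation["a"] + deviation["c"] > truth["a"] + truth["c"]
```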
Since the probabilistic serial mechanism is neutral (with respect to objects), the share vector of agent 1 at (P 1 , P 2 , P 3 ) is a permutation of its share vector at (P ′ 1 , P ′ 2 , P ′ 3 ). Further, when all the preferences are equally likely, the probability of (P 2 , P 3 ) is equal to the probability of (P ′ 2 , P ′ 3 ). So, the total expected probability of a and c for agent 1 at P 1 and P ′ 1 is the same (where expectation is taken over (P 2 , P 3 ) and (P ′ 2 , P ′ 3 )). As we show below, this argument generalizes and the expected share vector at P 1 first-order-stochastic-dominates the expected share vector at P ′ 1 when the true preference is P 1 and prior is uniform.
Uniform prior and possibilities
In this section, we present our first result which shows that the set of OBIC mechanisms is much larger than the set of strategy-proof mechanisms if the prior is the uniform prior. A prior µ is the uniform prior if µ(P_i) = 1/|P| = 1/n! for each P_i ∈ P. The uniform prior puts equal probability on each of the possible preferences. We call a mechanism U-OBIC if it is OBIC with respect to the uniform prior.
We show that there is a large class of mechanisms which are U-OBIC -- this will include some well-known mechanisms which are known to be not strategy-proof. This class is characterized by two axioms, neutrality and elementary monotonicity, which we define next. To define neutrality, consider any permutation σ : A → A of the set of objects. For every preference P_i, define P^σ_i as the preference that satisfies: a P_i b if and only if σ(a) P^σ_i σ(b). Let P^σ be the preference profile generated by permuting each preference in the preference profile P by the permutation σ.
Definition 4 A mechanism Q is neutral if for every P and every permutation σ, Q_iσ(a)(P^σ) = Q_ia(P) for every i ∈ N and every a ∈ A. Neutrality requires that objects be treated symmetrically by the mechanism. Any mechanism which does not use the "names" of the objects is neutral -- this includes all priority mechanisms (including the random priority mechanism) and the simultaneous eating algorithms (including the probabilistic serial mechanism) in Bogomolnaia and Moulin (2001).
Our next axiom is elementary monotonicity, an axiom which requires a mild form of monotonicity. This was introduced in Majumdar and Sen (2004). To define it, we need the notion of "adjacency" of preferences. We say preferences P_i and P′_i are adjacent if there exists a k ∈ {1, . . . , n − 1} such that P′_i(k) = P_i(k + 1), P′_i(k + 1) = P_i(k), and P′_i(j) = P_i(j) for all j ∉ {k, k + 1}. In other words, P′_i is obtained by swapping consecutively ranked objects in P_i. Here, if P_i(k) = a and P_i(k + 1) = b, we say that P′_i is an (a, b)-swap of P_i.
Definition 5 A mechanism Q satisfies elementary monotonicity if for every i ∈ N, every P_-i ∈ P^{n-1}, and every P_i, P′_i ∈ P such that P′_i is an (a, b)-swap of P_i for some a, b, we have Q_ib(P′_i, P_-i) ≥ Q_ib(P_i, P_-i). In other words, as agent i lifts alternative b in ranking by one position by swapping it with a (and keeping the ranking of every other object the same), elementary monotonicity requires that the share of object b should weakly increase for agent i, while the share of object a should weakly decrease. A similar axiom called swap monotonicity is used in Mennle and Seuken (2021). 8 It is not difficult to see that elementary monotonicity is a necessary condition for strategy-proofness; see Majumdar and Sen (2004). As we show later, elementary monotonicity is satisfied by a variety of mechanisms, including those which are not strategy-proof. However, every neutral mechanism satisfying elementary monotonicity is U-OBIC.
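Before stating this formally, note that for small instances the axiom can be tested mechanically; the sketch below is ours (the helper names are our own, and `mechanism` takes a profile of preference tuples as in the earlier sketches):

```python
from itertools import permutations, product

def adjacent_swaps(pref):
    """Yield (pref_swapped, a, b) where pref_swapped is an (a, b)-swap of pref."""
    for k in range(len(pref) - 1):
        a, b = pref[k], pref[k + 1]
        yield pref[:k] + (b, a) + pref[k + 2:], a, b

def satisfies_elementary_monotonicity(mechanism, objects, n):
    """Definition 5: lifting b over a weakly raises the swapping agent's
    share of b and weakly lowers her share of a, at every P_-i."""
    all_prefs = list(permutations(objects))
    for i in range(n):
        for p_minus_i in product(all_prefs, repeat=n - 1):
            for pref in all_prefs:
                out = mechanism(p_minus_i[:i] + (pref,) + p_minus_i[i:])[i]
                for swapped, a, b in adjacent_swaps(pref):
                    out2 = mechanism(p_minus_i[:i] + (swapped,) + p_minus_i[i:])[i]
                    if out2[b] < out[b] or out2[a] > out[a]:
                        return False
    return True
```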
Theorem 1 Every neutral mechanism satisfying elementary monotonicity is U-OBIC.
Proof : Fix a neutral mechanism Q satisfying elementary monotonicity. The proof proceeds in several steps.
Step 1. Pick an agent i and two preferences P_i and P′_i. Pick any k ∈ {1, . . . , n} and suppose P_i(k) = a and P′_i(k) = b. We show that the interim shares of a and b are the same for agent i in preferences P_i and P′_i: q_ia(P_i) = q_ib(P′_i). This is a consequence of uniform prior and neutrality. To see this, let P′_i = P^σ_i for some permutation σ of objects in A. Then, b = σ(a) and hence, by neutrality, for every P_-i, we have Q_ib(P′_i, P^σ_-i) = Q_ia(P_i, P_-i). Due to uniform prior and using the above expression,

q_ib(P′_i) = ∑_{P_-i} µ(P_-i) Q_ib(P′_i, P_-i) = ∑_{P_-i} µ(P^σ_-i) Q_ib(P′_i, P^σ_-i) = ∑_{P_-i} µ(P_-i) Q_ia(P_i, P_-i) = q_ia(P_i),

where the second equality reindexes the sum using the fact that {P_-i : P_-i ∈ P^{n-1}} = {P^σ_-i : P_-i ∈ P^{n-1}}, and the third equality uses µ(P^σ_-i) = µ(P_-i) (uniform prior) together with neutrality. In view of Step 1, with some abuse of notation, we write q_ik to denote the interim share of the object at rank k in the preference. We call q_i the interim rank vector of agent i.
Step 2. Pick an agent i and a preference P_i. We show that interim shares are non-increasing with rank: q_ik ≥ q_i(k+1) for all k ∈ {1, . . . , n − 1}. Fix a number k and let P_i(k) = a and P_i(k + 1) = b. Then, consider the preference P′_i, which is an (a, b)-swap of P_i. For every P_-i, elementary monotonicity gives Q_ia(P_i, P_-i) ≥ Q_ia(P′_i, P_-i). Due to uniform prior, q_ia(P_i) ≥ q_ia(P′_i). But by Step 1, q_ia(P_i) = q_ik and q_ia(P′_i) = q_i(k+1), since a is ranked k in P_i and k + 1 in P′_i. Hence, q_ik ≥ q_i(k+1).

Step 3. We show that Q is OBIC with respect to the uniform prior. Suppose agent i has preference P_i. By Steps 1 and 2, she gets interim rank vector (q_i1, . . . , q_in) by reporting P_i with q_ij ≥ q_i(j+1) for all j ∈ {1, . . . , n − 1}. Suppose she reports P′_i = P^σ_i, where σ is some permutation of the set of objects. By Steps 1 and 2, the interim share vector is a permutation of the interim rank vector q_i. Using the non-increasingness of this interim share vector with respect to ranks, we get q_i(P_i) ≻_{P_i} q_i(P′_i). Hence, Q is OBIC with respect to the uniform prior.
Theorem 1 generalizes an analogous result in Majumdar and Sen (2004), who consider the voting problem and only deterministic mechanisms. They arrive at the same conclusion as Theorem 1 in their model. Theorem 1 shows that their result holds even in the random assignment problem.
4.1 Probabilistic serial mechanism and U-OBIC
Bogomolnaia and Moulin (2001) define a family of mechanisms, which they call the simultaneous eating algorithms (SEA). Though the SEAs are not strategy-proof, they satisfy compelling efficiency and fairness properties, which we discuss in Section 5. We informally introduce the SEAs; for a formal discussion, see Bogomolnaia and Moulin (2001).
Each SEA is defined by a (possibly time-varying) eating speed function for each agent.
At every preference profile, agents simultaneously start "eating" their favorite objects at a rate equal to their eating speed. Once an object is completely eaten (i.e., the entire share of 1 is consumed), the amount eaten by each agent is the share of that agent of that object.
Once an object is completely eaten, agents go to their next preferred object and so on.
If the eating speed of each agent is the same, then the simultaneous eating algorithm is anonymous. Bogomolnaia and Moulin (2001) call the unique anonymous SEA the probabilistic serial mechanism. 9

Corollary 1 Every simultaneous eating algorithm is U-OBIC.
Proof : Clearly, the SEAs are neutral since eating speeds do not depend on the objects.
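For n = 3, the corollary can also be confirmed by brute force using the sketches above (our check, exact under rational arithmetic and running in seconds):

```python
# Assumes probabilistic_serial, is_u_obic, and
# satisfies_elementary_monotonicity from the earlier sketches.
objects = ("a", "b", "c")
assert satisfies_elementary_monotonicity(probabilistic_serial, objects, 3)
assert is_u_obic(probabilistic_serial, objects, 3)
```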
Locally robust OBIC
While the uniform prior is an important prior in decision theory, it is natural to ask if Theorem 1 extends to other "generic" priors. Though we do not have a full answer to this question, we have been able to answer this question in the negative under a natural robustness requirement. Our robustness requirement is local. Take any (independent and identical) prior µ, and let µ′ be any (independent and identical) prior in the ε-radius ball around µ (where ε > 0), i.e., ||µ(P) − µ′(P)|| < ε for all P ∈ P. In this case, we write µ′ ∈ B_ε(µ). Our local robustness requirement is the following.
Definition 6 A mechanism Q is locally robust OBIC (LROBIC) with respect to a prior µ if there exists an ε > 0 such that for every prior µ′ ∈ B_ε(µ), Q is OBIC with respect to µ′.
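The definition can be probed numerically for small instances by testing OBIC against perturbed priors; the sketch below is ours (it reuses the `fosd` helper, and the perturbation direction is just one illustrative choice, so a full refutation would search over directions):

```python
from fractions import Fraction
from itertools import permutations, product

def is_obic(mechanism, objects, n, mu):
    """Check OBIC with respect to an independent and identical prior `mu`
    (a dict mapping each preference tuple to its probability)."""
    all_prefs = list(permutations(objects))
    for i in range(n):
        q = {}
        for report in all_prefs:
            total = {x: Fraction(0) for x in objects}
            for p_minus_i in product(all_prefs, repeat=n - 1):
                weight = Fraction(1)
                for p in p_minus_i:
                    weight *= mu[p]
                profile = p_minus_i[:i] + (report,) + p_minus_i[i:]
                for x, s in mechanism(profile)[i].items():
                    total[x] += weight * s
            q[report] = total
        for truth in all_prefs:
            for report in all_prefs:
                if not fosd(truth, q[truth], q[report]):
                    return False
    return True

def perturbed_uniform(all_prefs, eps):
    """One prior inside the eps-ball around uniform: shift mass eps/2 from
    the second preference order to the first (eps must be small enough to
    keep all probabilities positive)."""
    mu = {p: Fraction(1, len(all_prefs)) for p in all_prefs}
    mu[all_prefs[0]] += Fraction(eps) / 2
    mu[all_prefs[1]] -= Fraction(eps) / 2
    return mu
```

Testing `is_obic` for the probabilistic serial mechanism against priors of this form (and other perturbation directions) is one way to witness, for small n, the failure of local robustness implied by the results below.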
It is well known that Bayesian incentive compatibility with respect to all priors leads to strategy-proofness (Ledyard, 1978). Here, we require OBIC with respect to all independent and identical priors in the ε-neighborhood of an independent and identical prior. Bhargava et al. (2015) study a version of LROBIC with respect to the uniform prior but their robustness also allows the mechanism to be OBIC with respect to correlated priors. They show that a large class of voting rules satisfy their notion of LROBIC. We show that in the random assignment model, LROBIC with respect to any independent and identical prior has a very different implication.
Theorem 2 A mechanism is LROBIC with respect to a prior and satisfies elementary monotonicity if and only if it is strategy-proof.
The proof builds on some earlier results. Before giving the proof, we define some notions and preliminary results. We first decompose OBIC into three conditions. This decomposition is similar to the decomposition of strategy-proofness in Mennle and Seuken (2021); there are some minor differences in axioms, and we look at interim share vectors whereas they look at ex-post share vectors.
Our decomposition of OBIC uses the following three axioms.
Definition 7 A mechanism Q satisfies interim elementary monotonicity if for every i ∈ N and every P_i, P′_i such that P′_i is an (a, b)-swap of P_i, we have q_ib(P′_i) ≥ q_ib(P_i). Given a preference P_i of agent i and an object a ∈ A, define U(a, P_i) := {x ∈ A : x P_i a} and L(a, P_i) := {x ∈ A : a P_i x}.
Definition 8 A mechanism Q satisfies interim upper invariance if for every i ∈ N and every P_i, P′_i such that P′_i is an (a, b)-swap of P_i, and for every x ∈ U(a, P_i), we have q_ix(P_i) = q_ix(P′_i).

Definition 9 A mechanism Q satisfies interim lower invariance if for every i ∈ N and every P_i, P′_i such that P′_i is an (a, b)-swap of P_i, and for every x ∈ L(b, P_i), we have q_ix(P_i) = q_ix(P′_i).

The following proposition characterizes OBIC using these axioms.
Proposition 1 A mechanism Q is OBIC with respect to a prior if and only if it satisfies interim elementary monotonicity, interim upper invariance, and interim lower invariance.
Since the proof of Proposition 1 is similar to the characterization of strategy-proofness in Mennle and Seuken (2021), we skip its proof. 10 The proof of Theorem 2 is based on Proposition 1 and the following lemma.
Lemma 1 Suppose a mechanism Q is LROBIC with respect to a prior. Then, for every i, every P_-i, and every P_i and P′_i such that P′_i is an (a, b)-swap of P_i, we have Q_ic(P_i, P_-i) = Q_ic(P′_i, P_-i) for all c ∉ {a, b}.

Proof : Pick an agent i ∈ N and P_i, P′_i ∈ P such that P′_i is an (a, b)-swap of P_i. Fix some P_-i. By Proposition 1, Q satisfies interim upper invariance and interim lower invariance.
Hence, we know that for all c ∉ {a, b}, we get q_ic(P_i) = q_ic(P′_i), i.e.,

∑_{P_-i ∈ P^{n-1}} µ(P_-i) [Q_ic(P_i, P_-i) − Q_ic(P′_i, P_-i)] = 0.    (3)

Since µ is a probability distribution over P, we can treat it as a vector in R^{n!-1}. Using µ(P_-i) ≡ ∏_{j≠i} µ(P_j), we note that since Equation (3) has to hold for all µ ∈ B_ε(µ*) (which has non-zero measure), we must have Q_ic(P_i, P_-i) = Q_ic(P′_i, P_-i) for all P_-i and all c ∉ {a, b}.
We now complete the proof of Theorem 2.
Proof of Theorem 2
Proof : Every strategy-proof mechanism is OBIC with respect to any prior. A strategy-proof mechanism satisfies elementary monotonicity. So, we now focus on the other direction of the proof. Let Q be an LROBIC mechanism with respect to a prior µ. Suppose Q satisfies elementary monotonicity.
By Lemma 1, any LROBIC mechanism Q satisfies ex-post versions of interim lower invariance and interim upper invariance. Mennle and Seuken (2021) refer to these properties as upper invariance and lower invariance (see also Cho (2018)). They show that upper invariance, lower invariance, and elementary monotonicity are equivalent to strategy-proofness.
By the assumption of the theorem, Q satisfies elementary monotonicity. Hence, it is strategy-proof.
We now explore the compatibility of LROBIC and ordinal efficiency.
Definition 10 A mechanism Q is ordinally efficient if at every preference profile P there exists no assignment L such that L_i ≻_{P_i} Q_i(P) for every i ∈ N, with L_i ≠ Q_i(P) for some i.

Bogomolnaia and Moulin (2001) show that every ordinally efficient mechanism is ex-post efficient but the converse is not true if n ≥ 4. In fact, for n ≥ 4, strategy-proofness is incompatible with ordinal efficiency along with the following weak fairness criterion.
Definition 11 A mechanism Q satisfies equal treatment of equals if at every preference profile P and for every i, j ∈ N, we have P_i = P_j ⇒ Q_i(P) = Q_j(P).

Due to Theorem 2, we can strengthen the impossibility results in Bogomolnaia and Moulin (2001) and Mennle and Seuken (2017) as follows.
Corollary 2 Suppose n ≥ 4. Then, there is no locally robust OBIC and ordinally efficient mechanism satisfying equal treatment of equals.
Proof : By Lemma 1, a locally robust OBIC mechanism satisfies ex-post versions of upper invariance and lower invariance. Mennle and Seuken (2017) show that the proof in Bogomolnaia and Moulin (2001) can be adapted by replacing strategy-proofness with ex-post versions of upper invariance and lower invariance. Hence, these two properties are incompatible with ordinal efficiency and equal treatment of equals for n ≥ 4, and we are done.
Note that Corollary 2 does not use elementary monotonicity, and hence, cannot be directly inferred from Theorem 2.
"year": 2020,
"sha1": "8c60c45a6dd088f6e9c819732cc48e58671be891",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2009.13104",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ad041d17e734157b98d4a693b5e03850d47a2522",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics",
"Computer Science"
]
} |
IL-6 Trans-Signaling Is Increased in Diabetes, Impacted by Glucolipotoxicity, and Associated With Liver Stiffness and Fibrosis in Fatty Liver Disease
Many people living with diabetes also have nonalcoholic fatty liver disease (NAFLD). Interleukin-6 (IL-6) is involved in both diseases, interacting with both membrane-bound (classical) and circulating (trans-signaling) soluble receptors. We investigated whether secretion of IL-6 trans-signaling coreceptors is altered in NAFLD by diabetes and whether this might associate with the severity of fatty liver disease. Secretion patterns were investigated with use of human hepatocyte, stellate, and monocyte cell lines. Associations with liver pathology were investigated in two patient cohorts: 1) biopsy-confirmed steatohepatitis and 2) class 3 obesity. We found that exposure of stellate cells to high glucose and palmitate increased IL-6 and soluble gp130 (sgp130) secretion. In line with this, plasma sgp130 in both patient cohorts positively correlated with HbA1c, and subjects with diabetes had higher circulating levels of IL-6 and trans-signaling coreceptors. Plasma sgp130 strongly correlated with liver stiffness and was significantly increased in subjects with F4 fibrosis stage. Monocyte activation was associated with reduced sIL-6R secretion. These data suggest that hyperglycemia and hyperlipidemia can directly impact IL-6 trans-signaling and that this may be linked to enhanced severity of NAFLD in patients with concomitant diabetes.
ARTICLE HIGHLIGHTS
• IL-6 and its circulating coreceptor sgp130 are increased in people with fatty liver disease and steatohepatitis.
• High glucose and lipids stimulated IL-6 and sgp130 secretion from hepatic stellate cells.
• sgp130 levels correlated with HbA1c, and diabetes concurrent with steatohepatitis further increased circulating levels of all IL-6 trans-signaling mediators.
• Circulating sgp130 positively correlated with liver stiffness and hepatic fibrosis.
• Metabolic stress to the liver associated with fatty liver disease might shift the balance of IL-6 classical versus trans-signaling, promoting liver fibrosis that is accelerated by diabetes.
Nonalcoholic fatty liver disease (NAFLD) is a multistep, progressive disorder beginning with simple steatosis that can evolve to nonalcoholic steatohepatitis (NASH), characterized by hepatocellular ballooning, lobular inflammation, and fibrosis. NAFLD prevalence is rapidly increasing worldwide, currently affecting 25% of the population (1). The biological and physiological signals corresponding to and mediating the transition from simple steatosis to the more advanced and pathogenic stages of NASH are not well understood. NAFLD is closely associated with obesity and diets high in fat and sugar (2). NAFLD stemming from metabolic disease, recently reclassified as metabolically associated steatotic liver disease (MASLD) (3), is often diagnosed in conjunction with diabetes. Upward of 70-80% of people with diabetes also have MASLD, and conversely, MASLD increases one's risk of developing diabetes by two- to threefold (4). Whether or how one disease impacts the other is unclear.
Liver inflammation plays a key role in the transition of simple steatosis to steatohepatitis. Interleukin-6 (IL-6) is an inflammatory cytokine closely associated with metabolic disease (5,6), has both pro- and anti-inflammatory properties, and may play a role in the progression of NAFLD to NASH (7,8). IL-6 signaling involves activation of either a membrane-bound receptor (classical) or formation of a signaling complex with a soluble (s)IL-6 receptor found in circulation (trans-signaling). Both types of signaling require the ligand/receptor complex to bind a coreceptor, glycoprotein 130 (gp130), on the surface of cells. Circulating IL-6R (sIL-6R), shed from receptor-expressing cells, can dock gp130 on distant cells, allowing IL-6 trans-signaling in tissues that do not express the IL-6R. A soluble, secreted form of the coreceptor (sgp130) also circulates and can inhibit trans-signaling by sequestering the IL-6/sIL-6R complex (9). Based on expression patterns, hepatocytes and hepatic stellate cells may be a rich source of sIL-6R and sgp130, respectively, suggesting the liver as a potential major player in IL-6 trans-signaling (10,11). There is evidence linking increased IL-6 trans-signaling to other metabolic diseases including diabetes (12-14), as well as to alcohol- and infection-induced chronic liver disease (11).
While classical IL-6 signaling is known to have close links to NAFLD and metabolic disease, associations between IL-6 trans-signaling and NAFLD/NASH have not been evaluated. In this study, we investigated whether the liver could be a significant source of these circulating coreceptors and whether their levels relate to liver pathology associated with diabetes in two human cohorts with NAFLD. We also investigated whether metabolic stress influences the secretion of IL-6 trans-signaling mediators from liver cell types.
Quantification of Media and Serum Cytokines
IL-6, sIL-6R, and sgp130 were measured in conditioned media or plasma with ELISA, according to instructions (D6050, DR600, and DGP00, Human Quantikine ELISA kits; R&D Systems). Levels of secreted proteins were normalized to total cellular protein.
Proteins solubilized in RIPA were resolved with SDS-PAGE, transferred to nitrocellulose, and probed with anti-ADAM10 (14194; Cell Signaling Technology) or anti-β-ACTIN (A5441; Sigma-Aldrich). ChemiDoc (Bio-Rad Laboratories) and Image Lab software were used to capture and quantify signals.
Patients With NASH
Plasma samples, histological liver scoring, and MRI/magnetic resonance elastography (MRE) data were obtained from 50 patients with NASH (28 women, 22 men) previously recruited from a registry of patients with NASH (study initiated in 2016 with ethics approval at Centre hospitalier de l'Université de Montréal, institutional review board no. 15.147). All selected participants in the current study (institutional review board no. 17.031) provided written informed consent allowing preservation and use of their plasma samples and data.
Subjects were aged 18 years and older, diagnosed with NASH (biopsy confirmed), and able to undergo MRI without contrast agent. Subjects were excluded on the basis of alcohol consumption (>10 drinks/week for women and >15 for men) or if they had liver disease other than NASH, were taking medications associated with steatosis (e.g., amiodarone, valproate, tamoxifen, methotrexate, or corticosteroids), were physically unable to fit in the MRI, had contraindications to MRI, or were pregnant or wished to be pregnant during that year. One patient without steatosis and 10 with previous history of hepatocellular carcinoma were excluded. Proton density fat fraction, liver volume (voxels, cm³), and liver stiffness (Pa) were measured as quantitative predictors of liver fat, volume, and fibrosis, respectively. Average proton density fat fraction values for the entire liver volume were obtained with use of the LiverLab package (version VE11C, MAGNETOM Aera; Siemens Healthineers). Liver stiffness by MRE was measured as previously described (15). Repeatability, reproducibility, and accuracy of MRE in NAFLD were previously reported (16,17). Fasting plasma samples were collected on the day of MRI.

Random blood samples were collected on the night before surgery and stored at −80°C until analysis. Sampling procedure and position were standardized among surgeons. Liver samples were obtained by incisional biopsy of the left lobe and not cauterized. Grading and staging of histological liver sections were performed according to the methodology of Brunt (19) by pathologists blinded to study objectives. The algorithm of Bedossa et al. (20) was used to diagnose NASH, with use of liver biopsy scores for hepatocellular ballooning stage (0-2), lobular inflammation (0-2), steatosis grade (G0-G3), activity score (A0-A4), and fibrosis stage (F0-F4). Since participants in both cohorts were diagnosed prior to 2023 guidelines for MASLD, not all precisely fit this new designation. Thus, we have chosen to use the older classifications of NAFLD and NASH herein for accuracy.
Statistics
Data are presented as mean ± SD for continuous variables and number of subjects (n) and percentage (%) for categorical variables. Normality was evaluated with the Kolmogorov-Smirnov test. When normality failed, data were log transformed (log10). Outliers and influencers were identified with SPSS. One subject (NASH cohort) was a strong influencer for plasma IL-6 and sIL-6R and excluded from all analyses. Data were analyzed with unpaired t test, one-way ANOVA, and Pearson correlation as indicated. Sensitivity analysis was performed with the Mann-Whitney U test for intergroup differences. For categorical variables, the χ² test was used for count >5; otherwise, the Fisher exact test was used. Since the number of these covariates should not be >10% of patient number, stepwise forward regression analysis (liver fat fraction, volume, and stiffness) and univariate analysis (steatosis grade, activity score, fibrosis stage) for the NASH cohort were adjusted with two models (model 1: age, sex, BMI, and diabetes; model 2: age, sex, BMI, and metformin). Diabetes and metformin were classified in different models due to their strong association (P < 0.001). For the morbid obesity cohort, univariate analysis (steatosis grade, activity score, hepatocellular ballooning, lobular and portal inflammation, and fibrosis stage) was adjusted with age, sex, BMI, diabetes, and metformin in one model. Data were analyzed with IBM SPSS (version 27) and GraphPad Prism (version 8), and significance was set at P < 0.05.
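The analysis was run in SPSS and Prism; purely as an illustration of the described pipeline, here is a hedged Python sketch (the file and column names are hypothetical, not from the study):

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("nash_cohort.csv")  # hypothetical export of the cohort data

# Log-transform a skewed variable when the Kolmogorov-Smirnov test rejects normality.
il6 = df["plasma_IL6"].dropna()
if stats.kstest((il6 - il6.mean()) / il6.std(), "norm").pvalue < 0.05:
    il6 = np.log10(il6)

# Pearson correlation between (transformed) IL-6 and HbA1c.
r, p = stats.pearsonr(il6, df.loc[il6.index, "HbA1c"])

# Intergroup difference by diabetes status: unpaired t test, with the
# Mann-Whitney U test as the sensitivity analysis described above.
by_dm = [il6[df.loc[il6.index, "diabetes"] == flag] for flag in (0, 1)]
t_p = stats.ttest_ind(by_dm[0], by_dm[1]).pvalue
u_p = stats.mannwhitneyu(by_dm[0], by_dm[1]).pvalue
```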
Data and Resource Availability
Data, analytical methods, and study materials are available on request.
Hepatic Stellate Cells Express and Secrete sgp130 in Response to Glucolipotoxic Stress
As NASH progresses, the proportion of hepatocytes in liver decreases, whereas hepatic stellate cells and macrophages increase with each fibrosis stage (21). Since inflammation is an important component of the simple steatosis-to-NASH transition, we hypothesized that IL-6 trans-signaling may be active in liver and altered following exposure to glucolipotoxic (high glucose and lipid) conditions. To test this, we first measured mRNA expression of IL-6 trans-signaling components in human hepatocyte (HepG2), quiescent or activated hepatic stellate (LX-2), and unactivated or activated macrophage (THP-1) cell lines to determine the source(s) of these proteins. Although all cell types expressed transcripts for IL-6, IL-6Rα, and GP130, LX-2 stellate cells expressed significantly higher IL-6 (Supplementary Fig. 1A) and GP130 mRNA (Supplementary Fig. 1C) compared with other cell types. Only HepG2 hepatocytes and THP-1 macrophage cells expressed appreciable levels of IL-6 receptor (IL-6R) transcripts (Supplementary Fig. 1B). Although all cell lines expressed IL-6 mRNA, only LX-2 stellate cells secreted IL-6 (Fig. 1A). In line with our expression data, only HepG2 hepatocytes and THP-1 immune cells secreted the soluble sIL-6R (Fig. 1E and H), confirming observations by Lemmers et al. (11). We also found that HepG2 and LX-2 cells secreted 10-fold more sgp130 than THP-1 cells (Fig. 1C, F, and I). Activation of the stellate cell line with transforming growth factor-β did not impact IL-6 or sgp130 secretion (Fig. 1A and C), while activation of THP-1 monocytes with PMA decreased sIL-6R and sgp130 secretion (Fig. 1H and I). Wolf et al. (22) also reported decreased sgp130, but not sIL-6R secretion, following macrophage activation. These results show that the liver can be a source of IL-6 trans-signaling mediators; however, a coordinated response from multiple hepatic cell types is required for secretion of all components of the soluble complex. NAFLD is related to and influenced by metabolic disorders associated with hyperglycemia and hyperlipidemia, including diabetes, obesity, and cardiovascular disease (23). For exploration of whether glucolipotoxicity influences secretion of IL-6 trans-signaling mediators, cell lines were exposed to high glucose and lipid. LX-2 stellate cells secreted more sgp130 when cultured in glucolipotoxic conditions (regardless of state), while significant increases in IL-6 secretion did not occur unless cells were activated (Fig. 1A and C). High glucose and palmitate did not change secretion patterns from HepG2 or THP-1 cells (Fig. 1D-I). These results suggest that glucolipotoxicity may impact secretion of IL-6 trans-signaling mediators from hepatic stellate cells.
sgp130 levels can be regulated by expression or proteolytic cleavage of gp130 on the plasma membrane (22). For determination of how glucolipotoxicity leads to increased sgp130 secretion, LX-2 cells were treated with GI (ADAM10 inhibitor) or GW (ADAM10/17 inhibitor) prior to and during glucolipotoxic challenge. GI and GW similarly inhibited high glucose- and lipid-induced sgp130 secretion (Fig. 2A), suggesting ADAM10 as a potential protease implicated in cleavage of gp130. Consistent with proteolysis underlying sgp130 secretion, the mature, activated form of ADAM10 (mADAM10) was increased in response to high glucose and lipids (Fig. 2B), and this was inhibited by GI and GW without any changes in ADAM10 mRNA (Supplementary Fig. 1D) or total proenzyme levels (proADAM10) (Fig. 2B). These results suggest that glucolipotoxicity activates ADAM10, which promotes secretion of sgp130 from stellate cells.

Among this population (Table 1), 100% had NASH, 44.7% of subjects had diabetes, 47.4% had obesity, 31.6% had hyperlipidemia, and 36.8% were taking metformin. The second cohort was comprised of subjects with class 3 obesity, for whom blood and liver tissue were collected at the time of bariatric surgery (Table 2). In this population, 100% were obese, 81.9% had NAFLD, 16.5% had NASH based on Bedossa scoring (20), 40.4% had diabetes, and 15.1% were taking metformin. Consistent with diabetes being a risk factor for advanced liver disease, people with NAFLD/NASH and diabetes had higher steatosis grade, activity score, and fibrosis stage than those without diabetes (Supplementary Fig. 2A-F).
Given the close relationship between glucotoxicity and NAFLD severity (29), we first investigated whether IL-6 trans-signaling mediators varied with blood lipids or glycated hemoglobin (HbA1c). We found no consistent relationships between plasma cholesterol or triglycerides and any component of the circulating IL-6 trans-signaling complex (Supplementary Fig. 5). However, in the NASH cohort we observed clear correlations between HbA1c and sIL-6R or sgp130 (Fig. 3B and C) and between HbA1c and sgp130 in the class 3 obesity cohort (Fig. 4C). Consistently, plasma IL-6 and sIL-6R in the NASH cohort (Fig. 3D and E), and all three trans-signaling components in the obese population, were significantly higher in subjects with NASH and concurrent diabetes (Fig. 4D-F). Metformin can reduce inflammatory signaling (30); yet, IL-6 and sIL-6R were higher in individuals with NASH taking metformin (Fig. 3G and H), and sgp130 was higher in individuals with obesity taking metformin (Fig. 4I). Interestingly, the relationship between sIL-6R and HbA1c was not seen in the obesity cohort (Fig. 4B). This may be explained by sex differences, as we found significant associations between sIL-6R and HbA1c in women of both cohorts (NASH, r = 0.39, P = 0.02; class 3 obesity, r = 0.17, P = 0.06) but not men. Taken together with the in vitro data, our data suggest a strong association between circulating regulators of IL-6 trans-signaling and diabetes that may be associated with increased secretion within a setting of hyperglycemia.

Figure legend - Plasma IL-6, sIL-6R, and sgp130 in patients with NASH and class 3 obesity compared among liver biopsy measures of steatosis grade. Data presented for subjects with NASH with steatosis grade G1 (n = 9) and G2 (n = 29) and class 3 obesity grade G0 (n = 5), G1 (n = 168), G2 (n = 47), and G3 (n = 25). Analysis was conducted with one-way ANOVA with multiple comparisons with adjustment for age, sex, BMI, and diabetes or metformin use.
Circulating IL-6 Signaling Mediators Correlate With NAFLD Pathology
We next sought to determine whether circulating levels of IL-6 trans-signaling mediators were associated with pathological features of fatty liver disease, including hepatic lipid content, inflammation, and/or fibrosis. In both human cohorts, liver disease severity was determined with biopsy samples, while in the NASH cohort we also had access to data from MRI/MRE scans. Liver volume and stiffness measured with MRI/MRE have good prognostic value to predict liver disease severity (31-34), with much greater surface area covered than with biopsy alone.
Plasma sIL-6R positively correlated with liver fat fraction (Fig. 5B) and liver volume (Fig. 5E) in the NASH cohort. However, stepwise forward regression analysis adjusted with model 1 (Supplementary Table 1) or 2 (Supplementary Table 2) demonstrated that these correlations may be influenced by BMI or metformin. Given strong relationships between BMI and liver fat/volume (35) and between BMI and sIL-6R (Supplementary Fig. 3), it is impossible to delineate the contribution of liver fat versus whole-body adiposity. However, we did not observe a significant relationship between steatosis grade in biopsies and any circulating proteins in either cohort (Fig. 5J-L). Overall, data suggest that sIL-6R correlates with adiposity, but further exploration is needed to determine whether this is directly related to hepatic fat. Given our observation that high glucose and lipids stimulated sgp130 secretion from stellate cells, we next evaluated relationships between circulating IL-6 trans-signaling components and liver fibrosis. Both plasma IL-6 and plasma sgp130 positively correlated with liver stiffness (Fig. 6A and C) in the NASH cohort, with plasma sgp130 showing a particularly strong linear association (r = 0.69, P < 0.0001). When adjusted with model 1 or model 2, our results show that plasma sgp130 level correlated with liver stiffness independent of all covariates (Supplementary Tables 1 and 2). We also noted a trend toward higher sIL-6R and sgp130 in NASH patients with advanced fibrosis stage versus F0-F1 (Fig. 6E and F) (36). Activity score is a composite value of indices given for histological evidence of inflammation (lobular and/or portal) and hepatocyte ballooning (an indication of hepatocyte damage). Plasma IL-6, sIL-6R, and sgp130 did not correspond with activity score in either cohort (Supplementary Fig. 6A-F). Hepatocyte ballooning trended higher with increased plasma IL-6, and portal inflammation trended higher with increased plasma sIL-6R (Supplementary Fig. 6G and L). No associations were noted with lobular inflammation. Since metabolic inflammation is intimately linked to diabetes, we investigated whether diabetes influenced these associations. Interestingly, when considering only subjects with both NAFLD and diabetes, all IL-6 trans-signaling components were associated with hepatocyte ballooning (Fig. 7A-C) and sgp130 was significantly associated with increased portal inflammation and fibrosis (Fig. 7E and F). Thus, diabetes accentuated associations of IL-6 and sgp130 with liver pathology in NAFLD, suggesting that enhanced IL-6 trans-signaling in diabetes may exacerbate NASH.

Figure 8 - Hepatocytes and monocytes secrete sIL-6R basally, which can induce IL-6 trans-signaling in quiescent hepatic stellate cells. In return, hepatic stellate cells secrete sgp130, blunting IL-6 trans-signaling and facilitating classical signaling in hepatocytes and monocytes. Progression of NAFLD, which promotes monocyte infiltration in liver and activation into macrophages, could result in lower levels of sIL-6R from macrophages and higher amounts of sgp130 from stellate cells. In this context, inhibition of IL-6 trans-signaling in liver might be linked to steatotic hepatocytes, hepatic stellate cell activation and fibrosis, and chronic liver inflammation. In diabetes, the secretion of IL-6, sIL-6R, and sgp130 is significantly increased, which could exacerbate this pathway to accelerate the disease. The scheme was created with BioRender (biorender.com).
DISCUSSION
In this study we explored relationships among three components of the IL-6 signaling pathway (IL-6, sIL-6R, and sgp130), diabetes, and steatotic liver disease. We provide novel data showing that in people with NAFLD, diabetes and hyperglycemia are associated with higher plasma concentrations of IL-6 trans-signaling mediators. Our data in cell lines suggest that high blood glucose and lipids promote ADAM10 activation and proteolytic cleavage of gp130, potentiating sgp130 secretion from hepatic stellate cells along with IL-6 (Fig. 8). In line with IL-6 trans-signaling playing a role in NAFLD pathobiology, higher plasma sgp130 strongly correlated with liver stiffness and was associated with histological evidence of liver fibrosis. IL-6 trans-signaling was also associated with liver inflammation in people with concurrent diabetes.
Higher IL-6 is often observed in metabolic disease (5,12-14,37-39), but much less is known about its circulating coreceptors or how their secretion is regulated. Our data support the liver as a possible source for the circulating components of IL-6 trans-signaling in NAFLD (sIL-6R and sgp130). Most notably, all three circulating IL-6 trans-signaling components were further increased in concomitant diabetes. We show that different liver cell types secrete components of the complex and their secretion is influenced by high glucose and lipids, and we report higher circulating IL-6 and sgp130 in NAFLD/NASH, aligning with increases seen in alcoholic steatotic liver disease (11). Circulating sIL-6R in both our cohorts was within normal ranges (25), while in a previous study investigators found increased plasma sIL-6R in NASH (40). However, we did not include healthy subjects for direct comparison.
There are close relationships between hyperglycemia and chronic liver disease. High blood glucose positively correlates with hepatic insulin resistance, which can result in excessive fat accumulation. Elevated blood glucose can also amplify oxidative stress and trigger secretion of inflammatory cytokines, creating a state of low but constitutive inflammation that damages liver cells and plays a critical role in development of liver fibrosis (41). Our data showing a relationship between circulating IL-6 coreceptors and HbA1c, together with in vitro data linking increased sgp130 secretion from stellate cells in response to glucolipotoxicity, lead us to propose that IL-6 trans-signaling plays a direct role in the unique pathogenesis of NAFLD associated with diabetes.
The physiological role of IL-6 trans-signaling within liver is controversial. Total ablation of hepatic IL-6 signaling in mice causes steatosis and fibrosis (40), but this does not discriminate between classical and trans-signaling. Activation of both classical and trans IL-6 signaling is associated with increased cellular proliferation via STAT3, but trans-signaling seems to prolong activation (42). Blockade of IL-6 trans-signaling with sgp130 decreases liver regeneration following partial hepatectomy (43), while activation of IL-6 trans-signaling promotes regeneration (44,45) and aggravates liver cancer in mice (46,47). In our study, the strong correlation between sgp130 and liver stiffness/fibrosis suggests that increased IL-6 trans-signaling (IL-6 and sIL-6R) might promote repair in response to liver damage or inflammation. Since sgp130 acts as a natural inhibitor of IL-6 trans-signaling, higher secretion (i.e., by glucolipotoxicity) could inhibit repair and promote NASH progression. However, our ability to draw mechanistic links is limited by the correlative nature of human data; thus, further investigation is needed to determine whether decreased local IL-6 trans-signaling could be responsible for worsening inflammation and/or fibrosis in NAFLD.
Limitations of our study include use of isolated, immortalized cell lines and lack of an in vivo model. It is difficult to target IL-6 alone, differentiate classical versus trans-signaling, or isolate tissue-specific roles using available mouse models. Interestingly, transgenic overexpression of human sgp130 in mice does not exacerbate diet-induced NAFLD (48,49), but chronic increases may promote obesity and fatty liver with age (50), suggesting that decreased trans-signaling could potentiate metabolic disease.
One interesting possibility stemming from our data is that plasma sgp130 (alone or in combination with sIL-6R) might be an effective, noninvasive method to predict the severity of liver disease in NASH. We present this as a hypothesis-generating observation and recognize that a larger sample size and additional analysis are required to test whether IL-6 trans-signaling proteins are biomarkers of NAFLD/NASH. Yet, we found it interesting that while sgp130 correlates very well with liver stiffness by MRI/MRE, its correlation with the stage of histological fibrosis was less pronounced. Liver fibrosis in biopsies is scored based on collagen staining, while MRE-determined liver stiffness is influenced by fibrosis, inflammation, and steatosis (32-34). sgp130 is a component of an inflammatory pathway, which could explain its stronger correlation with liver stiffness versus collagen (a late consequence of damage). Our data may also suggest that high sgp130 represents active NASH (i.e., an inflammatory state) instead of collagen deposition (fibrosis).
In conclusion, our data support that circulating components of the IL-6 trans-signaling system correlate with NAFLD/NASH pathogenesis and that liver may be a source of these mediators in metabolic disease. Our data also suggest a link between diabetes and/or hyperglycemia and hepatic IL-6 trans-signaling. Strong associations of IL-6, sIL-6R, and sgp130 with liver pathobiology imply that these circulating proteins may be locally involved in NAFLD pathogenesis and suggest further investigation into whether higher plasma levels indicate liver damage, particularly in people with concurrent NAFLD and diabetes.
Figure 1 - Secretion of IL-6 signaling mediators is influenced in hepatic stellate cells by glucolipotoxicity. Levels of IL-6 (A, D, and G), sIL-6R (B, E, and H), and sgp130 (C, F, and I) in conditioned media from quiescent and activated LX-2 and HepG2 cells, and unactivated and activated THP-1 cells. Cells were treated with high glucose and palmitate (G/P) or BSA vehicle (control [Ctl]) for 18 h, and secreted protein levels normalized to total cell protein. Analyses were performed with one-way ANOVA with multiple comparisons. Data shown are a representative experiment of three independent biological replicates, with three technical replicates for each.
Figure 2 - Inhibition of the ADAM10 protease blocks glucolipotoxicity-induced sgp130 secretion. Cells were treated with either GI or GW inhibitor 30 min prior to high glucose and palmitate (G/P) or BSA vehicle (control [Ctl]) for 18 h. A: sgp130 secretion was assessed and normalized to total protein content. B: proADAM10 and mADAM10 protein levels were visualized with Western blot using β-ACTIN as loading control. Analyses were performed with one-way ANOVA with multiple comparisons. Data shown are one representative experiment of three independent biological replicates, each with two to three technical replicates.
Figure 3. IL-6 trans-signaling correlates with HbA1c, diabetes, and metformin use in NASH. Pearson correlations between plasma IL-6 (A), sIL-6R (B), and sgp130 (C) with plasma HbA1c in patients with NASH; n = 22 women and n = 13 men. Plasma concentrations of IL-6, sIL-6R, and sgp130 in patients with NASH compared by diabetes status (D-F) and metformin use (G-I). Analysis was conducted with t test; n = 24 women, n = 14 men.
Plasma sgp130 Correlates With Advanced Liver Disease in NASH and Class 3 Obesity
Table 1. Anthropometric, metabolic, and clinical characteristics of patients with NASH. Data are means ± SD for continuous data and sample size (n) and percent within the population (%) for categorical data. P values are comparisons between men and women, measured with unpaired t test for continuous data and χ² or Fisher exact test for categorical data. Bolded P values represent statistically significant differences. ALP, alkaline phosphatase; HDL-c, HDL cholesterol; LDL-c, LDL cholesterol; INR-PT, international normalized ratio of prothrombin time. a: n = 23 for women, 13 for men. b: n = 22 for women, 13 for men. c: n = 21 for women, 12 for men.
Table 2. Anthropometric, metabolic, and clinical characteristics of patients with class 3 obesity. Data are means ± SD for continuous data and sample size (n) and percent within the population (%) for categorical data. P value shows comparisons between men and women, measured with unpaired t test for continuous data and χ² or Fisher exact test for categorical data. Bolded P values represent statistically significant differences. a: n = 122 for women, 120 for men. b: n = 122 for women, 121 for men.
... whether circulating levels of IL-6 trans-signaling proteins are altered in NAFLD and whether this is impacted by diabetes. To this end, we measured plasma levels from subjects with biopsy-confirmed NAFLD or NASH from two patient biobanks. The first cohort was comprised of subjects with NASH with MRI/MRE assessment of liver disease concurrent with blood sampling (n = 38) (Table | 2023-09-29T06:18:18.150Z | 2023-09-27T00:00:00.000 | {
"year": 2023,
"sha1": "a47a7d5a46ec68175d3eca44dffb5995e504837d",
"oa_license": null,
"oa_url": "https://diabetesjournals.org/diabetes/article-pdf/doi/10.2337/db23-0171/735134/db230171.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c29aac094f433bacf540bab89247ecc49b6fdf95",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
246958452 | pes2o/s2orc | v3-fos-license | Characteristics of Obese Patients with Acute Hypercapnia Respiratory Failure Admitted in the Department of Pneumology: An Observational Study of a North African Population
Background. Acute hypercapnic respiratory failure (AHRF) is a common life-threatening event in patients with obesity hypoventilation syndrome (OHS). Objectives. To study the clinical pattern, noninvasive ventilatory support, as well as the short- and long-term outcomes of patients with OHS admitted in a ward because of AHRF. Methods. We conducted a retrospective cohort study including all adults with OHS aged ≥ 18 years, admitted in a 90-bed ward for AHRF. Results. A total of 44 patients were included. Fifteen (34.1%) and 29 (65.9%) patients were diagnosed with malignant OHS (mOHS) and nonmalignant OHS (non-mOHS), respectively, while 36 (81.8%) had coexisting obstructive sleep apnea hypopnea syndrome (OSAHS). Patients with mOHS had a significantly higher rate of heart failure (100% vs. 31%; p < 0.001), chronic renal insufficiency (CRI) (73.3% vs. 41.4%; p = 0.04), and dyslipidemia (66.7% vs. 34.5%; p = 0.04) than those with non-mOHS. The mean forced vital capacity (FVC) in our patients was 59.5% ± 18.5 of the predicted value, lower than what is usually reported in stable patients with OHS. At hospital admission, more than two-thirds (n = 34, 77.3%) were misdiagnosed as having asthma exacerbation (n = 4, 9.1%), chronic obstructive pulmonary disease (COPD) exacerbation (n = 12, 27.3%) and/or heart failure (n = 29, 65.9%). Acute pulmonary oedema (ACPE) (n = 16, 36.4%) and acute viral bronchitis (n = 12, 27.3%) were the main identified causal factors, while no cause could be determined in 5 (11.4%) patients. Noninvasive positive pressure ventilation (NIPPV) using bilevel positive airway pressure (BIPAP) was highly effective in treating AHRF, with only 2.27% of patients failing the modality. Median overall duration of ventilation was 9 hours per day (1.3–20) and was significantly longer in patients with mOHS than in those with non-mOHS (10 [6–18] vs. 8 [1.3–20], respectively; p = 0.01). Forty-two of the 43 patients discharged alive were treated with home ventilation: BIPAP in 26 and continuous positive airway pressure (CPAP) in 16 patients. The probability of survival was 90% at 12 months, while the probability of readmission for a new episode of AHRF was 56% at 6 months and 22% at 12 months, respectively. Conclusion. AHRF in OHS patients is a life-threatening event which can be successfully and safely treated with BIPAP, with a low long-term mortality even in patients with mOHS.
Introduction
Over the recent decades, the prevalence of obesity in North African countries has greatly increased due to the rapid epidemiological transition and dietary behaviour changes [1]. Tunisia is one of these countries, and today features a high prevalence of obesity that has almost tripled over the last three decades, according to the latest statistics of the World Health Organization (WHO). It increased from 8.7% in 1980 [2] to 28% in 2010 among adults > 30 years, which means that over three million Tunisian adults are obese [3]. As observed in the majority of countries in the region, obesity is more common in women than men, with close to one third of Tunisian women reported to be obese [4].
With this increasing obesity rate, the Tunisian population has experienced a major rise in the prevalence of many obesity-related diseases, in particular, type 2 diabetes, cardiovascular diseases, and the metabolic syndrome which today affect nearly a third of Tunisian adults [5]. The accrual in obesity is also likely to lead to an increase in sleep-related breathing disorders including obesity hypoventilation syndrome (OHS) and obstructive sleep apnea/hypopnea syndrome (OSAHS).
OHS, previously known as Pickwickian syndrome, is defined as the appearance of awake hypercapnia (arterial partial pressure of carbon dioxide [PaCO2] > 45 mmHg) in the obese patient (body mass index [BMI] > 30 kg/m²) in the absence of other known causes of alveolar hypoventilation such as lung or neuromuscular diseases [6]. Most patients with OHS have coexisting OSAHS. Though there is still insufficient epidemiological data, current estimates suggest that the prevalence of OHS is less than 1% in the general population [7]. It increases significantly as the prevalence of obesity increases, with a reported prevalence ranging from 10% to 20% in outpatients presenting with suspected sleep-disordered breathing (SDB), and 30% in hospitalized obese patients [6]. To our knowledge, no data about the prevalence of OHS in North Africa, either in the general population or in patients with OSAHS, are available.
Comorbidities and complications are common in patients with OHS, leading to high hospitalization rates, greater health care expenses, lower quality of life, and higher morbidity and mortality. OHS may cause chronic complications such as pulmonary hypertension (PHT) and right heart failure, as well as acute hypoxemic or, most commonly, hypercapnic respiratory failure (AHRF).
Despite its high prevalence and significant morbidity and mortality, OHS remains largely under-recognized and is usually discovered at an advanced stage when AHRF occurs [8, 9]. The number of patients admitted to the intensive care unit (ICU) with AHRF due to previously undiagnosed OHS is also increasing [10]. However, only 30% of OHS patients receive a correct diagnosis when admitted with AHRF, while up to 75% are misdiagnosed as having chronic obstructive pulmonary disease (COPD) or asthma [11]. Consequently, many patients do not receive the appropriate treatment, which may result in a higher risk of readmission for new episodes of AHRF, additional cost, and increased morbimortality [12, 13].
Noninvasive positive pressure ventilation (NIPPV) using bilevel positive airway pressure (BIPAP) or continuous positive airway pressure (CPAP) is recommended as the primary management option for stable ambulatory patients diagnosed with OHS [14]. Moreover, BIPAP is being increasingly used during AHRF in patients having probable or confirmed OHS, with acceptable rates of success [15]. So far, there has been no published consensus either on indications or on the protocol for the proper application of BIPAP in AHRF complicating OHS.
We retrospectively studied a cohort of patients with OHS admitted in a ward with AHRF to examine their clinical pattern, their noninvasive ventilatory support, and their short- and long-term outcomes.
Study Design.
We conducted a retrospective observational cohort study in the respiratory disease department of the Hedi Chaker University Hospital in Sfax, Tunisia. Around 2000 patients a year are admitted to this department, which has a capacity of 90 beds.
Population
2.2.1. Inclusion Criteria. All obese adults aged ≥ 18 years, admitted in our department between 1 January 2012 and December 2019 for AHRF, were consecutively enrolled in this study. According to the International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM), patients without chronic respiratory failure (CRF) were diagnosed with acute hypoxemic respiratory failure if the PaO2 was < 60 mmHg (SpO2 < 91%) on room air, or the PaO2/fraction of inspired oxygen (FiO2) (P/F) ratio was < 300. In patients with CRF, a PaO2 < 60 mmHg on their usual supplemental oxygen flow rate, or a 10 mmHg decrease in baseline PaO2 (if known), indicated acute hypoxemic respiratory failure. AHRF was defined as an arterial pH less than 7.35, associated with a PaCO2 > 45 mmHg or an increase of 10 mmHg over the baseline PaCO2 (if known) [16].
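These admission criteria reduce to simple threshold logic. Below is a minimal sketch (our own illustrative encoding, not a clinical tool or the authors' code) of the two definitions as quoted, with thresholds taken directly from the text:

```python
# Sketch of the ICD-10-CM-based definitions used for enrollment.
def acute_hypoxemic_rf(pao2, pf_ratio, on_room_air, baseline_pao2=None,
                       has_crf=False):
    """True if the study's criteria for acute hypoxemic RF are met."""
    if not has_crf:
        return (on_room_air and pao2 < 60) or pf_ratio < 300
    # With chronic respiratory failure: low PaO2 on the usual O2 flow,
    # or a >= 10 mmHg drop from a known baseline.
    drop = baseline_pao2 is not None and (baseline_pao2 - pao2) >= 10
    return pao2 < 60 or drop

def ahrf(ph, paco2, baseline_paco2=None):
    """Acute hypercapnic RF: pH < 7.35 with PaCO2 > 45 mmHg, or a
    10 mmHg rise over a known baseline PaCO2."""
    hypercapnia = paco2 > 45 or (
        baseline_paco2 is not None and (paco2 - baseline_paco2) >= 10)
    return ph < 7.35 and hypercapnia

print(ahrf(ph=7.28, paco2=62))   # True: acidotic and hypercapnic
```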
Noninclusion Criteria. The noninclusion criteria were a smoking history of 20 pack-years or more, a diagnosis of COPD with a forced expiratory volume in the first second (FEV1) < 50% predicted, neuromuscular disease, chest wall disease, kyphoscoliosis, diaphragmatic paralysis, restrictive pulmonary disease, idiopathic and postcapillary PHT, or other known causes of hypoventilation such as sleep-related hypoventilation due to a medication or substance and idiopathic central alveolar hypoventilation. Malignant OHS (mOHS) was defined as a severe form of OHS characterized by a BMI > 40 kg/m², metabolic syndrome (central obesity, hypertension, hyperlipidemia, and insulin resistance), and multiorgan system dysfunction related to obesity [10, 17]. BMI was calculated by dividing weight (in kilograms) by squared height (in meters). Obesity was defined as a BMI equal to or greater than 30 kg/m². Three ranges of BMI were used to assess the severity of obesity: class I obesity if BMI was 30.0 to 34.9 kg/m², class II obesity if BMI was 35.0 to 39.9 kg/m², and class III obesity if BMI was 40.0 kg/m² or greater. Previous treatment with NIPPV at home was also reported. Data relative to in-hospital therapeutic management included pharmacological treatment, as well as modalities, duration, and effectiveness of NIPPV. NIPPV was considered successful when intubation was avoided and the patient was discharged alive. Failure of NIPPV therapy occurred when, despite optimal support, worsening respiratory distress and blood gas deterioration were observed, leading to intubation, transfer to the ICU, or death [19]. Time to discharge from our department or to in-hospital death was registered, as well as readmissions for new episodes of AHRF and death after discharge.
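For illustration, the BMI computation and obesity grading can be sketched as follows. Note that the class II and class III cut-offs (35.0-39.9 and ≥ 40 kg/m²) are the standard WHO bands, assumed here because the source states only the class I range explicitly:

```python
# Hedged sketch of BMI and obesity class; WHO bands assumed for II/III.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def obesity_class(bmi_value: float) -> str:
    if bmi_value < 30:
        return "not obese"
    if bmi_value < 35:
        return "class I"
    if bmi_value < 40:
        return "class II"     # assumed WHO band
    return "class III"        # assumed WHO band; matches BMI > 40 for mOHS

b = bmi(125, 1.65)                      # e.g., 125 kg at 1.65 m
print(round(b, 1), obesity_class(b))    # 45.9 class III
```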
Respiratory Function Test.
The results, including FEV1, forced vital capacity (FVC), and FEV1/FVC, were compared with local spirometric norms calculated from the patient's height, weight, and age using equations from the European Respiratory Society (ERS) [22]. The lower limit of normal (LLN) was used as a cut-off in the interpretation of respiratory function results. An obstructive ventilatory defect (OVD) was defined by a FEV1/FVC ratio < LLN. A restrictive ventilatory defect (RVD) was established according to prebronchodilator spirometry as FEV1/FVC > 0.70 and a predicted FVC < LLN. A mixed pattern of obstruction and restriction was defined as both a FEV1/FVC ratio and FVC < LLN.
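A hedged sketch of this classification logic follows. The LLN values are passed in as precomputed inputs; deriving them from the ERS reference equations is not attempted here, and the numbers in the example are invented:

```python
# Illustrative-only classification of the ventilatory patterns defined above.
def ventilatory_pattern(fev1_fvc, fvc, fev1_fvc_lln, fvc_lln):
    obstructive = fev1_fvc < fev1_fvc_lln                 # OVD
    restrictive = fev1_fvc > 0.70 and fvc < fvc_lln       # RVD (pre-BD)
    mixed = fev1_fvc < fev1_fvc_lln and fvc < fvc_lln     # both reduced
    if mixed:
        return "mixed"
    if obstructive:
        return "obstructive (OVD)"
    if restrictive:
        return "restrictive (RVD)"
    return "no ventilatory defect"

# Typical OHS-like result: preserved ratio, low FVC -> restrictive.
print(ventilatory_pattern(0.82, 1.4, fev1_fvc_lln=0.69, fvc_lln=2.3))
```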
Fourteen female patients were unable to cooperate with spirometric testing. None of them showed features suggestive of COPD (no exposure to risk factors for COPD, no medical history of chronic bronchitis, no wheezing/sibilants, and no X-ray finding of emphysema or chronic bronchitis) or neuromuscular/chest wall disorders, and none had lung parenchymal abnormalities on chest X-ray.
Sleep Polygraphic Study.
A level three portable sleep polygraphic study including recording of oro-nasal flow (oro-nasal air pressure transducer), thoraco-abdominal movements (respiratory inductance plethysmography), body position, nocturnal oxygen saturation and heart pulse (pulse oximetry), and snoring (microphone placed on the anterior neck) was performed on all subjects. In patients without a known diagnosis of OSAHS before admission, sleep polygraphic studies were performed after hospital discharge with at least 6 weeks of clinical stability.
Respiratory events were scored manually by a sleep specialist according to the American Academy of Sleep Medicine (AASM) 2012 guidelines [23]. Obstructive sleep apnea was defined as a ≥ 90% decrease in airflow compared with baseline for at least 10 seconds, with evidence of persistent respiratory effort. Hypopnea was defined as a decrease in airflow amplitude by ≥ 30% of baseline, lasting for at least 10 seconds, and accompanied by oxygen desaturation ≥ 3%. OSAHS was diagnosed if the apnea hypopnea index (AHI) was ≥ 5 events/hour (h) with consistent clinical symptoms and/or comorbidities, or if the AHI was ≥ 15 events/h with or without associated symptoms. OSAHS severity was graded according to the AHI as mild (5/h ≤ AHI < 15/h), moderate (15/h ≤ AHI < 30/h), or severe (AHI ≥ 30/h). Overnight pulse oximetry was analyzed to assess the oxygen desaturation index per hour (ODI), the mean overnight transcutaneous oxygen saturation (nSpO2), and the total sleep time with nSpO2 less than 90% (TST90%).
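The diagnostic rule and severity grading reduce to the following sketch (illustrative only; detecting events from raw polygraphy signals is far more involved and is not attempted here):

```python
# Sketch of the AHI-based rules quoted above.
def osahs_diagnosis(ahi, symptomatic):
    """AHI >= 5 with symptoms or comorbidities, or AHI >= 15 regardless."""
    return ahi >= 15 or (ahi >= 5 and symptomatic)

def osahs_severity(ahi):
    if ahi < 5:
        return "none"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

print(osahs_diagnosis(12, symptomatic=True), osahs_severity(12))  # True mild
```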
Echocardiographic Data. The following echocardiographic data were recorded: left ventricular ejection fraction (LVEF), left ventricular (LV) mass index, left ventricular diastolic function, and systolic pulmonary artery pressure (sPAP). Eccentric LV hypertrophy was defined by a LV mass index > 47 g/m² with a relative wall thickness (RWT) of less than 0.45 [24]. Left ventricular diastolic function was assessed by the mitral E/A ratio and tissue Doppler evaluation [25]. sPAP was estimated from the peak tricuspid regurgitant velocity. PHT was defined by an estimated sPAP > 35 mmHg, while a pressure > 45 mmHg was considered to indicate moderate to severe PHT [10].
Data Analysis.
All the analyses were conducted using the Statistical Package for the Social Sciences version 20 software (SPSS, Chicago, IL). For quantitative variables, the Shapiro-Wilk test was used to check normal distribution, and descriptive characteristics were given as mean ± standard deviation (SD) when the distribution was normal and as median with the interquartile range when it was not. Categorical variables were expressed as the number of cases and percentages (%). Pearson's chi-square test (or Fisher exact test when appropriate) and Student's t-test (or Mann-Whitney U test when indicated) were used to compare data between study groups. The Kaplan-Meier estimate of the survival curve was used to determine the cumulative 1-year probability of hospital readmission and survival. A p value < 0.05 was considered statistically significant.
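A schematic re-expression of this pipeline with scipy and lifelines (assumed available; the original analysis used SPSS, and all data below are simulated):

```python
# Sketch of the decision logic described above, not the authors' code.
import numpy as np
from scipy import stats
from lifelines import KaplanMeierFitter   # assumed available

rng = np.random.default_rng(1)
men, women = rng.normal(60, 10, 20), rng.normal(55, 12, 24)

# Normality check decides between parametric and rank-based tests.
if stats.shapiro(men).pvalue > 0.05 and stats.shapiro(women).pvalue > 0.05:
    stat, p = stats.ttest_ind(men, women)            # Student's t-test
else:
    stat, p = stats.mannwhitneyu(men, women)         # Mann-Whitney U
print(f"group comparison p = {p:.3f}")

# Categorical comparison: chi-square, or Fisher's exact for small cells.
table = np.array([[10, 5], [8, 12]])
expected = stats.chi2_contingency(table)[3]
if (expected < 5).any():
    p_cat = stats.fisher_exact(table)[1]
else:
    p_cat = stats.chi2_contingency(table)[1]
print(f"categorical comparison p = {p_cat:.3f}")

# Kaplan-Meier estimate of 1-year survival/readmission probability.
kmf = KaplanMeierFitter()
kmf.fit(durations=rng.exponential(14, 40),
        event_observed=rng.integers(0, 2, 40))
print(kmf.predict(12))   # survival probability at 12 months
```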
Clinical Characteristics of Study Population.
Of the 50 patients who met the inclusion criteria for the study, 6 were excluded because of insufficient data in the medical record. Of the 44 patients included, 15 (34.1%) and 29 (65.9%) were diagnosed with mOHS and non-mOHS, respectively. Thirty-six (81.8%) patients were classified as having OHS with coexisting OSAHS. Of this last group, 2 patients (5.6%) had mild OSAHS, 5 patients (13.9%) had moderate OSAHS, and 29 patients (80.6%) had severe OSAHS. The patient selection process is shown in Figure 1.
More than two-thirds of patients (n = 30; 68.2%) were referred from the emergency department. Nearly one in nine patients (n = 5; 11.4%) was referred from the outpatient clinic in our department, and about one in 15 (n = 3; 6.8%) was transferred from an ICU.
Laboratory Data. The patients' pertinent laboratory data are presented in Table 2.
Respiratory Function Test. Spirometry data were available in 30 patients. All patients except 3 (n = 27/30) had RVD. One patient had a mild OVD, while the remaining two patients had no evidence of a ventilatory defect. The median FEV1 was 1.16 L (0.4-2.55), with a mean predicted percentage of 61% ± 20.7; the median FVC was 1.39 L (0.48-3.84), with a mean predicted percentage of 59.5% ± 18.5; and the mean FEV1/FVC ratio was 81.6% ± 7.
3.6. Outcomes and Follow-Up of Patients. BIPAP therapy failed in one patient (2.27%), leading to intubation and invasive mechanical ventilation. ABG measurements at hospital discharge and at the 3-month follow-up were similar. However, they were significantly improved when compared with measurements at the patients' admission (Figure 3).
Of the 43 patients discharged alive, only one patient did not receive instrumental respiratory therapy. Domiciliary BIPAP was used in 26 patients (61.9%), 4 of whom had already been ventilated with CPAP prior to admission. Nocturnal CPAP was used in 16 patients (38.1%), including 7 patients who were already under CPAP before admission.
Long-term oxygen therapy (LTOT) was used in one-fifth of patients (n = 9, 21.4%), combined with CPAP in 2 patients (4.8%) and with BIPAP in 7 others (16.7%) (Figure 4). No significant difference was found in instrumental respiratory therapy at discharge, either between men and women or between patients with mOHS and those with non-mOHS (Table 3). Similarly, age, BMI, overall daily NIV duration, and idiopathic cause did not differ significantly between patients ventilated with BIPAP and those treated with CPAP (Table 4). In total, four deaths were reported, one of which occurred during the index hospitalization. The three others occurred at 2, 3, and 6 months after discharge from the hospital. The probability of survival was about 90% at 6 and 12 months (Figure 5).
Of the 43 patients discharged alive from the hospital, 10 patients were lost to follow-up. The median follow-up duration was 13 months. More than 40% of patients (n = 18, 41.9%) had been readmitted one or more times (range 1 to 6 times) with a new episode of AHRF, with a median readmission time interval of 7 months (range 1 to 45 months). The probability of readmission was 56% at 6 months and 22% at 12 months, respectively (Figure 6). Age, gender, severity of OHS, BMI, FVC, and the type of home NIPPV had no significant influence on the probability of readmission for a new episode of AHRF.
Major Findings. This is a retrospective observational cohort study conducted in Tunisia and investigating obese patients with AHRF. In most if not all of them, this was acute-on-chronic respiratory failure, as suggested by the high serum bicarbonate levels at admission (Table 2). We found that OHS was still a frequently unrecognized cause of AHRF, that ACPE and viral bronchitis were the two main precipitating factors of AHRF in patients with OHS, and that BIPAP was a highly safe and effective management option for AHRF in patients with OHS. Moreover, patients treated with home NIPPV after an AHRF episode had a good outcome with a low long-term mortality rate. Finally, the risk of readmission for a new episode of AHRF did not differ between BIPAP and CPAP therapy.
Despite the increasing obesity prevalence in the Tunisian adult population, only 44 episodes of AHRF were linked with OHS over the 8-year study period. In no case had OHS been previously diagnosed, although one in five patients and half of the patients had previously been hospitalized one or more times in the ICU or the pulmonology department, respectively. This fact suggests that OHS remains widely unknown even among specialists. Thus, it is frequently missed or neglected during hospitalization as well as during outpatient follow-up after hospital discharge [26]. Moreover, more than two-thirds of patients had been erroneously diagnosed on admission with COPD/asthma exacerbation and/or congestive heart failure. Yet, none of the patients diagnosed with COPD had evidence of obstructive airway disease, and none of the patients diagnosed with asthma had evidence of reversible airway obstruction on spirometry. Similarly, in a study by Akpinar [27], of a total of 82 patients hospitalized with AHRF, none was diagnosed with OHS. These data indicate that OHS often remains unrecognized until an episode of AHRF occurs; it is also a still under-recognized cause of AHRF. Bry et al. [28] reported that the diagnosis of OHS was made prior to the AHRF episode in only 8 patients. This is despite the fact that obesity associated with hypoventilation has been found to be frequent among hospitalized patients. In a study by Nowbar et al. [29], hypoventilation (mean PaCO2 of 52 ± 7 mmHg) was present in 31% (n = 47) of the hospitalized patients with severe obesity (BMI ≥ 35 kg/m²) who did not have other reasons for hypercapnia. The underdiagnosis of OHS means that most patients are diagnosed late, after the occurrence of detrimental outcomes including mainly AHRF, severe multisystem diseases directly linked with obesity, chronic respiratory failure, and/or chronic hypoventilation. In our study, patients with mOHS had a significantly higher rate of dyslipidemia (66.7% vs. 34.5%; p = 0.04) compared to the group of patients with non-mOHS. Morbid obesity is commonly associated with various multivisceral complications such as respiratory failure, PHT, OSAHS, complicated diabetes, hypertrophic cardiomyopathy, metabolic syndrome, vitamin D deficiency, and muscular deconditioning [31]. Based on this, a new concept called "mOHS" has recently been defined by Marik as a subgroup of OHS in which morbid obesity (BMI > 40 kg/m²) is associated with increased multiorgan system dysfunction and morbidity [10]. Older patients with mOHS are commonly frail in the medical sense of the term, bedridden, highly dependent, and with a poor prognosis. Therefore, most of these patients are classified as "do not intubate." In a prospective study by Lemyze et al. [32], of 73 morbidly obese patients admitted to the ICU with AHRF, 60% were concerned by a "do not intubate" order. This emphasizes the importance of correct identification of patients with mOHS to ensure appropriate management and to avoid futile therapeutic escalation [31]. The largest study of patients with mOHS ever published was by Marik and Desai [10]. The 61 patients included in that study had severe multisystem diseases directly associated with morbid obesity. In contrast to these results, our study showed no significant differences in baseline characteristics between mOHS and non-mOHS, apart from BMI and AHI (which was lower in the malignant form). Besides, both groups received similar respiratory therapy at discharge and had comparable long-term outcomes.
These findings, based on a very small sample, may be misleading and should not be extrapolated to other populations.
Obstructive sleep apnea associated with alveolar hypoventilation is the most common ventilatory pattern observed in obese patients hospitalized with AHRF [33]. In our study, the majority of patients (n = 36; 81.8%) had OSAHS confirmed by sleep polygraphic recording: 21 patients had a previously known history of OSAHS, while the other 15 patients were diagnosed with OSAHS after the index event of AHRF. Cuvelier et al. [33] studied 20 obese patients hospitalized with AHRF. The sleep recording performed one month after hospitalization showed OSAHS in 9 patients, whereas 11 other patients were diagnosed with pure OHS. Rabec et al. [34] reported results based on a larger sample composed of 41 obese patients hospitalized with AHRF and treated with NIPPV. Among these, 6 patients had OSAHS, 19 had OHS (associated with OSAHS in 16 cases), 4 had COPD, and 10 were diagnosed with overlap syndrome (combination of COPD and OSAHS). These results confirm the fact that OSAHS is highly prevalent in obese patients hospitalized with AHRF. Unfortunately, most of these patients did not have a respiratory polygraphy at baseline and could not undergo polygraphy during episodes of AHRF. Therefore, for many of them, the presence of OSAHS could only be strongly suspected until confirmation of the diagnosis far from the acute episode of respiratory failure [28]. Nevertheless, we think that each patient admitted with AHRF should be considered at high risk for OSAHS and should undergo respiratory polygraphy after this acute event, unless the diagnosis of OSAHS was already known prior to admission.
Table 4: Association between discharge instrumental respiratory therapy and age, BMI, and overall daily NIPPV duration.
In our study, the following echocardiographic abnormalities were identified: LV hypertrophy in all patients (100%), moderate-to-severe PHT and dilated right cavities each in almost half of patients (46.2% and 48.7%, respectively), LV diastolic dysfunction in about one quarter of patients (25.6%), and LV systolic dysfunction (LVEF ≤ 50%) in 13.6% of patients. In the study of Carrillo et al. [30], two main cardiac abnormalities were reported: right ventricle dilatation (n = 16, 23%) and left ventricular diastolic dysfunction (n = 19, 28%). Marik and Desai [10] described higher rates of cardiac abnormalities in patients with mOHS, as 71% of them had LV hypertrophy, 61% had LV diastolic dysfunction, and 77% had PHT. The impaired cardiac function commonly observed in OHS patients is closely associated with several comorbid conditions, mainly systemic arterial hypertension, diabetes, dyslipidemia, and obesity. Obesity has been identified as a strong and independent risk factor for LV hypertrophy and cardiac dysfunction. Both the high cardiac output and the excessive production of proinflammatory adipokines associated with obesity contribute to the development of LV hypertrophy in obese patients. The chronic hypoxemia associated with OHS may also be involved, as suggested by Sugerman et al. [35].
The mean FVC in our patients was 59.5% ± 18.5 of the predicted value, lower than what is usually reported in stable patients with OHS [36]. Similarly, in a study by Chebib et al. [26], patients with OHS who were admitted to the ICU for AHRF had a significantly lower baseline FVC than those not admitted to the ICU (72% vs. 80%, respectively; p = 0.01). Moreover, a cut-off of FVC < 3.5 L in men and 2.3 L in women could predict chronic daytime hypercapnia in obese subjects, as shown by Mandal et al. [37]. These findings suggest that a severe restrictive spirometric pattern might be considered a risk factor for AHRF in patients with OHS [38].
More than a third of our patients had a period between symptom onset and hospitalization equal to or greater than 15 days. This long prehospital delay indicates that AHRF in OHS patients may occur either acutely or insidiously, with a progressive deterioration in gas exchange [39]. Besides, a clinical presentation with hypercapnic encephalopathy syndrome (HES) may be underestimated or misdiagnosed, resulting in a delayed consultation. HES is a heterogeneous and potentially reversible wide range of neurological alterations (headache, ataxia, cognitive defects, psychomotor agitation, confusion, flapping tremor, daytime sleepiness, delirium, and coma) occurring as a result of severe decompensated hypercapnic respiratory acidosis [40, 41]. In our study, daytime sleepiness, headaches, and behaviour disturbance were reported by about two-thirds, half, and a quarter of patients, respectively. One patient who presented to the emergency department with behaviour disturbance was initially suspected of having a psychological or mental disorder. These findings corroborate the fact that HES is common in patients with severe AHRF. It may also be the dominant clinical feature of severe AHRF, mimicking cognitive and/or neuropsychological disorders and leading to inappropriate or delayed diagnosis and treatment.
In our study, respiratory infection (n = 17; 38.6%) and ACPE (n = 16; 36.4%) were the two main precipitating factors of AHRF. In contrast, pleural effusion (n = 2; 4.5%) and extra-respiratory infection (n = 1; 2.3%) were rarely involved. In a study by Chebib et al. [26], congestive heart failure was responsible for AHRF in more than half of patients (54%), followed by acute pulmonary embolism (10.8%), and in slightly less than one-third of patients, no cause could be identified. Carrillo et al. [30] found that most AHRF episodes were triggered by respiratory infections, while a cardiac origin was much less often involved than in our study. For a minority of patients, no cause could be determined, as was the case for our patients (6% vs. 11.4%, respectively). In an analysis by Bry et al. [28], three main origins were identified: respiratory tract infections (32%), ACPE (28%), and exacerbation of COPD. However, most of the reported AHRF cases were regarded as idiopathic (62%), contrary to our results and those of Carrillo et al. [30]. Similarly, causes of AHRF were specified in only 21 of the 61 patients (34.4%) assessed by Marik and Desai [10]. Identified causes in that study were dominated by pneumonia, extra-respiratory infections (urosepsis and limb cellulitis), and acute CRI. Overall, the common causes of AHRF in patients with OHS are respiratory infections, ACPE, and sepsis [42], but idiopathic forms are also frequent. Therefore, OHS patients with AHRF need to be meticulously assessed for infection, with a careful examination of heart function and loading condition [26]. Other less common causes include pleural effusion, pulmonary embolism, depressant drugs, surgery, rib contusion and fractures, supine immobilization, and thoracic bracing [28, 30, 43]. In our cohort, NIPPV using BIPAP was highly effective in treating AHRF, with only 2.27% of patients failing the modality. Moreover, NIPPV was well tolerated by most of the patients, as no serious side effects or complications were observed. In cohorts of OHS patients admitted with AHRF, the efficacy of NIPPV has been highly variable, with reported failure rates of between 0% and 60% [44]. According to these studies, higher NIPPV success rates were observed in OHS patients with idiopathic episodes of AHRF and those with high PaCO2 levels [26, 28, 30, 45]. In contrast, higher NIPPV failure rates were found to be associated with several factors, mainly mOHS, severe hypoxemia, and infectious pneumonia as the main precipitating factor of AHRF [10, 32, 45, 46]. Both Lemyze et al. [32] and Marik and Desai [10] reported high NIPPV failure rates of 17% and 39.7%, respectively, in severely obese patients (mean BMI of 46.6 and 48.9 kg/m², respectively) with mOHS. Of interest, initial IPAP and EPAP levels were higher in our study than in the study by Lemyze et al., which may have helped us achieve a higher success rate despite a high prevalence of mOHS (more than one-third of our patients) and a low nurse-to-patient ratio. As mentioned in a review by Shah et al. [47], high EPAP and IPAP levels are usually required to successfully ventilate patients with OHS. Initial EPAP levels should be set at a minimum of 5 cmH2O, but a maximum of 15 cmH2O may be required, particularly during sleep. IPAP should be at least 10 cmH2O higher than EPAP and can be as high as 30 cmH2O [44].
Our study corroborates data from many other studies supporting the efficacy of NIPPV in the management of AHRF in patients with OHS. Moreover, Carrillo et al. [30] found that NIPPV tended to be more effective in OHS patients than in COPD patients, with better outcomes including lower rates of heart failure, lower intrahospital mortality, and a higher survival rate at 1 year. Despite all these data, current guidelines recommend the use of NIPPV in AHRF only in COPD patients. In contrast, there is not yet a published consensus on the indications for NIPPV in AHRF complicating OHS, although new guidelines were recently published for the management of stable OHS [14].
Of our 43 patients discharged alive, domiciliary BIPAP was used in 26 patients (61.9%), while nocturnal CPAP was used in 16 patients (38.1%). Taking the results at admission as the reference point, we observed an improvement in gas exchange, as attested by the significant improvement in ABG measurements at 3 months after in-hospital initiation of NIPPV therapy. However, this improvement was not significant when compared with the results at hospital discharge. Moreover, no significant difference was found in ABG measurements at 3 months between BIPAP and CPAP therapy. The global readmission rate for AHRF was 56% at 6 months and 22% at 1 year, and was independent of sex, BMI, OHS severity, and domiciliary NIPPV modality (BIPAP or CPAP). While COPD is well known for its association with a high rate of readmission after an episode of AHRF treated with NIPPV (ranging between 56% and 80% at 1 year) [48, 49], few data are available regarding patients with OHS. Chebib et al. [26] reported that 46% of patients with OHS admitted to the ICU for AHRF were readmitted for the same reasons in the following 2 years. In a study by Bry et al. [28], the global readmission rate for AHRF at 1 year was 20%. Patients ventilated at home had lower 1-year readmission rates than those who were not; however, the difference was not significant (10% vs. 36%, respectively; p = 0.07). Carrillo et al. [30] found that patients with OHS had a 1-year readmission rate as high as that observed in patients with COPD (> 50%). Curiously, patients with OHS treated with domiciliary NIPPV had higher 1-year readmission rates when compared to those who were not ventilated at home. Yet, this difference did not remain significant after adjustment for confounding variables (adjusted OR 1.31; 95% CI, 0.71-2.41; p = 0.39). Taken together, these data suggest that readmission to hospital after an episode of AHRF treated by NIPPV is probably more frequent in patients with COPD than in patients with OHS. However, they do not allow conclusions as to which type or modality of home ventilation, if any, could reduce the long-term readmission probability in the latter.
Our study had several limitations. First, given its retrospective nature, not all potentially eligible patients could be included, either because of lack of data or because of undiagnosed cases of OHS. This led to a small sample size, with many patients lost to follow-up. Some key variables, such as the impact of the interface on the effectiveness of NIPPV therapy and compliance with home NIPPV, could not be assessed. Moreover, we could not ascertain deaths occurring out of hospital in patients lost to follow-up, which may imply an underestimation of the mortality rate. Second, the baseline ventilatory patterns (pure OHS, pure OSAHS, or OHS + OSAHS) were not known at patients' admission, which may have impacted the titration of BIPAP settings. Added to the monocentric design of the study, this means our results may not be reproducible in other centers with different approaches to BIPAP and different inpatient settings. The limited number of patients and the small number of events (only 1 BIPAP failure out of 44 patients with OHS and 4 deaths) did not allow any multivariate analysis of our results [50]. Finally, the observational design of our study does not allow clear conclusions about the efficiency of BIPAP in AHRF management; therefore, randomized controlled trials are still required.
Conclusion
Our study shows that OHS remains a frequently unrecognized cause of AHRF even in morbidly obese patients. The two leading precipitating factors of AHRF in patients with OHS are ACPE and respiratory infections; hence the need for a careful examination of heart function and loading condition, with appropriate investigations for infections. BIPAP is highly effective in the management of AHRF even in patients with mOHS. All patients discharged alive except one were treated with home NIPPV, either BIPAP or CPAP, and had a low long-term mortality rate. No factor was independently associated with a higher risk of readmission.
Data Availability
Data are available on request through the authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-02-19T16:11:42.962Z | 2022-02-17T00:00:00.000 | {
"year": 2022,
"sha1": "8f5f169812bf66816f97101c1b5801bbbd7361a6",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "885ea3faa167caedc13109a756122b1acd9001a1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
8469243 | pes2o/s2orc | v3-fos-license | Spitzer IRAC Images and Sample Spectra of Cassiopeia A's Explosion
We present Spitzer IRAC images, along with representative 5.27 to 38.5 micron IRS spectra of the Cassiopeia A supernova remnant. We find that various IRAC channels are each most sensitive to a different spectral and physical component. Channel 1 (3.6 micron) matches radio synchrotron images. Where Channel 1 is strong with respect to the other channels, the longer-wavelength spectra show a broad continuum gently peaking around 26 micron, with weak or no lines. We suggest that this is due to un-enriched progenitor circumstellar dust behind the outer shock, processed by shock photons and electrons. Where Channel 4 (8 micron) is bright relative to the other IRAC channels, the long-wavelength spectra show a strong, 2-3 micron-wide peak at 21 micron, likely due to silicates and proto-silicates, as well as strong ionic lines of [Ar II], [Ar III], [S IV] and [Ne II]. In these locations, the dust and ionic emission originate from the explosion's O-burning layers. The regions where Channels 2 (4.5 micron) and 3 (5.6 micron) are strongest relative to Channel 4 show a spectrum that rises gradually to 21 micron, and then flattens or rises more slowly to longer wavelengths, along with higher ratios of [Ne II] to [Ar II]. Dust and ionic emission in these locations arise primarily from the C- and Ne-burning layers. These findings are consistent with asymmetries in the explosion producing variations in the velocity structure in different directions, but preserving the nucleosynthetic layers. At each location, the dust and ionic lines in the mid-infrared, and the hotter and more highly ionized optical and X-ray emission, are then dominated by the layer currently encountering the reverse shock in that direction.
Introduction
Cassiopeia A (Cas A) is the youngest supernova remnant (SNR) in our galaxy, thought to be the result of either a type Ib or IIn supernova explosion (Chevalier & Oishi 2003) occurring in 1671 (Thorstensen, Fesen, & van den Bergh 2001) at a distance of 3.4 kpc (Reed et al. 1995). The remnant has been studied extensively at many wavelengths, and is one of the brightest radio and X-ray sources in the sky. Its primary structures are a 105′′ radius bright ring surrounded by a 150′′ radius low surface brightness plateau (Braun 1987). The outer plateau is bordered by a thin X-ray ring identified as the outer shock in the circumstellar medium (CSM), with the broader, brighter interior ring originating from stellar ejecta that have encountered the reverse shock (Gotthelf et al. 2001).
The X-ray emission is characterized by a thermal spectrum containing emission lines from highly ionized atoms. Optical emission from the remnant is dominated by chemically-enriched knots. Infrared emission was previously known to contain thermal continua from heated dust, line emission from ionized atoms, and, shortward of about 5 µm, synchrotron emission from electrons accelerated in shock regions (Jones et al. 2003; Rho et al. 2003). Submillimeter observations of Cas A prove difficult due to a molecular cloud complex along the line of sight, and the presence of cold dust in the remnant is therefore still in question (Krause et al. 2004; Wilson & Batrla 2005). The radio emission from Cas A is synchrotron radiation (Ginzburg & Syrovatskii 1965).
Cas A's structure and dynamics reflect different progress into the Sedov-Taylor evolutionary phase in different directions. The bright ring remains illuminated as new, successively slower-moving ejecta encounter the reverse shock and are heated and ionized. A Doppler analysis of the X-ray gas, studies of the optical knots, and the large abundance ratio of 44Ti/56Ni support an asymmetric explosion (Reed et al. 1995; Hwang & Laming 2003; Willingale et al. 2003; Nagataki et al. 1997). The MIPS images (Hines et al. 2004) showed both the main X-ray jet (Hwang, Holt, & Petre 2000) and a counterjet (Hwang et al. 2004), providing further evidence for explosion asymmetry.
The progenitor of Cas A is generally believed to have been a WN star (i.e., a Wolf-Rayet star with high nitrogen abundance), due to the high abundances of N and H in some of the Fast-Moving Knots (FMKs; Kamper & van den Bergh 1976; Fesen & Becker 1991). The hydrodynamical model of Pérez-Rendón, García-Segura, & Langer (2002) suggests a 29-30 M⊙ progenitor, while Young et al. (2006) find the overall data are best fit by a 15-25 M⊙ progenitor that loses its hydrogen envelope in a binary interaction. The pre-supernova wind produced a dense, clumpy medium (Chevalier & Oishi 2003) which is currently being shocked by the blast wave. The highest density shocked clumps are seen in optical emission as slow-moving Quasi-Stationary Flocculi (QSFs; van den Bergh & Kamper 1985).
Rich in optical, X-ray and infrared emission lines from ionized atoms (Fesen et al. 2001; Douvion, Lagage, & Cesarsky 1999; Hwang, Holt, & Petre 2000), Cas A provides an important window into both quasi-equilibrium (pre-explosion) and explosive nucleosynthesis. Each layer contains several different elements, with C and O produced in He-burning, Ne and Mg first appearing through C-burning, and O and Al added with Ne burning (Woosley & Weaver 1995; Woosley & Janka 2005). When O and Mg burn, the heavier elements Si, S, Ar, and Ca are produced, and their burning products yield the Fe group elements. In Cas A, much of this layered nucleosynthetic structure has been preserved following the supernova explosion, e.g., with layers of nitrogen-, sulfur- and oxygen-rich ejecta seen beyond the outer shock (Fesen 2001). There is also some evidence for mixing of the nucleosynthetic layers on various scales. Optical "mixed emission knots" show both N and S lines, suggesting that high speed clumps of S ejecta penetrated through the outer N-rich layers (Fesen 2001). At infrared wavelengths, ISOCAM observations showed the presence of Ar and S in strong Ne knots, although the Ne and silicate emissions appeared anti-correlated (Douvion, Lagage, & Cesarsky 1999). The X-ray line data show some large-scale overturning of ejecta layers (Hughes et al. 2000; Hwang & Laming 2003); iron in the SE region is further out and is moving faster than the Si/O regions.
The variations in ionic species at different locations, as probed by optical, infrared, and X-ray observations, are sensitive to temperature, density and ionization state. Thus, multiple-wavelength detections of a species from different ionization states can help separate these effects from actual abundance variations (Vink et al. 2001). This provides one important motivation for our Spitzer studies. Density information can be derived from comparing different line strengths from the same ionization state, e.g., for [S III] (Houck et al. 1984). All of this information aids in reconstructing a picture of the inhomogeneities in the explosion, produced, e.g., through instabilities deep in the core (Foglizzo 2002; Blondin et al. 2003).
Another key motivation for the Spitzer observations presented here was to understand the production and destruction of dust in Cas A. One possible location for dust production is in optically thick clumps of ejecta, such as proposed by Lucy et al. (1991) for SN1987A. They suggest that most of the dust is contained in such optically thick knots, with small grains being distributed diffusely between them. Ejecta knots that are dense enough should remain largely intact with the passage of the reverse shock and blast-wave. If the dust contained in dense clumps is not in equilibrium with the more diffuse X-ray gas, then it may remain at a colder temperature.
In Cas A, Dwek et al. (1987) detected a strong excess of emission in Infrared Astronomy Satellite (IRAS) observations at 12 µm to 100 µm, largely from dust swept up by the supernova blast wave (the outer shock). Spitzer MIPS observations of Cas A (Hines et al. 2004) also detected this thermal dust emission from shocked circumstellar material. In addition, emission has been seen from a hot dust component associated with both the optical (Dwek et al. 1987; Fesen et al. 2001) and the X-ray ejecta (Douvion, Lagage, & Pantin 2001). Dust continua found in Cas A with the Infrared Space Observatory (ISO) were fit at 21 µm with proto-silicates (Arendt et al. 1999) or MgSiO3 and SiO2 (Douvion, Lagage, & Pantin 2001). Those data suggest that the dust is heated continuously, presumably by the hot X-ray emitting gas (Dwek et al. 1987; Hines et al. 2004). As noted above, the amount of cold dust associated with Cas A is still uncertain.
In this paper, we report the results of Spitzer Space Telescope images made using the Infrared Array Camera (IRAC, Fazio et al. 2004), with brief supporting data from the Infrared Spectrograph (IRS, Houck et al. 2004) and images from other wavelengths. We show evidence for different nucleosynthetic layers currently encountering the reverse shock in different directions. In subsequent papers, we will address the physical conditions and dynamics of the gaseous material, and the detailed composition, temperature structure and mass estimates of the dust components.
Observations
IRAC observations covered the entire Cas A supernova remnant, including the outer shock, jet and counterjet regions. The IRAC images utilize four wide filters with central wavelengths of 3.6, 4.5, 5.6 and 8 µm for Channels 1, 2, 3 and 4, respectively. The data were taken on January 18, 2005. The observing strategy combined a mapping grid and dithers to yield a depth of coverage of at least 18 pointings over the entire remnant, with higher coverage in some overlap regions. At each pointing, a 0.6 s and a 12 s frame were taken. The IRAC images have an angular resolution of 2-2.5′′ in Channels 1-2 and ≈3′′ in Channels 3-4. The data were processed with the S11 version of the IRAC pipeline (Lowrance et al. 2006). The four IRAC images are shown in Figure 1.
The IRS was used on January 13, 2005 to spectrally map the full remnant (with portions of the outer structures missing from some slits), covering 5-15 µm (Short-Low module, SL) and 15-38 µm (Long-Low module, LL). Each module included two wavelength orders, and the two long, low-resolution slits provided resolving powers of 64-128. The long-wavelength (15-38 µm) spectra were taken in a single large map with 4 × 91 pointings, using a single 6 second ramp at each position. To achieve the spatial coverage with the short-wavelength (5-15 µm) slit, a set of four quadrant maps were made, two with 4 × 87 pointings and two with 3 × 87 pointings, using a 6 second ramp at each position. The mapped area ranged from 6.26′ × 5.86′ (SL) to 11.0′ × 7.79′ (LL), with offsets between the maps produced in each of the two orders in each module of 3.2′ (LL) and 1.3′ (SL), along the slit direction. The effective overlap coverage of all modules and orders is 4.9′ by 5.8′. The illustrative spectra presented here were processed with the S12 version of the IRS pipeline, using the Cubism package to reconstruct the spectra at each position. They were extracted from the IRS cubes using areas from 10′′ to 33′′ across. No detailed matching of spectral overlap amplitudes between the SL and LL detectors was done. Occasional instrumental problems, such as spectral fringing at the short-wavelength end of LL2, are not addressed and do not affect the analysis presented here.
For comparisons with the IRAC images, we also performed near-infrared observations with a narrow Pa β filter, using the Palomar 200-inch Wide-field Infrared Camera (WIRC). The Pa β filter is centered on 1.282 µm with a 1% width. The data were taken on Aug 15 and 16, 2005. The exposure time was 9 × 90 seconds per sky position.
IRAC images and comparisons to other bands
The four IRAC images in Figure 1 each show the same overall structure of the remnant, including the bright ring, the surrounding low surface brightness plateau, and the eastern jet. The plateau region and the internal filamentary emission are most apparent in Channel 1. The large oval ring covering the northern third of the remnant is prominent in Channels 2 and 4, weak in Channel 3, and just visible in Channel 1. Channels 3 and 4 show significant diffuse and patchy emission beyond the plateau, likely associated with the surrounding medium.
A color image combining all four IRAC images is shown in Figure 2. There are large spatial variations in the relative strengths of the IRAC channels, resulting in the broad range of IRAC "colors". In order to examine the colors more quantitatively, we isolated those regions where each channel was strongest with respect to Channel 4, and determined the mean surface brightness in those regions for each of the four channels. The results are shown in Figure 3. The very large jump in brightness to Channel 4 (a factor of ≈10) is likely caused by the presence of [Ar II] and [Ar III] emission in the IRAC band, as discussed further below. Unfortunately, the IRS spectra do not cover Channels 1 and 2, and cover only part of Channel 3, so we cannot perform a quantitative analysis of these various IRAC colors; below, we suggest a few possible contributors to Channels 2 and 3.
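The region-selection step described here can be sketched with numpy as follows; the maps, the 95th-percentile cut, and the brightness floor are placeholders rather than the values actually used, and real mosaics would need PSF matching and background subtraction first:

```python
# Hedged sketch: pick pixels where a channel is brightest relative to
# Channel 4, then average every band over that mask.
import numpy as np

rng = np.random.default_rng(2)
bands = {ch: rng.random((512, 512)) for ch in (1, 2, 3, 4)}  # MJy/sr maps

def mean_brightness_where_enhanced(bands, ch, floor=0.05):
    """Mean surface brightness of every band over pixels where `ch`
    is strongest with respect to Channel 4."""
    ratio = bands[ch] / np.maximum(bands[4], floor)   # avoid divide-by-zero
    mask = ratio > np.nanpercentile(ratio, 95)        # "most enhanced" pixels
    return {b: float(np.nanmean(img[mask])) for b, img in bands.items()}

for ch in (1, 2, 3):
    print(ch, mean_brightness_where_enhanced(bands, ch))
```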
In order to understand the origins of these IRAC color differences, we therefore first compare the images in each IRAC channel with those from other bands. The IRAC Channel 1 image is very similar to that seen in the radio (Figure 4), with interior, ring, and plateau emission. By contrast, the bright ring dominates the emission in Channel 4, mirroring Cas A's appearance in the optical, in the 24 µm MIPS image, and in X-ray line images (Figure 5). Channels 2 and 3 also are dominated by bright ring emission, although there are distinct differences in the brightness of various features between them and Channel 4. Figure 6 shows a comparison of Channels 2 and 3 with the [Fe II] (18 µm) and Pa β (1.3 µm, with some [Fe II] contamination) images. The results of these comparisons are all discussed in more detail below, after we have examined the correspondence between the IRAC colors and the shapes of the IRS spectra.
We find no evidence in the IRAC images for emission from Cas A's compact X-ray source (Tananbaum 1999) above the variations in flux near the center of the remnant. The 3-sigma upper limits are 50 µJy for Channels 1 and 2, 100 µJy for Channel 3, and 220 µJy for Channel 4.
IRS Spectra
The 5.3-38.5 µm IRS spectrum for the full remnant is presented in Figure 7, showing both the average continuum shape and Doppler-broadened ionic line emission. We indicate where the IRS spectral coverage overlaps IRAC Channels 3 and 4; there is no IRS coverage in the IRAC Channel 1 and 2 bands. We also extracted spectra from 22 different regions in the remnant, chosen to explore a broad range of possible physical properties by comparing the IRAC color image to the MIPS 24 µm, X-ray, optical, and radio images. These sample spectra showed a variety of relative line strengths and continuum shapes, especially in the relative strength of the peak around 21 µm, as expected from the ISO work of Douvion, Lagage, & Pantin (2001). We found that the spectra fell into three major categories, as follows: Broad, showing a gentle peak around 10 µm and rising to a very broad, gradual peak around 26 µm, with little line emission; Strong 21 µm, showing a 2-3 µm wide strong asymmetric peak at 21 µm, similar to those studied with ISO (Douvion, Lagage, & Pantin 2001), along with strong lines of Ar, Ne, Si, S, and 26 µm Doppler-blended Fe and O; and Weak 21 µm, rising gently through 21 µm and gradually becoming shallower to longer wavelengths, accompanied by stronger [Ne II] but relatively weaker [Ar II] lines. Figure 7 shows the average shape for each of these classes; the variations of shape within each class are indicated by the grey bands.
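For illustration, one way such spectra could be sorted automatically is by the contrast of the 21 µm peak over the flanking continuum. The thresholds below are invented, and this is only a crude proxy; the paper's classification was done by inspection of the extracted spectra, and a real classifier would also need to test the slope toward 26 µm to separate "broad" from "weak 21 µm" shapes:

```python
# Rough, invented-threshold sketch of the three continuum classes.
import numpy as np

def spectral_class(waves_um, flux, strong=1.5, weak=1.1):
    base = np.interp([17.0, 25.0], waves_um, flux).mean()   # flanking continuum
    peak = flux[np.abs(waves_um - 21.0) < 1.5].max()        # 21 um window
    contrast = peak / base
    if contrast > strong:
        return "strong 21 um"
    if contrast > weak:
        return "weak 21 um"
    return "broad"                       # no distinct 21 um peak

waves = np.linspace(5.3, 38.5, 300)
toy = 1 + 0.8 * np.exp(-0.5 * ((waves - 21) / 1.2) ** 2)    # synthetic peak
print(spectral_class(waves, toy))        # -> strong 21 um
```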
We find a good correspondence between the IRS spectral shapes and the IRAC colors, as illustrated in Figure 8. When Channel 4 (with major contributions from [Ar II]) is strong with respect to Channel 2, the IRS spectra show the strong 21 µm shape. When Channel 4 is weaker, the ratio of Channels 2 and 3 distinguishes between broad spectra and weak 21 µm spectra. A relatively high ratio of Channel 1 to Channels 2 or 3 is also a good indicator of broad spectra. Using the various IRAC colors, we can identify locations in the remnant where each IRAC channel in turn appears strongest compared to the other channels. Table 1 summarizes the typical properties of regions where each respective channel is so enhanced. We now look at each of the IRAC channels in turn, and discuss the likely origins of their emission.
Strong IRAC Channel 1 (3.2 -3.9 µm)
The Channel 1 image is shown in Figure 4, along with a λ6 cm radio image from DeLaney (2004). The detailed correspondence between these two images shows that Channel 1 is dominated by synchrotron emission. The synchrotron nature of the emission at 2.2 µm was first suggested based on its morphological similarity to the radio emission, and then established both by its polarization (Jones et al. 2003) and by its brightness at levels expected from extrapolations of the radio spectrum (Rho et al. 2003; Jones et al. 2003). We now extend the detection of synchrotron radiation to the mid-infrared; in IRAC Channel 1 we see the same bright ring, faint plateau and filamentary structures as in the radio, at brightness levels comparable to those calculated from an extrapolation of the radio spectrum. In the forward shock region, where IRAC Channel 1 is most enhanced relative to the other channels, synchrotron radiation makes substantial contributions to all the IRAC channels (Figure 3). This region also shows significant 4-6 keV X-ray emission in the form of a thin rim of emission at the edge of the radio plateau. The X-ray rim has been identified as marking the location of the outer (forward) shock (Gotthelf et al. 2001) and is likely dominated by synchrotron emission (Vink et al. 1999). A detailed analysis of the radio/infrared/X-ray spectral shape holds important clues to the relativistic particle acceleration mechanism (Ellison, Decourchelle & Ballet 2005) but is beyond the scope of this paper.
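The brightness-level check mentioned above amounts to extrapolating a radio power law S_ν ∝ ν^α to 3.6 µm. A minimal sketch, with placeholder flux density and spectral index rather than measured Cas A values:

```python
# Power-law extrapolation of a synchrotron spectrum to the IRAC 1 band.
def synchrotron_extrapolation(s_radio_jy, nu_radio_ghz, nu_ir_ghz, alpha):
    """Predicted flux density at nu_ir for S_nu ~ nu**alpha."""
    return s_radio_jy * (nu_ir_ghz / nu_radio_ghz) ** alpha

C_UM_GHZ = 2.998e5                 # speed of light, so nu[GHz] = C / lambda[um]
nu_ch1 = C_UM_GHZ / 3.6            # IRAC Channel 1 central frequency

# Placeholder inputs: 100 Jy at 5 GHz with alpha = -0.77 (illustrative only).
s_pred = synchrotron_extrapolation(s_radio_jy=100.0, nu_radio_ghz=5.0,
                                   nu_ir_ghz=nu_ch1, alpha=-0.77)
print(f"{s_pred:.3f} Jy predicted at 3.6 um")
```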
Substantial emission from the forward shock region can also be seen in the MIPS 24 µm image (Figure 5, Hines et al. 2004). However, the 24 µm brightness reaches up to a factor of ≈50 above the values extrapolated from IRAC Channel 1, so it must be due to another emitting component. IRS spectra of this spatial component show the characteristic "broad" shape shown in Figure 7. Previous observations with ISOCAM (Arendt et al. 1999) found these spectra rising to 18 µm; with IRS, we now see a broad peak around 26 µm, a smaller "bump" around 9 µm, and little or no line emission. The IRS spectrum of the forward shock region shown here can be approximately fit by a blackbody Planck function with a temperature of 113 K, multiplied by the absorption efficiency calculated for grain models with R_V = 3.1 (Weingartner & Draine 2001). The silicate emission feature between 9 and 11 µm from the interstellar medium can also be seen; such silicate dust from the circumstellar/interstellar medium is expected in the forward shock region.
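A minimal sketch of such a fit, using a toy power-law emissivity in place of the tabulated Weingartner & Draine (2001) absorption efficiencies (the value of beta below is an assumption for illustration, not a fitted quantity):

```python
# Planck function at ~113 K times a crude Q_abs(lambda) stand-in.
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck_lambda(wave_um, temp_k):
    """Blackbody spectral radiance B_lambda (SI units)."""
    lam = wave_um * 1e-6
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * temp_k))

def modified_blackbody(wave_um, temp_k=113.0, beta=1.5):
    q_abs = wave_um ** (-beta)            # toy emissivity law, not WD01
    return q_abs * planck_lambda(wave_um, temp_k)

waves = np.linspace(5.3, 38.5, 200)       # IRS low-resolution coverage
model = modified_blackbody(waves)
print(f"model peaks near {waves[np.argmax(model)]:.1f} um")
```

Multiplying the Planck function by a wavelength-dependent efficiency shifts the peak blueward of the pure-blackbody Wien value, which is why a 113 K fit can reproduce a broad peak in the mid-infrared.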
3.4. Strong Channel 4 (6.4 -9.4 µm)
The IRAC Channel 4 image (Figures 1 and 5) is dominated by line emission, chiefly [Ar II]; other lines are also seen, but these are not exclusive to bright Channel 4 regions. In most locations associated with the bright ring, Channels 2 and 3 are weak with respect to Channel 4 (e.g., as seen in Figure 3).
The shape of the continuum of bright Channel 4 regions is characterized by a 2-3 µm wide peak at 21 µm, similar to that seen by Arendt et al. (1999), who modeled it as due to magnesium proto-silicates. This 21 µm peak thus leads to an excellent correspondence between Channel 4 and the MIPS 24 µm image (Figure 5). However, the MIPS image also shows strong emission from the outer shock, whose spectra peak around 26 µm, without any corresponding strong Channel 4 emission.
Many of the details of the Channel 4 emission can also be seen in the optical HST WFPC2 images (Figure 5; Fesen et al. 2001). Some of the large-scale differences between the Channel 4 and WFPC2 images are due to variations in optical extinction. The 0.3 -10 keV X-ray emission (Hwang, Holt, & Petre 2000) also shows some similarities to the Channel 4 image; a better correspondence is seen in the X-ray Si (shown in Figure 5) and S emission. Ionic lines from lower ionization states of these two elements are seen in the IRS spectra of strong Channel 4 regions.
The ratio of [Ar II] to [Ne II] is highest in the bright Channel 4 regions, partly due to selection. In order to quantify this, we created continuum-subtracted line maps around the [Ne II] (12.8 µm) and [Ar II] (6.99 µm) lines, and calculated the average ratio of those lines (see Table 1) in regions where the Channel 4 brightness was above 25 MJy/sr. This will serve as a standard of comparison for the [Ar II]/[Ne II] ratio in places where the other IRAC channels are strongest relative to Channel 4.
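The measurement described in this paragraph reduces to masking the continuum-subtracted line maps by an IRAC Channel 4 brightness cut and averaging their ratio. The sketch below illustrates the procedure with placeholder arrays standing in for the actual line maps, continuum estimates, and Channel 4 image.

```python
import numpy as np

# Placeholder data standing in for the actual IRS line maps and IRAC image.
rng = np.random.default_rng(0)
shape = (64, 64)
line_neii = rng.random(shape) + 1.0   # stand-in [Ne II] 12.8 um map
line_arii = rng.random(shape) + 2.0   # stand-in [Ar II] 6.99 um map
cont_neii = np.full(shape, 0.5)       # local continuum under each line
cont_arii = np.full(shape, 0.5)
irac_ch4 = 50.0 * rng.random(shape)   # stand-in Channel 4 image, MJy/sr

# Continuum-subtracted line maps.
neii = line_neii - cont_neii
arii = line_arii - cont_arii

# Average the line ratio only where Channel 4 exceeds 25 MJy/sr.
mask = irac_ch4 > 25.0
ratio = np.mean(arii[mask] / neii[mask])
print(f"mean [Ar II]/[Ne II] in bright Ch4 regions: {ratio:.2f}")
```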
Relatively Strong IRAC Channels 2 (4 -5 µm) and 3 (5 -6.4 µm)
Channel 2, and to a lesser extent, Channel 3, have significant contributions from synchrotron radiation in some locations such as the northern forward shock region and interior filamentary structures (see Figure 3). We have therefore subtracted the Channel 1 image from Channels 2 and 3 to create the residual images in Figure 6. The similarities between Channel 2 and Channel 4 are now more apparent, although the relative brightnesses of features vary by at least a factor of 5. For example, Channels 2, 3, and 4 all trace out the same large oval structure in the North, but their relative strengths vary abruptly between nearby knotty structures (see Figure 2). Sometimes, as in the far north of the bright ring, these changes are actually due to dynamically distinct but superposed features.
Bright Channel 2 regions show both strong 21 µm and weak 21 µm spectra, but when Channel 2 is brightest with respect to Channel 4, we find only weak 21 µm spectra, along with low [Ar II]/[Ne II] ratios (see Table 1). These low ratios occur because [Ar II] (Channel 4) is weak in these regions. Most of the relatively bright Channel 3 regions also have low [Ar II]/[Ne II] ratios and weak 21 µm spectra, although some show modest peaks around 21 µm.
There is no IRS coverage for Channel 2; we consider possible line contributions below. Channel 3 is covered by the IRS from 5.3 to 6.4 µm, but there is no coverage between 5.0 and 5.3 µm. In the accessible wavelength range we found no ionic lines dominating the emission (Figure 6). [Fe II] emission is also seen at 1.64 µm (Rho et al. 2003) and in the spectra of some FMKs around 1.2 µm. Comparison with our near-infrared measurements suggests that [Fe II] might contribute up to 50% of the Channel 3 emission in some isolated locations, but a negligible amount in many bright Channel 3 regions. The rest of the Channel 3 emission is likely from the dust continuum. When the synchrotron emission is subtracted from Channel 3, we also find a few isolated bright patches in the northeast jet and elsewhere (Figure 6). These bright patches are not coincident with features seen in Channels 2 and 4, and their IRS spectra show the "broad" shape characteristic of the forward shock.
What is Channel 2 (4 -5 µm)?
In the absence of IRS coverage in the Channel 2 wavelength range, we summed the 17 available SWS spectra from the ISO archives (http://www.iso.esac.esa.int/ida) but found no strong lines shortward of the 6.99 µm [Ar II] line. However, the fact that Channel 2 is usually brighter than both Channel 1 and Channel 3 (Figure 3) indicates the presence of line emission.
There are two distinct questions regarding the origins of the Channel 2 emission -what dominates when Channel 4 (and [Ar II]) is also strong, and what dominates when Channel 4 (and [Ar II]) is weaker, but [Ne II] is still strong? [Fe II] has several lines in the Channel 2 (4 -5 µm) band, but the 5.3 µm emission is quite weak. Looking at the 18 µm [Fe II] structure, there is little or no emission where Channel 2 is strong relative to Channel 4 (see Figure 9 for the locations of these regions), so [Fe II] is unlikely to provide the missing lines in Channel 2.
If pieces of the hydrogen envelope survived the WR-wind stage of Cas A, then Br α at 4.05 µm could be present in the ejecta. Since ground-based spectroscopy of Br α is difficult, we obtained an image of the Pa β 1.28 µm line using the Wide-Field Infrared Camera on the 5 m Hale Telescope at Palomar Observatory. Some nearby [Fe II] lines, seen in FMKs, also fall within this filter. In a number of regions, the Pa β, [Fe II] and Channel 2 images trace out the same structures, so Br α might be responsible for some Channel 2 emission. These are also the regions where Channel 4 is strong. However, there are other locations where Channel 2 is strong, such as the jet and the crescent-shaped regions seen in Figure 9, that are weak or absent in the Pa β image, so Br α is unlikely to be playing a key role. We also see no evidence for Pf α at 7.46 µm.
Another strong candidate is the CO fundamental bandhead around 4.76 µm, as seen in regions with shocked CO (González-Alfonso et al. 2002). This bandhead has been detected in SN 1987A (Meikle et al. 1989; Kotak et al. 2005), as well as the first overtone at 2.29 µm (e.g., Catchpole et al. 1987), so we know that CO can form in supernova ejecta. The resolution of this issue requires sensitive spectra in the 2 -5 µm band.
Another possibility is H2, which we see in our IRS images around 17 µm. However, the H2 is largely exterior to the remnant, possibly associated with the surrounding CO clouds (Liszt & Lucas 1999). If H2 were dominant in Channel 2, it should also be strong in both the 5-6.4 µm and 6.4-9.4 µm spectra, but it is not.
The He II 8-7 recombination line occurs at 4.76 µm. If this were responsible for Channel 2 emission, we should also have He II 9-8 emission at 6.95 µm and 10-9 emission at 9.71 µm. The former is unfortunately coincident with the extremely bright [Ar II] emission, and we find no evidence for the latter anywhere in the remnant. At present, therefore, the origin of the line emission in Channel 2 is unclear.
Discussion
We have found the mid-infrared radiation from Cas A to arise from a number of different components: at short wavelengths, synchrotron radiation; at longer wavelengths, low-ionization lines from Ne, O, Si, S, Ar and Fe ejecta, and shock-heated dust from both ejecta and CSM. The ejecta at different locations are further distinguished from each other by their colors in the IRAC bands, by the relative line strengths of different elements, and by the shape of their dust continua. The variations occur on both small and large spatial scales. The same spatial variations characterize the optical and X-ray line emission from ejecta, although these are from much higher ionization states. In this discussion, we briefly summarize the structure of the multiwavelength appearance of the ejecta, and the implications of these new Spitzer observations for the dynamics of Cas A's explosion.
A consistent picture of the ejecta structure emerges from images at different wavelengths. Line emission from elements such as Si and S dominates the optical (Fesen et al. 2001) and X-ray (Hwang, Holt, & Petre 2000) emission, from moderate and high ionization states. These appear structurally as a partially illuminated bright circular ring at the same location as the radio bright ring. In addition, both optical and X-ray observations show the NE "jet" and two interior elliptical rings towards the North. IRAC Channel 4 (6.4 -9.4 µm), which has major contributions from [Ar II] and [Ar III] emission, shows all of these same structures.
A quite different picture of the ejecta emerges from IRAC Channel 3 (5 -6.4 µm), which is dominated by dust, and in regions where Channel 2 (4 -5 µm) is strongest with respect to Channel 4. Regions with high Channel 2 / Channel 4 ratios can be seen in Figure 9, where the most prominent features are two bright crescents. The ratio of [Ar II]/[Ne II] from our IRS spectra (see Table 1) is low in these regions, so they are relatively Ne-rich, although Ne does not directly contribute in the IRAC bands. The northern crescent is also shown in Figure 9 overlaid on the WFPC2 F450W image, showing that the same crescent structure appears in [O III] λλ 4959, 5007 emission. This feature is also seen in an X-ray image in the oxygen emission around 0.64-0.71 keV, using unpublished ACIS data from our proper motion studies. The brightness of the northern crescent in the F850LP WFPC2 image, which is primarily sensitive to [S III] λλ 9069, 9531 emission, is a factor of two lower, compared to the [O III] emission, than in surrounding regions. The southern crescent falls in a very distinct X-ray gap, as seen in Figure 9. Multiple-epoch HST images show brightening [O III] emission at this location (R. Fesen, private communication). Although there is some X-ray Si, S, or Fe emission in the general area of the crescents (Hwang, Holt, & Petre 2000), they do not show the detailed structural correspondence seen in the O X-rays.
The ejecta thus appear to spatially segregate in a way consistent with different nucleosynthetic layers. The brightest optical, X-ray and 8 µm (Channel 4) emission is dominated by lines from the O-burning layers, e.g., Si, S and Ar. Other regions, such as the crescents, are distinguished by their relatively strong 4 -6 µm emission and the relatively strong products of C-burning, neon (infrared) and oxygen (optical and X-ray). This suggests that the appearance of different elements at different locations in the remnant reflects which nucleosynthetic layer is locally illuminated.
This picture receives strong support from the different types of dust found associated with regions of different IRAC colors. Where Channel 4 is strong, the IRS spectra show a broad spectrum with a 21 µm peak, which requires the presence of silicates, e.g., MgSiO3 or Fe or Mg proto-silicates (Lagage et al. 1996; Arendt et al. 1999; Rho et al. 2006). Thus, the O-burning product Si is also seen in the dust. Where Channels 2 and 3 are strongest relative to Channel 4, the gently rising weak 21 µm spectra are dominated by Al2O3, i.e., the C-burning products. The detailed models of dust temperature and composition, including contributions from carbon dust, are presented in Rho et al. (2006).
These findings provide a new perspective on the distribution of different elements on Cas A's sparsely-covered spherical shell, which appears in projection as the bright ring. Apparent differences in composition at different locations could arise from variations in temperature and/or ionization states. Similarly, variations in ionization timescale (Hwang & Laming 2003) and density can significantly affect which elements are seen. However, the results presented here indicate that differences in apparent composition likely reflect the actual local composition. In some locations, we find multiple indicators for only C-burning products, e.g., neon, and oxygen in the form of [O III], [O VIII] and Al2O3 dust. In other locations we see O-burning products. When sulfur is seen, for example, it appears as [S III] (33 µm, see Fig. 7) and He- and H-like S (X-ray S XV and S XVI lines between ≈2.4 and 3.1 keV). Silicon, where it is present, appears as strong 21 µm dust (i.e., silicates), [Si II] (35 µm), and He- and H-like Si (X-ray Si XIII and Si XIV lines between ≈1.8 and 2.6 keV). These variations in composition at different locations likely reflect asymmetries in the original explosion.
We briefly outline a simple dynamical model for Cas A which incorporates these findings. The blast wave from the explosion was quite symmetric, as seen by the nearly circular appearance of the outer and global reverse shocks (Gotthelf et al. 2001), and has now swept up sufficient mass to decelerate the outer shock by a factor of ≈1.5 from free expansion. This produces an inward-moving (in the frame of the explosion) global reverse shock in the diffuse X-ray gas. Clumps of ejecta traveling at ≈5000 km/s will be encountering this reverse shock at the current epoch, driving a local reverse shock back into the clumps. This heats and compresses them, making them visible optically. Stripping, heating, ionization and disruption of these clumps then lead to decelerated X-ray and radio-emitting features (Anderson et al. 1994) and their eventual disappearance. The reverse shock then is a slowly moving front that is successively overtaken and illuminated as a bright ring by increasingly slower-moving undecelerated ejecta from the initial explosion. An illustration of the key features of this picture is shown in Figure 10.
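As a back-of-the-envelope check on this picture, undecelerated ejecta moving at ≈5000 km/s can be propagated forward to the present epoch. The remnant age used below (roughly 325 years at the epoch of observation) is an external assumption, not a number stated above.

```python
# Worked arithmetic: how far do undecelerated ejecta at ~5000 km/s travel?
# age_years is an assumed value for Cas A's age, not taken from the text.

KM_PER_PC = 3.086e13
SECONDS_PER_YEAR = 3.156e7

v_ejecta = 5000.0   # km/s, free-expansion speed quoted in the text
age_years = 325.0   # assumed remnant age at the epoch of observation

radius_pc = v_ejecta * age_years * SECONDS_PER_YEAR / KM_PER_PC
print(f"free-expansion radius: {radius_pc:.2f} pc")  # ~1.7 pc
# This is comparable to commonly quoted estimates of the reverse-shock
# radius of Cas A (again an external number, not from the text above).
```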
Superposed on this symmetric structure are major variations, the most prominent being the jet and counterjet regions. They arise deep in the explosion, producing fast-moving S-rich ejecta (Fesen et al. 2006), as well as emission from Si group elements (Hwang et al. 2004). Roughly perpendicular to this axis, we now find evidence for a much slower-moving bipolar structure, in the form of the crescents of C-burning material in the infrared, optical, and X-ray. In order to reach the global reverse shock and become visible now, this material must have moved at a free expansion speed of ≈5000 km/s. However, along this axis only these upper layers are now encountering the reverse shock. Material from the O-burning layers is not seen either at the reverse shock, or in the fast-moving outlying knots in these directions (Fesen 2001).
This scenario leads to the suggestion that if we could wait sufficiently long, we would see the O-burning layers encounter the reverse shock along the crescents' axis, and become visible at all wavelengths. We find evidence that this actually may be occurring, because of the presence of [Si II] and [S III] emission interior to and somewhat overlapping the bright ring.
Conclusions
1. The four IRAC bands dominate in different regions of the Cas A supernova remnant, echoing structures seen in optical, X-ray and radio images.
2. IRAC Channel 1 is dominated by infrared synchrotron radiation; where Channel 1 dominates, the broadband spectra have a distinct shape, gently peaking around 26 µm, which we attribute to forward-shock-heated circumstellar dust.
3. IRAC Channel 4 has a significant contribution from both [Ar II] emission and continuum. Where Channel 4 dominates, the dust continuum peaks strongly around 21 µm, signifying the presence of silicates.
4. Where IRAC Channels 2 and 3 are strongest with respect to Channel 4, [Ar II] is weaker relative to [Ne II]. The continuum in these regions rises slowly or levels off at 21 µm, showing the absence or only weak presence of silicate dust.
5. The relatively strong Channel 2 and 3 regions show optical and X-ray oxygen emission and an absence of silicon and sulfur.
For full-resolution images, please see http://webusers.astro.umn.edu/~jennis/iracpaper.html.
Fig. 7.-IRS spectra of the total remnant and the three major spectral classes. The spectra show the mean of the class, and the grey regions show the rms scatter within the class. Within each class, all spectra are normalized to have the same brightness at 30 µm. Small (up to ≈10%) cosmetic adjustments have been made to normalize the SL and LL brightness scales.
Fig. 8.-IRAC colors for the three major classes of IRS spectra.
Fig. 10.-Illustration of the dynamical model described in the text. In different directions, different layers from the explosion are currently reaching the reverse shock. There, they develop internal shocks and become visible for a short time across all wavebands before fading. X-ray and radio emission do not appear until significant deceleration has occurred. | 2014-10-01T00:00:00.000Z | 2006-10-27T00:00:00.000 | {
"year": 2006,
"sha1": "a2809afb6d966d0d720a96b1c10befbd27db9821",
"oa_license": null,
"oa_url": "https://authors.library.caltech.edu/22523/1/ENNapj06.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "30d14984a981e2813916580d7c950441ce03dd39",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
204138892 | pes2o/s2orc | v3-fos-license | New Record of a Marine Fish Parasite Nerocila trichiura (Crustacea: Isopoda: Cymothoidae) from Japan, with its Confirmed Distribution in the Western North Pacific Ocean
The cymothoid isopod Nerocila trichiura (Miers, 1877) is reported based on an ovigerous female from the ventral body surface of a flyingfish, Cypselurus hiraii Abe, 1953 (Beloniformes: Exocoetidae), in the coastal Pacific waters of central Japan. This represents the first record of N. trichiura from Japan, and C. hiraii is a new host record for this isopod. Nerocila trichiura has been reported from the tropical and middle-latitude waters of the Indian and Atlantic oceans and was recorded in 1881 from the Philippines. This paper confirms that the species occurs in the western North Pacific Ocean.
Nerocila trichiura (Miers, 1877) is a skin parasite of flyingfishes (e.g., Trilles 1975, 1994; Bruce and Harrison-Nelson 1988). Recently, we collected a specimen of N. trichiura from a flyingfish, Cypselurus hiraii, in a small bay on the Pacific coast of central Japan. This represents the first record of N. trichiura from Japan and the second from the western North Pacific Ocean.
Pereopods 1-7 increasing in size towards posterior; pereopods 1-6 without robust setae; pereopod 7 with 3 robust setae on posterior margin of carpus, with 4 robust setae on posterior margin of propodus. Pereopod 7 basis 4.3 times as long as greatest width; ischium 0.3 times as long as basis; merus 0.9 times as long as ischium, 1.0 times as long as wide; carpus 0.6 times as long as ischium, 0.7 times as long as wide; propodus 1.2 times as long as ischium, 1.8 times as long as wide; dactylus slender, 2.1 times as long as propodus, 3.7 times as long as basal width.
Uropod 1.6 times as long as pleotelson; peduncle 0.4 times as long as rami, peduncle lateral margin without setae; rami extending beyond medial point of pleotelson, tapering posteriorly. Endopod apically rounded, lateral margin weakly arched, mesial margin slightly convex, 4.0 times as long as greatest width. Exopod slender, characteristically long, 7.5 times as long as greatest width, extending far beyond end of endopod, apically rounded, lateral margin almost straight, mesial margin straight.
Brood pouch formed from pair of large oostegites, enclosing embryos, arising from coxa 6 and anterior pairs of small oostegites.
Color. White when fresh/live; whitish yellow in ethanol preservative.
Site of infestation. Ventral body surface, below the base of the pectoral fins, of C. hiraii (Fig. 1A). A skin wound (18.1 mm long, 12.8 mm wide) with exposed muscle was found at the attachment site (Fig. 1B).
Distribution. Recorded from the tropical and middle-latitude waters of the Indian, Atlantic, and western North Pacific oceans (Fig. 3; see Remarks for detailed information on the collection localities).
Remarks. Nerocila trichiura was first listed, without any description, as "Anilocra trichiura, n. sp.", one of the crustaceans in the collections of the British Museum, London (White 1847; see Clark and Presswell 2001 and the above synonym list). The specimen reported was a female from Mauritius (Indian Ocean) and, using this specimen, Miers (1877) formally described A. trichiura. Later, the species was transferred to Nerocila and redescribed by Schioedte and Meinert (1881). The male from the Atlantic Ocean was also described by Nierstrasz (1918). The dorsal and lateral views of an ovigerous female from Congo were shown by Monod (1931). A photograph of the dorsal view of an ovigerous female from an unknown locality was given by Trilles (1975: pl. 2). The female specimen collected in the present study corresponds well to the descriptions, figures, and photographs of N. trichiura given by Miers (1877), Schioedte and Meinert (1881), Monod (1931), Trilles (1975), Bruce and Harrison-Nelson (1988), and Trilles et al. (2013). Nerocila trichiura is characterized by having short coxae with rounded points and the posterolateral angles of pereonites 1-6 bluntly rounded. The color of a fresh specimen of N. trichiura from India was white without any band (Trilles et al. 2013: fig. 2k), which is also confirmed in this study (Fig. 1C).
There is considerable variation in the ratio of the uropod to the pleotelson between specimens examined in the previous and present studies: 1.9 in the specimen from Mauritius (holotype) (Bruce and Harrison-Nelson 1988: fig. 7A); 2.4 in the specimen from an unspecified locality (Schioedte and Meinert 1881: pl. 7, fig. 6); 2.6 in the specimen from Congo (Monod 1931: fig. 1a); 1.7 in the specimen from South Africa (Kensley 1978: fig. 33G); 2.4 in the specimen from Senegal; 1.9 in the specimen from off south of India (Bruce and Harrison-Nelson 1988: fig. 7F, I); 2.3 in the specimen from India (Trilles et al. 2013: fig. 7j); and 1.6 in the specimen from Japan (Fig. 2D). The number of robust setae on the posterior margin of propodus and carpus of pereopod 7 also varies: 6 and 2 in the specimen from Mauritius; 3 and 2 in the specimen from Senegal (Bruce and Harrison-Nelson 1988: fig. 7E, H); and 4 and 3 in the specimen from Japan (Fig. 2E).
Nerocila trichiura is similar to N. exocoeti Pillai, 1954, which parasitizes beloniform fishes including exocoetids (flyingfishes) in the Indian and western Pacific oceans (Bruce and Harrison-Nelson 1988; Trilles et al. 2013; Aneesh et al. 2017). In the redescription of N. exocoeti, Aneesh et al. (2017) stated that this species can be distinguished from N. trichiura by having the posterior margins of coxae 5-7 acute and by its body color (steel blue) when fresh. The difference in body color between the two species (white in N. trichiura vs. steel blue in N. exocoeti) is closely related to the difference in their infestation site: N. trichiura attaches to the white-colored ventral body surface of the host (Fig. 1A), while N. exocoeti attaches to the dark-colored dorsal body surface of the host (see Sivasubramanian et al. 2011: figs 1, 2; Aneesh et al. 2017: fig. 6G, H).
Nerocila trichiura can also be differentiated from the two congeneric species, N. japonica and N. phaiopleura, occurring in Japanese waters. Nerocila japonica has a wider body than N. trichiura, with two submedian pale longitudinal bands (Yamauchi and Nagasawa 2012). The body shape of N. phaiopleura is similar to that of N. trichiura, but the former species has large eyes and dark brown or black stripes on the uropod exopod and lateral sides of the posterior pereonites and the pleon (Nagasawa and Tensha 2016). The known hosts of N. japonica and N. phaiopleura are coastal marine fishes but do not include exocoetids (see the Introduction for references).
The collection of N. trichiura in this study represents its first record in Japan and its second in the western North Pacific Ocean. The species has so far been reported from the Indian Ocean [Mauritius, the type locality (Miers 1877; Bruce and Harrison-Nelson 1988), Great Chagos (Stebbing 1910), Durban, South Africa (Barnard 1955; Kensley 1978), 10°20′S, 70°00′E (Bruce and Harrison-Nelson 1988), Comoro Islands (Kensley 2001), and the Tamil Nadu coast, India (Trilles et al. 2013; Rameshkumar et al. 2013)] and the Atlantic Ocean [31°N, 76°W (Schioedte and Meinert 1881), Banana and an unknown locality, Congo (Nierstrasz 1918; Monod 1931), the West Indies (Trilles 1979, reported as Nerocila sp. 1, which was regarded as N. trichiura by Bruce and Harrison-Nelson 1988), and Dakar Harbor, Senegal] (Fig. 3). Barnard (1914) collected the species from an unknown locality in South Africa, and Trilles (1975) also recorded it from an unknown locality. These authors questioned Schioedte and Meinert's record of N. trichiura from the Philippines, but the present study has confirmed that the species actually occurs in the western North Pacific Ocean.
As shown in Fig. 3, N. trichiura occurs in the tropical waters of the Indian, Atlantic, and western North Pacific oceans, excluding three localities in the middle latitudes: the western North Atlantic off the southeast U.S.A. (locality 1 in Fig. 3), Durban, South Africa (locality 5), and Kowaura Bay, Japan (locality 12), which are affected by the Gulf Stream, the Agulhas Current, and the Kuroshio, respectively. This indicates that N. trichiura is a tropical species in the three oceans, where the species also occurs in the middle-latitude waters affected by the warm currents.
Cypselurus hiraii is distributed in the western North Pacific Ocean, including the southern Sea of Japan and the East China Sea (Aizawa and Doiuchi 2013). The species migrates for spawning to the southwestern Sea of Japan off western Japan from May to July (Kawano et al. 1995; Kawano 2004), but no information is available on its migration to the coastal Pacific waters of Japan. In Kowaura Bay, where we collected N. trichiura in this study, C. hiraii is commercially caught during the summer months with coastal set nets, but its catch is low (S. Isozaki, unpublished). Nerocila trichiura is not a common fish parasite in the bay.
In this study, the specimen of N. trichiura was found to be attached to the ventral body surface below the base of the pectoral fins of the host fish (Fig. 1A). A similar attachment site was previously reported for the species (Stebbing 1910; Monod 1970). A skin wound was found at the attachment site and the host's muscle was exposed (Fig. 1B). The wound was located under the anterior part of the body of N. trichiura. The observed disease conditions were most probably induced by the species' skin feeding and the deep insertion of its pereopodal dactyli. Similar skin wounds are also found on marine fishes infested by N. phaiopleura in Japanese waters (Nagasawa and Tensha 2016; Nagasawa and Shirakashi 2017; Nagasawa and Isozaki 2017; Nagasawa and Kawai 2018).
Anonymous reviewers provided useful comments that improved the manuscript of this paper. | 2019-09-26T09:01:54.249Z | 2019-09-25T00:00:00.000 | {
"year": 2019,
"sha1": "061fcf81b33a62fba0053081cf505eeae61ef15f",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/specdiv/24/2/24_240211/_pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "05d4877a991b53deadda751a8c49a68c147ab39f",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
257741861 | pes2o/s2orc | v3-fos-license | PRESERVING PERSONAL DIGNITY: THE VITAL ROLE OF THE RIGHT TO BE FORGOTTEN
The coexistence of the real and virtual worlds has resulted in a complex interplay between them, where the definition of the virtual world remains elusive. The rise of digital technologies and the proliferation of personal data have led to concerns about privacy, and the need to adapt the concept of privacy to the current information infrastructure. This adaptation requires a shift from the traditional focus on defending the private sphere against external invasions to a consideration of privacy issues in the context of the current organization of power. The Right to be Let Alone and the Right to be Forgotten are two legal concepts that have gained importance in this context. The former emphasizes an individual's right to total immunity from injury, while the latter enables users to control their personal data if it is no longer necessary for its original purpose or if it causes more harm than benefits. The Right to be Forgotten is crucial to protecting personal identity and privacy in the digital age, and it provides a solution for issues related to data use and artificial intelligence. Thus, a comprehensive understanding of the Right to be Forgotten is essential to ensure effective protection of individual rights and uphold principles of human dignity.
INTRODUCTION
What we know as the real world, or offline, and the online world have coexisted and communicated with each other for a long time. When body and movement cross into the dimension usually called "virtual", things that belong to different worlds seem to be woven together. This begins with the conceptualization of what the "virtual" is, a term much repeated in times of proliferating digital technologies, whose polysemic meaning moves, without agreement, through different perspectives arising from common sense, philosophy, the social sciences, and the humanities. 1 The new dimensions of collecting and processing information have led to the multiplication of appeals to privacy and, at the same time, raised awareness of the impossibility of including new topics in the institutional framework traditionally identified by that concept. But, today, the problem is not that of adapting a notion born in other times and under other skies to a profoundly changed situation, respecting its reasons and logic of origin. Those who know how to decipher the current debate, in fact, realize that not only is the classic theme of the defense of the private sphere against outside invasions reflected in it, but it also marks an important qualitative change, which leads to considering privacy problems rather in the context of the current organization of power, of which the information infrastructure is now one of the fundamental components. 2 Attention should be drawn to how current society is conducive to moving from individual autonomy to the direct creation of rights for the person. So, mainly in France, there is talk of the phenomenon of the multiplication of subjective rights, which is manifest especially in the field of personal and family law. One of these rights is the right to privacy, including not only the right to informational self-determination, but also the well-known right to be let alone, which demarcates spheres of action free of any interference. 3 The idea of the Right to be Let Alone was described by Judge Thomas Cooley, in 1879, who stated that the right to one's own person is a right of total immunity: to be left alone. The corresponding duty is not to inflict injury and not, within such proximity as would make it successful, attempt to inflict injury. In this, duty goes beyond what is required in most cases; for usually an unfulfilled purpose or an unsuccessful attempt is not noticed. But the attempt to commit aggression involves many elements of injury not always present in breaches of duty; it usually involves an insult, a putting of fear, a sudden call to the energies for immediate and effective resistance. Very likely there is a shock to the nerves, and the individual's peace and tranquility is disturbed for a period of greater or lesser duration. 4 This was an idea that Warren and Brandeis, in 1890, brought to civil law out of concern with the invasion of privacy. They asserted that the right of one who remained a private person to prevent his public portrayal presents the simplest case for such an extension; the right to protect oneself from pen portraits, from press discussion of one's private affairs, would be more important and far-reaching. If casual and unimportant statements in a letter, if handiwork, however inartistic and worthless, if goods of all kinds are protected not only against reproduction, but also against description and enumeration, how much more should the acts and sayings of a man in his social and domestic relations be protected from relentless publicity.
If you cannot reproduce a woman's face photographically without her consent, how much less should the reproduction of her face, her form and actions be tolerated by graphic descriptions colored to suit a gross and depraved imagination. 5 The right to be forgotten is broad and comprehensive: it allows users to control their personal data when it is no longer necessary for its original purpose or when, for some other reason, they wish to withdraw consent to its processing, among other cases, such as when the data processing would be considered an abusive practice because it causes more harm to the individual data subject than benefits to society and the collective interest, setting in the legal balance the personal rights of data subjects on one side against freedom of expression and collective rights on the other. Understood in its application to particular cases, it is an indispensable right, and when the individual cannot exercise it, this can imply a serious setback for the principles of human dignity and highly personal rights, in particular privacy and personal identity, which define the essence of each person. 6 Therefore, a broad understanding of the Right to be Forgotten is important so that internet users enjoy more effective protection of individual rights, making issues such as data use and artificial intelligence more understandable and clear to all, ensuring that users are fully aware of how their data is collected, processed, and exposed on the internet, and offering a possible solution, the right to be forgotten, to safeguard personality rights.
I. PERSONALITY RIGHTS
Law can only be conceived with human beings in coexistence as its addressees. The application of civil law to this human coexistence triggers a web of legal relationships between men, relationships translated into powers and legal duties lato sensu. 7 In a purely technical sense, being a person is precisely having the capacity to be the subject of rights and obligations. It is to be a center for attributing legal powers and duties, to be a center of a legal sphere. In this technical-legal sense, there is no coincidence between the notion of person or subject of law and the notion of human being. People in the legal sense are not necessarily human beings: one can find certain organizations of people, such as associations and companies, and certain sets of assets, such as foundations, to which objective law attributes legal personality. 8 The technical-legal concept of person does not necessarily coincide with that of man or human being. If law, however, aims to discipline human interests, if all law is constituted for the sake of and for the service of men, it is logically imperative that at least some men be endowed with legal personality. The attribution or recognition of the personality of at least some human beings is also a logical assumption of law. The discovery of the self, as a person, a category encompassing the inseparable soul and body, endowed with reason and perfectible, is recent, even in Western thought. 9 At first the individual was nothing more than an element of the material world, an object subject to all the constraints of nature and, therefore, of other men. However, the self came to be thought of as having a double face: as an object of nature and as a bearer of values; as an empirical being that produces, and as a moral, self-determined being, bearer of unique and supreme values and, therefore, essentially non-social. By recognizing in oneself and in the other a moral value, necessarily supreme and equal, and, therefore, the essential identity of all human beings, we began to walk the path of recognizing the Person and its preservation, that is, of its rights, the rights of the person. 10 The person is a space of exclusion, because it is an essential presupposition of his existence that others do not interfere in what he is: in his life, in his physical structure, in his mind, in his creative capacity, among others. The root of the rights of the person, whether public or private, is embedded in Christianity, as it determines the desacralization of the nature of society, freeing man from being an object to transform him into a subject, a bearer of values. 11 The quest for autonomous individuality was alien to Eastern and classical Greek culture, and such an idea was typical of the Christian religion, where subjectivity first appeared, along with the infinity of self-consciousness. The person owes to Christianity its metaphysical base that guarantees the passage from the notion of the person as a member of society clothed in a social state to the notion of the non-social human person, in a radical way. 12 The question of the human person only arose with Christianity, where it was placed at the center of concerns at a philosophical, ethical, legal, and social level. If Christians were not the creators of the Latin persona or the Greek hypostasis, it was they who attributed content to it and drew consequences from that thought.
13 Until Christianity, people were only exceptional beings who played the leading roles in society; since Christianity, any human being has become a person, whether man, woman, child, unborn child, slave, foreigner, or enemy, among others, through the ideas of brotherly love and equality before God. 14 The traditional conception of the beginning of personality is dominated by the Aristotelian conception of the vegetative or nutritive soul, the faculty of growth and reproduction; of the animal or sensitive soul, the faculty of feeling, of desiring and of moving; and of the reasonable or thinking soul, the faculty of humanity, which is acquired at birth. 15 Since human society is the presupposition of all law, the law being a social rule, logically only man is susceptible to rights and obligations, a quality that cannot be conferred on irrational beings. It would even be useless to attribute any rights and obligations to them, for the simple reason that they could never exercise them. 16 Personality or legal capacity is the precondition or presupposition of all rights; and, therefore, it is found even in newborns or in any other entity to which the law recognizes it; but there is also a capacity to act, which presupposes legal capacity and is a different situation. Personality and capacity are distinct: personality is the legal man in a static state; capacity is the legal man in a dynamic state. In other words: to be a person, it is enough that the man exists; to be capable, he must have the necessary requirements to act for himself, as an active or passive subject of a legal relationship. 17 In 1879, for Judge Thomas Cooley, the personal right was considered the main class covering the rights that belong to the person. In it are included the right to life, the right to immunity from attack and injury, and the right, equally with others in a similar situation, to control one's actions. In all enlightened countries the same class would also include the entitlement to the benefit of whatever reputation he has earned and the enjoyment of all civil rights granted by law. Political rights can also be included under the same head.
Snapshots and journalistic endeavors invaded the hallowed precincts of private and domestic life; and numerous mechanical devices threaten to fulfill the prediction that "what is whispered in the closet will be proclaimed from the housetops." It has been felt for years that the law must provide some remedy for the unauthorized circulation of portraits of private persons; and the evil of invasion of privacy by newspapers has long been felt, and only recently discussed by a competent writer. 18 Of the desirability-indeed the necessity-of such protection, it is believed, there can be no doubt. The press is pushing in all directions the obvious limits of decorum and decency. Gossip is no longer the resort of the idle and the vicious, but has become a trade, which is practiced with diligence and shamelessness. To satisfy a lustful taste, the details of sexual intercourse are published in the columns of daily newspapers. To occupy the indolent, column after column is filled with idle gossip, which can only be obtained by intrusion into the domestic circle. 19 The intensity and complexity of life, resulting from the advance of civilization, made some withdrawal from the world necessary, and man, under the refining influence of culture, became more sensitive to publicity, so that solitude and privacy became more essential to the individual; but modern enterprise and invention, through intrusions upon his privacy, have subjected him to mental pains and anguish far greater than could be inflicted by mere bodily injury. Nor is the damage caused by such invasions limited to the suffering of those who may be the objects of journalistic or other endeavors. In this, as in other branches of commerce, supply creates demand. Every crop of unseemly gossip thus harvested becomes the seed of more, and, in direct proportion to its circulation, results in a lowering of social standards and morality. 20 Even seemingly harmless gossip, when widely and persistently circulated, is potent for evil. It both belittles and perverts. It diminishes by reversing the relative importance of things, thus diminishing the thoughts and aspirations of a people. When personal gossip attains the dignity of print and crowds the available space for subjects of real interest to the community, it is no wonder that the ignorant and the unwise confuse their relative importance. Easily understood, appealing to that weak side of human nature which is never quite broken by the misfortunes and frailties of our neighbors, no one can be surprised that it usurps the place of interest in brains capable of other things. Triviality destroys both robustness of thought and delicacy of feeling. No enthusiasm can flourish, no generous impulse can survive under its destructive influence. 21 For Bauman, nowadays, what scares us is not so much the possibility of betrayal or violation of privacy, but the opposite, the closing of exits. The area of privacy becomes a place of imprisonment, with the owner of the private space condemned and sentenced to suffer, atoning for his own mistakes; forced into a condition marked by the absence of listeners eager to extract and remove the secrets that hide behind the trenches of privacy, to display them publicly and make them the common property of all, which all want to share. 22 For Rodotà, it has been said many times that technology puts each of us in the condition of finding a virtual place in which to satisfy our own interests.
But this process of selection of interests would lead to greater social fragmentation, not to the strengthening of the sense of community. The available data, in any case, clearly show that virtual communities now also offer the possibility of establishing particularly intense social connections, or even present themselves as the only way to be part of a social formation for a subject who would otherwise be doomed to isolation. 23 All of this means that our behavior has become a commodity, a tiny piece of a marketplace that serves as a platform for personalizing the entire internet. 24 Ultimately, the filter bubble can affect our ability to decide how we want to live. To be the authors of our own life we have to be aware of the wide range of options and lifestyles available. When we enter a filter bubble, we allow the companies that developed it to choose the options that we will be aware of. Perhaps we think we are the masters of our own destiny, but personalization can lead us to a kind of informational determinism, in which what we click on in the past determines what we see next - a virtual history that we are doomed to repeat. And with that, we are trapped in a static, ever-narrowing version of ourselves.
It is not just a question of forgiving questionable attitudes, but of assuming that common actions, such as taking pictures or engaging in private conversations, if perhaps taken out of context, cannot be criteria for defining someone's 25 character or competence. The referred author argues that people have total control over their digital footprints: photographs could have an expiration date and be deleted after a certain time. 29 New devices, systems, software, while bringing benefits to society, also bring risk. The sale of an eternal memory of unforgettable moments, of travel, of family moments, with the advancement in the quality of photos and videos and the great sharing of information, from the photo of the coffee at the airport, of the food in the well-regarded restaurant, or even a laugh in a park, leave eternal traces that can bring risk in the future for everyone who uses technology in their daily lives. 30 Remembering is a two-step process. The first is successfully committing information to long-term storage. The second is to retrieve this information from memory. But neuroscientists and psychologists are still debating what it means to "forget" information stored in long-term memory. information in long-term memory cannot be erased except by physiological damage. They suggest that when we forget what we lose is not the information itself but the link to it. It's like a web page that no other page links to. Without links pointing to it, the information cannot be found, not even through a stupendous search. For all practical purposes, it's forgotten. 31 Sometimes we forget the past and sometimes we distort it; some disturbing memories haunt us for years. However, we also rely on memory to perform a surprising variety of tasks in our everyday lives. Recalling conversations with friends or family vacations, remembering appointments and errands we need to run, evoking words that allow us to speak and understand others, remembering foods we like and dislike, acquiring the knowledge necessary for a new employment -everything depends, one way or another, on memory. Memory plays such a pervasive role in our daily lives that we often take it for granted until an incident of forgetting or distortion demands our attention. 32 Communication, therefore, already goes beyond the traditional means of mass communication-newspapers, radio, and "generalist" television. The combination of television, computer and telephone is the common 29 denominator of new media. The fundamental difference between the old and new media lies in digitization and interactivity, rather than the passivity that characterized the situation of the newspaper reader, radio listener and television viewer. It is true that certain relatively rudimentary forms of interactivity were achieved by associating the telephone with radio and television, thus allowing the intervention of listeners and viewers in the execution of programs. But only the advent of new media offers real possibilities for dialogue and independent intervention by an interested public. 33 New media are finally broadening the horizon. We can overcome programming constraints. They make it possible to combine images, sounds, documents extracted from the most diverse sources to arrive at a unique program, a kind of direct creation of its author. Thus, we are facing a possible passage from passivity to autonomy: this is evidenced by the observation of telematic networks where interactivity has a greater chance of developing. 
Here, in fact, personalization and autonomy facilitate continuous exchange with other individuals, the construction of new individual and collective subjectivities and the overcoming of the old distinction between producers and consumers of information. This last possibility is certainly the big news. In networks, traditional logics and hierarchies are not reproduced and it is possible to go beyond interactivity. The offer does not only expand according to a process that implies, in any case, dependence on the supplier of products and services, which thus maintains a position of superiority and maintains a model of vertical communication. 34 The right to privacy, for Brandeis and Warren, does not prohibit the publication of material of public or general interest. In determining the scope of this rule, aid would be granted by analogy, in the law of defamation and slander, of cases dealing with the qualified privilege of comment and criticism on matters of public and general interest. Of course, there are difficulties in applying such a rule; but they are inherent in the subject, and certainly no greater than those which exist in many other branches of the law, -for example, in that large class of cases where the reasonableness or unreasonableness of an act is brought to the test of responsibility. The design of the law should be to protect those persons with whose affairs the community has no legitimate concern, from being dragged into unwelcome and unwelcome publicity, and to protect all persons whatever; their position 33 to have matters they might prefer to keep private made public against their will. It is the unwarranted invasion of individual privacy that is rebuked and, as far as possible, avoided. The distinction, however, noted in the above statement is obvious and fundamental. There are people who can reasonably claim protection against the notoriety that comes from being made victims of journalistic enterprises. There are others who, to varying degrees, have renounced the right to live their lives shielded from public observation. 35 In this sense, in the US, one of the first cases in which one can see traces of the right to be forgotten is Melvin v. Reid. In 1919, Gabrielle Darley, a prostitute, is accused and acquitted of murder. She remakes her life, abandons prostitution, marries Melvin and has children. In this new phase, people in her social circle are unaware of her past, but in 1925, Dorothy Davenport Reid produced the film Red Kimono, which accurately portrayed Gabrielle's past life, including identifying her with her real name. 36 As a result, Melvin sought redress for the violation of his wife's and family's privacy, and in 1931 the California Court of Appeals upheld the claim on the grounds that a person who lives a life of righteousness, regardless of past, has a right to happiness, which includes freedom from unnecessary attacks on his character, social position or reputation. 37 According to the California Court of Appeals, in its decision, the use of the appellant's real name in connection with the incidents of her previous life in the plot and in the advertisements was unnecessary, indelicate, a deliberate and arbitrary disregard of that charity which should act us in our social relations, and which should prevent us from unnecessarily holding another person up to the scorn and contempt of the righteous members of society. 
38 For the Court, one of the main objectives of society as it is now constituted, and of the administration of our penal system, is the rehabilitation of the fallen and the reform of the criminal. According to these theories of sociology, our goal is to lift and support the unfortunate, rather than tearing them down. Where a person has rehabilitated himself by his own efforts, we 35 as right-thinking members of society should allow him to continue on the path of righteousness rather than throwing him back into a life of shame or crime. Even the thief on the cross was allowed to repent during the hours of his final agony. 39 The famous Lebach case takes its name from the village located in the Federal Republic of Germany, where in 1969 a robbery took place, which drew a lot of attention from public opinion, with wide coverage in the press and on television. The robbery became known as "the murder of soldiers from Lebach". On that occasion, four soldiers were killed and one was seriously injured due to the action of criminal agents, who subtracted weapons and ammunition from the warehouse, where these soldiers were on guard. In 1970, two defendants were sentenced to life imprisonment and another to six years in prison for having helped prepare the criminal action. 40 Aware of the repercussions of the case, the ZDF (Zweites Deustsches Fernsehen-second German channel) produced a documentary, which would portray the crime through dramatization by actors, and would present photos and real names of all the condemned, including the possible homosexual connections that existed between they. The documentary would be shown on a Friday night, days before the third convict leaves prison after serving his sentence. He sought an injunction to prevent the program from being shown and the State Court of Mainz and the State Court of Koblenz dismissed the request. On the other hand, the German Federal Constitutional Court (TCF) upheld the constitutional claim for envisaging a violation of the right to personality development. Thus, it prohibited the exhibition of the documentary until the final decision of the main action by the competent ordinary courts. 41 For the German Federal Constitutional Court, in this case, and in accordance with its constant practices at the time of the judgment, not every sphere of private life enjoys the absolute protection of fundamental rights. If an individual, in his capacity as a citizen, living within a community, enters into relations with others, influences others by his existence or activity, and thereby interferes with other people's personal sphere or the interests of community life, his exclusive right from being master of his own private sphere can become subject to restrictions, unless his most intimate sphere of life is in question. Any social involvement, if strong enough, can in particular justify measures by public authorities in the interest of the public as a whole-such as publishing photos of a suspected person in order to facilitate a criminal investigation. 42 .
For the Court, the freedom to transmit may have the effect of restricting any claims based on the personality right. However, the damage to 'personality' resulting from a public representation should not be disproportionate to the importance of the publication in upholding freedom of communication. Furthermore, it follows from these guiding principles that the balance of interests required must take into account the intensity of the violation of the personal sphere by broadcasting, on the one hand; on the other hand, the specific interest which is being felt by broadcasting and can be thus served, must be evaluated and examined as to whether and to what extent it can be satisfied even without any interference-or less far-reaching interference-with the protection of the personality. 43 The reflex effect of the constitutional guarantee of personality does not, however, allow the media, in addition to contemporary reporting, to deal indefinitely with the person of the criminal and his private sphere. Instead, when the interest in receiving information has been satisfied, their right to "be left alone" takes on increasing importance in principle and limits the mass media's desire and the public's desire to make the individual sphere of their life the object discussion or even entertainment. Even a culprit, who has attracted public attention for his serious crime and won widespread disapproval, remains a member of that community and retains his constitutional right to the protection of his individuality. If, on prosecution and conviction by a criminal court, the act which appeals to the public interest has met with the just community reaction required by the public interest, any further continued or repeated invasions of the culprit's personal sphere normally cannot be justified. 44 In effect, the TCF, when judging the Lebach case, highlighted that in the collision between freedom of broadcasting and the presentation of the defendant's image, reinforced as a constitutional guarantee of personality protection, one must start from the assumption that both constitutional values are essential to the free democratic order, so that none of them can claim absolute prevalence. He also added that, if possible, values should be 42 Cf. §24 da Gesetz betreffend das Urheberrecht an Werken der bildenden Künste und der Photographie. Available at: https://www.gesetze-im-internet.de/kunsturhg/__24.html. 43 Germany, Bundesverfassungsgericht. BVerfGE 35, 202 -Lebach, de 5 de junho de 1973. 44 Germany, Bundesverfassungsgericht. BVerfGE 35, 202 -Lebach, de 5 de junho de 1973. harmonized. If this does not happen, the decision will have to consider the typical configuration and the special circumstances of the particular case to define which of the two interests should be overridden. He also stressed that "both constitutional values must be seen in their relationship with human dignity as the center of the axiological system of the Constitution". 
45 In this sense, Alexandre Pereira points out that although it originated within the "right to privacy", as it is known in the USA, the protection of personal data has developed and acquired a "life of its own", based on the fundamental right to "informational self-determination", as designated by the German Federal Constitutional Court in its judgment of December 15, 1983, in a case concerning personal information collected during the 1983 census. There the Court held that, in the context of modern data processing, the protection of the individual against the unlimited collection, storage, use and disclosure of his personal data is covered by the fundamental right of each person to determine, in principle, the disclosure and use of his personal data, this informational self-determination being subject only to limitations justified by overriding public interest. 46 In the paradigm of the information society, decision-making processes previously attributed to human beings are increasingly defined by automated systems under the argument of greater rationalization and efficiency. Human capacity to process large amounts of data cannot compare with systems such as artificial intelligence. This generates multiple challenges that transcend the legal sphere but nonetheless demand a response from it. 47 The right to be forgotten originates in the protection of intimacy and private life and has been invoked, especially in the digital world, as the right to erase personal data on the internet, but also in the context of the media in general, as a right not to have broadcast information that is no longer current and relevant to the public but is offensive to the interested party. 48 Returning to the French teachings, the droit à l'oubli (the right to oblivion) elaborated in France involved three requirements: the disclosure of lawful information, the resurgence of past facts on television, and a time lapse sufficient to give rise to the loss of public interest in the information. In this phase, the right to be forgotten incorporates the temporal control of data, which adds the chronological factor to the privacy-protection toolkit, complemented by spatial and contextual controls. 49 In this context, that right would be founded on the idea of protection against damage to dignity, personality rights, reputation and identity, and, by its nature, it has the potential to collide with other fundamental rights, such as the rights to freedom of expression and access to information. Its objective, therefore, is to prevent information considered private from being disseminated and exposed where the public interest does not justify the disclosure. 50 For Guilherme Magalhães Martins, the possibility of deleting information on the internet is a recurring topic. Is it fair to allow people to completely erase their web history? Must the Internet be able to forget? 51 In theory, for the author, the right to be forgotten is a critical issue in the digital age, as it is difficult to escape the past on the internet: photos, status updates and tweets live forever in the cloud. The problem is that records from the past, which can be stored permanently, can still have consequences long after the human mind has forgotten them. 52 The Internet is an open network, designed more for showing than for hiding, which is even more evident with the use of mobile devices such as cell phones.
Often, we do not know who owns information, how it was obtained, what the purpose of the entities that control it is, or what might be done with that information in the future. 53 For Stefano Rodotà, with the creation of increasingly large databases accessible on the Internet through search engines, social memory expands and conditions individual memory. Where before there was damnatio memoriae, now there is the obligation to remember, with the Internet's collective memory accumulating every vestige of people's lives, making them prisoners of a past that never passes and challenging the construction of a free personality. This leads to a need for adequate defenses, such as the rights to be forgotten, not to know and not to be tracked, in order to protect privacy and individual freedom. 54 The emergence of the information society has resulted in an expansion of the right to be forgotten, but its nature and scope vary according to public and legal opinion. While the public considers this right to be free and unlimited, jurists seek to delimit it and balance it against other rights and freedoms provided for in the Constitution. At present, attention is mainly focused on the "right to be forgotten online", but a clear distinction must be drawn between this right and the protection of personal data, even though both aim to guarantee the dignity of the person. 55 The right to be forgotten has evolved over time, taking different forms according to the generation to which it belongs. The first is the right not to see news that has already been published republished after a certain period without current public interest. The second is the right to contextualize information, established by a decision of the Italian Court of Cassation. And the third is the right to erase personal data in certain situations, reaffirmed by the 2016 European Regulation. 56 Each of these generations protects a different legal asset: the first, reputation; the second, personal identity; and the third, personal data. Therefore, the right to be forgotten is not autonomous, but an important instrument to guarantee other personality rights, such as reputation, honor, privacy, and personal identity. 57 An important aspect that distinguishes the first generation of the right to be forgotten from the others is time, which is fundamental to characterize the traditional and authentic right to be forgotten. On the internet, as we know, information and data are preserved eternally; therefore, the "time" factor no longer applies to the duration or distance between an event and its publication, but to its persistence. In the traditional right to be forgotten, the news in question needs to be republished after years, while on the internet, information is always available, which has changed the way information is used, as it comes to be accessed and consumed instantly. Although this requirement is important, it must be remembered that it is not the antiquity of a fact that legitimizes the recognition of the right to be forgotten, but the potential damage that the republication of a person's past experience can cause to the truth of that person's image today. 58 In the digital society, characterized by the second and third generations of the internet, the right to be forgotten is linked to the concept of archiving, due to the persistence of information on the internet. 49 civilistica.com, v. 10, n. 3, p. 1-70, December 7, 2021, 7. 54 De Cicco, Maria Cristina. O direito ao esquecimento existe. civilistica.com, v. 10, n. 1, p. 1-9, Aug. 1, 2021, 3.
Therefore, republication is not necessary; rather, updating and contextualizing the information is what matters. The dynamics of the subjects involved also changed: in the first generation it was the journalist who proposed republishing a piece of news, while in the internet era people themselves look for information about themselves or others on the web. 59 Internet consumer groups basically pursue three needs, which can be summarized as information, entertainment, and relationship. First, the consumer can quickly find answers through a search platform, or through search tools within platforms such as social networks. In this context, the more content a platform offers, the more consumers are attracted, meeting their information needs. 60 Regarding entertainment, the consumer accesses content at a speed that did not exist before, without spatial boundaries. One of the characteristics of this digital universe is digital transmission, known as streaming, which replaces the purchase of physical media with applications on cell phones, tablets, and notebooks. 61 Relationship, in turn, is facilitated on the internet by social networks, which have instant communication as one of their main characteristics. Social networks, along with collaborative websites, form social media, helping in the search for relationships by creating a sense of community and bringing individuals together virtually. 62 And in this scenario, based on the digital economy and given the three needs presented, the consumer finds tools to change his behavior and empower himself, becoming an active and more conscious subject in decision-making, which can impact the advertising dynamics of companies. 63 The right to be forgotten is a complex issue that involves conflicts of interest. On the one hand, there is the public interest in maintaining the memory of the facts, along with freedom of the press and expression and the right of the community to information. On the other hand, there is a person's right not to be haunted for life by a past event. For this reason, it is important to balance the informative interest in disclosing news against the risks that remembering the fact can bring to the person involved. 64 The main objective of the right to be forgotten is to guarantee the "right not to be a victim of harm", and this includes obligations both to act and to refrain from acting. If, after the balancing of interests, it is necessary to remove offensive material, that is the consequence of exercising the right to be forgotten. Compensation for damages will only be necessary in exceptional cases, when the offense is consummated and cannot be corrected by other means. 65 The right to be forgotten is not about erasing history or burning books, but it is important to be careful when importing institutes from other cultures, especially those with an exaggerated view of freedom of expression. 58 De Cicco, Maria Cristina. O direito ao esquecimento existe. civilistica.com, v. 10, n. 1, p. 1-9, Aug. 1, 2021, 4. 59 De Cicco, Maria Cristina. O direito ao esquecimento existe. civilistica.com, v. 10, n. 1, p. 1-9, Aug. 1, 2021, 4. 60 Quinelato, Pietra Daneluzzi. Preços Personalizados à Luz da Lei Geral de Proteção de Dados: Viabilidade Econômica e Juridicidade. Indaiatuba, SP: Editora Foco, 2022.
66 The right to be forgotten aims to erase traces or data left by its holder, which do not have the uniform character of a written record, as in unauthorized biographies; moreover, an a priori prevalence of freedom of expression and information, invoked to avoid possible censorship, would run against other values equally dear to fundamental rights, linked to the free development of the human person. 67 Personalization is not limited to what we buy. It is influencing how information is distributed beyond social media, with news sites delivering headlines based on our personal interests and preferences. It also affects the videos we watch on video platforms and the blogs we follow. Personalization likewise impacts the emails we receive, the potential love connections we make on dating apps, and the restaurants that delivery apps recommend. In other words, personalization can easily influence not only who we go out to dinner with, but also where we go and what we talk about. The algorithms that control the advertising we receive are starting to take over our lives. 68 In summary, personalization can affect our ability to choose how we want to live our lives. It is important to be aware of all available options and lifestyles so that we can be the authors of our own stories. By entering the filter bubble, we let companies control what we see and what we are exposed to, which can lead us to a kind of informational determinism, where the choices we made in the past determine what we will see in the future, keeping us trapped in a static, narrow version of ourselves, in constant repetition. 69 In other words, to protect the privacy of users and ensure the security of their data, those who control the systems must implement preventive measures. These measures include, among others, reducing the processing of personal data, incorporating privacy into designs, anonymizing data, allowing data subjects to monitor processing, and conducting regular training with the teams involved, all in order to foster a culture of privacy prevention. 70 In the contemporary scenario, successive updates throughout the day inscribe and erase in minutes the headlines and front-page stories that newspapers used to print on a 24-hour cycle, characterizing the dematerialization of online front pages. If, on the one hand, online front pages are fluid and constantly changing, the links that lead to the articles featured on the sites' covers are, on the other hand, perennial: everything is indexed and archived in search engines or in the databases of the media outlets themselves. From which one concludes: the fuel for social memory continues to be produced. 71 However, such memory in network journalism is now more fragmented. In the digital collections of newspapers, it is possible to search the front pages, many of them memorable, by date or subject. In network journalism, however, there is not one home page for the day but several of them, as events unfold; none of them, however, is archived. 72 The right to be forgotten has a diverse scope, as it involves facts that, over time, have lost historical relevance, so that their disclosure becomes abusive, causing more harm to individuals than benefit to society. The right to be forgotten is, it is true, an exceptional right, and cannot be trivialized; but its exclusion on the ground of general repercussions may imply a serious setback with respect to the principle of human dignity, considering privacy and personal identity, which compose its structure.
73 The decision of the German Federal Court of Justice, the Bundesgerichtshof (BGH), of July 27, 2020, stated that the right to erasure, and therefore the right to de-indexing, is not absolute. For the Court, Art. 17, paragraph 1, GDPR does not apply if data processing is necessary for the exercise of the right to freedom of expression. This shows that the right to the protection of personal data is not an unrestricted right. As the fourth recital of the GDPR states, it must be considered in relation to its function in society and be weighed against other fundamental rights in accordance with the principle of proportionality; this balancing of fundamental rights is based on all the relevant circumstances of the individual case, and the seriousness of the interference with the fundamental rights of the data subject must also be considered. 74 It is important to consider that any data management policy or practice must include agreements between controllers and data operators with common and desirable objectives. This means that responsibility for bad data-handling practices involves balancing the appropriate and expected actions of these agents so as to prevent tensions that result in the loss of user self-determination and the manipulation of consumer choice. 75 When addressing the issue of privacy protection, it is essential that a series of good practices be implemented to ensure the prevention of risks related to personal data. This is particularly relevant for companies that operate in data-rich markets and that, by controlling the architecture and programming of platforms, can override state regulation and oversight. The guarantee of non-discrimination and the principle of net neutrality thus become issues that require an investigation into the limits of economic freedom. 76 The right to be forgotten allows an individual to control his personal data if it is no longer necessary for its original purpose or if, for some other reason, he wishes to withdraw consent to its processing, among other grounds. 77 Data protection is understood as a guarantee, but its underlying principle, informational self-determination, is considered a freedom. Informational self-determination is a complex legal position that encompasses elements of different active fundamental rights. 78 In the end, one needs to balance security and freedom: both are necessary, but it is not possible to have one without sacrificing, at least in part, the other; the more we have of one, the less we have of the other. With regard to freedom, the holder can choose simply to delete the data or decide to stop the interference. 79
CONCLUSION
The Right to be Forgotten should be considered a personality right. At its core, the right to be forgotten protects the honor, image, and privacy of internet users against unauthorized wide dissemination, and can restrict the circulation of information that does not match reality. This can readily be observed in the cases discussed above, where the courts recognized the right to be forgotten in order to protect the victim's personality rights.
On the other hand, with the advent of the internet, information is now stored en masse. Social networks and search engines bring information to each user's screen according to individual tastes and preferences, no longer caring about the date of publication or the veracity of the information, but only about whether what appears on the screen can keep the user engaged with the service for longer.
This generates a tension between the conveniences these tools bring to daily life and the protection of each user's private life. The main question is: to what extent is the information used as the basis for creating each user's digital profile correct? And can the user change or delete this information if they do not consider it relevant or accurate? In a generation where everything is posted, everything is commented on, and everything is on the internet, artificial intelligence databases grow ever larger and more precise in their actions, making the user hostage to his own past and his old choices.
The Right to be Forgotten offers people the possibility of showing who they really are in the online world. It brings the perspective that society can evolve, that thoughts can change, and that old information and publications may no longer represent the essence of who the internet user is today.
It brings the possibility that wrongly published information, a leak of personal data, news taken out of context, or a misinterpreted speech does not cause eternal damage to the user, nor predetermine that person's future in the online environment with reflections in everyday life. Forgetting this information can restore a dignified life to the technology user.
The Right to be Forgotten does not come to erase history or hide the acts of wrongdoers. It comes to bring justice, the right to repentance, and dignity, and to protect the privacy of those who are hostage to their data and to past decisions that no longer represent them. It makes personality rights prevail, so that the user can maintain his image, his honor, and his privacy, even in the online world.
"year": 2023,
"sha1": "723566ab526449143f23afbe049824881ab08f36",
"oa_license": "CCBYNC",
"oa_url": "https://bjlti.emnuvens.com.br/revista/article/download/8/8",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "a9b6296cc9a0ddcc230db5fe6e51a2c0a03b46bb",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
Synthesis, Characterization and Antidiabetic Activity of Chromium (III) Metformin Complex
The chromium (III) metformin hydrochloride complex, as an antidiabetic drug model, was synthesised by the chemical reaction between chromium(III) chloride hexahydrate and metformin HCl (Mfn.HCl) in methanol. The [Cr(Mfn-HCl)2(Cl)2].Cl.6H2O complex was characterized using microanalytical measurements, molar conductance, spectroscopic (infrared and UV-vis.) methods, effective magnetic moment, and thermal analyses. The infrared spectroscopic data, comparing the free Mfn.HCl ligand with its chromium(III) complex, proved that metformin hydrochloride reacts with chromium(III) ions as a bidentate ligand through its two imino groups. The antidiabetic activities of the Mfn.HCl drug, the chromium salt, and the Cr(III)-2Mfn.HCl complex were evaluated in male rats. The chromium(III) metformin HCl complex showed marked efficacy in decreasing blood glucose and HbA1c levels in diabetic rats. The Cr(III)-2Mfn.HCl complex succeeded to a great extent as an antidiabetic drug, enhancing the antioxidant defence system and acting as a pronounced and efficient hypoglycaemic agent compared with free metformin HCl.
Introduction
The structure of metformin hydrochloride (Mfn.HCl) is shown in Figure 1. Diabetes is a metabolic syndrome characterized by hyperglycemia and glycosuria resulting from a defect in the secretion or the action of insulin, or both [1,2]. Some metal complexes and organometallic compounds have been used in medicine for centuries. A supplement containing trivalent chromium is needed for a person with type 2 diabetes mellitus, given chromium's important role in glucose metabolism [3]. The Cr(III) metal ion interacts with insulin and its receptors in the first step of glucose entry into the cell, and facilitates the interaction of insulin with its receptor on the cell surface [4,5]. Chromium increases insulin binding to cells and the insulin receptor number, and activates insulin receptor kinase, leading to increased sensitivity of the insulin receptor. Additional studies are urgently needed to elucidate the mechanism of action of chromium and its role in the prevention and control of diabetes [6]. Metformin, the most commonly prescribed oral medication in type 2 diabetes, lowers HbA1c by around 1.5%, rarely causes hypoglycemia (compared with insulin or sulfonylureas), has relatively few contraindications, has generally tolerable adverse effects, does not cause weight gain, is cheap, and is highly acceptable among patients [7]. Metformin exerts its main antihyperglycemic effects through activation of AMP-activated protein kinase, resulting in reduced hepatic gluconeogenesis [8]. In addition, moderate improvements in lipid profile and weight reduction have been reported with metformin use [8]. Herein, this paper reports the synthesis and characterization of a chromium (III) metformin complex as a prospective antidiabetic candidate.

Experimental

Materials

All chemicals and solvents, including chromium(III) chloride hexahydrate, were commercially available from BDH and were used without further purification. The pure grade metformin hydrochloride drug was received as a gift sample from the Egyptian International Pharmaceutical Industrial Company (EIPICo).
Synthesis of Cr(III)-2Mfn.HCl
Metformin hydrochloride (2 mmol, 0.332 g) was dissolved in 25 mL of methanol and then mixed with 25 mL of a methanolic solution of CrCl 3 .6H 2 O (1 mmol, 0.267 g). The mixture, at a 1:2 molar ratio, was heated at ~80°C under reflux for about 3 hours and then left overnight at room temperature until precipitation occurred. The precipitate obtained was filtered off, washed with diethyl ether, and then dried over anhydrous calcium chloride. The yield of the solid leaf-green powder product was ~85%. The formula of the chromium(III) complex is C 8 H 36 Cl 5 CrN 10 O 6 , its molecular weight is 597.74 g mol -1 , and the microanalytical data (theoretical and experimental) are as follows: theoretical=%C, 16
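As a quick arithmetic check on the reported yield, the percent yield can be computed from the limiting reagent. Below is a minimal Python sketch; the complex's molecular weight (597.74 g/mol) and the 1 mmol of Cr salt come from the text, while the measured product mass is a hypothetical placeholder chosen to reproduce the reported ~85%.

```python
# Percent-yield check for the 1:2 Cr(III):Mfn.HCl preparation.
# Values from the text: 1 mmol CrCl3.6H2O is limiting, complex MW = 597.74 g/mol.
# The measured product mass below is hypothetical, for illustration only.

mw_complex = 597.74        # g/mol, [Cr(Mfn-HCl)2(Cl)2].Cl.6H2O
n_cr = 1.0e-3              # mol CrCl3.6H2O (limiting reagent)

theoretical_mass = n_cr * mw_complex   # g expected at 100% yield
measured_mass = 0.508                  # g (hypothetical example)

percent_yield = 100.0 * measured_mass / theoretical_mass
print(f"Theoretical mass: {theoretical_mass:.3f} g")
print(f"Percent yield:    {percent_yield:.1f} %")   # ~85%, as reported
```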
Instruments
IR spectra of the Mfn.HCl ligand and its chromium(III) complex were recorded on a Bruker infrared spectrophotometer in the range 400-4000 cm -1 at Taif University. The electronic spectrum of the Cr(III) complex was measured in DMSO at a concentration of 1×10 -3 M over the range 200-1100 nm using a Unicam UV/Vis spectrometer. SEM images were obtained using a Jeol Jem-1200 EX II electron microscope at an acceleration voltage of 25 kV. X-ray diffraction (XRD) patterns of the samples were recorded on an X'Pert Philips X-ray diffractometer. All the diffraction patterns were obtained using CuK α1 radiation with a graphite monochromator at a 0.02°/min scanning rate. Carbon, hydrogen and nitrogen analyses were carried out on a Vario EL Fab. CHNS analyzer. The amount of water and the metal content percentage were determined by gravimetric analysis. The molar conductances of 10 -3 M solutions of the metformin hydrochloride ligand and its chromium(III) complex in DMF were measured on a HACH conductivity meter; all measurements were taken at room temperature on freshly prepared solutions. Differential thermal analysis (DTA) and thermogravimetric analysis (TGA) experiments were conducted using Shimadzu DTA-50 and Shimadzu TGA-50H thermal analyzers, respectively. All experiments were performed using a single loose-top loading platinum sample pan under a nitrogen atmosphere at a flow rate of 30 mL/min and a 10°C/min heating rate over the temperature range 25-800°C. The mass susceptibility (X g ) of the solid chromium(III) complex was measured at room temperature using Gouy's method on a magnetic susceptibility balance from Johnson Matthey and Sherwood. The effective magnetic moment (μ eff ) value was obtained using the following equations (1)-(3) [9]:

X_g = C_Bal × L × (R − R_0) / (10^9 × M) (1)

where R_0 is the reading of the empty tube, R is the reading of the tube with the sample, L is the sample length (cm), M is the sample mass (g), and C_Bal is the balance calibration constant (= 2.086).

X_M = X_g × M.Wt (2)

The values of X_M as calculated from equation (2) are corrected for the diamagnetism of the ligand using Pascal's constants, giving X_M^corr, which is then applied in Curie's equation (3):

μ_eff = 2.828 (X_M^corr × T)^1/2 (3)
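To make equations (1)-(3) concrete, here is a minimal Python sketch of the Gouy-method workflow, not the authors' own script. The calibration constant 2.086 and the Curie prefactor 2.828 come from the text; all balance readings, the sample geometry, and the diamagnetic correction are hypothetical placeholders chosen only so the output lands near the 3.67 BM reported later for this complex.

```python
import math

def effective_magnetic_moment(R, R0, L_cm, m_g, mol_wt, dia_corr, T_K, C_bal=2.086):
    """mu_eff (BM) from Gouy balance data via equations (1)-(3)."""
    x_g = C_bal * L_cm * (R - R0) / (1.0e9 * m_g)   # Eq. (1): gram susceptibility
    x_m = x_g * mol_wt                              # Eq. (2): molar susceptibility
    x_m_corr = x_m - dia_corr                       # Pascal's-constant correction (dia_corr < 0)
    return 2.828 * math.sqrt(x_m_corr * T_K)        # Eq. (3): Curie equation

# Hypothetical readings, for illustration only:
mu = effective_magnetic_moment(R=256.0, R0=50.0, L_cm=2.5, m_g=0.120,
                               mol_wt=597.74, dia_corr=-3.1e-4, T_K=298.0)
print(f"mu_eff = {mu:.2f} BM")  # ~3.67 BM, i.e. three unpaired electrons for d3 Cr(III)
```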
Biological experiments
Forty male Wistar rats (weighing 200-250 g) were used in all experiments of this study. They were obtained from the Animal House of the National Research Center (Dokki, Giza, Egypt). The animals were maintained in solid-bottom, shoebox-type polycarbonate cages with stainless steel wire-bar lids, using wooden dust-free litter as bedding material. We followed the European Community Directive (86/609/EEC) and national rules on animal care. Hyperglycemic rats were weighed and randomly allocated into 4 groups (10 rats each). One group served as the hyperglycemic control. The animals were divided into four groups of 10 animals each, as follows:
Group (1):
Saline (normal control), injected intraperitoneally (i.p.) with 0.1 mL saline for 30 successive days. The SOD assay is based on the ability of the enzyme to inhibit the reduction of nitroblue tetrazolium (NBT) by superoxide anions; the reduction of NBT by superoxide radicals to a blue formazan was followed at 480 nm [17]. Reduced glutathione (GSH), a nonenzymatic antioxidant, was estimated based on the method of Beutler et al. [18].
Determination of blood glucose level, Hb, and HbA1c: Glucose was estimated by the O-toluidine method of Sasaki et al. [19]. Hb was estimated by the cyanmethaemoglobin method of Drabkin and Austin [20]. HbA1c was estimated by the method of Sudhakar and Pattabiraman [21], with the modification by Bannon [22].
Insulin level and C-peptide: Insulin in pancreatic homogenates was determined with the Immulite Insulin kit (Diagnostic Products Corporation, Los Angeles), which relies on a two-site chemiluminescent enzyme-labelled immunometric assay [23]. Serum C-peptide was measured by radioimmunoassay (Medgenix Diagnostics) as described by Kumar et al. [24]. All chemicals and reagents were of pure analytical grade.
Pancreatic homogenates preparation: At the time of death, pancreas tissues were dissected, cleared of lymph nodes and fat, blotted, washed free of blood, and weighed. The pancreas was immediately homogenized in 5 mL of cold 2 M acetic acid for 5 s. The extract was centrifuged at 15,000 r.p.m. for 10 min, and the resulting supernatant was frozen at -80°C until further analysis of insulin.
Electron microscopy:
The third portion of the pancreas was immediately cut into small cubes and transferred to ice-cold fixation buffer (1.25% v/v glutaraldehyde in 0.1 mM cacodylate-HCl buffer, 0.1 M sucrose, and 2 mM calcium chloride, pH 7.2) and prepared for transmission electron microscopy [25].
Statistical analysis: Data were collected, arranged, and reported as mean ± standard error of the mean (S.E.M.) of the four groups (each group was considered one experimental unit), then summarized and analyzed using SPSS (version 15.0). The statistical method was one-way analysis of variance (ANOVA, F-test); if significant differences between means were found, Duncan's multiple range test (with the significance level defined as P<0.05) was used, according to Snedecor and Cochran [26], to estimate the effect of the different treated groups.
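For readers reproducing the statistics outside SPSS, here is a minimal sketch of the same pipeline in Python: a one-way ANOVA across the four groups, followed by a post-hoc comparison. Duncan's multiple range test is not available in the common scientific Python libraries, so Tukey's HSD from statsmodels is shown as a stand-in; the group data below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical blood-glucose values (mg/dL), 10 rats per group, for illustration.
rng = np.random.default_rng(0)
groups = {
    "control":         rng.normal( 95,  8, 10),
    "STZ":             rng.normal(280, 20, 10),
    "STZ+Metformin":   rng.normal(160, 15, 10),
    "STZ+Mfn/Cr(III)": rng.normal(110, 10, 10),
}

# One-way ANOVA (F-test) across the four experimental units
F, p = f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.2g}")

# Post-hoc pairwise comparison at alpha = 0.05 (Tukey HSD as a stand-in
# for Duncan's multiple range test used in the paper)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```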
Chemical composition
The elemental analysis shows that Cr(III) forms a complex with Mfn.HCl in a 1:2 (Cr(III):Mfn.HCl) molar ratio. The synthesized Cr(III) complex is leaf green and soluble in dimethylsulfoxide and dimethylformamide, partially soluble in hot methanol, and insoluble in water and some other organic solvents. The conductivity of the chromium(III) metformin HCl complex was measured in DMF at room temperature; the molar conductance of a 10 -3 M solution is 66 ohm -1 cm 2 mol -1 . These data reflect the slightly electrolytic nature of the Cr(III)/Mfn-HCl complex [27]. This slightly electrolytic value may be attributed to the single chloride anion in the outer sphere of the chelating skeleton of the Cr(III) metformin complex.
The infrared absorption bands are one of the important analytical tools for determining the mode of chelation. The most significant bands of the metformin HCl ligand can be classified into two groups: i) N-H vibrational bands of the primary (-NH 2 ), secondary (-NH) and imino (-C=NH) groups, and ii) C-N and C=N vibration bands of the different amino groups (-NH 2 , -NH and -C=NH). According to these two fundamental vibrational groups, the spectrum of the free metformin HCl ligand can be interpreted as follows. i) N-H vibrations: the N-H stretching of the C=N-H group occurs in the region 3400-3100 cm -1 ; usually the frequency of this vibration decreases in the presence of hydrogen bonding [28]. The broad bands at 3370 and (3294 and 3174) cm -1 have been assigned to the N-H asymmetric and symmetric stretching vibrations, respectively [28]. The band at 1570 cm -1 has been assigned to the NH 2 in-plane deformation vibration [28]. The medium-to-weak intensity bands at 935, 798 and 735 cm -1 are due to N-H wagging. ii) The strong absorption band at 1627 cm -1 is due to the C=N stretching vibration [29]. The medium-to-weak intensity bands in the IR spectra at 1271, 1168, and 1061 cm -1 have been assigned to C-N stretching vibrations of aliphatic amine compounds. Medium-to-weak intensity bands at 639, 582, 541, and 419 cm -1 are due to CNC deformation vibrations [29-32].
The IR bands of the imino group (-C=NH) in the Cr(III) complex are shifted, with significant intensity, in comparison with the free metformin HCl ligand. This indicates that metformin is coordinated to the metal ion through the nitrogen atom of the imino group. A second piece of evidence, ruling out displacement of the imino group, is that the ν(C=NH) stretching band is present but shifted, rather than unchanged, relative to the free metformin ligand. In the infrared spectra of the hydrated Mfn-HCl complex, the characteristic bands of the water molecules overlap those of the amino groups; there is no definite borderline between lattice and coordinated water molecules, especially for the OH stretching and δ(H 2 O) bending vibrations. In addition, the new bands at 454 and 421 cm -1 are assigned to ν(M-N) stretching vibration motions [33].
The free metformin hydrochloride ligand absorbs in the ultraviolet region at 228, 262, 284 and 375 nm, and in some cases these bands extend to longer wavelengths due to conjugation. New bands due to charge-transfer transitions, from metal to ligand (M-L) or from ligand to metal (L-M), can be observed, and these data can be processed to obtain information on the structure and geometry of the complexes [34]. The electronic spectrum of the Cr(III) complex was recorded in DMSO at 10 -3 M. UV-visible peaks corresponding to π→π* transitions in the Mfn-HCl Cr(III) complex were observed at 278 and 309 nm; these transitions could be assigned to the double-bond system. The peak belonging to the n→π* transitions was recorded at 432 nm, most probably arising from the imine (=NH), primary (-NH 2 ), secondary (-NH), and tertiary (-N(CH 3 ) 2 ) amino groups. The transition in the visible region located at 635 nm for the Cr(III) complex can be attributed to ligand-to-metal charge transfer (LMCT) from the electronic lone pairs of the adjacent nitrogen atoms coordinated to the Cr(III) ion. The electronic spectrum of the Mfn.HCl ligand exhibited a maximum band at 375 nm, which could be assigned to the n→π* transition of the imine (=NH), primary (-NH 2 ), and secondary (-NH) amino groups; this band shows a red shift in the Cr(III) complex, which clearly indicates coordination of the imine nitrogen atom to the metal atom. The solid reflectance spectrum of the Cr(III) complex displays three bands at 18,691 (ν 1 ), 22,472 (ν 2 ) and 25,126 cm −1 , characteristic of an octahedral geometry. These bands may be assigned to the 4 A 2g → 4 T 2g (F) (ν 1 ) and 4 A 2g → 4 T 1g (F) (ν 2 ) transitions, respectively, while the third band is due to charge transfer. Various ligand-field parameters were calculated. The nephelauxetic parameter β is obtained from the relation β = B(complex)/B(free ion); the β value indicates that the complex has appreciable covalent character. The chromium(III) complex shows a magnetic moment of 3.67 BM at room temperature, corresponding to three unpaired electrons; this value is close to the spin-only value [35]. On the basis of the above discussion, the suggested structure of the chromium(III) Mfn-HCl complex can be represented as in Figure 2.
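As a worked illustration of the ligand-field arithmetic summarized above (the paper quotes only the final β conclusion), for an octahedral d3 ion ν1 = 10Dq, and the Racah parameter B can be extracted from ν1 and ν2 via the standard relation B = (2ν1² + ν2² − 3ν1ν2)/(15ν2 − 27ν1). Below is a minimal sketch; the free-ion value B ≈ 918 cm−1 for Cr(III) is a commonly tabulated literature value assumed here, not taken from the paper.

```python
# Ligand-field parameters for an octahedral d3 ion from the observed
# reflectance bands nu1 and nu2 (cm^-1) quoted in the text.

nu1 = 18691.0   # 4A2g -> 4T2g(F)
nu2 = 22472.0   # 4A2g -> 4T1g(F)
B_free = 918.0  # cm^-1, free Cr(III) ion (assumed literature value)

Dq = nu1 / 10.0                                            # nu1 = 10Dq for d3
B = (2*nu1**2 + nu2**2 - 3*nu1*nu2) / (15*nu2 - 27*nu1)    # standard d3 relation
beta = B / B_free                                          # nephelauxetic ratio

print(f"Dq   = {Dq:.0f} cm^-1")
print(f"B    = {B:.0f} cm^-1")
print(f"beta = {beta:.2f}")  # beta < 1 signals appreciable covalent character
```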
The homogeneity, surface morphology and chemical composition of the Mfn-HCl free ligand and the Cr(III) complex were studied using SEM (Figure 3). The SEM micrograph reveals the uniform nature of the [Cr(Mfn-HCl) 2 (Cl) 2 ].Cl.6H 2 O complex, with variant grain sizes and shapes; clear large grains are obtained with agglomerates, and the grain-size distribution is homogeneous. X-ray powder diffraction patterns of the Mfn-HCl free ligand and the Cr(III) complex were recorded in the range 5° < 2θ < 80°. The diffractograms collected for these compounds are given in Figure 4, and the diffraction data, namely the angle (2θ°), interplanar spacing (d value, Å), and relative intensity (%), have been discussed. The X-ray patterns indicate an amorphous nature for the Cr(III)/Mfn-HCl complex, and the changed diffractogram of the Cr(III) complex can be attributed to the formation of a new structure. The most intense diffraction peaks of Mfn-HCl and the Cr(III) complex appear at 2θ (relative intensity) = 31° (100%) and 29° (100%), respectively [36]. The crystallite size could be estimated from the XRD patterns by applying the FWHM of the characteristic peaks in the Debye-Scherrer equation (4) [37].
D = Kλ / (β cos θ) (4)
Where D is the particle size of the crystal grain, K is a constant (0.94 for a Cu grid), λ is the X-ray wavelength (1.5406 Å), θ is the Bragg diffraction angle, and β is the integral peak width. The particle size was estimated from the most intense peak.
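A minimal Python sketch of the Scherrer estimate in equation (4). The wavelength, the shape constant 0.94, and the 2θ = 29° peak position come from the text; the FWHM value is a hypothetical placeholder, since it is not quoted in the extracted text.

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_A=1.5406, K=0.94):
    """Crystallite size D (Angstrom) from the Debye-Scherrer equation (4)."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle in radians
    beta = math.radians(fwhm_deg)               # peak width in radians
    return K * wavelength_A / (beta * math.cos(theta))

# 2-theta = 29 deg is the most intense Cr(III)-complex peak from the text;
# the 0.4 deg FWHM is a hypothetical placeholder.
D = scherrer_size(two_theta_deg=29.0, fwhm_deg=0.4)
print(f"D = {D:.0f} Angstrom (~{D/10:.1f} nm)")
```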
The thermal degradation behaviour of the Cr(III)/Mfn-HCl drug complex is an interesting tool for confirming the composition and assessing the role of the metal ion. The thermal analysis of the free metformin hydrochloride ligand shows two main consecutive mass-loss steps. The TG/DTG curves recorded for the [Cr(Mfn-HCl) 2 (Cl) 2 ].Cl.6H 2 O complex are given in Figure 5. These curves, which characterize the thermal decomposition behaviour in comparison with the Mfn-HCl ligand, show seven successive degradation steps (of weak to very strong intensity) at 30-97, 97-168, 168-315, 315-447, 447-490, 490-546 and 666-740°C. In the first to third steps, at T DTG of 60, 131 and 227°C, the mass loss of 17.00% is consistent with the evolution of six uncoordinated water molecules (calc. 18.07%). The fourth to seventh steps, at 315-447, 447-490, 490-546 and 666-740°C, with endothermic T DTG at 360, 465, 514 and 700°C, respectively, are assigned to the decomposition of the two Mfn-HCl ligands and three chlorine atoms. The final residue is chromium oxide (CrO 1.5 ) (calc. 12.71%, found 12.00%).
The kinetic and thermodynamic parameters were determined using non-isothermal methods. The non-isothermal kinetic analysis of the thermal decomposition of the Mfn-HCl ligand and the Cr(III) complex in this work was carried out by applying the Coats-Redfern [38] and Horowitz-Metzger [39] methods. From the TGA curves (TG/DTG) recorded for the successive steps in the decomposition of the Mfn-HCl ligand and its Cr(III) complex, the following characteristic thermal parameters could be determined for each reaction step. Initial temperature of decomposition (T i ): the point at which the DTG curve starts deviating from its baseline. Final temperature of decomposition (T f ): the point at which the DTG curve returns to its baseline. Peak temperature, i.e. the temperature of the maximum rate of mass loss (T DTG ): the point obtained from the intersection of tangents to the peak of the DTG curve. Mass loss at the decomposition step (∆m): the amount of mass lost from T i to T f on the TG curve. The material released at each decomposition step is identified by attributing the mass loss (∆m) at a given step to the component of similar weight calculated from the molecular formula of the investigated compounds, comparing this with the literature on relevant compounds and their temperatures; this may assist in identifying the reaction mechanism of the decomposition steps of the complex under study. Activation energy (E*) of the decomposition step: the integral method used is the Coats-Redfern equation [38] for reaction order n ≠ 1,

ln{[1 − (1 − α)^(1−n)] / [T²(1 − n)]} = ln[(ZR/qE*)(1 − 2RT/E*)] − E*/(RT)

which, when linearized for a correctly chosen n, yields the activation energy from the slope, where α is the fraction of weight loss, T the temperature (K), n the order of reaction, Z the pre-exponential factor, R the molar gas constant, E* the activation energy, and q the heating rate. The activation energies (E*) are calculated from the slopes of the best-fit straight lines (r ≈ 1) obtained when the Coats-Redfern plots are drawn for the best values of the reaction order (n). Order of reaction (n): the value for which the Coats-Redfern plot gives the best straight line (Figure 6) among the various trial values of n examined by trial and error; n was also estimated by the Horowitz-Metzger method [39] (Figure 7). The thermodynamic parameters, entropy change (∆S*), enthalpy change (∆H*) and free energy of activation change (∆G*), were calculated using the standard relations ∆S* = R ln(Zh/(k B T s )), ∆H* = E* − RT and ∆G* = ∆H* − T s ∆S*, where h is Planck's constant, k B the Boltzmann constant and T s the DTG peak temperature.
The negative ΔS* values indicate that the activated complex has a more ordered structure than the reactants and that the reactions are slower than normal [41]. The positive values of ΔG* indicate the non-spontaneous character of the reactions at the transition state. The positive ΔH* values show endothermic transition-state reactions [42]. From the abnormal values of Z, the reactions of the complexes at the transition state can be classified as slow [43]. The higher stability of the Cr(III) complex compared with the Mfn-HCl ligand may be due to the formation of two stable six-membered ring structures in the metal complex [44]; moreover, the higher the molecular symmetry, the more stable the molecule [45].
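Below is a minimal Python sketch of the Coats-Redfern fit described above, using the n ≠ 1 form of the equation. The (T, α) points and the trial orders are hypothetical placeholders; as in the paper, the best n is the one giving the straightest plot (r closest in magnitude to 1), and E* follows from the slope.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1, molar gas constant

def coats_redfern_fit(T, alpha, n):
    """Linearized Coats-Redfern plot for trial order n (n != 1).
    Returns the activation energy E* (kJ/mol) and the correlation coefficient r."""
    g = (1.0 - (1.0 - alpha)**(1.0 - n)) / (1.0 - n)   # integral conversion term
    y = np.log(g / T**2)                               # left-hand side of the equation
    x = 1.0 / T
    slope, intercept = np.polyfit(x, y, 1)             # y = slope/T + intercept
    r = np.corrcoef(x, y)[0, 1]
    return -slope * R / 1000.0, r                      # E* = -slope * R

# Hypothetical (T, alpha) data for one decomposition step, for illustration.
T = np.array([590.0, 610.0, 630.0, 650.0, 670.0, 690.0])   # K
alpha = np.array([0.10, 0.22, 0.38, 0.55, 0.72, 0.86])

# Trial-and-error over reaction orders: keep the n with the best linearity.
for n in (0.33, 0.5, 0.66, 2.0):
    E, r = coats_redfern_fit(T, alpha, n)
    print(f"n = {n:4.2f}: E* = {E:6.1f} kJ/mol, r = {r:.4f}")
```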
Biological evaluation

Effect on Superoxide Dismutase (SOD), Total Antioxidant Capacity (TAC), Malondialdehyde (MDA) and Reduced Glutathione (GSH):
Effect on SOD: The results revealed that the administration of Metformin/Cr+3 to diabetic rats afforded a slight but significant decrease in SOD activity when compared with the normal control group, while affording a highly significant increase when compared with the diabetic control group (STZ). Table 1 shows that the diabetic untreated group elicited a highly significant decrease in SOD when compared with the normal control group. The diabetic group treated with Metformin elicited a significant decrease in SOD activity, by 31.65%, when compared with the normal control group, whereas the chromium salt treated group elicited a non-significant decrease in SOD activity when compared with the normal control group.
Effect on GSH%:
Regarding the effect of Metformin and the Metformin/Cr+3 complex on GSH content, the diabetic untreated group elicited a highly significant decrease in GSH content as compared to the normal control group, while the Metformin/Cr+3 combination induced only a slight but significant decrease, by 5.21%, as compared to the normal control group, followed by the diabetic group treated with Metformin. The chromium salt treated group elicited a slight decrease in GSH level, as shown in Table 1.

Effect on MDA%: Table 1 illustrates that the administration of STZ alone to rats afforded a highly significant increase in MDA content, by 71.73%, as compared to the normal control group, while the diabetic group treated with Metformin/Cr+3 afforded a non-significant increase in MDA content as compared to the normal control group. The diabetic group treated with Metformin showed a significant increase in MDA content, although the effect was much less intense than in the diabetic untreated group; a slight decrease was recorded in the chromium salt treated group as compared to the normal control group.
Effect on TAC%: It was apparent from Table 1 that the administration of STZ alone to rats afforded a highly significant decrease in TAC activity, by 65.22%, as compared to the normal control group. Meanwhile, the administration of Metformin/Cr+3 afforded a significant increase in TAC activity compared with the diabetic untreated group, while eliciting only a slight decrease compared with the normal control group. The diabetic group treated with Metformin exhibited a significant decrease in TAC% activity as compared to the normal control group. The chromium salt treated group afforded a slight decrease in TAC activity as compared to the normal control group and showed non-significant changes as compared to the Metformin/Cr+3 complex treated group. Among the diabetic groups, the highest TAC% value was recorded in the group treated with Metformin/Cr+3.
ROS include superoxide free radicals, hydrogen peroxide, singlet oxygen, nitric oxide (NO), and peroxynitrite [52], which, if present at increased concentrations, can lead to cellular injury and demise through oxidative stress [53]. Most ROS occur at low levels and are scavenged by endogenous antioxidant systems that include superoxide dismutase (SOD), glutathione peroxidase, catalase, and small-molecule substances such as vitamins C, D, E, and K [54,55]. Yet one vitamin in particular, namely nicotinamide, may be considered to stand alone among antioxidants, since nicotinamide influences multiple pathways tied to both cellular survival and cellular death. In several scenarios, nicotinamide is a robust cytoprotectant that addresses both early membrane PS externalization and later genomic DNA degradation during oxidative stress [56,57] in a way that differs from other vitamins, and these findings go hand in hand with our results. In addition, nicotinamide prevents membrane PS exposure in vascular cells [58], which can reduce the risk of cardiovascular disorders.
Oxidative stress is one of the most dangerous influences on cellular activities. According to our results, complexation between Cr+3 and Metformin greatly scavenged free radical molecules and thus decreased the MDA level, MDA being the final end product of lipid peroxidation. It also increased the enzymatic capacities of SOD and GSH, thereby improving liver function, enhancing the conversion of blood glucose into glycogen, and decreasing the blood glucose level, which points toward a solution for the complications of diabetes mellitus.
Effect on blood glucose level, Hb, HbA1C
Effect on blood glucose level: It was clear from Table 2 that the administration of STZ at its recommended dose afforded a highly significant increase in blood glucose level when compared with the normal control group. The administration of Metformin to diabetic rats afforded a significant increase in blood glucose level when compared with the normal control group, but still showed a significant decrease when compared with the diabetic untreated group. Meanwhile, the administration of the Metformin/Cr+3 complex to diabetic rats afforded only a non-significant increase in blood glucose level when compared with the normal control group, giving better results than the other diabetic treated and untreated groups. The chromium salt treated group elicited a slight but significant increase in blood glucose level as compared to the normal control group.
DM affects both young and older individuals [59], and the complexation of Cr+3 with Metformin succeeded to a great extent in reducing the blood glucose level; this underlines the important role of Cr+3 in decreasing blood glucose after complexation with Metformin. A supplement containing trivalent chromium is needed for a person with type 2 diabetes mellitus, given its important role in glucose metabolism [61], and this greatly confirms and reinforces our findings, as the combination of Metformin with Cr(III) afforded a significant reduction in blood glucose level as compared to the diabetic untreated group. (Table note: means within the same column in each category carrying different letters are significant at P ≤ 0.05 using Duncan's multiple range test, where the highest mean value has the symbol (a) and decreasing values are assigned letters alphabetically.)
It is well known that hyperglycemia is the hallmark of diabetes. Our findings are greatly supported by Hai-yan et al. [46], who showed that an increase in blood glucose was observed after treating mice with alloxan intraperitoneally, and that after 15 days' administration of chromium methionine (CrMet), the blood glucose levels significantly decreased in comparison with the diabetic control group. These findings indicated that CrMet had a hypoglycemic effect on AID mice.
Our results are in accordance with Sahin et al. [62], who indicated that the anti-hyperglycemic activity of CrMet was superior to that of CrCl 3 ·6H 2 O and equivalent to that of CrNic. Ghiasi et al. [63] found that supplemental CrMet in the diet for 6 weeks could significantly decrease the blood glucose levels of fructose-fed diabetic rats.
Effect on Hb%:
The results revealed that the diabetic untreated group (STZ) elicited a highly significant decrease in Hb content when compared with the normal control group, as shown in Table 2. Treatment of diabetic rats with Metformin/Cr+3 elicited a non-significant decrease in Hb content as compared to the normal control group, while the diabetic group treated with Metformin showed a significant decrease in Hb content when compared with the normal control group. The chromium salt treated group elicited a slight decrease in Hb level as compared to the normal control group.
Effect on HbA1C: Four weeks after the administration of STZ and the other treatments, the diabetic untreated group afforded a significant increase in HbA1c compared with the normal control group, followed by the diabetic group treated with Metformin, with a 23.69% increase as compared to the normal control group. Treatment of diabetic rats with Metformin/Cr+3 afforded a non-significant increase in HbA1c as compared to the normal control group, thus showing the best results in treating diabetes mellitus (Table 2); meanwhile, the chromium salt treated group afforded a non-significant increase in glycated haemoglobin as compared to the normal control group.
In uncontrolled or poorly controlled diabetes, there is increased glycosylation of a number of proteins, including haemoglobin [60]. Glycosylated hemoglobin (HbA1C) was significantly increased in diabetic animals, and this increase was found to be directly proportional to the fasting blood glucose level. During diabetes, the excess glucose present in the blood reacts with hemoglobin; therefore, the total hemoglobin level is decreased in diabetic rats [64]. In this study, a decrease in total hemoglobin during diabetes was observed in the diabetic untreated group, which may be due to the formation of glycosylated hemoglobin. Administration of Metformin/Cr+3 to diabetic rats prevented a significant elevation in glycosylated hemoglobin, thereby increasing the level of total hemoglobin in diabetic rats. This could be due to the improved glycaemic control produced by this new complex.
Effect on lipid profile
Effect on cholesterol: Table 3 and Figure 8 demonstrate that diabetic untreated rats (STZ) afforded a significant increase in cholesterol level as compared to the normal control group, while treatment of diabetic rats with Metformin/Cr+3 elicited non-significant changes in cholesterol level when compared with the normal control group. The diabetic group treated with Metformin alone afforded a significant elevation in cholesterol when compared with the normal control group. Both groups treated with either Metformin or Metformin/Cr+3 showed a significant decrease in cholesterol level when compared with the diabetic untreated group, but the effect was more intense in the group treated with Metformin/Cr+3. A non-significant increase in cholesterol level was reported in the CrCl 3 ·6H 2 O treated group as compared to the normal control group.
Our results are greatly in accordance with Hai-yan et al. [46], who tested the anti-diabetic activity of chromium methionine (CrMet) in detail; their results showed that CrMet had beneficial effects on glucose and lipid metabolism and might possess hepatoprotective efficacy in diabetes.

Effect on triglycerides (TG): It was clear from Table 3 and Figure 8 that the diabetic untreated group showed a significant elevation in TG levels, by 59.12%, as compared to the normal control group. The diabetic group treated with Metformin/Cr+3 afforded a non-significant increase in TG levels as compared to the normal control group, while showing a significant decrease as compared to the diabetic untreated group. The diabetic group treated with Metformin alone elicited a significant increase in TG levels when compared with the normal control group; the best results, that is, the smallest TG increment, were noticed in the group treated with Metformin/Cr+3. Meanwhile, the chromium treated group elicited a non-significant increase in triglyceride level as compared to the normal control group.
Effect on HDL-c:
Administration of Metformin/Cr+3 to diabetic rats afforded a slight decrease in HDL-c levels when compared with the normal control group, while the diabetic untreated group elicited a highly significant decrease in HDL-c levels as compared to the normal control group. Meanwhile, the diabetic group treated with Metformin elicited a significant decrease in HDL-c level with respect to the normal control group (Figure 8). The chromium treated group afforded a slight but significant decrease in HDL-c level as compared to the normal control group.
Effect on LDL-c: At the same time, Table 3 and Figure 8 demonstrate that administration of Metformin/Cr+3 to diabetic rats exhibited non-significant changes in LDL-c levels as compared to the normal control group. Meanwhile, the diabetic untreated group showed a significant increase in LDL-c levels as compared both to the normal control group and to the diabetic group treated with Metformin. The chromium treated group afforded a non-significant decrease in LDL-c level in comparison with the normal control group.
Effect on vLDL-c: Table 3 and Figure 8 illustrate the effect of Metformin and the Metformin/Cr+3 complex on vLDL-c: there was a significant increase in the diabetic untreated group in response to the administration of STZ alone. Treatment of diabetic rats with Metformin/Cr+3 afforded a significant decrease in vLDL-c, by 57.34%, as compared to the diabetic rats, while showing a non-significant increase when compared with the normal control group. Meanwhile, the diabetic group treated with Metformin showed a significant decrease in vLDL-c when compared with the diabetic untreated group and a significant increase when compared with the normal control group.
Effect on risk ratio: It was clear from Table 3 and Figure 8 that the administration of STZ to rats elicited the highest risk value as compared to the normal control group and the other diabetic treated groups. The group that best succeeded in reducing the risk factor was the diabetic group treated with Metformin/Cr+3, which showed a non-significant increase in risk ratio as compared to the normal control group. The results revealed that the diabetic group treated with Metformin/Cr+3 showed the lowest risk ratio among the diabetic groups, followed by the diabetic group treated with Metformin.
The chromium salt treated group elicited a slight but significant increase in risk ratio as compared to the normal control group, though it remained much lower than in the other diabetic groups.
Free fatty acids can lead to ROS release and contribute to mitochondrial DNA damage and impaired pancreatic β-cell function [47]. In patients with type 2 DM, skeletal muscle mitochondria have been described as smaller than those in control subjects [48]. In addition, a decrease in the levels of mitochondrial proteins and mitochondrial DNA in adipocytes has been correlated with the development of type 2 DM [49]. Insulin resistance in the elderly has also been associated with elevated fat accumulation and reduced mitochondrial oxidative and phosphorylation activity [50]. An association also exists between insulin resistance and the impairment of intracellular fatty acid metabolism in young insulin-resistant offspring of parents with type 2 DM [51]. These reports agree with our results, which confirm the increased levels of fatty acids, LDL, triglycerides and cholesterol in diabetic rats, and the significant decrease in these levels in the diabetic group treated with Metformin/Cr+3 as compared to the normal control group; this correlates with the role of Cr+3 in decreasing fatty acid levels and thus enhancing the properties of Metformin.
Effect on insulin level:
Concerning the effect of Metformin and its Cr+3 complex on insulin level, the results revealed that administration of the Metformin/Cr+3 complex afforded a non-significant increase in insulin level as compared to the normal control group, while the administration of STZ alone elicited a significant decrease in insulin level when compared with the normal control group. The diabetic group treated with Metformin elicited a significant decrease in insulin level, by 60%, as compared to the normal control group, as shown in Table 4. No significant change was reported in the chromium salt treated group. The Cr(III) ion interacts with insulin and its receptors in the first step of glucose entry into the cell and facilitates the interaction of insulin with its receptor on the cell surface [65]; our results are in harmony with these findings, as the combination of Metformin with Cr+3 increased the insulin level in diabetic rats. The best results were shown in the diabetic group treated with Metformin/Cr+3, with an insulin level higher by 1.76% as compared to the normal control group and a 65.11% increment as compared to the diabetic untreated group. This confirms our results reporting the success of the Cr+3 complex with Metformin in reducing blood glucose level and increasing insulin level, thus alleviating the side effects of diabetes mellitus and improving the characterization of Metformin/Cr+3; to our knowledge, this is the first report to clarify this improving effect of Metformin/Cr+3 on reducing diabetes mellitus complications. Chromium increases insulin binding to cells and insulin receptor number, and activates insulin receptor kinase, leading to increased sensitivity of the insulin receptor [66].
Effect on serum C-peptide: It was apparent from Table 4 that administration of Metformin/Cr+3 afforded a non-significant increase in C-peptide compared with the normal control group, while the diabetic untreated group showed the lowest serum C-peptide value among all the diabetic groups. Administration of Metformin alone induced a significant decrease in C-peptide compared with the normal control group but a significant increase compared with the diabetic untreated group. The CrCl3·6H2O-treated group elicited a non-significant decrease in serum C-peptide level compared with the normal control group. The connecting peptide, or C-peptide, is a short 31-amino-acid protein that connects insulin's A-chain to its B-chain in the proinsulin molecule. In the insulin synthesis pathway, preproinsulin is first translocated into the endoplasmic reticulum of the beta cells of the pancreas, with an A-chain, a C-peptide, a B-chain and a signal sequence.
Measuring C-peptide can help to determine how much natural insulin is being produced, as C-peptide is secreted in equimolar amounts to insulin. C-peptide levels are measured instead of insulin levels because C-peptide can assess a person's own insulin secretion even if they receive insulin injections, and because the liver metabolizes a large and variable amount of the insulin secreted into the portal vein but does not metabolize C-peptide; blood C-peptide may therefore be a better measure of portal insulin secretion than insulin itself [67]. A very low C-peptide confirms type 1 diabetes and insulin dependence and is associated with high glucose variability, hypoglycaemia and increased complications. This is an important concept that explains our findings: the highest C-peptide value was found in the diabetic group treated with Metformin/Cr+3, and thus we can say that the higher the C-peptide, the higher the insulin level. To the best of our knowledge, this is the first report on the effect of Metformin/Cr+3 in ameliorating the C-peptide value and consequently increasing the insulin level.
Ultrastructural results
Acinar cells: Ultrastructure of control pancreas showed acinar cells with euchromatic nuclei, well-developed cisternae of rough endoplasmic reticulum, mitochondria and numerous electron dense secretory granules of variable sizes in the apical part (Figure 9a).
Electron microscopic examination of the diabetic untreated group showed marked changes in pancreatic acini represented by dilated rough endoplasmic reticulum, a decrease of secretory granules, cytoplasmic vacuolation, damaged mitochondria, autophagic vacuoles and irregular contours of nuclei (Figure 9b). The STZ + Metformin group showed marked changes in pancreatic acini represented by damaged mitochondria (M), dilated rough endoplasmic reticulum, a decrease of secretory granules and cytoplasmic vacuolation (Figure 9c). Pancreatic acini after supplementation with Metformin/Cr+3 in diabetic rats showed marked improvement represented by an increase in zymogen granules, regular contours of nuclei and flattened rough endoplasmic reticulum, except for a few vacuoles (Figure 9d). Electron microscopic examination confirmed the presence of apoptotic changes manifested as nuclear pyknosis, indentation of the nuclear membrane, chromatin release into the cytoplasm, swollen mitochondria, dilation of Golgi apparatus vesicles and disappearance of secretory granules in the STZ diabetic group, and the amelioration of these changes appeared clearly in the Metformin/Cr+3 diabetic treated group.
Conclusion
The chemical interaction between chromium(III) chloride hexahydrate and metformin HCl (Mfn.HCl) produces the chromium(III) metformin hydrochloride complex, which acts as an antidiabetic mimetic in the diabetic model. The infrared spectroscopic results proved that metformin hydrochloride reacted with chromium(III) ions as a bidentate ligand through its two imino groups. The chromium(III) metformin complex succeeded in decreasing blood glucose parameters in diabetic rats, proving its antidiabetic performance, and the metformin and chromium(III) complex was also efficient in elevating antioxidant capacities.
Table 4: Effect of Metformin and the Metformin/Cr+3 complex on insulin level (µU/ml, in pancreatic homogenates) and serum C-peptide (pmol/mL) in normal and diabetic rats.
Means within the same column in each category carrying different letters are significantly different at (P ≤ 0.05) using Duncan's multiple range test, where the highest mean value is assigned the symbol (a) and decreasing values are assigned subsequent letters alphabetically. | 2019-04-08T13:10:58.771Z | 2015-03-24T00:00:00.000 | {
"year": 2015,
"sha1": "c527485be1635a6ba1ae83887ad98f99a7dea255",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/synthesis-characterization-and-antidiabetic-activity-of-chromium-iii-metformin-complex-1948-5948-1000184.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed9293292ebfdf3e58c1f3f3f10159bce1742296",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
38542726 | pes2o/s2orc | v3-fos-license | Fine-needle aspiration findings in epithelioid myoepithelioma of the parotid gland: A diagnostic pitfall
The cytological features of myoepithelioma of the parotid gland are documented in only a few case reports. We describe the fine-needle aspiration cytological findings in a case of epithelioid myoepithelioma of the parotid gland in a 43-year-old male. The differential diagnosis with other salivary gland neoplasms is discussed.
INTRODUCTION
Myoepithelioma is an uncommon benign tumor of the salivary glands. It accounts for only 1.5% of all tumors in the major and minor salivary glands. [1,2] Four major histological subtypes are well established (epithelioid, spindle cell, plasmacytoid cell and clear cell). [1,2] Because of the varied morphology, the lack of established cytologic criteria and the vast differential diagnosis, diagnosing myoepithelioma on fine-needle aspiration cytology (FNAC) is difficult.
CASE REPORT
A 43-year-old male presented with a 2-year history of a painless swelling in the right preauricular region. On examination, a 3-cm diameter well-defined firm lesion was palpable. There was no associated facial weakness or cervical lymphadenopathy. Ultrasonography revealed a hypoechoic lesion measuring 3.6 × 3.4 × 2 cm posterior to the angle of the mandible. FNAC showed a moderate to high cell yield [Figure 1a]. These cells were arranged predominantly in discrete forms, cohesive clusters, and rare acinar, papillary and cylindromatous patterns [Figure 1b and c]. The cells were epithelioid, having mild anisokaryosis, fine uniformly distributed chromatin, regular nuclear borders, ovoid nuclei, indistinct nucleoli and a moderate amount of cytoplasm [Figure 1d]. Some of the clusters showed fibrillary material without definite chondromyxoid stroma on Giemsa stain [Figure 1a inset]. Plenty of cyst macrophages were seen. There was a notable absence of tubule formation, mitotic figures, necrosis and cytological atypia. Based on the cytomorphology, a diagnosis of low-grade parotid neoplasm was rendered.
The patient subsequently underwent a right superficial parotidectomy with resection of the mass. Grossly, the salivary gland specimen measured 4 × 3.5 × 2 cm. On cut section the tumor was solid, grey-white, well-encapsulated and measured 3 × 2.5 cm [Figure 2a]. Histopathologic examination showed an encapsulated tumor composed of small round to polygonal cells with centrally located nuclei, fine chromatin and variable amounts of eosinophilic to clear cytoplasm, arranged in nests, glandular and trabecular patterns and in sheets. These cells showed occasional nuclear grooves with microcystic areas [Figure 2b-d]. There was no necrosis, cellular atypia or mitosis. The histopathological differential diagnoses of primary neuroendocrine tumor (carcinoid), paraganglioma and metastatic papillary thyroid carcinoma were considered. On immunohistochemistry, tumor cells were positive for p63, S-100, SMA (smooth muscle actin), CK (cytokeratin) and HMWCK (high molecular weight CK). However, tumor cells were immunonegative for TTF1 (thyroid transcription factor), chromogranin and EMA (epithelial membrane antigen). A final diagnosis of myoepithelioma, epithelioid type, was made. There was no evidence of recurrence at one year of follow up.
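A hypothetical sketch of how the immunohistochemistry panel from this case could be recorded and screened for a myoepithelial immunophenotype; the rule below is a simplification for illustration only, not a validated diagnostic algorithm:

# Panel results from this case (True = positive staining).
ihc = {
    "p63": True, "S-100": True, "SMA": True, "CK": True, "HMWCK": True,
    "TTF1": False, "chromogranin": False, "EMA": False,
}

MYOEPITHELIAL_MARKERS = ["p63", "S-100", "SMA", "CK"]

def myoepithelial_phenotype(panel: dict) -> bool:
    """True if the panel is positive for the core myoepithelial markers."""
    return all(panel.get(marker, False) for marker in MYOEPITHELIAL_MARKERS)

print(myoepithelial_phenotype(ihc))  # True, consistent with myoepithelioma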
DISCUSSION
Myoepitheliomas are common in the parotid gland; clinically, both sexes are affected equally, with a mean age of 44 years at presentation. [3] The criteria for classifying a myoepithelioma on morphology alone are somewhat subjective: the tumor should be composed predominantly of myoepithelial cells, with (less than 10%) or without duct-like structures, and with or without myxoid stroma. [4] Hence, immunohistochemical and/or ultrastructural studies are essential for arriving at the correct diagnosis.
In one study consisting of seven myoepitheliomas, the cytological findings of plasmacytoid, spindle and mixed types were described. In the plasmacytoid type, the smears showed clusters and single cells. The cells were round to oval with eccentrically placed nuclei and abundant granular cytoplasm. Acinar and gland-like arrangements were noted, but myxoid tissue was rarely seen. This contrasted with the spindle cell type, consisting of numerous clusters of spindle cells with scant cytoplasm and the focal presence of myxoid tissue. The mixed type showed a combination of plasmacytoid and spindled cells in a myxoid background. None of the seven cases described were diagnosed as myoepithelioma on aspiration cytology. Plasmacytoid myoepitheliomas were diagnosed on FNAC as "plasmacytoma or pleomorphic adenomas." Those that were diagnosed histologically as myoepithelioma, spindle cell type, were diagnosed on FNAC as "spindle cell neoplasm." Mixed-type myoepitheliomas were diagnosed on FNAC as "inflammatory pseudotumors or pleomorphic adenomas." This study highlighted the difficulty in differentiating myoepitheliomas from pleomorphic adenomas by cytomorphology. [5] In pleomorphic adenomas, the epithelial/ductal cells are small and cuboidal, arranged in flat sheets or trabeculae that can undergo squamous, oncocytic or sebaceous metaplasia. Myoepithelial cells are usually present, can be spindled, stellate or plasmacytoid, and are found in clusters, as discrete cells, or within the chondromyxoid matrix. Chondromyxoid matrix material is the most specific feature for making the correct diagnosis. In our case, there was a predominance of epithelioid-type cells with minimal stroma, making the distinction morphologically difficult [Tables 1 and 2]. Also, in our case there was the presence of scant hyaline to myxoid material that appeared fibrillar or in dense globules and surrounded the cells in a vaguely cylindromatous pattern. Others have described this pattern in myoepitheliomas. [6,7] Adenoid cystic carcinomas consist of numerous cohesive three-dimensional clusters of small basaloid cells with scant cytoplasm, often arranged around an amorphous basement membrane-type material (cylindromatous pattern). The basaloid cells have a high nuclear to cytoplasmic ratio, hyperchromatic nuclei and coarse granular chromatin. [7] Although our case had rare cylindromatous structures, the tumor cells were extremely bland in appearance, lacking cytologic atypia or mitotic figures, thus making adenoid cystic carcinoma unlikely. Basal cell adenoma should also be included in the differential diagnosis.
Malignant myoepithelioma is difficult to distinguish from benign myoepithelioma. [8,9] Histologically, the most important diagnostic feature for malignancy is tumor infiltration into the adjacent normal tissue, which is impossible to discern on cytology. The diagnosis of malignancy depends on cytologic features like cellular atypia, pleomorphism, necrosis and mitotic activity, which can be absent. Many authors have reported that when malignant cytologic features are not present, a diagnosis of malignancy may not always be possible, especially when the diagnosis of malignancy is based solely on an infiltrative growth pattern. The assessment of cell proliferative activity with the use of the immunohistochemical stain for MiB-1 (Ki-67) (>10%) is a useful aid to distinguish between benign and malignant myoepithelioma, as described. [10] Immunohistochemistry is useful for confirming the diagnosis of myoepithelioma. Reports have shown positivity for alpha SMA, S-100 protein, CK, vimentin, desmin, p63, calponin, smooth muscle myosin heavy chain and glial fibrillary acidic protein [4] and negativity for EMA. [3] The positivity for p63, SMA, S-100, CK and HMWCK confirmed the diagnosis of myoepithelioma in our case.
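A minimal sketch of the Ki-67 (MiB-1) proliferation-index arithmetic behind the >10% cut-off cited above; the nuclear counts are illustrative placeholders, not data from this case:

def ki67_index(positive_nuclei: int, total_nuclei: int) -> float:
    """Percentage of tumor nuclei staining positive for Ki-67 (MiB-1)."""
    if total_nuclei <= 0:
        raise ValueError("total_nuclei must be positive")
    return 100.0 * positive_nuclei / total_nuclei

index = ki67_index(positive_nuclei=42, total_nuclei=1000)  # illustrative counts
verdict = "suspicious for malignancy" if index > 10 else "favours benign"
print(f"Ki-67 index: {index:.1f}% -> {verdict}")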
To conclude, myoepithelioma-epithelioid type should be considered in the differential diagnosis of low-grade parotid neoplasm with a cylindromatous pattern because it is less prone to recurrence than pleomorphic adenomas, basal cell adenoma and adenoid cystic carcinoma. To the best of our knowledge, this is the third case report of an epithelioid myoepithelioma of the parotid gland on FNAC. | 2018-04-03T04:55:37.544Z | 2014-01-01T00:00:00.000 | {
"year": 2014,
"sha1": "795a56918c36dcc5c8fbb88b301bfc039e67bd8f",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc4065431",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "03336c4cee124533eef4058bc5daa38f11d70b67",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252055068 | pes2o/s2orc | v3-fos-license | Socioeconomic gradient in mortality of working age and older adults with multiple long-term conditions in England and Ontario, Canada
Background There is currently mixed evidence on the influence of long-term conditions and deprivation on mortality. We aimed to explore whether number of long-term conditions contribute to socioeconomic inequalities in mortality, whether the influence of number of conditions on mortality is consistent across socioeconomic groups and whether these associations vary by working age (18–64 years) and older adults (65 + years). We provide a cross-jurisdiction comparison between England and Ontario, by replicating the analysis using comparable representative datasets. Methods Participants were randomly selected from Clinical Practice Research Datalink in England and health administrative data in Ontario. They were followed from 1 January 2015 to 31 December 2019 or death or deregistration. Number of conditions was counted at baseline. Deprivation was measured according to the participant’s area of residence. Cox regression models were used to estimate hazards of mortality by number of conditions, deprivation and their interaction, with adjustment for age and sex and stratified between working age and older adults in England (N = 599,487) and Ontario (N = 594,546). Findings There is a deprivation gradient in mortality between those living in the most deprived areas compared to the least deprived areas in England and Ontario. Number of conditions at baseline was associated with increasing mortality. The association was stronger in working age compared with older adults respectively in England (HR = 1.60, 95% CI 1.56,1.64 and HR = 1.26, 95% CI 1.25,1.27) and Ontario (HR = 1.69, 95% CI 1.66,1.72 and HR = 1.39, 95% CI 1.38,1.40). Number of conditions moderated the socioeconomic gradient in mortality: a shallower gradient was seen for persons with more long-term conditions. Conclusions Number of conditions contributes to higher mortality rate and socioeconomic inequalities in mortality in England and Ontario. Current health care systems are fragmented and do not compensate for socioeconomic disadvantages, contributing to poor outcomes particularly for those managing multiple long-term conditions. Further work should identify how health systems can better support patients and clinicians who are working to prevent the development and improve the management of multiple long-term conditions, especially for individuals living in socioeconomically deprived areas. Supplementary Information The online version contains supplementary material available at 10.1186/s12889-023-15370-y.
Background
The prevalence of multiple long-term conditions (known as multimorbidity) is rising [1,2], and in the general population varies between 13% and 72% depending on the definition, country, data source, and age range [3]. Multiple long-term condition prevalence is increasing in most demographic groups and is higher in socioeconomically deprived groups [4-11]. Greater socioeconomic deprivation can increase exposure to health-damaging factors leading to higher vulnerability, which may lead to higher prevalence and earlier onset of multiple long-term conditions among deprived groups [2,6,7,10].
Multiple long-term conditions also influence subsequent outcomes (including mortality/survival), and it is possible that the intersection of multiple long-term conditions and socioeconomic factors may contribute to worse outcomes. People with multiple long-term conditions are at risk of poorer outcomes such as lower quality of life and higher mortality rates [12-16]. Managing multiple long-term conditions is also associated with higher health care use, with direct and indirect costs such as increasing inability to work [17,18]. There is evidence that socioeconomic deprivation contributes to a higher burden of treatment associated with managing multiple long-term conditions [19-21], including potential issues in treatment or care quality (e.g. polypharmacy, drug adverse events, compliance) [22-26]. However, it is less clear whether socioeconomic deprivation also contributes to poorer survival for those with multiple long-term conditions. Current evidence is mixed on whether socioeconomic deprivation exacerbates the mortality risk of people with multiple long-term conditions [13, 27-32]. It is important to understand these socioeconomic inequalities in order to mitigate adverse health outcomes associated with multiple long-term conditions.
In addition, even though the socioeconomic gradients in morbidity and mortality are well recognised [13, 27-36], there has been less attention to whether the association between multiple long-term conditions and mortality holds across age groups. A focus on working age adults may be warranted because the association between socioeconomic deprivation and mortality is stronger at younger ages and socioeconomic inequalities in mortality have widened over time in people of working age [36-38]. Similarly, socioeconomic inequalities in multiple long-term condition prevalence have increased over time for all ages, but increases are greatest in people of working age [2]. Emerging evidence suggests that the influence of number of conditions on survival weakens with age and is stronger for working age adults (under 65s) than older adults (65+ years), though this warrants replication in nationally representative samples [13]. Together these studies suggest that socioeconomic deprivation, having multiple long-term conditions, and plausibly the combination of these could have greater impact on survival in people of working age compared with older people, though this has not been directly tested.
Drawing conclusions from these different studies is difficult because they used different measures of multiple long-term conditions (including the number and type of conditions) and because national or local differences in inequalities in health and access to healthcare could lead to differences in outcomes. Our study will provide a cross-jurisdiction comparison between England and Ontario (Canada's largest province by population). Both jurisdictions offer universal healthcare and have comparable measures of socioeconomic deprivation (at the area level). They have also both previously prioritised tackling inequalities in primary care but have taken different approaches to do so which has resulted in differing trajectories in inequalities in mortality [39].
Consequently, the aims of this retrospective cohort study are to: i) explore whether number of conditions contributes to explaining socioeconomic inequalities in mortality, ii) examine whether the association between number of conditions and mortality differs by level of socioeconomic deprivation and iii) assess whether the magnitude or direction of these associations vary between working age and older adults. We replicate the analysis using the same long-term health conditions and comparable data sets from England and Ontario. We use individual-level measures of long-term conditions, age and sex, and rely on area-level measures of socioeconomic deprivation. Both individual-level socioeconomic position and area-level socioeconomic deprivation indicators have been used to study the gradient in multiple long-term conditions [4-11]. Although multilevel study designs have shown that each can independently affect outcomes such as mortality through multiple material, psychosocial and behavioural mechanisms, area-level deprivation is often recommended for use as a proxy for patient socioeconomic deprivation in population-based studies using routine health data [40].
England
A simple, random sample of 600,000 adults aged 18 + was drawn from the Clinical Practice Research Datalink (CPRD Aurum). This is an ongoing pseudonymised research database from routinely collected primary care records. Over 98% of the population in England is registered in a primary care practice and CPRD Aurum is derived from the largest of the four IT systems used across primary care practices in England (56% of practices) [41]. It is representative of the English population in terms of geographical spread, deprivation, age and gender [41] and includes primary care records from over 40 million patients (13.35 million patients currently registered as of February 2022). Primary care records include information on diagnoses, symptoms, prescriptions, referrals, and tests. Eligible adults (n = 8,750,306) were registered in a CPRD practice on 1st January 2014 (to ensure records were up to date at least one year before the study start), were alive and still registered at the study start on 1st January 2015, and were eligible for linkage to Hospital Episode Statistics (HES) and Office for National Statistics mortality data. They were followed until the study end (31 December 2019) or death if this was earlier and were censored if they left the CPRD practice or the practice stopped providing data to CPRD. The CPRD reviewed and approved the ethics and methods of this study (eRAP protocol number 20_000239). Informed individual consent was not required as patients are pseudonymised and cannot be identified from CPRD. Patients were still able to opt-out from sharing their data for research purposes through the national data opt-out scheme introduced in 2018 [42].
Ontario
A simple, random sample of adults was drawn from health administrative data covering the Ontario population eligible for universal health coverage (Supplementary Table 1). These data are linked using unique encoded identifiers and analysed at ICES. Specifically, we identified all persons in the Registered Persons Database who were alive and residing in the province of Ontario between the ages of 18 and 105 years on January 1, 2015 (n = 11,916,158). Residents were excluded from the study if they did not have a valid OHIP health card at index, or for the full 365 days prior to index (n = 1,188,678). From those remaining (n = 10,727,480) we retained a random sample of 600,000 for analysis. Participants were followed up until the study end (31 December 2019) or death if this was earlier, and individuals for whom no outcome was observed were censored at the study end date or the date at which they were no longer eligible for OHIP coverage, whichever came first. The use of data in this project was authorized under section 45 of Ontario's Personal Health Information Protection Act, which does not require review by a Research Ethics Board. Informed consent from participants was not required because we used health information routinely collected in Ontario and held in health administrative databases.
Measures
Survival time was calculated from 1st January 2015 to death or censoring. Age (in years) and sex (classed into two categories: men/women) were measured at baseline. Socioeconomic deprivation was captured by area-level deprivation. In England this was measured using the 2015 Index of Multiple Deprivation (IMD) decile of the patient's area of residence, based on lower-layer super output area boundaries, and includes dimensions of education, living environment, income, employment, housing, health and disability, and crime [43]. In Ontario, deprivation was measured using the 2016 material deprivation index (ON-MARG data). This area-based measure includes dimensions of income, education, housing and family structure characteristics and is closely connected to poverty [44]. For each measure, decile 1 represents the least deprived tenth of areas and decile 10 represents the most deprived tenth of areas in the analyses. Supplementary Table 2 describes the components of the IMD and ON-Marg indices. These are comparable indicators of socioeconomic deprivation that have been previously used to compare socioeconomic inequalities in mortality across these jurisdictions [39].
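As a rough illustration of how such deciles are typically derived (this is not the study's code; the column names and the direction of the raw score are assumptions), areas can be ranked by their deprivation score and split into ten equal groups:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
areas = pd.DataFrame({
    "area_id": np.arange(1000),
    # hypothetical raw index score; here, higher = more deprived
    "deprivation_score": rng.normal(size=1000),
})
# Ten equal-sized groups: decile 1 = least deprived, decile 10 = most deprived
areas["deprivation_decile"] = pd.qcut(
    areas["deprivation_score"], q=10, labels=range(1, 11)
).astype(int)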
The number of long-term conditions was counted at study start. In the primary analyses, we used a sub-set of nineteen physical and mental health conditions that have previously been associated with higher mortality risk, poorer functioning, and requiring primary care input [8,9]. To ensure that our analyses between the two jurisdictions are comparable, long-term conditions included need to be measurable both in the England and Ontario samples. These were alcohol misuse, arthritis, asthma, atrial fibrillation, cancer (any), chronic obstructive pulmonary disease, congestive heart failure, dementia, diabetes, epilepsy, ischemic heart disease, hypertension, kidney disease, multiple sclerosis, Parkinson's disease, psoriasis, schizophrenia, stroke (including transient ischemic attack) and thyroid disorder. To test whether the total number and type of conditions included influenced the relationship with mortality, we also included an additional seven conditions in the sensitivity analyses including mental health conditions. These were anxiety and depression, blindness, bronchiectasis, diverticulosis, hearing loss, liver disease and substance misuse. These were limited to sensitivity analyses because there was an established SNOMED code for England, but the constructs did not align well, or we were unable to check our prevalence estimate with published data for Ontario (Supplementary Table 3 for England and Supplementary Table 4 for Ontario). In England, conditions were coded using previously published SNOMED code list [2,45] and/or prescription code from CPRD (see Supplementary Table 3). For Ontario, conditions were identified using a mix of disease-specific registries using validated algorithms for health administrative data, and from OHIP, DAD, OMHRS and NACRS datasets using case ascertainment algorithms from the Canadian Chronic Disease Surveillance system [46,47] or code sets identified from prior research studies (see Supplementary Table 4). For each person in the dataset, we summed the total number of conditions prevalent at baseline.
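A minimal sketch of the condition-count construction (not the authors' code; the condition flags and names are illustrative):

import pandas as pd

# One row per participant; each column is a 0/1 flag for a condition
# ascertained at baseline from coded records.
CONDITIONS = ["asthma", "diabetes", "hypertension", "copd", "cancer"]  # subset shown

patients = pd.DataFrame(
    [[1, 0, 1, 0, 0],
     [1, 1, 1, 1, 1],
     [0, 0, 0, 0, 0]],
    columns=CONDITIONS,
)
patients["n_conditions"] = patients[CONDITIONS].sum(axis=1)
# Capped at 6 in the regression models (see Statistical analyses below)
patients["n_conditions_capped"] = patients["n_conditions"].clip(upper=6)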
Statistical analyses
The final analytical sample (England: N = 599,487, Ontario: N = 594,546) included those with complete data on baseline sex, age and deprivation (England: N = 513 excluded, Ontario: N = 5,474 excluded). Excluded patients in England and Ontario were more likely to be younger and male, and in Ontario they had more long-term conditions (Supplementary Table 5).
The association between survival time and deprivation was modelled using Cox proportional hazards models. It was not possible to merge the two patient-level data sets so separate analyses were done for England and Ontario but the same modelling approach was used. For England, two-level models were used to allow for the clustering of patients within GP practices. The assumption of proportional hazard was tested statistically and visually using Schoenfeld residuals in a model that includes, age, sex, number of long-term conditions and deprivation. For England, the results show varying baseline hazard for age (p < 0.001) and sex (p = 0.060) at later follow up times. For Ontario, the results show varying baseline hazard for age (p < 0.001) and number of conditions (p < 0.001) at later follow up times. However, when these variables were included as time-varying for the respective models in both jurisdictions, the estimates did not materially change and therefore the simpler models are presented and were deemed to be representative.
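The paper's analyses were run in R (England) and SAS/Stata (Ontario); as a rough illustration in Python with the open-source lifelines package (a swap, not the authors' code), the modelling step and the Schoenfeld-residual check of the proportional hazards assumption look like this, on synthetic data:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.integers(18, 90, size=n),
    "sex": rng.integers(0, 2, size=n),                  # 0 = men, 1 = women (assumed coding)
    "deprivation_decile": rng.integers(1, 11, size=n),  # treated as continuous here for brevity
    "n_conditions": rng.integers(0, 7, size=n),         # capped count, 0-6
})

# Synthetic survival times whose hazard rises with age, deprivation and conditions
linpred = 0.07 * df["age"] + 0.05 * df["deprivation_decile"] + 0.3 * df["n_conditions"]
df["survival_time"] = rng.exponential(scale=np.exp(8.0 - linpred))
df["died"] = (df["survival_time"] < 5.0).astype(int)
df.loc[df["died"] == 0, "survival_time"] = 5.0          # administrative censoring at 5 years

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_time", event_col="died")
cph.print_summary()                                     # hazard ratios with 95% CIs
cph.check_assumptions(df, p_value_threshold=0.05)       # Schoenfeld residual tests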
Model 1 included age, sex and deprivation. In model 2, we included deprivation and the interaction of age by deprivation and compared with model 1 to test the hypothesis that the association between deprivation and mortality differs with age. If the interaction was significant then subsequent models were stratified by age groups, to allow for the relationship between deprivation and mortality to differ between working age (18-64 years) and older (65 + years) adults.
To examine whether number of long-term conditions contributed to the association between mortality and deprivation, we added number of long-term conditions in the model (model 3) [48]. Number of long-term conditions was added as a continuous variable after confirming the linear association with survival time ( Supplementary Fig. 1). Number of long-term conditions was capped at 6 conditions as non-linearity in survival was observed for persons with more than 6 conditions which was likely affected by smaller and variable sample sizes in these groups.
To test whether the association between mortality hazard and number of conditions was consistent across deprivation groups, interaction terms for deprivation by number of conditions were added to the stratified models (model 4).
To assess the improvement in goodness of fit, likelihood ratio tests were performed for nested models. Significance refers to statistical significance at 5% level.
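A sketch of that comparison (the numbers are placeholders; in lifelines the fitted log partial likelihood is exposed as cph.log_likelihood_):

from scipy.stats import chi2

def likelihood_ratio_test(ll_reduced: float, ll_full: float, df_diff: int) -> float:
    """p-value for comparing nested models via 2 * (ll_full - ll_reduced)."""
    lr_stat = 2.0 * (ll_full - ll_reduced)
    return chi2.sf(lr_stat, df_diff)

# Example: interaction terms adding 9 parameters to the model
p_value = likelihood_ratio_test(ll_reduced=-10000.0, ll_full=-9985.0, df_diff=9)
print(f"LR test p-value: {p_value:.4g}")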
In sensitivity analyses, models 3 and 4 were repeated for England and Ontario including the extended list of 26 conditions.
All statistical analyses were conducted using R software for England and using SAS (data management) and Stata (analysis) for Ontario, Canada.
Results
At the end of the follow up period, 32,804 (5.5%) participants had died in England and 27,947 (4.7%) participants had died in Ontario (Table 1). A higher proportion of participants was censored during the follow up period in England than in Ontario (Table 1); those censored in England were more likely to be of working age and had fewer long-term conditions compared with the sample with full follow-up time, but they were spread evenly across deprivation deciles (Supplementary Table 6). Those living in the most deprived areas in England were more likely to be younger compared to patients living in the least deprived areas, whereas in Ontario, patients living in the most deprived tenth of areas were more likely to be either younger (18-29 years) or older (80+ years) (Table 2).
The unadjusted mean number of long-term conditions at baseline was lower in England (mean = 0.70, standard deviation = 1.11) than Ontario (mean = 0.90, standard deviation = 1.31) (Table 1). For both jurisdictions, those living in the most deprived areas had a higher mean number of long-term conditions compared to those living in the least deprived areas (Table 2). The distributions of number of conditions by age group in England and Ontario (Canada) are similar (Supplementary Fig. 2) and the prevalence of conditions for each jurisdiction is shown in a supplementary table (Supplementary Table 7).
We found a statistically significant interaction between age and deprivation in England and Ontario (Supplementary Table 8 for full estimates). Age-stratified models show that the deprivation gradient in mortality rate was steeper in working age adults than older adults; this is consistent for England and Ontario (Fig. 1).
For example, in England, those living in the most deprived areas (decile 10) had a higher mortality rate compared to their counterparts living in the least deprived areas (decile 1); this was evident for working age adults and older adults, with hazard ratios (HR) of 3.25 (95% CI 2.88, 3.66; Table 3 for full estimates) and 1.71 (95% CI 1.61, 1.81; Table 4 for full estimates), respectively. In Ontario, for working age and older adults, those living in the most deprived areas (decile 10) had a higher mortality rate compared to those living in the least deprived areas (decile 1), with HRs of 2.42 (95% CI 2.17, 2.71; Table 3 for full estimates) and 1.35 (95% CI 1.27, 1.43; Table 4 for full estimates), respectively. For both age groups, the deprivation gradient was steeper in England than in Ontario (Fig. 1).
Number of long-term conditions was associated with an increased mortality rate; this was stronger for working age adults (Table 3) than older adults (Table 4). After accounting for number of long-term conditions, the association between deprivation and mortality decreased in magnitude but remained substantial and significant (Tables 3 and 4 for full estimates).
To test whether the association between number of long-term conditions and mortality differed by the level of deprivation, interactions between number of conditions and deprivation were added to the age-stratified models. Adding the interactions statistically improved the model fit for working age adults in Ontario and older adults living in England and Ontario (Supplementary Tables 9 and 10 for full regression coefficients). Adults with more long-term conditions had a higher mortality rate, and those living in deprived areas also had a higher mortality rate; however, for adults living in the most deprived areas, the relative difference in mortality with more vs no conditions was smaller compared to those living in the least deprived areas. This means that having more conditions attenuates the deprivation gradient in mortality. This was true for working age adults in Ontario (Fig. 2) and older adults (Fig. 3) in England and Ontario.
The pattern of results was similar in the sensitivity analyses (Supplementary Tables 11-12).
Contribution of our study
Our analysis confirms that having multiple long-term conditions is associated with higher mortality rate. This association was stronger in working age compared with older adults. The analysis also highlights the contribution of multiple long-term conditions to socioeconomic inequalities in the survival of working age and older adults. Number of conditions and socioeconomic deprivation both contribute to higher mortality risk, but number of conditions moderates the socioeconomic gradient in mortality: a shallower gradient was seen for persons with more long-term conditions.
Comparison with previous research
Our findings based on large population-based samples from two jurisdictions (England and Ontario) add to the evidence that having multiple long-term conditions is strongly associated with a higher mortality rate [13-15] and that the influence of more conditions on mortality is stronger in working age than in older adults [13]. Some previous studies have focused on the interaction of socioeconomic deprivation and number of conditions, and our findings align with those that showed increasing number of conditions attenuates socioeconomic inequalities in mortality [27,29,32]. We have shown that this is consistent for older adults in England and both working age and older adults in Ontario. Our findings are also similar to a study set in Ontario which found little evidence for an interaction between multiple long-term conditions and socioeconomic factors but a trend towards a stronger association between multiple conditions and mortality amongst individuals living in the highest income areas [31].
Others have found that the influence of multiple long-term conditions on mortality remained consistent across levels of socioeconomic deprivation [13,28] or have shown shorter survival in people with multiple long-term conditions living in the most deprived areas [30]. Our findings contradict these, and there are various possible explanations for the differences.
The two studies finding no interaction between multiple long-term conditions and socioeconomic factors [13,28] were conducted in the UK used UK Biobank data and Whitehall II cohort. These are high quality studies but participants are mainly of white British ethnicity from less deprived groups. The occupational cohort tends to be healthier compared to the general UK population so the findings of those studies may reflect lower socioeconomic diversity with lower condition prevalence rates [13,28]. So, estimates on outcomes particularly in higher counts of conditions are likely to be conservative [15].
One study conducted in England found shorter survival in men with two or more long-term conditions living in the most compared with the least deprived areas [30]. The authors proposed, though did not test, that sex differences in survival after the onset of multiple long-term conditions could be due to differences in disease combinations. In fact, studies conducted with similar cohorts in Ontario found that disease combinations somewhat varied between men and women within age groups [7].
Possible mechanisms
Health service factors may contribute to poorer survival of people with multiple long-term conditions. People managing multiple long-term conditions tend to require access to multiple health and social care services; however, they report poorer access to primary care [49] and higher perceived unmet need in primary care and local services [50,51]. Despite higher health care use, such as more frequent primary care and ambulatory care visits [19], people managing multiple long-term conditions experience fragmented care [52] and poorer outcomes such as higher emergency admissions [19], suggesting their needs are not fully met by the current health care system. Also, having multiple long-term conditions, particularly a combination of physical and mental health conditions, is associated with lower health literacy [53], suggesting possible difficulties in navigating the complex health system. Interventions that promote self-management with the aim of increasing personalised care and self-care (e.g., supporting medication adherence or condition-specific education), on their own and in combination with interventions that promote a collaborative process with better communication between patients and clinicians across different care settings (such as case management, multidisciplinary teams and discharge management), have been shown to reduce care fragmentation and improve outcomes for people managing long-term conditions [54]. Some previous studies might lead us to hypothesise that socioeconomic deprivation would exacerbate poor outcomes for individuals with multiple long-term conditions. Individuals from areas of high deprivation have a faster acquisition of additional long-term conditions [55] and are more likely to have complex multimorbidity (three or more conditions affecting three or more different body systems) [2], a combination of physical and mental conditions [56] or frailty [57], which may make it harder to navigate day-to-day living and care. Qualitative evidence highlights how living with multiple long-term conditions in deprived areas often requires people to manage physical, mental and social problems and challenges their ability to preserve autonomy [58,59]. For example, having a manual job that is physically exhausting, an unfavourable living situation that makes it difficult to access and engage with the necessary support, or not being able to speak English, which makes it challenging to navigate healthcare and to get a clear understanding of the conditions and treatments they need to manage (i.e., having lower health literacy), often also contribute to the difficulty in adopting healthy behaviours [60]. This suggests that the needs of people with multiple long-term conditions, particularly in deprived areas, go beyond the management of physical and mental symptoms and therefore require different types of care and self-management techniques. This is echoed by the experience of GPs working in areas of high deprivation, who feel that the current healthcare system and self-management strategies do not meet patients' needs and that they are not allocated resources to provide the optimal additional support [61].
There are fewer GP practices and GPs per head after adjusting for population need in areas of higher deprivation [62].
However, our results show that the relative difference in mortality between those with no and multiple long-term conditions is smaller in more socioeconomically deprived areas. It is possible that having existing conditions may increase the opportunity to see physicians, which may lead to better detection and management of subsequent conditions [63]. It has been suggested that increasing number of conditions may be associated with better quality of care when measured through process or outcome indicators such as all-cause mortality, and worse quality of care when measured through patient-reported information such as quality of life measures; so, we may not have fully captured the added burden of increasing multiple long-term conditions in deprived areas [64].
On the other hand, the shallower gradient in deprivation for increasing number of conditions may be an expression of "healthy survivor bias". It could be that people still alive and eligible for inclusion in our study are different from those who have already died with multiple long-term conditions. For example, they may be more likely to present with an accumulation of less lethal conditions such as hypertension or arthritis compared with their counterparts that died before the study start [16, 32, 65-68]. Our study may therefore be underestimating the mortality risk of multiple long-term conditions, especially in deprived areas where people acquire these at younger ages [8,9]. Furthermore, our study assumes that increasing number of conditions represents worsening disease trajectories and clinical complexity, whereas the impact on mortality may be smaller for certain conditions [19]. Alternative study designs and longer follow-up are needed to assess worsening disease trajectories, including accumulation and severity of conditions, which may differ by deprivation [19, 30, 32, 55, 69]. Complexity of care may be mediated by the trajectory and combination of conditions and the degree to which management and treatment of conditions interact with one another [70-72]. Consequently, number of conditions may not be reflective of the complexity of care needed [58,59].
Currently there is no consensus on the best method to identify patterns or clusters of conditions, and there is a lack of research on how clusters and condition trajectories may influence complexity of management [4,72,73]. Future studies should explore how severity, complexity of clusters of conditions and the accumulation of new conditions may influence the survival of people with multiple long-term conditions, particularly in areas of deprivation.
Strengths and limitations
A strength of our study is the sample sizes of over half a million people in England and in Ontario with a long follow up period. The replication in two jurisdictions with great similarity in the findings is an indication of the robustness of the analysis. Currently, there is no single accepted definition or measurement of multiple long-term conditions [74] and this has previously made it difficult to compare findings [75], but our study captured the same conditions and used the same analytical protocol. Emerging work on establishing a consensus is underway [76] and likely to be cited as the standard for definition and description, which will make comparability and reproducibility easier [77]. The use of population-based electronic health records means that we reduced our selection bias; on the other hand, we cannot assume accuracy and standardisation of coding in the database [78,79]. Additionally, routine health records do not include high quality information on severity of conditions, frailty or measures of material, psychosocial or behavioural factors which may contribute to higher mortality of people with multiple long-term conditions. Similarly, we only had access to area-level deprivation, and this is a marker of contextual risk and may not be representative of individual social determinants of health and day-to-day living. This is a common limitation when using routine health records. Even so, individual and area-level measures of socioeconomic position have shown similar relationships with morbidity [5]. In addition, exploring the rate of development of new conditions during follow up was beyond the scope of this study. Our study period excludes the start of the COVID-19 pandemic. The COVID-19 pandemic disproportionately affected those in the most deprived areas [80] and it also disrupted usual care, with an overall reduction in consultations or in appointments [81] and the halt of services designed to monitor new conditions and address inequalities such as the NHS health check [39]. It is possible that our study underestimates the impact of multiple long-term conditions on inequalities in survival in the context of the pandemic.
Conclusion
Long-term conditions are increasing in prevalence particularly among working age adults in deprived areas in England and Ontario [2,7]. Having multiple long-term conditions and living in deprived areas is associated with a higher mortality rate in working age and older adults in England and Ontario. Both jurisdictions need to pay greater attention to working age adults with multiple long-term conditions. Currently the health care system has a fragmented approach to the management of multiple long-term conditions, which results in high and complex unmet needs, particularly in areas of high deprivation. Evidence suggests that improving the availability of, access to and relationships with local health and social care services and improving coordination between different services is necessary to help people manage multiple long-term conditions and prevent the development of new conditions, particularly in areas of deprivation [60]. There is also a need for health care professionals to better understand the groups experiencing inequalities, particularly in their local area, so they can encourage individuals to take control of their own health and manage their health conditions, for example, giving targeted recommendations and encouragement on self-management and care based on their social and cultural context [60]. Further work should explore how we can better support patients and clinicians in preventing the development and improving the management of multiple long-term conditions. | 2023-03-12T13:14:30.250Z | 2023-03-11T00:00:00.000 | {
"year": 2023,
"sha1": "f3b6e4794768e4c1b8ed905993a2b56486f7392e",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/counter/pdf/10.1186/s12889-023-15370-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f3b6e4794768e4c1b8ed905993a2b56486f7392e",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
217068 | pes2o/s2orc | v3-fos-license | Maximum Principle and generalized principal eigenvalue for degenerate elliptic operators
We characterize the validity of the Maximum Principle in bounded domains for fully nonlinear degenerate elliptic operators in terms of the sign of a suitably defined generalized principal eigenvalue. Here, maximum principle refers to the non-positivity of viscosity subsolutions of the Dirichlet problem. This characterization is derived in terms of a new notion of generalized principal eigenvalue, which is needed because of the possible degeneracy of the operator, admitted in full generality. We further discuss the relations between this notion and other natural generalizations of the classical notion of principal eigenvalue, some of which had already been used in the literature for particular classes of operators.
Introduction
This paper is concerned with the Maximum Principle property for degenerate second order elliptic operators. Our aim is to characterize the validity of the Maximum Principle for arbitrary degeneracy of the operator - including the limiting cases of first and zero-order operators - in terms of the sign of a suitably defined generalized principal eigenvalue. Such a complete characterization is missing, as far as we know, even for the case of linear operators, which was of course our first motivation. Due to the possible loss of regularity, as well as of boundary conditions, which is caused by degeneracy of ellipticity, the appropriate framework to deal with this problem is, even in the linear case, that of viscosity solutions. This approach is of course not restricted to the linear case, so we study the question in the more general setting of homogeneous fully nonlinear degenerate elliptic operators F(x, u, Du, D²u). Let Ω be a bounded domain in ℝ^N and S^N be the space of N × N symmetric matrices endowed with the usual partial order, with I being the identity matrix. A fully nonlinear operator F : Ω̄ × ℝ × ℝ^N × S^N → ℝ is said to be degenerate elliptic if F is non-increasing in the matrix entry; see condition (H1) in the next section. The basic example to have in mind is that of linear operators in non-divergence form, L u := −tr(A(x)D²u) + b(x) · Du + c(x)u with A(x) ≥ 0. We say that F satisfies the Maximum Principle (MP for short) in Ω if every function u ∈ USC(Ω̄) satisfying, in the viscosity sense,

(1)  F(x, u, Du, D²u) ≤ 0 in Ω,  u ≤ 0 on ∂Ω,

satisfies u ≤ 0 in Ω̄.
We denote by USC(Ω̄) the set of upper semicontinuous functions on Ω̄. It is worth pointing out that in the above definition both the PDE and the boundary condition are understood in the viscosity sense (see Section 7 of [8]). Precisely, u is a subsolution of (1) if for all ϕ ∈ C²(Ω̄) and ξ ∈ Ω̄ such that (u − ϕ)(ξ) = max_Ω̄ (u − ϕ), it holds that

F(ξ, u(ξ), Dϕ(ξ), D²ϕ(ξ)) ≤ 0 if ξ ∈ Ω,   min{F(ξ, u(ξ), Dϕ(ξ), D²ϕ(ξ)), u(ξ)} ≤ 0 if ξ ∈ ∂Ω.

Note, in particular, that the validity of the MP property implies that viscosity subsolutions cannot be positive on ∂Ω, namely, the inequality u ≤ 0 on ∂Ω holds in the classical pointwise sense.
Before describing our results, let us recall some classical and more recent results concerning the Maximum Principle and the principal eigenvalue.
A standard result in the viscosity theory is that, under suitable continuity assumptions on the degenerate elliptic operator F, the Maximum Principle for viscosity subsolutions holds true if r ↦ F(x, r, p, X) is strictly increasing (see e.g. [8]). This is only a sufficient condition. It is well known that if Ω is a bounded smooth domain and F is a uniformly elliptic linear operator with smooth coefficients, then the validity of the Maximum Principle for classical subsolutions is equivalent to the positivity of the principal eigenvalue λ₁(F, Ω) associated with the Dirichlet boundary condition. This eigenvalue is the bottom of the spectrum of the operator F acting on functions satisfying the Dirichlet boundary conditions. It follows from the Krein-Rutman theory that λ₁(F, Ω) is simple and the associated eigenfunction ϕ is positive in Ω. So, if λ₁(F, Ω) ≤ 0 then ϕ violates the Maximum Principle. As a consequence, if the Maximum Principle holds then the problem admits a positive strict supersolution. The reverse implication is also true, but its proof is not completely straightforward since it requires an analysis of how sub and supersolutions can vanish at the boundary. To this aim, one typically makes use of barriers and Hopf's lemma, for which uniform ellipticity or some other properties are required. As we will see, the possibility of different behaviours of supersolutions at the boundary is one of the most delicate points one has to handle in order to deal with degenerate operators.
The connection between the Maximum Principle and the existence of positive strict supersolutions led the first author, L. Nirenberg and S. R. S. Varadhan to introduce in [3] the following notion of generalized principal eigenvalue:

λ₁(F, Ω) := sup{λ ∈ ℝ : ∃ϕ > 0 in Ω such that F[ϕ] ≥ λϕ in Ω}.

Here and henceforth we will write F[ϕ](x) := F(x, ϕ(x), Dϕ(x), D²ϕ(x)) for short, or simply F[ϕ]. Using this generalization, they were able to extend the characterization of the Maximum Principle for linear elliptic operators to the case of non-smooth domains, where the classical principal eigenvalue is not defined.
In [6], I. Birindelli and F. Demengel adapted the definition of [3] to a class of fully nonlinear operators F which are homogeneous of degree α > 0, including some degenerate elliptic operators which are modeled on the example of the p-Laplacian. They defined the principal eigenvalue as

λ₁(F, Ω) := sup{λ ∈ ℝ : ∃ϕ ∈ LSC(Ω), ϕ > 0 in Ω, such that F[ϕ] ≥ λϕ^α in Ω in the viscosity sense}.

Here, LSC(Ω) denotes the set of lower semicontinuous functions on Ω.
Actually, in their earlier work [5], the same authors had defined the generalized principal eigenvalue in the following slightly different way:

λ₁′(F, Ω) := sup{λ ∈ ℝ : ∃ϕ ∈ LSC(Ω̄) with inf_Ω ϕ > 0, such that F[ϕ] ≥ λϕ^α in Ω in the viscosity sense}.

The two notions coincide in the case treated in [6], but, as we will see in the proof of Proposition 2.1 part (i) below, this may not be the case in general. Let us mention that the non-equivalence between λ₁ and λ₁′, which in the cases considered in the present paper is due to the degeneracy of the operator, can occur when Ω is unbounded even for uniformly elliptic linear operators. The characterization of the Maximum Principle in terms of generalized principal eigenvalues such as λ₁ and λ₁′, as well as the study of their properties, for uniformly elliptic linear operators in unbounded domains is the object of the recent paper [4].
It turns out that the generalized principal eigenvalue λ₁ is not well suited to characterize the validity of MP for the general degenerate cases that we address in the present paper. This is shown in the next section, in which we also discuss the pertinence of other natural candidates, such as λ₁′, as well as the limit of the principal eigenvalues of the ε-viscosity regularized operators F_ε = −ε∆ + F. None of those choices will be sufficient in order to characterize the MP property in the most general situation. Indeed, one of our main goals is to identify the right set of admissible functions, in the definition of a generalized principal eigenvalue, which can be suitable for general degenerate elliptic operators. Eventually, the right notion for our purposes turns out to be given by the following.

Definition 1.2. The generalized principal eigenvalue of F in Ω is

µ₁(F, Ω) := sup{λ ∈ ℝ : ∃ an open set Ω′ with Ω̄ ⊂ Ω′ ⊂ O and ϕ ∈ LSC(Ω′), ϕ > 0 in Ω′, such that F[ϕ] ≥ λϕ^α in Ω′}.
Hence, the definition of the generalized principal eigenvalue µ₁ in a domain Ω requires the operator to be defined in a larger set. Equivalent formulations for µ₁(F, Ω) are

µ₁(F, Ω) = sup{λ₁(F, Ω′) : Ω′ open, Ω̄ ⊂ Ω′ ⊂ O} = lim_{n→∞} λ₁(F, Ω_n),

for any decreasing sequence (Ω_n) of open sets with Ω̄ ⊂ Ω_n ⊂ O and ∩_n Ω_n = Ω̄, where the last one follows from the monotonicity of λ₁(F, Ω) with respect to inclusion of the domains.
Hypotheses and main result
Throughout the paper, Ω is a bounded domain in ℝ^N, not necessarily smooth, and O is an open set such that Ω̄ ⊂ O ⊂ ℝ^N. We assume that F : O × ℝ × ℝ^N × S^N → ℝ is a continuous function which satisfies the hypotheses (H1)-(H4); in particular, (H1) is the degenerate ellipticity condition recalled in the Introduction, and (H2) requires F to be positively homogeneous of degree α > 0, that is, F(x, tr, tp, tX) = t^α F(x, r, p, X) for all t > 0. As was established in [8], hypothesis (H4) is the key structure condition for the validity of the Comparison Theorem for viscosity solutions of degenerate elliptic equations. Let us emphasize that no regularity assumption is required on the set Ω.
We now state the main result of this article.

Theorem 1.3. Assume that F satisfies (H1)-(H4). Then F satisfies the MP property in Ω if and only if µ₁(F, Ω) > 0.

Some remarks on the statement of Theorem 1.3 are in order. Since our characterization of MP in Ω requires the operator F to be defined in some O ⊃ Ω̄, if F is just defined in Ω and satisfies (H1)-(H4) there, then in order to apply our result we need to extend it to an operator satisfying (H1)-(H4) in the larger domain O. This is not completely satisfactory, even though the result itself ensures that the notion µ₁(F, Ω) does not depend on the particular extension. A characterization expressed in terms of a more intrinsic notion, such as λ₁ or λ₁′, would be preferable. Proposition 2.1 below provides examples showing that MP is not guaranteed by λ₁ > 0 nor by λ₁′ > 0. However, in the case of λ₁, the only examples we are able to construct do not satisfy (H4).
We leave it as an open problem to know whether µ₁ coincides with λ₁ under the assumption (H4), and then whether µ₁(F, Ω) can be replaced by λ₁(F, Ω) in Theorem 1.3. In Theorems 4.2 and 4.4 below we show that, for a smooth domain Ω, this is true in two significant cases: if the operator admits barriers at each point of the boundary, and, for the case of linear operators, if in each connected component of ∂Ω the so-called Fichera condition is either always satisfied or always violated. The general case remains open.
Finally, let us point out that a generalized principal eigenfunction associated with µ1 does not always exist (see Remark 2 in Section 3). This is due to the degeneracy of the operator and is true even in the linear case.
Examples
In this section we present some examples of operators to which Theorem 1.3 applies, recovering some known results. We further analyse the generalized principal eigenvalues in these particular cases. The examples are divided into classes, but none of them is intended to be exhaustive.
The standard sufficient condition
If an operator F satisfies min_{x∈Ω} F(x, r, 0, 0) > 0 for all r > 0, then MP holds. This is an immediate consequence of the definition of viscosity subsolution. Notice that in this case λ1(F, Ω) > 0 and, up to extending F outside Ω as a continuous function, µ1(F, Ω) > 0 too.
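To spell out the (standard) argument: let u be a subsolution with r := max_{Ω̄} u > 0, attained at some point x0. The constant function ψ ≡ r touches u from above at x0, so if x0 ∈ Ω the definition of viscosity subsolution gives F(x0, r, 0, 0) ≤ 0, contradicting the assumption; if x0 ∈ ∂Ω, the Dirichlet condition, understood as elsewhere in the paper in the relaxed viscosity sense, requires min{u(x0), F(x0, r, 0, 0)} ≤ 0, which fails as well. Hence u ≤ 0.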
First-order operators
Theorem 1.3 applies to the generalized eikonal operator with drift b; the Lipschitz-continuity of b is required for (H4) to hold. Furthermore, the result still holds for suitable variants of this operator, as well as for other families of first-order operators of the same type.
Subelliptic operators
For several classes of subelliptic operators one can derive the sign of µ1 and thus apply Theorem 1.3. For example, if the ellipticity of F is not degenerate in a direction ξ, in the sense that there exists β > 0 such that

F(x, r, p, X − tξ⊗ξ) ≥ F(x, r, p, X) + βt for all t ≥ 0,

and the positive constants are supersolutions of F = 0 in O, i.e., F(x, 1, 0, 0) ≥ 0 in O, then µ1(F, Ω) > 0. This is seen by taking φ(x) = 1 − εe^{σξ·x}, with σ large and then ε small. The above conditions are satisfied for instance by the Grushin operator −∂xx − |x|^α ∂yy, with α > 0. This operator, which is a Hörmander operator if α is an even integer, belongs to the class of ∆λ operators studied in [11]. The key property of such operators is the 2-homogeneity with respect to a group of dilations of R^N. Concerning the generalized principal eigenvalues introduced before, this property yields µ1(∆λ, Ω) = λ1(∆λ, Ω) if Ω is starshaped with respect to the origin. Actually, a much weaker scaling property of F suffices for this equality: for φ ∈ LSC(Ω), it implies (in the viscosity sense) a corresponding inequality for the rescaled functions, and it then follows from the definition of λ1 that the mapping τ ↦ λ1(F, τΩ) is well behaved at τ = 1, which allows one to approximate Ω̄ from outside by the dilated domains τΩ, τ > 1.
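As a sanity check (our own computation, with ξ = e1 and Ω contained in a strip {|x| < R}): for G[u] = −∂xx u − |x|^α ∂yy u and φ(x, y) = 1 − εe^{σx}, one has G[φ] = εσ²e^{σx}. Choosing ε = e^{−σR}/2 gives φ ≥ 1/2 > 0 on the strip, while

G[φ] ≥ εσ²e^{−σR} = σ²e^{−2σR}/2 ≥ λφ, with λ := σ²e^{−2σR}/2 > 0.

Since this φ is defined on the whole of R², it is admissible in the definition of µ1, whence µ1(G, Ω) ≥ λ > 0.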
Parabolic operators
It is well known that the classical Maximum Principle holds for uniformly parabolic linear operators of the type

Lu = ∂t u − Tr(A(x, t)D²_x u) − b(x, t)·D_x u − c(x, t)u.

Uniformly parabolic means that A ≥ αI, for some positive constant α. Note that a crucial difference with the elliptic case is that the Maximum Principle holds even if the zero order term c is positive and very large. One can interpret the validity of the Maximum Principle as a consequence of the positivity of the principal eigenvalues. Indeed, considering the function φ = e^{σt} and letting σ → +∞, one finds that in this case all the notions of principal eigenvalue introduced in Section 1 are equal to +∞. However, the parabolic Maximum Principle cannot be derived right away from Theorem 1.3, due to the unboundedness of the domain.
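Indeed, taking the operator in the form displayed above (our reconstruction of the missing formula), a direct computation gives

L(e^{σt}) = (σ − c)e^{σt} ≥ (σ − sup c)e^{σt},

so every λ ≤ σ − sup c is admissible in the eigenvalue definitions of Section 1; letting σ → +∞ shows that all the generalized principal eigenvalues equal +∞.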
1-homogeneous, uniformly elliptic operators
A fully nonlinear operator F is said to be uniformly elliptic if there exist constants 0 < λ ≤ Λ such that, for every P ∈ S^N with P ≥ 0, λTr(P) ≤ F(x, r, p, X) − F(x, r, p, X + P) ≤ ΛTr(P). An important role in the theory of fully nonlinear, uniformly elliptic operators is played by 1-homogeneous operators, that is, operators satisfying (H2) with α = 1. Besides linear operators, this class includes Pucci, Bellman and Isaacs operators. The latter class, which is the most general, fulfils (H1)-(H4) under suitable regularity conditions on the coefficients.
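For concreteness (standard forms, recalled here rather than taken from the original text): Bellman operators can be written as

F[u] = sup_{α∈A} {−Tr(A_α(x)D²u) − b_α(x)·Du − c_α(x)u},

and Isaacs operators as F[u] = sup_{α∈A} inf_{β∈B} {−Tr(A_{αβ}(x)D²u) − b_{αβ}(x)·Du − c_{αβ}(x)u}. Both clearly satisfy F[tu] = tF[u] for t > 0, that is, (H2) with α = 1.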
Being uniformly elliptic, these operators also satisfy the hypotheses of Theorem 4.2 below, which implies that the MP property is characterized by the sign of λ̄1 when Ω is smooth.
Several works have already addressed the question of the validity of the Maximum Principle and the existence of the principal eigenfunction. Among them, let us cite the papers [9] and [18] for the Pucci operator and [19] for more general operators, including the Bellman one. In [19], the simplicity of the principal eigenvalue is further obtained. The method used in the above mentioned papers differs from that of [5] and from ours: it follows the line of the classical proof based on the Krein-Rutman theory. This is possible because of the uniform ellipticity of the operator, which makes the W^{2,N} estimates available and avoids the direct use of the definition of viscosity solution.
1-homogeneous, degenerate elliptic operators
Two examples of fully nonlinear degenerate operators are the operator P_k, defined through the ordered eigenvalues η_1(D²u) ≤ … ≤ η_N(D²u) of the matrix D²u, k being an integer between 1 and N, and the degenerate maximal Pucci operator −M⁺_{0,1}, defined through sup_{0≤A≤I} Tr(AD²u).

The operator −P_N has been used by F. R. Harvey and H. B. Lawson to characterize the validity of the Maximum Principle for operators only depending on the Hessian: Theorem 2.1 of [15] states, with a geometrical terminology, that an operator F : S^N → R satisfies the MP if and only if F(X) ≤ 0 ⇒ −P_N(X) ≤ 0. Notice that if F satisfies such a property then the function φ(x) := k − |x|², with k > sup{|x|² : x ∈ Ω}, satisfies φ > 0 in Ω̄ and F(D²φ) > 0, whence µ1(F, Ω) > 0. Both P_k and −M⁺_{0,1} have positive principal eigenvalue µ1 and satisfy the MP. In addition, they admit continuous barriers at every point of the boundary of a smooth domain, in the sense of Definition 4.1 below. Therefore Theorem 4.2 implies that µ1 coincides with λ̄1 for these operators.
The p and the infinity Laplacian
The p- and the infinity-Laplacian are defined respectively by

∆p u := div(|Du|^{p−2}Du) = |Du|^{p−2}(Tr(D²u) + (p − 2)|Du|^{−2}⟨D²u Du, Du⟩), ∆∞ u := ⟨D²u Du, Du⟩.

These definitions have a meaning, in the viscosity sense, if the gradient is nonzero; one has to extend them in a suitable way to get a general definition. Both the operators F = −∆p, −∆∞, with the possible addition of a degenerate elliptic operator sharing the same homogeneity property, fit the hypotheses (H1)-(H4) of Theorem 1.3 above. The characterization of the Maximum Principle for the p-Laplacian was derived by I. Birindelli and F. Demengel in [5] and [6], using the principal eigenvalues λ̄1 and λ1 respectively. The fact that λ1 = λ̄1 = µ1 in that case is due to the validity of the Hopf lemma and the existence of barriers. The result for the infinity-Laplacian is due to P. Juutinen [17] and is expressed in terms of λ1. The existence of barriers is crucial also in this case. We remark that, owing to the particular structure of the infinity-Laplacian, barriers exist without assuming any regularity of ∂Ω. Finally, the existence of a generalized principal eigenfunction is proved in [6] and [17], but not its simplicity.
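With the unnormalized conventions just recalled (our choice among the standard ones), one checks directly that −∆p(tu) = t^{p−1}(−∆p u) and −∆∞(tu) = t³(−∆∞ u) for t > 0, so these operators satisfy the homogeneity hypothesis (H2) with α = p − 1 and α = 3, respectively.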
Exploring other notions of generalized principal eigenvalue
In this section we show that the validity of the MP is not characterized by the positivity of λ1, nor by that of other natural notions of generalized principal eigenvalue. One is the quantity λ̄1(F, Ω) defined before. Another natural candidate is

λ*(F, Ω) := lim sup_{ε→0⁺} λε,

where λε denotes the classical Dirichlet principal eigenvalue of the regularized operator −ε∆ + F in Ω. If Ω is smooth and F is a uniformly elliptic linear operator with smooth coefficients, then the notions λ1, λ̄1, µ1, λ* coincide. In the general case we only have the inequalities µ1(F, Ω) ≤ λ̄1(F, Ω) ≤ λ1(F, Ω), which follow from the inclusions between the corresponding sets of admissible functions. We now show that, even in the linear case, the signs of λ1, λ*, λ̄1 do not characterize the validity of the MP for degenerate elliptic operators.
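As a purely numerical illustration of the definition of λ* (our own example, not taken from the text): for the degenerate operator F[u] = −xu″ on Ω = (0, 1), which reappears in Proposition 2.1 below, λε is the principal Dirichlet eigenvalue of −(ε + x)u″ and can be approximated by a standard finite-difference discretization. The grid size and the choice of operator here are illustrative assumptions.

    import numpy as np

    def principal_eigenvalue(eps, n=500):
        # Smallest eigenvalue of the discretization of -(eps + x) u'' on (0, 1)
        # with homogeneous Dirichlet boundary conditions.
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)            # interior grid points
        T = (np.diag(2.0 * np.ones(n))
             + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1)) / h**2   # second-difference matrix for -u''
        A = (eps + x)[:, None] * T                # row scaling: diag(eps + x) @ T
        # A = D T with D a positive diagonal matrix, hence A is similar to the
        # symmetric positive definite matrix D^(1/2) T D^(1/2): real positive spectrum.
        return np.min(np.linalg.eigvals(A).real)

    for eps in (1e-1, 1e-2, 1e-3):
        print(eps, principal_eigenvalue(eps))     # approximations of lambda_eps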
Proposition 2.1. For each of the following conditions:

(i) λ1(F, Ω) > 0; (ii) λ*(F, Ω) > 0; (iii) λ̄1(F, Ω) > 0;

there exists a degenerate elliptic linear operator F with smooth coefficients in Ω̄ that does not satisfy the MP property and yet satisfies that condition. Moreover, for cases (i) and (ii), such an operator satisfies (H4).
(iii) For this case, we give two examples, one with a first-order and one with a second-order operator. The operator F[u] = −√x u′ does not satisfy MP in Ω = (0, 1), as is seen by taking u equal to the indicator function of {0}. But taking φ(x) = 2 − √x in the definition of λ̄1 yields λ̄1(F, Ω) ≥ 1/4. A second-order example is provided by F[u] = −xu″ and Ω = (0, 1). As before, the indicator function of {0} violates MP. On the other hand, a computation with, for instance, φ(x) = 1 − x log x shows that λ̄1(F, Ω) > 0.

Remark 1. The two operators used as examples for case (iii) do not satisfy hypothesis (H4), hence Theorem 1.3 does not apply to them. Nevertheless, they do not violate the conclusion of the theorem because, as one can check, µ1(F, Ω) = 0 in both cases, independently of the extension of F outside Ω. This seems to suggest that hypothesis (H4) in Theorem 1.3 could be relaxed.
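For completeness, the computations behind case (iii) read as follows (the second test function is our own choice, inserted here for illustration). For F[u] = −√x u′ and φ(x) = 2 − √x: φ′(x) = −1/(2√x), hence F[φ] = 1/2; since 2 − √x ≤ 2 on (0, 1), we get F[φ] ≥ (1/4)φ, i.e., λ̄1(F, Ω) ≥ 1/4. For F[u] = −xu″ and φ(x) = 1 − x log x: φ″(x) = −1/x, hence F[φ] = 1; since 1 < φ ≤ 1 + 1/e and inf_{(0,1)} φ = 1 > 0, we get F[φ] ≥ (1 + 1/e)^{−1}φ, i.e., λ̄1(F, Ω) ≥ e/(e + 1) > 0.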
Proposition 2.1 involves linear operators, for which the notion of viscosity solution could appear artificial. However, one cannot characterize the validity of the Maximum Principle for C² solutions in terms of the signs of λ1, λ̄1, µ1 or λ* either. Indeed, any C² (or even C⁰) subsolution of the equation F[u] := x²u = 0 in Ω = (−1, 1) is necessarily nonpositive, but it is not hard to check that λ1(F, Ω) = λ̄1(F, Ω) = µ1(F, Ω) = λ*(F, Ω) = 0. Also, notice that the operators used in the proof of Proposition 2.1 would still yield the result under the additional requirement that φ ∈ C²(Ω) in the definitions of λ1 and λ̄1.
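Let us verify the first claims (a computation of ours): a subsolution satisfies x²u ≤ 0, hence u ≤ 0 for x ≠ 0, and u(0) ≤ 0 by continuity. Moreover, F[φ] ≥ λφ with φ > 0 forces (x² − λ)φ ≥ 0, hence λ ≤ inf_{(−1,1)} x² = 0, while φ ≡ 1 shows that λ = 0 is admissible; this gives λ1 = λ̄1 = µ1 = 0. As for λ*, the rescaling x = ε^{1/4}y turns −εu″ + x²u = λu into a harmonic-oscillator-type problem, which suggests λε ≍ √ε → 0.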
The case (ii) in Proposition 2.1 shows that, for degenerate elliptic operators, the notion of generalized principal eigenvalue is unstable with respect to perturbations of the operator. Thus, owing to Theorem 1.3, the same is in some sense true for the MP property. We now present an example that exhibits the instability of the notions λ1, λ̄1, µ1 with respect to perturbations of the operator and approximations of the domain from inside. Let F be the operator defined by F[u] = −xu′, x ∈ Ω = (0, 1). It turns out that λ1(F, Ω) = λ̄1(F, Ω) = +∞ while µ1(F, Ω) = 0, whereas µ1 becomes +∞ both for the interior approximations Ω′ ⊂⊂ Ω and for the perturbed operators Fδ[u] = −(x + δ)u′, δ > 0. The instability of the principal eigenvalue is one of the main differences with the uniformly elliptic case. In particular, the stability with respect to interior perturbations of the domain is crucial in the arguments of [3]. Its validity is based on the Harnack inequality, which is not available in the general degenerate elliptic case.
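These values can be checked directly (our computation): for every λ, the function φλ(x) = x^{−λ} satisfies −xφλ′ = λφλ, with φλ > 0 and inf_{(0,1)} φλ = 1, whence λ1 = λ̄1 = +∞. For µ1(F, Ω), any admissible φ is positive on a neighbourhood of 0, and testing with a constant at a minimum point of φ near 0 gives 0 ≥ λφ > 0 for λ > 0; hence µ1(F, Ω) ≤ 0, while φ ≡ 1 gives the reverse bound, so µ1(F, Ω) = 0. Finally, φλ is admissible for µ1(F, Ω′) whenever Ω̄′ ⊂ (0, 1), and (x + δ)^{−λ} is admissible for µ1(Fδ, Ω), yielding +∞ in both cases.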
It is straightforward to check that µ1 is stable with respect to perturbations of the domain from outside. If the same property held for λ1 and λ̄1, then they would coincide with µ1. Proposition 2.1 shows that this is not always the case.
Proposition 3.1. Assume (H1)-(H4). If µ1(F, Ω) > 0, then F satisfies MP in Ω.

Proof. If µ1(F, Ω) > 0 then there exist λ > 0, an open set Ω′ ⊃ Ω̄ and φ ∈ LSC(Ω′) such that φ > 0 and F[φ] ≥ λφ^α in Ω′. Up to shrinking Ω′, it is not restrictive to assume that φ ∈ LSC(Ω̄′) and φ > 0 in Ω̄′. Assume by contradiction that (1) admits a subsolution u which is positive somewhere in Ω. We claim that the function ũ, defined by ũ := max(u, 0) in Ω̄ and ũ := 0 in Ω′ ∖ Ω̄, is a subsolution of F = 0 in Ω′. Indeed, if ψ is a smooth function touching ũ from above at some x0 ∈ Ω′, then either ũ(x0) = 0, or ũ(x0) = u(x0) > 0 and x0 ∈ Ω̄. In the first case ψ has a local minimum at x0 and then F[ψ](x0) ≤ 0 by (H1) and (H2); in the second case F[ψ](x0) ≤ 0 because u is a subsolution of (1). Next, up to replacing φ with (max_{Ω̄′}(ũ/φ))φ, we can restrict the study to the case where max_{Ω̄′}(ũ − φ) = 0. Then the standard doubling-variable technique used to prove the comparison principle yields a contradiction (see Theorem 3.3 in [8]). Let us sketch the argument. Define the following function on Ω̄′ × Ω̄′:

Φn(x, y) := ũ(x) − φ(y) − (n/2)|x − y|².

Calling (xn, yn) a maximum point for Φn in Ω̄′ × Ω̄′, we see that Φn(xn, yn) ≥ max_{Ω̄′}(ũ − φ) = 0, whence (n/2)|xn − yn|² ≤ ũ(xn) − φ(yn). It follows that xn − yn = o(1) as n → ∞. Whence, since ũ(xn) − φ(yn) ≥ 0, xn and yn converge (up to subsequences) to a point z where ũ − φ vanishes, and n|xn − yn|² → 0. In particular, z ∈ Ω. We can therefore apply Theorem 3.2 of [8] and find that the viscosity inequalities for ũ and φ hold at xn and yn with the common gradient n(xn − yn) and some X, Y ∈ S^N satisfying X ≤ Y; using (H3), (H4), we eventually derive λφ(z)^α ≤ 0. That is, φ(z) ≤ 0, which is a contradiction.
Let us prove now that if F satisfies MP in Ω then µ1(F, Ω) > 0. This is a consequence of the following general property of µ1.

Proposition 3.2. Under the assumptions (H1)-(H4), there exists a nonnegative subsolution U ∈ USC(O), U ≢ 0, of the problem F(x, u, Du, D²u) = µ1(F, Ω)u^α in Ω, u = 0 on ∂Ω, the boundary condition being understood in the relaxed viscosity sense.

Proof. We construct the subsolution U at the eigenlevel µ1(F, Ω) following the method of [6]: we solve the problem at a level less than µ1(F, Ω) with a positive right-hand side (say equal to 1), and we show that, as the level approaches µ1(F, Ω), the renormalized solutions tend to a function U satisfying the desired property. An extra difficulty with respect to [6] is that U could be positive somewhere on ∂Ω, due to the lack of existence of barriers. In order to show that U ≤ 0 on ∂Ω in the viscosity sense, we combine the above procedure with an external approximation of the domain Ω. This is the point where the definition of µ1 is really exploited. Let (Ωn)n∈N be a family of smooth domains such that

Ω̄ ⊂ Ωn+1 ⊂ Ω̄n+1 ⊂ Ωn ⊂ O and ∩_{n∈N} Ωn = Ω̄.

For n ∈ N, pick λn < µ1(F, Ω) with λn → µ1(F, Ω) and consider subsolutions of the equation

F(x, u, Du, D²u) = λn u^α + 1 in Ωn, (2)

whose support is contained in Ω̄n. Following Perron's method, we define ∀x ∈ O, wn(x) := sup{z(x) : z ∈ USC(O) is a subsolution of (2), z = 0 outside Ωn}.
The function wn could possibly be infinite at some, and even at every, point of Ω̄n. Taking z ≡ 0 yields wn ≥ 0. We claim that

lim_{n→∞} max_{Ω̄n} wn = +∞. (3)

Assume by way of contradiction that (3) does not hold. Then (wn)n∈N satisfies (up to subsequences) sup_{Ωn} wn ≤ C, for some C independent of n. For n ∈ N, consider the lower and upper semicontinuous envelopes of wn:

∀x ∈ O, (wn)_*(x) := lim_{r→0⁺} inf_{|y−x|<r} wn(y), (wn)^*(x) := lim_{r→0⁺} sup_{|y−x|<r} wn(y).

It follows from the standard theory (see Lemma 4.2 in [8]) that (wn)^* is a subsolution of (2). Since the function wn vanishes outside Ωn, its definition yields wn = (wn)^*. By Lemma 4.4 in [8], if (wn)_* fails to be a supersolution of (2) at some point in Ωn then there exists a subsolution of (2) larger than wn and still vanishing outside Ωn, which contradicts the definition of wn. Therefore (wn)_* is a supersolution of (2) in Ωn, and by (H3) the same is true for (wn)_* + ε, with ε > 0 small enough. This contradicts the definition of µ1, hence (3) is proved. There exists then a family (zn)n∈N, with zn ∈ USC(O) subsolution of (2) vanishing outside Ωn, such that lim_{n→∞} max_{Ω̄n} zn = +∞. It follows that, for n large enough, max_{Ω̄n} zn > 0. Replacing zn with its positive part, it is not restrictive to assume that zn ≥ 0. The functions un defined by un := zn / max_{Ω̄n} zn satisfy max_{Ω̄n} un = 1 and, by the homogeneity (H2), are subsolutions of F[u] = λn u^α + (max_{Ω̄n} zn)^{−α} in Ωn. Define the function U by setting

U(x) := lim sup_{n→∞, y→x} un(y).

By stability of viscosity subsolutions (see e.g. Remark 6.3 in [8]), we know that U satisfies F(x, U, DU, D²U) ≤ µ1(F, Ω)U^α in Ω. Moreover, U = 0 outside Ω̄ and max_{Ω̄} U = 1. It remains to show that U satisfies the Dirichlet condition on ∂Ω in the relaxed viscosity sense. Suppose by contradiction that there exist ξ ∈ ∂Ω, ρ > 0 and ϕ ∈ C²(Ω̄) touching U from above at ξ and violating this condition there, so that in particular U(ξ) > 0. By continuity of F, we can assume that ϕ is a paraboloid, thus defined in the whole R^N. Up to decreasing ρ if need be, we have that ϕ > 0 in Bρ(ξ). Since U = 0 outside Ω̄, we infer that the suprema of un − ϕ over B̄ρ(ξ) are attained, for n large, at points of Ωn converging to ξ; passing to the limit in the subsolution inequality for un at these points contradicts the violation of the relaxed condition at ξ.

As a corollary, we immediately deduce that if F satisfies MP in Ω then µ1(F, Ω) > 0.
Remark 2. The function U constructed in the above proof is a good candidate for being the principal eigenfunction of F in Ω, i.e., a positive solution of

F(x, u, Du, D²u) = µ1(F, Ω)u^α in Ω, u = 0 on ∂Ω. (4)

However, this is not true in general: there are operators which do not admit a principal eigenfunction. It is clearly the case if µ1(F, Ω) = +∞, as for instance for the operator F[u] = u′. An example with µ1(F, Ω) finite is given by the operator F[u] = −xu′ in Ω = (0, 1). Indeed, the indicator function of {0} violates MP, and then µ1(F, Ω) ≤ 0 by Theorem 1.3. On the other hand, µ1(F, Ω) ≥ 0, as is seen by taking φ ≡ 1 in the definition. Hence µ1(F, Ω) = 0. But the unique solution of (4) is U ≡ 0.
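(To see the last claim, our verification: with µ1(F, Ω) = 0, problem (4) reads −xU′ = 0 in (0, 1), U(0) = U(1) = 0; at least for C¹ solutions this forces U′ ≡ 0, hence U is constant and therefore U ≡ 0, so no positive eigenfunction can exist.)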
4 Conditions for the equivalence between µ1 and λ̄1

Theorem 1.3 provides a characterization of the MP property in terms of the sign of the generalized principal eigenvalue µ1. We do not know if µ1 can be replaced by the more intrinsic notion λ̄1, that is, if µ1 and λ̄1 always have the same sign. This property reduces to the equivalence of µ1 and λ̄1, because they satisfy µ1(F, Ω) ≤ λ̄1(F, Ω). Let us see what happens if we try to follow the arguments in the proof of Proposition 3.1 with µ1(F, Ω) replaced by λ̄1(F, Ω). The difference is that now the supersolution φ is only defined in Ω, but still has positive infimum. Setting φ(ξ) := lim inf_{x→ξ} φ(x) for ξ ∈ ∂Ω, one sees that the arguments fail only if the points yn used in the proof belong to ∂Ω. This difficulty can be overcome if, at any ξ ∈ ∂Ω, one of the following occurs:

(5) every subsolution u ∈ USC(Ω̄) of (1) satisfies u(ξ) ≤ 0;

(6) any strictly positive supersolution φ of F[u] = λu^α in Ω, extended to ξ as above, is a supersolution at ξ.

Indeed, the limit ξ of (a subsequence of) yn cannot satisfy (5), but if (6) holds one can conclude exactly as in the proof of Proposition 3.1. This section is devoted to establishing sufficient conditions for either (5) or (6) to occur, in order to have µ1 = λ̄1. Under suitable assumptions on F, the case (5) is guaranteed by the existence of a continuous barrier (see Definition 4.1 below). This is shown in Section 4.1. In Section 4.2 we show that, for linear operators, µ1 = λ̄1 if each connected component of ∂Ω contains only points where the so-called Fichera condition is satisfied, or only points where it is violated.
Problems with barriers
Here is the definition of barrier.

Definition 4.1. We say that a point ξ ∈ ∂Ω admits a (continuous) barrier if there exist a ball B centred at ξ and a nonnegative function w ∈ C(Ω̄ ∩ B) vanishing at ξ and satisfying, in the viscosity sense, F[w] ≥ 1 in Ω ∩ B.

We will also need two extra assumptions on F, labelled (H5) and (H6), imposing a uniform control on the dependence of F on its gradient and Hessian arguments, as well as a uniform continuity in its remaining entries, in an open neighbourhood V of ∂Ω, uniformly with respect to x ∈ Ω ∩ V.
Proposition 4.3. Assume (H1)-(H6) and let ξ ∈ ∂Ω admit a barrier in the sense of Definition 4.1. Then every subsolution u ∈ USC(Ω̄) of (1) satisfies u(ξ) ≤ 0, i.e., (5) holds at ξ.

Proof. Let w ∈ C(Ω̄ ∩ B) be the barrier at ξ provided by Definition 4.1. Conditions (H2), (H5) imply that, up to replacing w with 2w + k|x − ξ|², with k > 0 small enough, it is not restrictive to assume that w > 0 outside the point ξ. We can also suppose, without loss of generality, that w ≥ 1 > u on Ω̄ ∩ ∂B. Assume by contradiction that u(ξ) > 0. For ε > 0, we set

kε := max_{Ω̄∩B̄} u/(w + ε),

and let xε be a point where this maximum is attained. Since kε ≥ u(ξ)/ε, we have that kε → ∞ as ε → 0⁺, whence xε → ξ. Then it makes sense to use ν(xε) := Dd(xε), where d(x) is the signed distance function from ∂Ω, positive inside Ω and smooth in a neighbourhood of ∂Ω. We follow now the strategy of the strong comparison principle when comparing a continuous supersolution with a possibly discontinuous subsolution, see Theorem 7.9 in [8]. We consider the function

Φ(x, y) := u(x) − kε(w(y) + ε) − (n/2)|x − y + (δ/n)ν(xε)|², with n, δ > 0.

Let then (xn, yn) be a maximum point of Φ over (Ω̄ ∩ B̄)²; of course the two points also depend on δ and ε, but we avoid stressing this fact, to simplify the notation. We have that

Φ(xn, yn) ≥ Φ(xε, xε) = −δ²/(2n). (7)

Since w is continuous, we first use this inequality to infer that both xn and yn converge to xε as n tends to infinity. Then, together with the upper semicontinuity of u and the fact that u − kε(w + ε) ≤ 0, it implies that n(xn − yn) + δν(xε) = o(1) as n → ∞. Since ν is continuous, we eventually derive yn − xn = (δ/n)ν(xε) + o(1/n). It follows that yn ∈ Ω for n large. This allows us to use the equation of w as a supersolution.
As far as u is concerned, we have u(xn) ≥ kε(w(xn) + ε) − o(1). Choosing δ small enough, compared to εkε, we get that u(xn) > 0, so that we can use the equation of u at xn even if xn ∈ ∂Ω. Usual viscosity arguments (see Theorem 3.2 in [8]) then apply at xn and yn. Hence, by (H6), there is K such that, for bounded r, the resulting error terms are controlled as n → ∞; by (H5) we can choose δ small enough and n large in such a way that these terms are absorbed. Whence, by (7), we reach a contradiction with kε → ∞.

At least if F is linear, the existence of a global smooth barrier implies that µ1 = λ̄1. Namely, assume that there exists v ∈ C² such that F[v] ≥ 1 in some neighbourhood of ∂Ω and v = 0 on ∂Ω. Let λ > 0 be such that F[ϕ] ≥ λϕ for some ϕ ∈ LSC(Ω) ∩ L∞(Ω) such that ϕ > 0 in Ω. Then there exists ρ > 0 (only depending on λ) such that a suitable combination of ϕ and ρv provides an admissible test function for µ1 on a neighbourhood of Ω̄, whence µ1(F, Ω) > 0.

Remark 5. The previous result applies to a significant example, namely the case where the domain Ω is invariant for the associated stochastic dynamics dXt = b(Xt)dt + √2 dWt, defined on a standard probability space, Wt being a Wiener process in R^N. In fact, it is well known that Ω is invariant (and, at the same time, Ω̄ is invariant) if and only if the Fichera condition is violated everywhere on the boundary; see e.g. [12], [13] and [7] for a complete discussion of this property, even in non-smooth domains.
We now turn to the proof of Theorem 4.4. Recall that the result would follow if we show that (5) or (6) holds at every ξ ∈ ∂Ω. One can readily check that the Fichera condition implies that, for δ > 0 small enough, the function w(x) := log(δ + d(x)) − log δ is a barrier at ξ in the sense of Definition 4.1. Thus, by Proposition 4.3, (5) holds on the connected components where the Fichera condition is fulfilled. Let us show that (6) holds on the others. Since the Fichera condition does not involve the zero order term of the operator, we can restrict to λ = 0 in (6). Hence, the proof of Theorem 4.4 relies on the following result, which is essentially proved in [2], Lemma 4.1. For the sake of clarity, since there are minor differences in our setting, we provide a simple proof below.

Lemma 4.5. Let Γ be a relatively open subset of ∂Ω on which the Fichera condition is everywhere violated, and let φ ∈ LSC(Ω) be a positive supersolution of F[u] = 0 in Ω, extended to Γ by φ(ξ) := lim inf_{x→ξ} φ(x). Then φ is a supersolution of F[u] = 0 in Ω ∪ Γ.

Proof. In this statement, we use the convention that φ automatically satisfies the condition of being a supersolution at the points ξ ∈ Γ where φ(ξ) = +∞. Let ξ ∈ Γ and ψ ∈ C²(Ω ∪ Γ) be such that (φ − ψ)(ξ) = min_{Ω̄∩B}(φ − ψ) = 0, for some closed ball B (with positive radius) centred at ξ and satisfying B ∩ ∂Ω ⊂ Γ. Our aim is to show that F[ψ](ξ) ≥ 0. By usual arguments, it is not restrictive to assume that the above minimum is strict. Consider the family of functions (ψε)ε>0 defined in Ω by ψε(x) := ψ(x) + ε log(d(x)). Let (xε)ε>0 in Ω ∩ B be such that (φ − ψε)(xε) = min_{Ω̄∩B}(φ − ψε), and let ζ ∈ Ω̄ ∩ B be the limit as ε → 0⁺ of (a subsequence of) xε. For x ∈ Ω ∩ B, we see that (φ − ψ)(xε) ≤ (φ − ψε)(xε) ≤ (φ − ψε)(x). Since this holds for any x ∈ Ω ∩ B, applying this inequality to a sequence of points along which φ tends to φ(ξ), we infer that ζ = ξ, because φ − ψ has a strict minimum at ξ. This shows that xε → ξ as ε → 0⁺. In particular, since xε ∉ ∂B, we deduce, φ being a supersolution in Ω, that

F[ψε](xε) ≥ 0. (10)

Choosing in place of x a sequence of points converging to ξ, along which φ tends to φ(ξ), we eventually infer that (φ − ψ)(xε) → 0 as ε → 0⁺. Therefore, passing to the limit in (10), we deduce F[ψ](ξ) ≥ 0, which concludes the proof.
Remark 6. We do not know whether or not µ1 and λ̄1 coincide when ∂Ω has a connected component containing both points where the Fichera condition is satisfied and points where it is not. The problem is that positive supersolutions in Ω may fail to be supersolutions at the points ξ that satisfy the Fichera condition but belong to the boundary of the set where the Fichera condition does not hold. In such a case, one could replace the perturbation ε log(d(x)) used in the proof of Lemma 4.5 with ε log(|x − ξ|), and the perturbation terms could be controlled if the sequences (xn)n∈N converging to ξ, on which φ tends to φ(ξ), satisfy d(xn, ∂Ω) ≥ c|xn − ξ| for some c > 0. This is the so-called cone condition, namely that the value of φ at ∂Ω may be reached along at least one sequence of points lying in a cone. The relevance of this condition for strong comparison results (i.e. comparison of viscosity solutions discontinuous at the boundary) was already pointed out, specifically in connection with stochastic control problems; see [14], [2]. In particular, the conclusion of Lemma 4.5 would still hold if the cone condition is fulfilled at any point of the boundary (or at least at those points where a barrier does not exist).
"year": 2013,
"sha1": "390dfb416e517113e7ee7906d234dd148ea485bf",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1310.3192",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1cc01c81d7cefc9a043c562d9f75e58c9654fa02",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.