Evaluation of the “Shifting Weight using Intermittent Fasting in night-shift workers” weight loss interventions: a mixed-methods protocol
Introduction: Shift workers are at greater risk of obesity-related conditions. The impacts of working at night present a challenge for designing effective dietary weight-loss interventions for this population group. The Shifting Weight using Intermittent Fasting in night-shift workers (SWIFt) study is a world-first, randomized controlled trial that compares three weight-loss interventions. While the trial will evaluate the effectiveness of weight-loss outcomes, this mixed-methods evaluation aims to explore for whom weight-loss outcomes are achieved and what factors (intervention features, individual, social, organisational and wider environmental) contribute to this. Methods: A convergent, mixed-methods evaluation design was chosen, in which quantitative and qualitative data collection occurs concurrently, the data are analyzed separately, and the results are converged in a final synthesis. Quantitative measures include participant engagement assessed via dietary consult attendance, fulfillment of dietary goals, dietary energy intake, adherence to self-monitoring, and rates of participant drop-out, analyzed for frequency and proportions. Regression models will determine associations between engagement measures, participant characteristics (sex, age, ethnicity, occupation, shift type, night shifts per week, years in night shift), intervention group, and weight change. Qualitative measures include semi-structured interviews with participants at baseline, 24 weeks, and 18 months, and fortnightly audio-diaries during the 24-week intervention. Interviews and diaries will be transcribed verbatim and analyzed using five-step thematic framework analysis in NVivo. Results from the quantitative and qualitative data will be integrated in table and narrative form to interrogate the validity of conclusions. Discussion: The SWIFt study is a world-first trial that compares the effectiveness of three weight-loss interventions for night-shift workers. This mixed-methods evaluation aims to further explore the effectiveness of the interventions. The evaluation will determine for whom the SWIFt interventions work best, what intervention features are important, and what external factors need to be addressed to strengthen an approach. The findings will be useful for tailoring future scalability of dietary weight-loss interventions for night-shift workers. Clinical trial registration: This evaluation is based on the SWIFt trial registered with the Australian New Zealand Clinical Trials Registry [ACTRN 12619001035112].
Introduction
Shift workers make up almost 30% of the workforce worldwide and undertake critical work that allows a 24-h society to function (1). This essential work comes with a disproportionately greater risk of obesity, type 2 diabetes, and cardiovascular disease (2, 3). Night-shift workers fall within the highest risk category, with greater odds of these poor health outcomes (2, 3). Given the link between increased weight and metabolic conditions (4-6), weight loss is a logical target for reducing disease risk in this population. There is currently limited guidance for night-shift workers on best-practice dietary approaches for weight loss (7). Recent reviews of nutrition and weight-loss interventions for night-shift workers have identified a limited number of published studies and no statistically or clinically significant effect on weight loss (8). More research is needed to understand which dietary weight-loss interventions are best suited to night-shift workers (2, 9).
Night work is associated with metabolic misalignment, circadian disruption, and differences in 24-h energy expenditure compared to day work, which, in turn, are thought to contribute to weight gain (10). These factors offer a target for dietary weight-loss approaches for night-shift workers. For weight-loss interventions to be successful in the night-shift working population, meal timing may need to be considered in addition to energy restriction (10, 11). The Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study is a randomized controlled trial (RCT) with three parallel intervention arms that compares the effectiveness of three dietary interventions on weight loss in night-shift workers, to investigate whether the timing of energy restriction is beneficial for night-shift workers (11). The SWIFt trial aims to investigate whether a 5:2 intermittent fasting approach that aligns two fast periods with night shifts has benefits for both weight and metabolic outcomes (11). The 5:2 approach limits energy consumption to 20%-25% of energy requirements on two 'fast' days per week, with ad libitum eating on the remaining 5 days (11). While the SWIFt RCT will examine the effectiveness of the dietary interventions for weight loss, it is now recommended that evaluations be conducted alongside the trial to more fully understand the factors (both mechanisms of action and external influences) contributing to intervention effectiveness (12-18). This information is needed to improve effectiveness and to tailor future scalability of an intervention (12-18). Evaluations typically explore participant engagement to determine the extent to which the "active ingredients" or proposed mechanisms of action of an intervention can explain the study outcomes (13, 15). In addition, a key consideration in interventions involving human participants is how participants engage with the study requirements, that is, what drives participant "responsiveness" to the requirements of the intervention and leads to engagement (18). This allows a deeper understanding of what may drive participant engagement with a dietary intervention in a real-world setting.
Evaluations also typically explore whether contextual factors influence outcomes and underlying mechanisms (15). Contextual factors may be positive (enablers) or negative (barriers) and relate to different spheres of influence, ranging from the individual (e.g., participant characteristics such as age or personality type), the social (e.g., peer influence), and the organisational (e.g., workplace environment), to the wider environment (19). An understanding of these underlying mechanisms and contextual factors in the SWIFt study will be useful for understanding who can benefit the most from the interventions and for replicating the potential benefits in other non-research settings.
Aims and research questions
The overall aim of this study is to explore the factors (intervention features, individual, social, organisational and wider environmental) contributing to weight loss. Specifically, the objectives (outlined in Table 1) are to describe participant engagement, to explore the factors that influence engagement, and to explore for whom weight-change outcomes are achieved for each of the SWIFt weight-loss interventions.
Intervention of interest
The full study protocol has been described previously (11). In summary, shift workers will be randomized to one of three dietary interventions: 20% continuous daily energy restriction (CER), versus 5:2D (day fast twice per week), versus 5:2N (night fast twice per week, aligning with a night shift), for 24 weeks with a 12-month follow-up period. In the 5:2 interventions, participants will aim to limit intake for each fasting period to 2,100 kJ/day (females) or 2,500 kJ/day (males), providing a weekly energy restriction comparable to the CER intervention (i.e., 20%). Study participants will see a study dietitian fortnightly for the first 8 weeks and then monthly for the last 16 weeks of the intervention period and will be provided with study foods for 2 days per week. The dietitian will explain the dietary intervention, set goals with participants, discuss strategies to assist with dietary adherence, and monitor progress. At the conclusion of the 24-week intervention, participants will be given practical suggestions and food options for continuing their allocated dietary intervention. Participants will be followed up at 2, 6, and 12 months during the follow-up period to monitor progress. No food will be supplied to participants during this time.
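To illustrate how the fast-day allowance translates into a weekly restriction comparable to 20% CER, the following Python sketch works through the arithmetic. The 10,500 kJ/day requirement and the assumption that non-fast days sit exactly at requirement are hypothetical, for illustration only; the study estimates energy requirements individually.

```python
# Illustrative arithmetic only; values are not from the protocol.

DAILY_REQUIREMENT_KJ = 10_500   # hypothetical estimated daily energy requirement
FAST_DAY_KJ = 2_100             # fast-day allowance (females) under the 5:2 arms

def weekly_intake_cer(requirement_kj: float, restriction: float = 0.20) -> float:
    """Continuous energy restriction: every day at (1 - restriction) of requirement."""
    return 7 * requirement_kj * (1 - restriction)

def weekly_intake_52(requirement_kj: float, fast_day_kj: float) -> float:
    """5:2 fasting: two fast days at a fixed allowance, five ad libitum days
    assumed here to sit at the full requirement."""
    return 2 * fast_day_kj + 5 * requirement_kj

full = 7 * DAILY_REQUIREMENT_KJ
cer = weekly_intake_cer(DAILY_REQUIREMENT_KJ)
fast = weekly_intake_52(DAILY_REQUIREMENT_KJ, FAST_DAY_KJ)

print(f"CER week: {cer:,.0f} kJ ({1 - cer / full:.0%} below requirement)")
print(f"5:2 week: {fast:,.0f} kJ ({1 - fast / full:.0%} below requirement)")
```

On these illustrative numbers both arms land near a 20% weekly restriction, which is the sense in which the protocol describes them as comparable.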
Methods and analysis
This study uses "pragmatism" as the theoretical perspective guiding the research design. Pragmatism is oriented to what works and to choosing multiple, best-fit methods to answer the research questions posed (20, 21). A convergent, mixed-methods, experimental (or intervention) design (20) has been selected given that the purpose of this study is to evaluate a randomized controlled trial (an experimental intervention). In addition, this design has been selected due to the timeframe of the SWIFt study, where the outcomes of the first set of quantitative data analysis will not be known until all participants have completed 24 weeks of their dietary intervention (see Figure 1). As such, quantitative and qualitative data will be collected concurrently during the intervention and follow-up period of the SWIFt study. A mixed-methods approach has been chosen to obtain different but complementary data on the research aims and to bring together the strengths of both methods (20-22).
Participants
Eligibility criteria and recruitment for the overall SWIFt study have been previously described (11). In summary, study participants will be aged between 25 and 65 years and working a minimum of two nights per week for a minimum of six consecutive months. Participants will have a body mass index (BMI) of ≥28 kg/m² for non-Asian men and women and ≥26 kg/m² for Asian men and women, and be able to attend either study site. Please refer to the previously published study protocol for exclusion criteria relating to medical and lifestyle conditions that may affect body composition, metabolism, or ability to follow the dietary protocol (11). Participants who consent to be invited to complete longitudinal audio diaries (LADs) and/or semi-structured interviews will be contacted after randomization based on a maximum-variation sampling approach (23). Maximum variation is used to provide a diverse set of participant viewpoints across age, sex, occupation, shift type, and intervention group. Participants who discontinue the intervention will also be invited for an interview. Identifying sample size a priori for qualitative research is problematic (24); therefore, data collection and preliminary data analysis will occur concurrently, and recruitment will cease once maximum variation is met and limited new information is identified.
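As a minimal sketch, the function below encodes the core inclusion rules described above in Python. The parameter names and the single 'Asian background' flag are hypothetical simplifications of the study's screening form; the exclusion criteria are not modelled here.

```python
def eligible(age: int, nights_per_week: float, months_on_nights: int,
             bmi: float, asian_background: bool) -> bool:
    """Core SWIFt inclusion criteria (medical/lifestyle exclusions are screened separately)."""
    bmi_threshold = 26.0 if asian_background else 28.0
    return (25 <= age <= 65
            and nights_per_week >= 2
            and months_on_nights >= 6
            and bmi >= bmi_threshold)

print(eligible(age=40, nights_per_week=3, months_on_nights=12,
               bmi=29.1, asian_background=False))   # True
print(eligible(age=40, nights_per_week=3, months_on_nights=12,
               bmi=27.0, asian_background=True))    # True (lower BMI threshold)
```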
Data sources
Key objectives and data sources used to meet the overall aim of this mixed-methods evaluation have been developed in line with similar evaluations (25) and are outlined in Table 1. Qualitative data will be collected and reported in line with the consolidated criteria for reporting qualitative research (COREQ) (26). It is anticipated that data collection will be complete by September 2023. For the purposes of this evaluation, participant "engagement" with the intervention is defined as whether participants: follow the general requirements of the study protocol (e.g., attendance of dietetic consults, self-monitoring); follow the dietary requirements of the intervention; and choose to continue with the intervention (e.g., rates of drop-out).
Objective 1: To describe participant engagement overall and for each of the SWIFt weight-loss interventions.
Quantitative data sources/analysis:
• 1.1 Dietary review attendance: number of dietetic reviews attended (yes/no); also described as the percentage of reviews attended, overall and by intervention group.
• 1.2 Dietary goals: number of reviews at which the participant is following dietary goals as assessed by the dietitian (yes/no); also described as the percentage of reviews where goals are met, overall and by intervention group.
• 1.3 Pre- and post-intervention energy restriction: 7-day food diary (baseline compared to 24 weeks and 18 months), overall and by intervention group.
• 1.4 Self-monitoring: number of food checklists collected; also described as the percentage of food checklists collected, overall and by intervention group.
• 1.5 Drop-out: frequency and percentage of participants who drop out, and time to drop-out, overall and by intervention group.

Objective 2: To explore factors (intervention features, individual, social, organisational and wider environmental) that influence participant engagement for each of the SWIFt weight-loss interventions.
Qualitative data sources/analysis:
• Features of the intervention mapped to the BCT and TDF.
• Participant reasons for drop-out.
• Enablers/barriers to engagement mapped to the SEM.

Objective 3: To explore for whom weight-change outcomes are achieved for each of the SWIFt weight-loss interventions and the influence of participant engagement.
Qualitative data sources/analysis:
• Features of the intervention mapped to the BCT and TDF.
• Enablers/barriers to outcomes mapped to the SEM.
Quantitative data sources
The SWIFt study data will be entered primarily via direct data entry using Research Electronic Data Capture (REDCap; Vanderbilt University, Nashville, United States).REDCap is a secure web interface with data checks during data entry and uploading to ensure data quality and is housed on secure servers operated by Monash University, Australia.
Dietary review attendance
Details of participant attendance at each dietetic consult/review will be recorded, including the following information: attendance or missed appointment, reasons for a missed appointment (if known), and a summary of the discussion at the consult. Each participant will have a score out of 8 (the total number of dietetic reviews over the 24 weeks), representing the number of reviews attended. See Objective 1.1 in Table 1.
Dietary goals
At each study visit, the study dietitian will judge and record (yes/no) whether the participant has followed the goals of their allocated dietary approach (e.g., followed the dietary changes recommended by the dietitian for the CER intervention, or consumed the study foods for each fasting period for the 5:2 diets). Each participant will have a score out of 8 (the total number of dietetic reviews over the 24 weeks), representing the number of reviews at which they followed their dietary goals. See Objective 1.2 in Table 1.
Dietary energy restriction
Estimated daily energy intake will be measured via a food diary for the 7 days leading into baseline, Week 24 (end of the active interventions), and the 18-month follow-up. Participants will complete the food diary either by paper record or by an equivalent online food diary ("Research Food Diary App", Xyris Software Pty Ltd., Australia), depending on participant preference. Both methods have been shown to result in similar nutrient intake estimates for participants following a weight-loss intervention (27). Participants will be encouraged to complete the food diary in real time to minimize the potential for recall bias that arises if diaries are completed retrospectively. The food diary will be entered into Foodworks 7 (Xyris Software Pty Ltd., Australia) to calculate total daily energy intake. The 24-week and 18-month average daily energy intake will be divided by the baseline average daily energy intake to provide an estimate of the percent change in energy intake achieved at each time-point. See Objective 1.3 in Table 1.
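The percent-change calculation just described is simple enough to state directly in code. The following Python sketch uses made-up diary values, and the function names are illustrative rather than part of the study's actual analysis scripts.

```python
# Hypothetical 7-day diaries in kJ/day; the two low entries at week 24
# correspond to fast days under a 5:2 arm.
baseline = [9800, 10400, 11000, 9500, 10100, 12000, 9900]
week24   = [8600,  9100,  9400, 2100,  9000,  2100, 9300]

def mean_daily_intake(diary_kj: list[float]) -> float:
    """Average daily energy intake over a 7-day food diary."""
    return sum(diary_kj) / len(diary_kj)

def percent_of_baseline(followup: list[float], base: list[float]) -> float:
    """Follow-up average daily intake as a percentage of the baseline average."""
    return 100 * mean_daily_intake(followup) / mean_daily_intake(base)

pct = percent_of_baseline(week24, baseline)
print(f"Week-24 intake is {pct:.0f}% of baseline "
      f"(estimated {100 - pct:.0f}% energy restriction)")
```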
Self-monitoring
Each SWIFt study participant will be provided with food for 2 days of the week, totaling approximately 2,100 kJ/day (females) or 2,500 kJ/day (males). For the 5:2 intervention groups, the total energy content of the food provided is equivalent to the total energy intake permitted during fasting periods and is designed to assist with adherence to the intervention. For participants in the CER group, the same foods will be provided for equity across all participants and, if consumed, should replace other foods in their diet (i.e., not increase energy intake, and form part of their 20% energy restriction for the day). If a participant is unable to attend the clinic facility to collect food (e.g., due to COVID-19 travel restrictions), supermarket vouchers will be provided to allow the participant to buy the food items specified by the study dietitian. For all intervention groups, a food checklist for each week of the 24-week intervention will be provided to participants to note down the food provided, the date/time the food was consumed, the amount consumed (g/ml), and any other foods consumed in addition to the study foods on that day. At each study visit, collection of the checklist will be recorded by the study dietitian (yes/no). Each participant will have a score out of 8 (the total number of dietetic reviews over the 24 weeks), representing whether checklists were collected at each visit. See Objective 1.4 in Table 1.
Other engagement measures
Participant drop-out will be recorded, including the date and reasons for drop-out (if specified). See Objective 1.5 in Table 1.
Other SWIFt study quantitative data sources
Other SWIFt study data sources to be used for this evaluation have been described previously as part of the wider SWIFt study protocol (11) and include: weight at baseline, 24 weeks, and 18 months; and demographic and socioeconomic factors at baseline [age, sex, ethnicity (28), occupation, shift schedule, number of night shifts per fortnight, years in night shift]. See Objectives 2 and 3 in Table 1.
Semi-structured interviews
In-depth, semi-structured interviews will be undertaken with participants from each dietary intervention. Three sets of interviews will be undertaken (at baseline, 24 weeks, and 18 months). Each interview is expected to be approximately 30-45 min in duration. An interview guide for each stage of interviews has been informed by existing dietary research in the shift-working population (29, 30), the SWIFt pilot study, and the Theoretical Domains Framework (TDF) (see Supplementary material 1). The TDF provides a comprehensive set of determinants of behaviour grouped into constructs that have been derived from a review of relevant behaviour change theories (31). It has been successfully used for designing interview guides for previous evaluations of behaviour change interventions (32) and for analyzing interview data in a shift-worker population (29). It is anticipated that interviews will occur either over the phone or via Zoom video-conference (Zoom Video Communications Inc., Version 5.13) at a time convenient to the study participant. The interviews will be recorded and transcribed verbatim (e.g., including ums, ahs, laughter and so on) by the main researcher (CD) or a transcription service. See Objectives 2 and 3 in Table 1.
The purpose of the baseline interview is to explore the motivations of participants for participating in the SWIFt study and perceived enablers or barriers to prior weight management.The interview will also identify what participants are hoping to achieve during the 24-week intervention.
The purpose of the 24-week (post-intervention) interview is to explore the participant's experience of their allocated dietary intervention; in particular, to explore participants' perceptions of the reasons for engagement/non-engagement with the intervention, study factors thought to have assisted, and contextual factors (e.g., enablers and barriers at the individual, social, organisational, and wider environmental levels) thought to have influenced engagement and the weight-change outcomes of the study. The purpose of the 18-month interview is to explore the participant's experience in following their allocated dietary approach without the support of the study intervention. This will explore the enablers and barriers to following their allocated dietary approach and managing weight over this 12-month follow-up period.
Longitudinal audio diaries (LADs)
A subset of participants will keep an approximately five-minute, fortnightly longitudinal audio diary (LAD) account of their experience of their dietary intervention over the 24-week period. LADs have been found to be a flexible and useful tool for enriching qualitative data collected on experiences over time that can be context-specific (33). LADs allow participants to reflect on experiences at the time of the experience, rather than relying on recall, as is typically the process in standard in-depth interviewing (33). In addition, it has been suggested that participants may more freely disclose matters of personal salience without the presence of a researcher (33).
The aim of the LADs is to capture the participant's experience of their allocated dietary intervention each fortnight, in particular what has worked or not worked for aspects of food consumption and the enablers or barriers thought to have influenced this experience. Participants will be encouraged to use their mobile/smart phone to record their diary, or will be provided with an alternative option if preferred. An audio diary prompt sheet will be provided to participants with questions to consider (see Supplementary material 2), designed in accordance with previous research using this method as an evaluation tool (34). A practice session with their mobile phone/recorder will be undertaken at the end of the first interview to ensure participants are able to use the technology. The audio diaries will be transcribed verbatim by the main researcher (CD) or a transcription service. After listening to an audio diary, if issues of concern are raised, such as participant discomfort, this will be discussed with the wider research team for appropriate next steps and escalation if required.
Researcher reflexivity
Researcher reflexivity is an important component that allows for an awareness of how a researcher's positionality may influence the research process, and provides transparent and high-quality qualitative research (35, 36). The main researcher (CD) is an accredited practising dietitian undertaking this work as part of her PhD studies and has experience in both quantitative and qualitative research methods. CD will be involved in the wider SWIFt study as a researcher collecting data and as a study dietitian undertaking dietary consults with SWIFt participants. Reflexive diaries will be completed by the main researcher at key data points along the research process, including: after each participant interview, after reviewing participant LADs, and at each step of the data analysis process. Insights into this process will be discussed, as appropriate, at fortnightly meetings with the wider research team. This will allow documentation of key steps in the data collection and analysis process that can be reported as part of the study results, adding transparency to the research process.
Data analysis
Quantitative data on participant engagement (see Objective 1 in Table 1) will be analyzed for frequency/count and proportions, or as overall daily energy restriction as described above. Regression models will be used to examine associations between engagement measures, participant characteristics (sex, age, ethnicity, occupation, shift schedule, number of night shifts per fortnight, years in night shift), intervention group, and weight change. It is anticipated that general linear regression models will be used for dependent variables that are continuous and generalized linear regression models will be used for dependent variables that are counts. Differences in drop-out/time to drop-out between intervention groups will be examined via survival curve analysis.
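As a sketch of what these models could look like in practice, the snippet below fits one linear and one count model with statsmodels. The data frame and every column name are hypothetical stand-ins for the REDCap export, not the study's actual analysis code, and the survival-analysis step is omitted.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("swift_engagement.csv")   # hypothetical data extract

# General linear model: continuous outcome (weight change at 24 weeks),
# with engagement, intervention group, and participant characteristics.
weight_model = smf.ols(
    "weight_change_24wk ~ reviews_attended + C(intervention_group) + age "
    "+ C(sex) + nights_per_fortnight + years_night_shift",
    data=df,
).fit()
print(weight_model.summary())

# Generalized linear model: count outcome (dietetic reviews attended, 0-8).
engagement_model = smf.glm(
    "reviews_attended ~ C(intervention_group) + age + C(sex) + C(occupation)",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(engagement_model.summary())
```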
Qualitative data include interview transcripts and longitudinal audio diary transcripts, which will be entered into NVivo (qualitative research computer software; Version 12) and analyzed using the five steps of "framework analysis" (37, 38). In step one (familiarization), a subset of transcripts representing a mix of dietary intervention groups will be analyzed inductively by one researcher (CD) and reviewed by another researcher (SK) to identify influencing factors (intervention features in addition to enablers and barriers) for participant engagement with the allocated dietary intervention.
Step two (identifying a thematic framework) involves developing an initial coding framework, based on step one, that details coding for the influencing factors for participant engagement. This step will also map enablers and barriers to the domains of the social-ecological model (SEM) (19) as a guide for developing themes and sub-themes. The SEM consists of five distinct domains (individual, social, organisational, community, public policy) and considers how these factors interact to influence health behaviours, enabling the development of strategies to improve behaviours. In step three (indexing), the framework will be used to code all data in NVivo. The Theoretical Domains Framework (TDF; described previously) (31) and the behaviour change taxonomy (BCT) (39) will also be used to map themes and sub-themes for intervention features. The BCT is a set of 93 distinct behaviour change techniques that describe the components of an intervention thought to drive behaviour change (39). Where themes do not appear to fit within the mentioned frameworks, additional theoretical frameworks [e.g., the Behaviour Change Wheel (BCW)] may be incorporated as necessary. In step four (charting), any differences between the dietary interventions and participant characteristics will be identified.
Finally, step five (mapping and interpretation) involves interpreting the findings based on existing literature. Merging of the qualitative and quantitative data (side-by-side comparison method) (40) will also be undertaken at this stage (see the last two analysis steps in Figure 1). Qualitative themes will be used to help explain any associations found between engagement measures, participant characteristics, intervention group, and weight change in the quantitative data. This will be undertaken in both table and narrative form to assist interpretation. Results from the different data sources will be integrated to confirm or refute the findings of each data source and to interrogate the validity of conclusions.
Discussion
Night-shift workers are at greater risk of obesity, type 2 diabetes, and cardiovascular disease (2, 3). There is limited guidance on what dietary interventions may be useful for weight management in this population group (7, 8). It has been suggested that both energy restriction and meal timing may be needed to address circadian misalignment and to result in effective weight loss (10, 11). To explore this concept, the Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study is a world-first, randomized controlled trial (RCT) that compares three weight-loss interventions (11). While the SWIFt RCT will determine the effectiveness of these dietary interventions for weight loss, this mixed-methods evaluation will provide an important step in determining for whom the SWIFt interventions work best, what intervention features are important, and what external factors may need to be addressed to strengthen an approach. The findings from this mixed-methods evaluation will be useful for tailoring any future scalability of the SWIFt dietary weight-loss interventions for night-shift workers.
Ethics and dissemination
This mixed-methods evaluation protocol was approved by the Monash Health Human Research Ethics Committee (RES 19-0000-462A) and registered with the Monash University Human Research Ethics Committee. Ethical approval has also been obtained from the University of South Australia (HREC ID: 202379) and the Ambulance Victoria Research Committee (R19-037). Consent to participate in both the quantitative and qualitative parts of the study was sought before data collection. Personal information (name and contact details) will be collected by study researchers and stored separately from research data in a password-protected electronic database. Only the study's researchers will have access to this information. All methods will be performed in accordance with the relevant guidelines and regulations of the approving ethics committees. Results from this evaluation will be disseminated via peer-reviewed journals, conference presentations, student theses, and presentations to interested workplaces that include shift workers.
FIGURE 1: Mixed-methods evaluation design of the Shifting Weight using Intermittent Fasting in night shift workers study. RCT, randomized controlled trial; CER, continuous energy restriction; 5:2N, intermittent fasting protocol whereby the two fast days coincide with night shifts; 5:2D, intermittent fasting protocol whereby the two fast days coincide with day shifts and/or days off.
TABLE 1: Objectives, data sources and analysis for the evaluation of the Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study.
Participant characteristics include: age, sex, ethnicity, occupation, shift schedule, number of night shifts per fortnight, and years in night shift. BCT, Behaviour Change Taxonomy; TDF, Theoretical Domains Framework; SEM, Social-Ecological Model.
A quantum measure of coherence and incompatibility
The well-known two-slit interference is understood as a special relation between an observable (localization at the slits) and a state (being on both slits). The relation between an observable and a quantum state is investigated in the general case. It is assumed that the amount of coherence equals that of incompatibility between observable and state. On these grounds, an argument is presented that leads to a natural quantum measure of coherence, called "coherence or incompatibility information". Its properties are studied in detail, making use of 'the mixing property of relative entropy' derived in this article. A precise relation between the measure of coherence of an observable and that of its coarsening is obtained and discussed from the intuitive point of view. Convexity of the measure is proved, and thus the fact that it is an information entity is established. A few more detailed properties of coherence information are derived with a view to investigating final-state entanglement in general repeatable measurement and, more importantly, general bipartite entanglement in follow-ups of this study.
Introduction
In a preceding article [1] coherence in a relative sense, i.e., understood as a relation between a given observable and a given quantum state, was postulated to be identical with incompatibility between observable and state as far as its quantity I_C is concerned. (For notation see the passage immediately following the proof of Proposition 5 below.) Then it was shown that bipartite pure-state entanglement is expressible as I_C (with a suitable observable).
Pure states cannot be obtained as mixtures. Therefore, the question whether I_C is concave, i.e., a genuine entropy quantity, or convex, i.e., a genuine information quantity, or neither, could not be posed in that context. The first aim of this study is to clarify this point. (This is done in Proposition 5.) To enable this, the mixing property of relative entropy (paralleling the mixing property of entropy and Donald's identity for relative entropy, see the Remark) is derived.
In a follow-up of the mentioned article [2] the special case of the final bipartite pure state |ψ⟩_12 in repeatable measurement, when the initial state is pure, was studied. It was shown that the initial quantity of incompatibility between the measured observable and the initial state reappears as the amount of entanglement in |ψ⟩_12, and is further preserved when it is shifted in reading the measurement result. This completes Vedral's result [3] that the information transfer from object (subsystem 1) to measuring apparatus (subsystem 2) does not exhaust the mutual information I_12 in the final state.
I think it is of interest to find out whether the mentioned preservation of the quantity of incompatibility between the measured observable and the initial pure state is restricted to pure states, or whether it can be generalized to mixed initial states. This is not a straightforward generalization. It requires more knowledge of I_C. The second aim of this study is to provide such knowledge, which will be possible due to the mentioned auxiliary relative-entropy relations (see section 3).
In a further preceding article [4] an arbitrary discrete incomplete observable A and its completion A_c to a complete observable were investigated, and it was shown that I_C(A, ρ) ≤ I_C(A_c, ρ) for any state ρ. This inequality is expected if the assumption on the identity of the amount of coherence and that of incompatibility is correct. But it is desirable to evaluate I_C(A_c, ρ) − I_C(A, ρ) and thus try to acquire more insight into the nature of I_C. This is the third aim of this article. (See the discussion after the proof of the theorem below.) The fourth aim of this paper is to present an argument that starts with the mentioned identity assumption and leads to an expression for the quantity of coherence in a natural way. Will this expression be the same as the one introduced ad hoc? This is done in section 2, and an affirmative answer is obtained. It is summed up in the conclusion (subsection 5.2).
The fifth and last aim of this investigation is perhaps the most important one. Namely, in [4] it was established that I_C plays an important role also in some mixed bipartite states. This line of research should be continued in a follow-up because it may contribute to our understanding of how mutual information in general bipartite states breaks up into a quasi-classical part and entanglement, which is the object of study of a wide circle of researchers, e.g., [5], [6]. To this purpose, one may need more detailed knowledge of the properties of I_C. To acquire such knowledge is the fifth aim of this article (see section 4).
Background in classical statistical physics
To obtain a background for our quantum study of coherence, we assume that a classical discrete variable A(q) = Σ_l a_l χ_l(q) is given (all a_l ∈ R being distinct). The symbol q denotes the continuous state variables (as a rule, it consists of twice as many variables as there are degrees of freedom in the system); the χ_l are the characteristic functions: ∀l: χ_l(q) ≡ 1 if q ∈ A_l, and zero otherwise. Naturally, the A_l are (Lebesgue measurable) sets such that A(q) = a_l if and only if q ∈ A_l, and ∪_l A_l = Q, where Q is the entire state space (or phase space) and the union is over disjoint sets.

Let ρ(q) be a continuous probability distribution in Q with the physical meaning of a statistical 'state' of the system. One can think of ρ(q) as of a mixture

ρ(q) = Σ_l p_l ρ_l(q),   (1)

where ∀l: p_l ≡ ∫_Q ρ(q) χ_l(q) dq are the statistical weights (the probabilities of the results a_l if A(q) is measured in ρ(q)), and ∀l, p_l > 0: ρ_l(q) ≡ ρ(q) χ_l(q) / p_l are the 'states' with definite (or sharp) values of A(q). Let B(q) be any other continuous or discrete variable. Then, utilizing (1), its average can be written

⟨B⟩_ρ = Σ_l p_l ∫_Q ρ_l(q) B(q) dq.   (2)

One distinguishes the contributions of the individual eigenvalues a_l of A(q) through the terms on the RHS. They contribute to ⟨B⟩_ρ each separately. All this serves only as a classical background to help us understand the nonclassical, i.e., purely quantum relations between the analogous quantum entities.
Transition to the quantum mechanical case
The quantum mechanical analogues of the mentioned classical entities are the following.
Discrete observables (Hermitian operators) A = Σ_l a_l P_l (spectral form in terms of distinct eigenvalues), a quantum state ρ (density operator), and an arbitrary observable B (Hermitian operator). The quantum average is ⟨B⟩_ρ ≡ tr(ρB). In the transition from classical to quantum one runs into a surprise that is known but, perhaps, not sufficiently well known. Before we formulate it in the form of a lemma, let us introduce the Lüders state ρ_L [7] in order to obtain the quantum analogues of relations (1) and (2). It is that mixture of states, each with a definite value of A, which has a minimal Hilbert-Schmidt distance from the given state ρ [8]. It is defined as

ρ_L ≡ Σ_l P_l ρ P_l = Σ_{l, p_l > 0} p_l ρ_l^L,   (3a)

where

∀l: p_l ≡ tr(ρ P_l)   (3b)

are again the statistical weights in (3a) (or the probabilities of the results a_l when A is measured in ρ), and

∀l, p_l > 0: ρ_l^L ≡ P_l ρ P_l / p_l   (3c)

are the states with definite values a_l of A. Finally,

⟨B⟩_{ρ_L} = Σ_l p_l tr(ρ_l^L B).   (3d)

Decomposition (3a) is the analogue of (1), and (3d) is that of (2).

Lemma 1. The following four statements are equivalent:

(i) The state ρ cannot be written as a mixture of states in each of which the observable A has a definite value.
(ii) The observable A and the state ρ are incompatible, i.e., the operators do not commute: [A, ρ] ≠ 0.

(iii) The state differs from its Lüders state: ρ ≠ ρ_L.

(iv) There exists an observable B such that

⟨B⟩_ρ ≠ ⟨B⟩_{ρ_L},   (4)

where the RHS is given by (3d). Proof is given in Appendix 1.
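A minimal numerical illustration of lemma 1 for a qubit may help (this sketch is mine, not from the article): A is a dichotomic 'which-slit' observable, ρ a state 'on both slits', and B an observable witnessing (4).

```python
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)    # eigenprojector "left slit"
P1 = np.array([[0, 0], [0, 1]], dtype=complex)    # eigenprojector "right slit"
A = 1 * P0 + 2 * P1                               # spectral form, distinct eigenvalues

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # equal superposition
rho = np.outer(psi, psi.conj())

rho_L = P0 @ rho @ P0 + P1 @ rho @ P1             # Lueders state, cf. (3a)

B = np.array([[0, 1], [1, 0]], dtype=complex)     # a witness observable for (4)

print(np.allclose(A @ rho, rho @ A))              # False: claim (ii), [A, rho] != 0
print(np.allclose(rho, rho_L))                    # False: claim (iii), rho != rho_L
print(np.trace(rho @ B).real, np.trace(rho_L @ B).real)   # 1.0 vs 0.0: claim (iv)
```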
The physical meaning of lemma 1 is that it defines a kind of quantum coherence as a special relation between observable and state. Experimentally it is exhibited in interference. In this relative sense (a relation between variable and state) it is lacking in classical physics, because there a state can always be written as a mixture of states in each of which the variable in question has a definite value (negation of (i), cf (1)). Though classical waves do exhibit a kind of coherence and show interference, this is in a different sense (cf section 5).
One should note that the Lüders state needs no other characterization than its role in lemma 1 (in particular (iii)). The fact that it is "closest" to ρ in the Hilbert-Schmidt metric, though actually not important for this study, raises the thought-provoking questions whether "closest" is true also in other metrics and, if not, why the Hilbert-Schmidt metric is more suitable.
We take two-slit interference [9] to serve as an illustration of lemma 1. Let A be a dichotomic position observable with two eigenvalues: localization at the left slit, and localization at the right slit on the first screen. Let ρ be a wave packet that has just arrived at this two-slit screen. Next, one has to find a suitable observable B such that inequality (4) is satisfied at the mentioned moment. Moreover, one wants to observe experimentally the LHS of (4), or rather the individual probabilities of the eigenvalues of B (that go into the LHS).
To this purpose, one actually replaces B by another localization observable A′ on a second screen, at which the photon will arrive some time later. This observable is suitable for observation (of its localization probabilities). Hence, one can define B ≡ U⁻¹A′U, U being the evolution operator expressing the movement of the particle from the two-slit screen to the second one. One should note that B is not a position observable though A′ is, because the Hamiltonian that generates U contains the kinetic energy (square of linear momentum).
Claim (i) of lemma 1 says that the particle is not moving through either the left or the right slit. Claim (ii) expresses the same fact algebraically. Namely, ρ, being a pure state |ψ⟩⟨ψ|, would commute with A only if |ψ⟩ lay in an eigensubspace of A. In our case this would mean that the particle traverses one of the slits.
The Lüders state ρ_L is, in some sense, the best approximation to ρ by a state traversing one or the other of the slits. Naturally, ρ ≠ ρ_L, as claimed by (iii). Claim (iv), i.e., relation (4), amounts to the same as the fact that the interference pattern on the second screen is not equal to the sum of those that would be obtained when only one of the slits were open (for some time) and then the other (for another, disjoint, equally long time).
In the two-slit experiment one actually observes the time-delayed equivalent of (4):

tr[(U ρ U⁻¹) A′] ≠ Σ_l p_l tr[(U ρ_l^L U⁻¹) A′].   (5)

Since the LHS of (5) is distinct from the RHS, one speaks of the former as interference.
In the described two-slit case the LHS of (5) gives fringes, whereas the RHS does not. Nevertheless, it is not always true that the LHS of (5) itself means interference. This is the case only with a suitable pair of A and ρ (cf (ii) in lemma 1). Let me give a counterexample. Let us take another two-slit experiment in which the slits have polarizers that give opposite linear polarization to the light passing the slits [10]. The state ρ in the slits is then such that we have equality in (5) (though A′ is the same), and there is no interference because [A, ρ] = 0. (The state ρ = |ψ⟩⟨ψ| is now in the composite spatial-polarization state space, and the spatial subsystem state, the reduced statistical operator, is a Lüders state.) One should note that when interference is displayed, one has three ingredients: the state ρ, the observable A the two eigenvalues of which play a cooperative role, and the second observable A′ the probabilities of eigenvalues of which are observed. Since in theory there can be many observables like A′, or B in (4), one likes to omit them. Then one speaks of coherence of the observable A in the state ρ. We make use of the same concepts in the general theory.

Definition 1. The LHS of relation (4), in case inequality (4) is valid, is called interference. If an observable A and a state ρ stand in such a mutual relation that any of the four claims of lemma 1 is known to be valid, then one speaks of coherence.
One should note that the concepts of interference and of coherence stand in a peculiar relation to each other: there is no coherence (between A and ρ) unless an observable B that exhibits interference can be, in principle, found; if the latter is the case, and only then, one may forget about B and concentrate on the relation between A and ρ, i.e., on coherence. The kind of quantum coherence investigated in this paper can be more fully called "eigenvalue coherence of an observable in relation to a state" in view of the cooperative role of some eigenvalues (or, more precisely, their quantum numbers, because the values of the eigenvalues play no role) as seen in (4).
Thus, any of the four (equivalent) claims in lemma 1 defines coherence. But for the investigation in this article the important claim is (ii): coherence exists if and only if A and ρ do not commute. This remark is the cornerstone of the expounded approach to investigating coherence (as in the preceding studies [1], [4]).
How to obtain a quantum measure of coherence?
We start with the assumption that coherence of an observable A with respect to a state ρ is essentially the same thing as incompatibility of A and ρ: [A, ρ] ≠ 0. The quantum measure will be called coherence or incompatibility information, and it will be denoted by I_C(A, ρ), or shortly I_C (cf (10) below).
One wonders what the meaning of a larger value of I_C for coherence is. It is more of what? The only answer I can think of is in accordance with the above assumption: more of incompatibility of A and ρ.
The next question is: do we know what a "larger amount of incompatibility" is? The seminal review on entropy by Wehrl [11] (section III.C there) explains that each member of the Wigner-Yanase-Dyson family of skew informations

I_p(ρ, A) ≡ −(1/2) tr([ρ^p, A][ρ^{1−p}, A]),  0 < p < 1,   (6)

is a good measure of incompatibility of ρ and A. Namely, I_p(ρ, A) is positive unless ρ and A commute, when it is zero. It is also convex, as an information quantity should be. Substituting the spectral form of A in (6), one obtains an expression in which the eigenvalues a_l appear explicitly. One can see that I_p depends on the eigenvalues of A.
As is well known, A and ρ are compatible if and only if all eigenprojectors P_l of the former are compatible with the latter. The eigenvalues of A do not enter this relation. Hence, I_p(ρ, A) given by (6) is not the kind of incompatibility measure that we are looking for. One wonders if there is any other kind.
To obtain an answer, we turn to a neighboring quantity: the quantum amount of uncertainty of A in ρ. It is the entropy S(A, ρ):

S(A, ρ) ≡ H(p_l),   (7a)

where

H(p_l) ≡ −Σ_l p_l log(p_l)   (7b)

is the Shannon entropy and

∀l: p_l ≡ tr(P_l ρ).   (7c)

It is known that whenever A and ρ are incompatible, and A is a complete observable, i.e., if all its eigenvalues are nondegenerate (we'll write it as A_c), then always S(A_c, ρ) > S(ρ). When A_c is compatible with ρ, the two quantities are equal. The interpretation that the larger the difference S(A_c, ρ) − S(ρ), the more incompatible A_c and ρ are seems plausible. Hence, we require, for complete observables A_c, that I_C(A_c, ρ) should equal this quantity: I_C(A_c, ρ) ≡ S(A_c, ρ) − S(ρ). Equivalently, one can require that the following peculiar decomposition of the entropy in the case of a complete observable should hold:

S(ρ) = S(A_c, ρ) − I_C(A_c, ρ).   (8)

On the other hand, if A is a discrete observable that is complete or incomplete but compatible with ρ, then the following decomposition parallels (8):

S(ρ) = S(A, ρ) + Σ_l p_l S(P_l ρ P_l / p_l)   (9)

(cf (7a), (7b) and (7c)). If p_l = 0, the corresponding term in the sum is by definition zero.
Decomposition (9) is obtained by application of the mixing property of entropy [11] (see Sections II.F and II.B there). It applies to orthogonal state decomposition, in this case to ρ = Σ_l p_l (P_l ρ P_l / p_l), and it reads S(ρ) = H(p_l) + Σ_l p_l S(P_l ρ P_l / p_l) (cf (7b)).

The coherence information I_C does not appear in (9). This is as it should be, because it is zero due to the assumed compatibility of A and ρ.

In the case of a general discrete A, which is complete or incomplete, compatible with ρ or not, we must interpolate between (8) and (9). This can be done by observing that both decompositions can be rewritten in a unified way as

I_C(A, ρ) = S(Σ_l P_l ρ P_l) − S(ρ)   (10)

(valid for either A = A_c or for [A, ρ] = 0). The searched-for interpolated formula should thus be the same relation (10), but valid this time for all discrete A. Thus, I_C(A, ρ) is obtained by the presented argument.

Making use of the mixing property of entropy, we can rewrite (10) equivalently as the following general decomposition of entropy:

S(ρ) = S(A, ρ) + Σ_l p_l S(P_l ρ P_l / p_l) − I_C(A, ρ).   (11)

(Note that A is any discrete observable in (11).) In order to derive a number of properties of coherence information, we make a deviation into relative entropy theory.
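Relation (10) is directly computable. The following numpy sketch (mine, not from the article) evaluates I_C(A, ρ) = S(Σ_l P_l ρ P_l) − S(ρ) and shows the two extreme cases on a qubit.

```python
import numpy as np

def entropy(rho: np.ndarray) -> float:
    """von Neumann entropy in nats, with the zero convention 0*log(0) = 0."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def coherence_information(projs: list[np.ndarray], rho: np.ndarray) -> float:
    """I_C(A, rho) = S(sum_l P_l rho P_l) - S(rho); note that it depends on A
    only through its eigenprojectors P_l."""
    rho_L = sum(P @ rho @ P for P in projs)
    return entropy(rho_L) - entropy(rho)

projs = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # eigenprojectors of sigma_z

plus = np.full((2, 2), 0.5)                  # |+><+|, maximally incompatible
mixed = np.eye(2) / 2                        # commutes with the observable

print(coherence_information(projs, plus))    # log 2 ~ 0.693
print(coherence_information(projs, mixed))   # 0.0, since [A, rho] = 0
```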
Useful relative-entropy relations
The relative entropy S(ρ||σ) of a state (density operator) ρ with respect to a state σ is by definition

S(ρ||σ) ≡ tr[ρ log(ρ)] − tr[ρ log(σ)]   (12a)

if

supp(ρ) ⊆ supp(σ),   (12b)

or else S(ρ||σ) = +∞ (see p. 16 in [12]). By 'support', denoted by 'supp', is meant the subspace that is the topological closure of the range. If σ is singular and condition (12b) is valid, then the orthocomplement of the support (i.e., the null space) of ρ contains the null space of σ, and both operators reduce in supp(σ). Relation (12a) is valid in this subspace. Both density operators reduce also in the null space of σ. Here the log is not defined, but it comes after zero, and it is generally understood that zero times an undefined quantity is zero. We'll refer to this as the zero convention.
The more familiar concept of (von Neumann) quantum entropy, S(ρ) ≡ −tr[ρ log(ρ)], also requires the zero convention. If the state space is infinite-dimensional then, in a sense, entropy is almost always infinite (cf p. 241 in [11]). In finite-dimensional spaces, entropy is always finite.
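As a small numerical companion to (12a)-(12b) (a sketch of mine, not from the article), the function below evaluates S(ρ||σ) with an explicit support check, returning +∞ when (12b) fails.

```python
import numpy as np

def _tr_rho_logsigma(rho: np.ndarray, sigma: np.ndarray, tol: float = 1e-12) -> float:
    """tr[rho log(sigma)] evaluated on supp(sigma), i.e., with the extended
    logarithm set to zero off the support (the zero convention)."""
    w, v = np.linalg.eigh(sigma)
    log_w = np.where(w > tol, np.log(np.maximum(w, tol)), 0.0)
    log_sigma = (v * log_w) @ v.conj().T
    return float(np.real(np.trace(rho @ log_sigma)))

def relative_entropy(rho: np.ndarray, sigma: np.ndarray, tol: float = 1e-12) -> float:
    """S(rho||sigma) per (12a), or +inf when supp(rho) is not inside supp(sigma)."""
    w_s, v_s = np.linalg.eigh(sigma)
    null_s = v_s[:, w_s <= tol]                       # null space of sigma
    if null_s.size and np.linalg.norm(null_s.conj().T @ rho) > 1e-8:
        return float("inf")                           # condition (12b) violated
    return _tr_rho_logsigma(rho, rho, tol) - _tr_rho_logsigma(rho, sigma, tol)

rho = np.diag([0.5, 0.5, 0.0])
print(relative_entropy(rho, np.diag([0.25, 0.75, 0.0])))   # finite: supports nested
print(relative_entropy(rho, np.diag([1.0, 0.0, 0.0])))     # inf: (12b) fails
```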
There is an equality for entropy that is much used, and we have utilized it: the mixing property concerning an orthogonal state decomposition

σ = Σ_k w_k σ_k,  ∀k ≠ k′: σ_k σ_{k′} = 0   (13)

(cf p. 242 in [11]):

S(σ) = H(w_k) + Σ_k w_k S(σ_k),

with

H(w_k) ≡ −Σ_k w_k log(w_k)

being the Shannon entropy of the probability distribution {w_k : ∀k}.
The first aim of this section is to derive an analogue of the mixing property of entropy. The second aim is to derive two corollaries that we shall need in this paper.
We will find it convenient to make use of an extension log_e of the logarithmic function to the entire real axis: log_e(x) ≡ log(x) if 0 < x, and log_e(x) ≡ 0 otherwise. The following elementary property of the extended logarithm will be utilized.

Lemma 2. If an orthogonal state decomposition (13) is given, then

log_e(σ) = Σ′_k Q_k [log_e(w_k σ_k)] Q_k,   (14)

where Q_k is the projector onto the support of σ_k, and the prime on the sum means that the terms corresponding to w_k = 0 are omitted.
Proof. Spectral forms ∀k, w_k > 0: σ_k = Σ_{l_k} s_{l_k} |l_k⟩⟨l_k| (all s_{l_k} positive) give a spectral form σ = Σ_k Σ_{l_k} w_k s_{l_k} |l_k⟩⟨l_k| of σ, on account of the orthogonality assumed in (13) and the zero convention. Since numerical functions define the corresponding operator functions via spectral forms, one obtains further

log_e(σ) = Σ′_k Σ_{l_k} log_e(w_k s_{l_k}) |l_k⟩⟨l_k| = Σ′_k Q_k [log_e(w_k σ_k)] Q_k.

(In the last step Q_k = Σ_{l_k} |l_k⟩⟨l_k| for w_k > 0 was made use of.) The same is obtained from the RHS when the spectral forms of the σ_k are substituted in it. ✷
Proposition 1. Let condition (12b) be valid for the states ρ and σ, and let an orthogonal state decomposition (13) be given. Then one has

S(ρ||σ) = [S(Σ_k Q_k ρ Q_k) − S(ρ)] + H(p_k||w_k) + Σ′_k p_k S(Q_k ρ Q_k / p_k || σ_k),   (15)

where, for w_k > 0, Q_k projects onto the support of σ_k, and Q_k ≡ 0 if w_k = 0; p_k ≡ tr(ρ Q_k); and

H(p_k||w_k) ≡ Σ_k p_k log(p_k / w_k)   (16)

is the classical discrete counterpart of the quantum relative entropy, valid because (p_k > 0) ⇒ (w_k > 0). One should note that the claimed validity of the classical analogue of (12b) is due to the definitions of p_k and Q_k. Besides, (13) implies that (Σ_k Q_k) projects onto supp(σ). Further, as a consequence of (12b), (Σ_k Q_k)ρ = ρ. Hence, tr(Σ_k Q_k ρ Q_k) = tr[(Σ_k Q_k)ρ] = 1.

We call decomposition (15) the mixing property of relative entropy.

Proof of proposition 1: We define

∀k, p_k > 0: ρ_k ≡ Q_k ρ Q_k / p_k,   (17)

and note that, on account of (12b) and (13),

∀k, p_k > 0: supp(ρ_k) ⊆ supp(σ_k).   (18)
On account of (12b), the standard logarithm can be replaced by the extended one in definition (12a) of relative entropy: S(ρ||σ) = −S(ρ) − tr[ρ log_e(σ)]. Substituting (13) on the RHS, and utilizing (14), the relative entropy S(ρ||σ) becomes

S(ρ||σ) = −S(ρ) − Σ′_k p_k log(w_k) − Σ′_k tr[ρ Q_k [log_e(σ_k)] Q_k].

Adding and subtracting H(p_k), replacing log_e(σ_k) by Q_k [log_e(σ_k)] Q_k, and taking into account (16) and (17), one further obtains

S(ρ||σ) = −S(ρ) + H(p_k) + H(p_k||w_k) − Σ′_k p_k tr[ρ_k log_e(σ_k)].

(The zero convention is valid for the last term because the density operator Q_k ρ Q_k / p_k may not be defined. Note that replacing Σ_k by Σ′_k in (16) does not change the LHS, because only p_k = 0 terms are omitted.) Adding and subtracting the entropies S(ρ_k) in the sum, one further has

S(ρ||σ) = −S(ρ) + [H(p_k) + Σ′_k p_k S(ρ_k)] + H(p_k||w_k) + Σ′_k p_k [−S(ρ_k) − tr(ρ_k log_e(σ_k))].

Utilizing the mixing property of entropy, one can put S(Σ_k p_k ρ_k) instead of [H(p_k) + Σ′_k p_k S(ρ_k)]. Owing to (18), we can replace log_e by the standard logarithm and thus obtain the RHS of (15). ✷
Remark. In a sense, (15) runs parallel to Donald's identity

Σ_k p_k S(ρ_k||σ) = S(ρ||σ) + H(p_k)

when an orthogonal decomposition ρ = Σ_k p_k ρ_k of the first state ρ in relative entropy is given.

For a general decomposition ρ = Σ_k p_k ρ_k of the first state, Donald's identity reads [14], [15] (relation (5) in the latter)

Σ_k p_k S(ρ_k||σ) = S(ρ||σ) + Σ_k p_k S(ρ_k||ρ).

The more special relation in the remark follows from this on account of the relation that generalizes the mixing property of entropy: if ρ = Σ_k p_k ρ_k is any state decomposition, then

S(ρ) = Σ_k p_k S(ρ_k) + Σ_k p_k S(ρ_k||ρ)

is valid (cf Lemma 4 and Remark 1 in [16]).

Now we turn to the derivation of some consequences of proposition 1. Let ρ be a state and A = Σ_i a_i P_i + Σ_j a_j P_j a spectral form of a discrete observable (Hermitian operator) A, where the eigenvalues a_i and a_j are all distinct. The index i enumerates all the detectable eigenvalues, i.e., ∀i: tr(ρ P_i) > 0, and tr[ρ(Σ_i P_i)] = 1.
The simplest quantum measurement of A in ρ changes this state into the Lüders state:

ρ → ρ_L ≡ Σ_i P_i ρ P_i

(cf (3a) and (3c)). Such a measurement is often called "ideal".

Corollary 1. The relative-entropic "distance" from any quantum state to its Lüders state is the difference between the corresponding quantum entropies:

S(ρ || Σ_i P_i ρ P_i) = S(Σ_i P_i ρ P_i) − S(ρ).

Proof. First we prove that

supp(Σ_i P_i ρ P_i) ⊇ supp(ρ),   (22)

i.e., that condition (12b) holds. To this purpose, we write down a decomposition

ρ = Σ_n λ_n |ψ_n⟩⟨ψ_n|   (19a)

of ρ into pure states. One has supp(Σ_i P_i) ⊇ supp(ρ) (equivalent to the certainty of (Σ_i P_i) in ρ, cf [4]), and the decomposition (19a) implies that each |ψ_n⟩ belongs to supp(ρ) (cf Appendix 2(ii)). Hence, |ψ_n⟩ ∈ supp(Σ_i P_i); equivalently, |ψ_n⟩ = (Σ_i P_i)|ψ_n⟩. Therefore, one can write

|ψ_n⟩ = Σ_i (P_i |ψ_n⟩).   (23a)

On the other hand, (19a) implies

Σ_i P_i ρ P_i = Σ_n λ_n Σ_i (P_i |ψ_n⟩)(⟨ψ_n| P_i).   (23b)

As seen from (23b), all vectors (P_i |ψ_n⟩) belong to supp(Σ_i P_i ρ P_i). Hence, so do all |ψ_n⟩ (due to (23a)). Since ρ is the mixture (19a) of the |ψ_n⟩, the latter span supp(ρ) (cf Appendix 2(ii)). Thus, finally, also (22) follows.

In our case σ ≡ Σ_i P_i ρ P_i in (15). We replace k by i. Next, we establish

∀i: Q_i ρ Q_i = P_i ρ P_i.   (24)

Since Q_i is, by definition, the support projector of (P_i ρ P_i), and P_i (P_i ρ P_i) = (P_i ρ P_i), one has P_i Q_i = Q_i (see Appendix 2(i)). One can write P_i ρ P_i = Q_i (P_i ρ P_i) Q_i, from which then (24) follows.
Realizing that p_i ≡ tr(Q_i ρ Q_i) = tr(P_i ρ P_i) ≡ w_i due to (24), one obtains H(p_i||w_i) = 0 and ∀i: S(Q_i ρ Q_i / p_i || P_i ρ P_i / w_i) = 0 in (15) for the case at issue. This completes the proof. ✷

Now we turn to a peculiar further implication of Corollary 1. Let B = Σ_k Σ_{l_k} b_{kl_k} P_{kl_k} be a spectral form of a discrete observable (Hermitian operator) B such that all eigenvalues b_{kl_k} are distinct. Besides, let B be more complete than A or, synonymously, a refinement of the latter. This, by definition, means that

∀i: P_i = Σ_{l_i} P_{il_i}.

Note that all eigenvalues b_{kl_k} of B with indices other than il_i are undetectable in ρ.
Properties of coherence information
To begin with, we notice in (10) that I_C depends on ρ and A, actually only on the eigenprojectors of the latter.

As a consequence of (10), one can also write the definition of I_C in the form of a relative entropy:

I_C(A, ρ) = S(ρ || Σ_l P_l ρ P_l),   (26)

as follows from corollary 1. It was proved long ago [17] that S(Σ_l P_l ρ P_l) > S(ρ) if and only if A and ρ are incompatible, and the two entropies are equal otherwise. Thus, in case of compatibility, [A, ρ] = 0, I_C is zero; otherwise it is positive. This is what we would intuitively expect.
It was proved in [4] (theorem 2 there) that

I_C(A, ρ) = w_inc I_C(A, ρ_inc),  ρ_inc ≡ (Σ^inc_l P_l) ρ (Σ^inc_l P_l) / w_inc,   (27)

where "inc" on the sum denotes summing only over all those values of l the corresponding P_l of which are incompatible with ρ, and w_inc ≡ tr(ρ Σ^inc_l P_l). This corresponds to the intuitive expectation that the quantity I_C should depend only on those eigenprojectors P_l of A that do not commute with ρ, and not at all on those that do.
We obtain (27) as a special case of a much more general result below (cf the theorem and propositions 2 and 3).
We shall need another known concept. For the sake of precision and clarity, we define it.
Definition 2. One says that a discrete observable Ā = Σ_m ā_m P̄_m (spectral form in terms of distinct eigenvalues ā_m) is coarser than, or a coarsening of, A = Σ_l a_l P_l if there is a partitioning Π of the set {l : ∀l} of all index values of the latter,

Π: {l : ∀l} = ∪_m C_m,

such that ∀m: P̄_m = Σ_{l∈C_m} P_l (the C_m are classes of values of the index l, and the union is over the disjoint classes). One also says that A is finer than, or a refinement of, Ā.

Theorem. Let Ā be any coarsening of A (cf definition 2). Then

I_C(A, ρ) = I_C(Ā, ρ) + Σ_m p_m I_C(A, ρ_m),   (28)

where ∀m, p_m > 0: ρ_m ≡ P̄_m ρ P̄_m / p_m, and ∀m: p_m ≡ tr(ρ P̄_m). (If p_m = 0 then, by the zero convention, the corresponding I_C in (28) need not be defined; the product is by definition zero.)

Before we prove the theorem, we apply corollary 2 to our case. Under the assumptions of the theorem, one has

S(ρ || Σ_l P_l ρ P_l) = S(ρ || Σ_m P̄_m ρ P̄_m) + S(Σ_m P̄_m ρ P̄_m || Σ_l P_l ρ P_l).   (29)

Proof of the Theorem. On account of (26), (29) takes the form

I_C(A, ρ) = I_C(Ā, ρ) + S(Σ_m P̄_m ρ P̄_m || Σ_l P_l ρ P_l).   (30)

Utilizing (10) for the second term on the RHS, the latter becomes S(Σ_l P_l ρ P_l) − S(Σ_m P̄_m ρ P̄_m). Making use of the mixing property of entropy in both these terms, and cancelling out H(p_m) (cf (7b) mutatis mutandis), this difference, further, becomes

Σ_m p_m S((Σ_{l∈C_m} P_l ρ P_l) / p_m) − Σ_m p_m S(P̄_m ρ P̄_m / p_m).

Its substitution in (30), with the help of (10) (and definition 2), then gives the claimed relation (28). (Naturally, one must be aware of the fact that Ā is a coarsening of A; hence ∀m: Σ_{l∈C_m} P_l ρ_m P_l = (Σ_{l∈C_m} P_l ρ P_l) / p_m.) ✷

If Ā is any coarsening of A, then the index values m of the former replace classes C_m of index values l of the latter. Hence, coherence in Ā, as a cooperative role of index values, must be poorer than in A. Therefore, one would intuitively expect that I_C(Ā, ρ) must not be larger than I_C(A, ρ). The theorem confirms this, and tells more: it gives the expression by which I_C(A, ρ) exceeds I_C(Ā, ρ). One wonders what the intuitive meaning of this is.
Discussion of the theorem. Let us think of ρ as describing a laboratory ensemble, and let us imagine that an ideal measurement of Ā is performed on each quantum system in the ensemble. The ensemble ρ is then replaced by the mixture Σ_m p_m (P̄_m ρ P̄_m / p_m) of subensembles (P̄_m ρ P̄_m / p_m). One can think of the measurement of the more refined observable A as taking place in two steps: the first is the mentioned measurement of the coarser observable Ā, and the second is a continuation of the measurement of A in each subensemble (P̄_m ρ P̄_m / p_m). Let us assume additivity of I_C in two-step measurement.
Further, let us bear in mind that, though I_C is meant to be a property of each individual member of the ensemble ρ, it is statistical, i.e., it is given in terms of the ensemble. Finally, in the second step we have an ensemble of subensembles (a superensemble). Since our system is anywhere in the entire ensemble Σ_m p_m (P̄_m ρ P̄_m / p_m) of the second step, one must average over the superensemble with the statistical weights p_m of its subensemble-members (P̄_m ρ P̄_m / p_m).
If m′ ≠ m, then the part P̄_{m′} A of A = Σ_{m″} P̄_{m″} A is evidently undetectable in the subensemble ρ_m. Hence, only P̄_m A is relevant from the entire A, i.e., I_C(A, ρ) reduces to I_C(P̄_m A, ρ_m) there.
In this way one can understand relation (28). What have we learnt from this? It is that I_C is additive and statistical. This conclusion is in keeping with the neighboring quantity S(A, ρ). Namely, one can easily derive a relation similar to (28) for it:

S(A, ρ) = S(Ā, ρ) + Σ_m p_m S(A, ρ_m).

That I_C and S(A, ρ) behave equally in an additive and statistical way is no surprise, since they are terms in the same general decomposition (11) of the entropy S(ρ) of the state ρ.
The theorem is a substantially stronger form of a previous result (theorem 3 in [4]), in which I_C(A, ρ) ≥ I_C(Ā, ρ) was established with necessary and sufficient conditions for equality, which are obvious in the theorem. (I_C was denoted by E_C in previous work, cf my comment following proposition 5 below.) The theorem has the following immediate consequences.

Proposition 2. If the coarsening Ā defined in definition 2 is compatible with ρ, then (28) reduces to

I_C(A, ρ) = Σ_m p_m I_C(A, ρ_m).   (31)

Proposition 3. Let us define a coarsening Π (cf definition 2) that partitions {l : ∀l} into at most three classes: C_inc comprising all index values l for which a_l is detectable (i. e., of positive probability) and P_l is incompatible with ρ; C_comp consisting of all l for which a_l is detectable and P_l is compatible with ρ; and, finally, C_und, which is made up of all l for which a_l is undetectable. The coarsening thus defined is compatible with ρ, and (31) reduces to (27).
Proof. In the coarsening Π of proposition 3 the index m takes on three 'values': 'inc', 'comp', and 'und'. It is easily seen that the coarser observable Ā thus defined is compatible with ρ. Hence, (31) applies. Further, the second and third terms (those for 'comp' and 'und') are zero. In this way, (27) ensues.
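The additivity relation (28), together with its special cases (31) and (27), is easy to check numerically from the definition I_C(A, ρ) = S(Σ_l P_l ρ P_l) − S(ρ). Below is a minimal sketch (not from the paper; the dimension, the random state, and the two-class partition are arbitrary illustrative choices) verifying (28) for a complete observable on a four-dimensional space:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) = -tr(rho ln rho) (natural log)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

def dephase(rho, projectors):
    """Sum_l P_l rho P_l for a family of orthogonal projectors."""
    return sum(P @ rho @ P for P in projectors)

def coh_info(rho, projectors):
    """Coherence information I_C(A, rho) = S(sum_l P_l rho P_l) - S(rho)."""
    return vn_entropy(dephase(rho, projectors)) - vn_entropy(rho)

rng = np.random.default_rng(0)
d = 4
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                       # random full-rank density matrix

# A: complete observable with rank-one eigenprojectors onto the standard basis.
P = [np.outer(np.eye(d)[l], np.eye(d)[l]) for l in range(d)]
# A-bar: coarsening with classes C_1 = {0, 1} and C_2 = {2, 3}.
Pbar = [P[0] + P[1], P[2] + P[3]]

lhs = coh_info(rho, P)
rhs = coh_info(rho, Pbar)
for Pm in Pbar:
    p_m = np.trace(Pm @ rho).real
    rho_m = Pm @ rho @ Pm / p_m
    rhs += p_m * coh_info(rho_m, P)
print(lhs, rhs)                                  # the two values agree
```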
Proposition 4. Coherence information I_C is invariant under unitary transformations: for every unitary operator U, I_C(UAU⁻¹, UρU⁻¹) = I_C(A, ρ).

Proof. Relative entropy is known to be unitarily invariant. On account of (26), so is I_C. ✷

This is as it should be, because I_C should not depend on the basis in the state space: UAU⁻¹ and UρU⁻¹ can be understood as A and ρ, respectively, viewed in another basis.
Proposition 5. Coherence information I_C is convex.

Proof. This is an immediate consequence of the known convexity of relative entropy (cf (26)) under joint mixing of the two states in it.

On account of convexity we know that I_C is an information entity, and not an entropy one (or else it would be concave). In previous work [1], [4], [2] the same quantity (the RHS of (10)) was erroneously denoted by E_C(A, ρ) and treated as an entropy quantity. But this does not imply that any of the applications of E_C(A, ρ) was erroneous. All one has to do is to replace this symbol by I_C(A, ρ) and keep in mind that one is dealing with an information quantity.
Conclusion
Perhaps it is of interest to comment upon the more standard uses of the term "coherence" in the literature.
One encounters the basic use of the word "coherence" in the properties of light waves. One distinguishes two types of coherence there: (i) Temporal coherence, which is a measure of the correlation between the phases of a light wave at different points along the direction of propagation, and (ii) spatial coherence, which is a measure of the correlation between the phases of a light wave at different points transverse to the direction of propagation. (The fascinating phenomenon of holography requires a large measure of both temporal and spatial coherence of light.) Quantum "coherence" refers also to large numbers of particles that cooperate collectively in a single quantum state. The best known examples are superfluidity, superconductivity, and laser light, all macroscopic phenomena. In the last example different parts of the laser beam are related to each other in phase, which can lead to interference effects. "Coherence" is often related to different kinds of correlations, see, e. g., [18].
In all the mentioned examples "coherence" refers to an absolute property of the quantum state of the system, in contrast with the use of the term in this article, which expresses a relative property: a relation between observable and state. As mentioned, the kind of quantum coherence studied in this article can be more fully called "eigenvalue coherence of an observable in relation to a state", in view of the cooperative role of the eigenvalues (or rather their quantum numbers, because the values of the eigenvalues play no role), as seen in (4).
In the literature one often finds the claim that quantum pure states are coherent. From the analytical point of view of this article one can say that a pure state |ψ⟩ is not coherent with respect to any observable for which |ψ⟩⟨ψ| is an eigenprojector. But it is coherent with respect to all other observables.
On generality of the results
A question may linger on to the end of this study: What if the observable is not a discrete one? Can one still speak of eigenvalue coherence in relation to a given state ρ?
It seems to me that the answer is that one should write down the following partial spectral form of a general observable A′:

A′ = Σ_l a_l P_l + P^⊥ A′ P^⊥,

where the summation goes over all eigenvalues of A′, and P ≡ Σ_l P_l, P^⊥ ≡ I − P. One should take the discrete coarsening A of A′:

A ≡ Σ_l a_l P_l + a P^⊥,

where the eigenvalue a is arbitrary but distinct from all {a_l : ∀l}. Then the expounded eigenvalue coherence theory should be applied to A, and it should be valid for A′ (as the best we can do for the latter). In a preceding article [4] the case when P^⊥ ≠ 0 with the eigenvalue a undetectable was studied.
One has eigenvalue coherence of a general observable A′ in relation to a state ρ if either A′ has at least two eigenvalues, or A′ has at least one eigenvalue and P^⊥ ≠ 0.
Another question that may linger on is whether the state ρ that was used in this paper is really general. If ρ has an infinite-dimensional range and A has infinitely many eigenvalues, it may happen that there are infinitely many detectable ones. The expounded theory covers also this case.
Summing up
In an attempt to understand the essential features of two-slit interference (see lemma 1, followed by its application to two-slit interference in subsection 1.2), a general coherence theory was developed based on the assumption that 'coherence' equals 'incompatibility', [A, ρ] ≠ 0, between observable and state. Since this relation means that ρ is incompatible with at least one eigenevent (eigenprojector) P_l of A, and this property is independent of the eigenvalues, it was argued that the entire family of observables with one and the same decomposition of the identity Σ_l P_l = I (the latter is called the "closure relation" if A is complete) should have the same amount of incompatibility. This discarded the Wigner-Yanase-Dyson family of skew informations (6). Further, it was argued that the necessarily nonnegative quantity S(A_c, ρ) − S(ρ) was a natural measure of incompatibility between a complete observable A_c and the state ρ satisfying the stated claim. Finally, interpolating between the case of a complete and that of a compatible observable (see (8), (9) and (10)), the general expression (10) was obtained.
Thus, a natural quantum measure of how much coherence (equivalently, incompatibility) there is when a discrete observable A = Σ_l a_l P_l and a state ρ are given was derived along the expounded argument. It was called coherence or incompatibility information (denoted by I_C(A, ρ), or shortly I_C) in section 2.
A deviation into a general relative-entropy investigation was made in section 3. What was called 'the mixing property of relative entropy' (paralleling that of entropy) was derived, and so were two corollaries.
The relative-entropy results were utilized to express coherence information I_C(A, ρ) in the form of a relative entropy (cf (26)) in section 4. The connection between the coherence information I_C(Ā, ρ) of any coarsening Ā (cf definition 2) of an observable A and I_C(A, ρ) was obtained in the theorem. Its intuitive meaning was discussed. It was concluded that I_C is additive in two-step measurement and statistical.
The corresponding relation took a much simpler form in case Ā was compatible with ρ (cf proposition 2). In a special case of this a result from previous work was recognized (cf proposition 3 and (27)). Coherence information was shown to be unitarily invariant (proposition 4) and convex (proposition 5).
In previous work [1], [4], [2] the coherence information I_C was successfully utilized in analyzing bipartite quantum correlations. The last of these filled in an information-theoretical gap noted in a preceding investigation of the measurement process [3].
Since a number of new properties of I C have now been obtained, even more fruitful applications can be expected.
Pipeline monitoring technology in Nord Stream 2
This article proposes to use ROV technology to assess the stability of an underwater gas transmission system. The assessment methodology should be based on the methods of magnetoscopy and visual monitoring using differential magnetometers. The monitoring technology was developed and tested at JSC «YUZHMORGEOLOGIYA» during the inspection and technical supervision of the Dzhubga-Sochi subsea pipeline.
Introduction
The stability of main pipelines is maintained by a set of organizational and technical measures aimed at ensuring the required operating parameters; it is controlled through the geometry of the pipeline, the appearance of dangerous defects, metal loss, hydroabrasive wear, and stress corrosion cracking.
When calculating strength and stability, the most unfavorable combination of functional, natural, construction and accidental loads that can occur simultaneously must be taken into account [1].
The organization of monitoring of a subsea pipeline faces the problem of the inability to access the subsea part of the pipeline from the inside. Onshore, control is provided by passing in-line shells; for this, special chambers are provided for placing the shell inside the pipeline. Offshore, control is possible only from the outside, using remotely controlled underwater vehicles deployed from specially equipped ships [13,14].
Integrating the available methods of monitoring the pipe body from the inside with external monitoring by remotely controlled vehicles makes it possible to predict hazardous and emergency situations associated with the emergence and development of dangerous defects in offshore subsea pipelines [2].
Materials and methods
According to the UK standard "Smart pigs and defect assessment codes: completing the circle", the types of defects available for investigation by in-line inspection shells are given in Table 1 [3]. Pitting corrosion damage remains understudied; the mechanism of pitting formation is associated with the physics and structure of the metal. The most informative diagnostic method is inspection using in-line inspection shells along a pipeline route [4,15].
Inspection projectiles detect and measure defects in the pipeline. The received information contains [5]: metal defects and metal losses; the location of a defect; and the defect geometry in terms of depth, length and width. The typical sizing confidence is a defect depth of ±15% of wall thickness (wt), 80% of the time, and the achievable accuracy is determined by the frequency of checks, as shown in Figure 1 [3,6].
The theoretical aspects of the use of inspection shells are well studied: the pipe body is magnetized, and any change in the pipe cross-sectional area caused by the presence of defects leads to magnetic flux losses, the level of which can be used to conclude that a defect is present. Taking into account the projectile velocity parameter and scanning by a system of orthogonal sensors, it is possible to construct a picture of the distribution of defects in the pipe body; a schematic version of this mapping is sketched below.
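As a rough illustration of this reconstruction step, the sketch below (not from the article; the array shapes, projectile velocity, and flux-loss threshold are invented for illustration) converts readings from a ring of orthogonal MFL sensors, together with the known projectile velocity, into axial positions and circumferential sectors of flagged defects:

```python
import numpy as np

def defect_map(mfl_signal, velocity, dt, threshold):
    """Map magnetic-flux-leakage samples to defect positions along the pipe.

    mfl_signal : (n_samples, n_sensors) array of flux-loss readings
    velocity   : projectile speed along the pipe, m/s (assumed constant)
    dt         : sampling interval, s
    threshold  : flux-loss level above which a reading is flagged
    """
    n_samples, _ = mfl_signal.shape
    axial_pos = velocity * dt * np.arange(n_samples)   # metres along the pipe
    flagged = np.argwhere(mfl_signal > threshold)      # (sample, sensor) pairs
    # Each flagged pair yields an axial position and a circumferential sector.
    return [(axial_pos[i], j) for i, j in flagged]

# Synthetic demo: 1000 samples, 16 circumferential sensors, one simulated defect.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 0.05, size=(1000, 16))
signal[420:425, 3] += 1.0                              # localized flux loss
hits = defect_map(signal, velocity=2.0, dt=0.01, threshold=0.5)
print(hits[:3])   # axial position (m) and sensor index of the first flags
```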
Monitoring the growth of defect depth indicates the remaining operational margin of underwater pipelines in terms of the working pressure in the pipe and, accordingly, the volume of gas pumped, and hence the effectiveness of the project [7].
The use of inspection shells is possible only in areas with special chambers for launching and receiving the in-line inspection tools. Criteria "1", "6" and "7" in Table 1 allow the use of unmanned underwater vehicles, which in practice control the operational parameters along the outer wall of the pipelines. In this case, an accurate description of the processes requires mathematical models that accurately describe the state of the pipelines. The main observed signs include the stability of the pipeline shape, the presence of pitting damage along the outer wall, and the presence of bulges and concavities along the pipe body caused by stretching and compression along the longitudinal axis of the pipeline [8]. (Figure 1 shows the actual defect depth (% wt) determined by in-line inspection shells of the "Magnescan" type [3].) Determination of defects such as ulcers and various cavities is complicated by the complex defect shape, as shown in Figure 2. The defects can then be divided into two sets: half of the defects are placed in a "check" set, and the other half in a "training" set. Defects assigned to the "training" set expand the base of measurements; as a rule, the measurements include the signal amplitude, width and length. From the resulting base, a model is built for minimizing errors and forecasting the defect development (Figure 3); a toy version of this procedure is sketched below.
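A minimal sketch of that training/check procedure (synthetic data; the sizing-error width and the linear calibration are illustrative assumptions, not the article's model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic (actual, reported) defect depths as a fraction of wall thickness;
# sigma chosen so roughly 80% of sizing errors fall within +/-15% wt.
actual = rng.uniform(0.05, 0.6, size=200)
reported = actual + rng.normal(0.0, 0.12, size=200)

# Split the defect population in half: a "training" set and a "check" set.
idx = rng.permutation(actual.size)
train, check = idx[:100], idx[100:]

# Fit a linear correction reported -> actual on the training set ...
a, b = np.polyfit(reported[train], actual[train], deg=1)

# ... and quantify the residual sizing error on the independent check set.
residual = actual[check] - (a * reported[check] + b)
print(f"corrected sizing error (std): {100 * residual.std():.1f} % wt")
```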
Figure 3. Refined mathematical estimates of the defect parameters (reported versus actual defect depth, as a fraction of wall thickness) [3].

Further, using analytical methods, we measure the effect of a defect on the stability of the system. The calculations are carried out by the method of two curves, developed at the Battelle Institute:

σ_f = σ_0 (1 − d/t) / (1 − d/(M t)),

where σ_f is the fracture stress; σ_0 is the flow stress of the material (the stress of plastic deformation); d is the depth of the defect; t is the pipe wall thickness; and M is an empirical bulging factor accounting for the increase in stress at the ends of a defect (outward radial deflection along a defect initiating fracture). The flow stress is usually a function of the yield strength (Y) and/or the ultimate strength (T) [10].
σ_0 = (YS + TS)/2,

where YS is the yield strength and TS is the ultimate strength.
M = [1 + 0.6275 (2C)²/(2Rt) − 0.003375 ((2C)²/(2Rt))²]^{1/2},

where 2C is the length of the defect and R is the radius of the pipe. The forecast of the defect development from one pipeline inspection to the next is based on the analysis of two curves, comparing the failure pressure calculated during tests with the curve of the maximum allowable operating pressure (MAOP).
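For illustration, the short sketch below evaluates these relations numerically, assuming the commonly quoted NG-18 flow-stress and Folias-factor forms reproduced above; the steel grade and defect geometry are invented example values, not Nord Stream 2 design data:

```python
import numpy as np

def folias_factor(two_c, radius, t):
    """Folias bulging factor M for a defect of length 2C (common NG-18 form)."""
    x = two_c**2 / (2.0 * radius * t)
    return np.sqrt(1.0 + 0.6275 * x - 0.003375 * x**2)

def fracture_stress(flow_stress, d, t, M):
    """NG-18 surface-flaw relation sigma_f = sigma_0 (1 - d/t)/(1 - d/(M t))."""
    return flow_stress * (1.0 - d / t) / (1.0 - d / (M * t))

YS, TS = 485e6, 570e6        # yield and ultimate strength, Pa (X70-class steel)
sigma0 = 0.5 * (YS + TS)     # flow stress taken as the YS/TS average
R, t = 0.6, 0.030            # pipe radius and wall thickness, m
d, two_c = 0.012, 0.200      # defect depth and axial length, m

M = folias_factor(two_c, R, t)
sf = fracture_stress(sigma0, d, t, M)
p_fail = sf * t / R          # thin-wall hoop-stress conversion to pressure
print(f"M = {M:.2f}, predicted failure pressure ~ {p_fail / 1e6:.1f} MPa")
```

Comparing such a predicted failure pressure against the MAOP curve is precisely the two-curve comparison described above.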
The determination of defects and the stability forecast are based on the principles of mathematical modeling, with the calculation of correction coefficients according to the NG-18 equations of the Battelle Institute.
The methods for an approximate assessment of the stability of underwater pipelines can be based on the use of remotely controlled underwater vehicles.
In the Russian Federation, research on underwater pipeline systems using remotely controlled unmanned vehicles (RCUVs) is a topical direction [10].
Results
The standard equipment of an RCUV is shown in Table 3. Unlike piston shells, RCUVs involve high costs associated with engaging specialists trained to operate underwater complexes, engaging a carrier vessel, and securing navigation conditions favorable for the use of ships.
The view from the video surveillance systems and diagnostic windows of the ROV is shown in Figures 4 and 5. Control of the parameters of an offshore subsea pipeline is ensured by remotely operated unmanned underwater vehicles when performing verification calculations of steel subsea pipelines for stability (collapse) under the influence of hydrostatic pressure and corrosion damage. The stability parameters are determined by calculating the ovality coefficient according to criterion "7" in Table 1 (Rules for the Classification and Construction of Submarine Pipelines, Part I of the Register of Shipping of the Russian Federation).
Discussion
The use of in-line inspection projectiles for inspecting underwater gas pipelines is possible only in sections with special inspection chambers that allow the projectile to pass through the section.
In all other cases, it is necessary to use remotely controlled underwater vehicles. A feature of the wear of the route sections is its localization at the ends of the pipeline. Therefore, the greatest loss of metal is observed near the turbulent sections of the route, where there are irregularities that are not covered with a special coating.
These areas usually include shut-off valves, control valves, and chambers for the passage of in-line inspection shells. Control of the route against the stability quality criteria, in particular against collapse of a part of the pipeline due to corrosion damage, is ensured by the use of remotely controlled underwater vehicles.
It is evident that the use of VIS and RCUVs allows the environmental safety issues of the Nord Stream 2 project to be addressed effectively.
Conclusions
Efficient operation of an offshore subsea pipeline is possible if the operating (maximum allowable) pressure in the pipeline is maintained. Maximum gas throughput is then provided at practically constant gas transportation costs. The maximum pumping rate is achieved by continuous technological and technical control of the pipeline route, with recording of the temperature of the pumped product. The use of ROV technology, together with the available magnetoscopy control, makes it possible to maintain the desired optimum performance of the Nord Stream 2 offshore main underwater gas pipeline.
Dynamic Onset of Feynman Relation in the Phonon Regime
The Feynman relation, a much celebrated condensed matter physics gemstone for more than 70 years, predicts that the density excitation spectrum and structure factor of a condensed Bosonic system in the phonon regime drop linearly and continuously to zero. Until now, this widely accepted monotonic drop of the excitation energy as a function of reduced quasi-momentum has never been challenged in a spin-preserving process. We show rigorously that in a light-matter wave-mixing process in a Bosonic quantum gas, an optical-dipole potential arising from the internally-generated field can profoundly alter the Feynman relation and result in a new dynamic relation that exhibits an astonishing non-Feynman-like onset and cut-off in the excitation spectrum of the ground state energy in spin-preserving processes. This is the first time that a nonlinear optical process is shown to actively and significantly alter the density excitation response of a quantum gas. Indeed, this dynamic relation with a non-Feynman onset and cut-off has no correspondence in either the nonlinear optics of a normal gas or a phonon-based condensed matter Bogoliubov theory.
The Feynman relation 1, originally derived to describe the density excitation spectrum of superfluid ⁴He at T = 0, provides a very fundamental understanding of the collective response of an ultra-cold gaseous or solid-state system in the small phonon regime without spin changes. The most celebrated predictions of this important relation are the linear dependence of the excitation spectrum of the ground state on the quasi-momentum transfer when the external interaction is neglected, and the corresponding behavior of the system structure factor. Indeed, such monotonic behavior, approaching zero excitation energy as the quasi-momentum transfer is reduced, has been widely accepted in solid-state physics 2.
Nonlinear optics 3, a completely unrelated field of study, investigates a wide range of light-matter interactions, from sub-atomic particles and condensed matter physics 4-6 to astrophysical phenomena 3,7. Although widely used in many fields of physical science, nonlinear optics usually serves only as an indispensable probe (especially when the light intensity is not very high) rather than a tool to actively and dynamically alter the fundamental properties of a material under investigation, except for the intensity-dependent effects induced by ultra-high-power ultra-short-pulse lasers 3,8. The discovery of gaseous phase Bose-Einstein condensates 1, now referred to as bosonic quantum gases 9, has significantly changed our understanding of the nonlinear optics of light-matter interactions, even at very weak field strengths. Surprisingly, the nonlinear optical response of quantum gases can be fundamentally different from that of normal gases 10. Indeed, many effects and phenomena well-known to nonlinear optics in normal gases are now subject to significant modification and often require a completely different interpretation. Moreover, many nonlinear wave-mixing processes in normal gases with well-understood physics are now found to have no correspondence in quantum gases.
Here we show how nonlinear optics of a weak Sum-Frequency-Generation (SFG) process can profoundly impact the collective response of a bosonic quantum gas in a spin-preserving process. We show that even a weak light-matter wave-mixing process in a quantum gas can significantly alter the well-known single-spin Feynman relation in the phonon regime, resulting in a dynamic non-Feynman onset and cut-off in the ground state excitation spectrum. This is a profoundly fundamental change because (1) never in the history of condensed matter physics has the single-spin Feynman relation in the phonon regime been challenged; and (2) never before has a nonlinear optical process been shown to have such a profound impact on both the condensed-matter collective response of the system and the physics of the light-field generation process. These dynamic effects may open many possibilities for novel nonlinear optical processes in quantum gases.
Model.
We begin by considering an elongated Bose condensate with its long axis aligned with the z-axis (Fig. 1).
We excite the condensate with a pump laser field E_L with wave vector k_L that is linearly polarized along the x-axis and propagates along the z-axis. Because of the allowed dipole coupling between states |1⟩ and |2⟩, an SFG field E_M with wave vector k_M and frequency ω_M is generated from electronic state |2⟩ (Fig. 1a).
The Gross-Pitaevskii equation 11,12 describing the evolution of the atomic mean-field wave function in the presence of the pump field is given by

iħ ∂Ψ(r, t)/∂t = [Ĥ_0 + g|Ψ(r, t)|²] Ψ(r, t) + U_D(r, t) Ψ(r, t).   (1)

The critical element that distinguishes the Hamiltonian in Eq. (1) from the Hamiltonian describing light-matter multi-wave mixing in a quantum gas 10 is the dipole potential energy term U_D on the right side of Eq. (1). This term arises from the internally-generated SFG field but has been neglected in all light-quantum gas studies reported to date. However, we show here that this term is a vitally important element in the nonlinear optical properties of light-matter interactions in the phonon regime, the regime where the well-known Feynman relation dominates. It is important to emphasize that U_D(r, t) is a dynamically changing quantity depending on the generation and coherent propagation of the wave-mixing field Ω_M(r, t), and hence it cannot be considered as a static trap potential in Ĥ_0. In fact, it can be shown mathematically that any quasi-static external trapping mechanism, magnetic or optical, can be removed from the Maxwell equation for the SFG field by a phase transformation and therefore has no effect on the Feynman relation in the phonon regime.
In the single-spin Feynman phonon response regime no trap potential exists 13,14, and the excitation of the system is described by a small quasi-momentum transfer q usually arising from thermal agitations. This trap-free Hamiltonian corresponds to an atomic Bose-Einstein condensate system where the trapping potential is fully turned off. This avoids the initial mean-field reaction that completely masks the small phonon regime in which the Feynman relation applies. This is exactly what has been done experimentally in measurements of the structure factor of a Bose condensate 13,14. It is then immediately clear that, with the external trap potential removed, an additional small, dynamic and yet negative excitation energy can profoundly alter the energy spectrum and response of the system. This is achieved by an optical wave-mixing process with a negative detuning, which results in a dynamic internally-generated field and a non-adiabatic dipole potential U_D(r, t) < 0 that can cancel the phonon energy in the excitation spectrum and thereby drastically change the Feynman relation.

Maxwell-Bogoliubov theoretical framework. With the above general argument we begin our calculation using the Maxwell-Bogoliubov theoretical framework for quantum gases 10. We generalize the seminal study of Raman wave mixing and scattering by Bloembergen and Shen 15,16 to encompass both the atomic center-of-mass (CM) motion and the density excitations required for a quantum gas.
We assume that the Bose-Einstein condensate wave function of a single species is given by the ground-state wave function plus small Bogoliubov excitations. Here, Ψ_0(r) is the ground state condensate wave function in the absence of any external light fields, and μ is the chemical potential. In addition, q_m and ω_{q_m} = ħq_m²/2M are the quasi-momentum transfer and the frequency of the elementary excitation induced by the light-wave mixing and scattering process, respectively, with m being the Bogoliubov excitation mode index. For mathematical simplicity and without loss of generality, we only consider the lowest Bogoliubov mode by neglecting the mode index m. Multiple Bogoliubov modes can be solved similarly and analytically; the effect is just a slight broadening of the width of the SFG field.
The generalized optical-matter wave vector and energy mismatches ∆K = q − ∆k and ∆Ω = ω_q − ∆ω encompass both optical-wave and fundamental excitations. We have also introduced a phenomenological motional-state resonance line width γ which characterizes the damping of the elementary excitation 17.
Under the slowly varying envelope approximation, the Maxwell equation for the wave-mixing field E_M propagating along the z-axis (forward direction) can be written as Eq. (4) 10,18. Mathematically, Eq. (3) can be formally integrated and inserted into the right side of Eq. (4), from which the propagation properties of the wave-mixing field can be numerically evaluated. For mathematical simplicity, and for the purpose of demonstrating the key underlying physics, we seek, without loss of generality, a first-order solution of Eq. (3) that is adiabatic with respect to the optical response but non-adiabatic with respect to the atomic center-of-mass motion. The non-adiabaticity with respect to the atomic CM motion reflects the fact that U_D cannot be treated as a static trap potential, as discussed before. We emphasize, however, that we have solved Eqs. (3) and (4) numerically without any approximation and obtained the same results. With the above approximations we obtain from Eq. (3) the first-order solution, Eq. (5).
Using Eq. (5) to construct the polarization source term for the SFG field, we obtain the Maxwell-Bogoliubov equation (6), in which the Bogoliubov fundamental excitation spectrum ω_B(q; U_D) and the quantum gas structure factor S(q; U_D) are given by Eq. (7). In deriving the above results we have defined U_D = |Ω_M|²/δ_2, enforced total optical-matter wave phase matching in the forward direction, and neglected far-off-resonance contributions.
Under the lowest-order approximation in the phonon regime, the above expressions for the excitation spectrum and the structure factor of the quantum gas become

ω_B(q; U_D) ≈ [ω_q (ω_q + 2(μ + U_D)/ħ)]^{1/2}, and S(q; U_D) ≈ ω_q / ω_B(q; U_D).

Clearly, when U_D is neglected, Eq. (7) reduces to the well-known Feynman variational approximation 1 for the density excitation spectrum of superfluid ⁴He at T = 0. We emphasize that Eqs. (6) and (7) are obtained within the standard nonlinear optics formalism 3, and are therefore completely unrelated in any way to the local density approximation treatment.
Bogoliubov excitation spectrum and condensate structure factor. Equation (7) predicts a novel and surprising feature never before seen in nonlinear optics. With a red-detuned pump δ_2 < 0 and U_D = |Ω_M|²/δ_2 < 0, the Bogoliubov excitation spectrum for elementary excitations in the wave-mixing process is dynamically red-shifted, resulting in a dynamic onset and cut-off in the well-known "static" Feynman relation (Fig. 2a). Correspondingly, the frequency of the generated field ω_M will be dynamically blue-shifted from its original frequency (since the sum of the generated-field frequency and the excitation frequency is fixed by energy conservation). Accompanying this dynamic change in the Feynman relation is an abrupt drop in the quantum gas structure factor in the small quasi-momentum transfer regime (Fig. 2b), resulting in strong suppression of the forward light-wave-mixing and coherent propagation growth process. Indeed, this forward suppression is much more severe and abrupt than predicted by the usual "static" single-spin Feynman relation. The range q < q_c (here q_c is the critical quasi-momentum at which the Bogoliubov dispersion becomes imaginary) forms the region in which wave propagation is forbidden. We note that such cut-offs in the excitation spectrum have been predicted for a Spin-Orbit Coupled (SOC) spinor Bose condensate 19,20, where spin-flip interactions introduce unstable branches which result in such a forbidden regime. In our case, however, the multi-optical wave-mixing process preserves the single-spin state, since the cut-off is introduced by a nonlinear optical process that is spin preserving. The dynamic feature associated with the wave generation and propagation in such a spin-preserving process has no correspondence with the usual SOC processes, which are, in general, instantaneous. Figure 3 displays contour plots of the Bogoliubov excitation spectrum ω_B(q; U_D) and the quantum gas structure factor S(q; U_D) as functions of the optical-dipole potential induced by the generated field and the quasi-momentum transfer in the phonon regime, using Eq. (7). In this small phonon regime, where the static Feynman relation dominates, the generated field propagates co-linearly with the pump laser. The modified Feynman relation results in a much stronger suppression of the forward wave-mixing gain.
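A minimal numerical sketch of this onset, assuming our reading of the phonon-regime dispersion in Eq. (7), ω_B(q; U_D)² = ω_q(ω_q + 2(μ + U_D)/ħ), in dimensionless units with ħ = M = 1; all parameter values are illustrative:

```python
import numpy as np

mu = 2 * np.pi * 600.0                     # chemical potential as angular frequency

def bogoliubov(q, U_D):
    """Shifted dispersion w_B(q)^2 = w_q (w_q + 2 (mu + U_D)), w_q = q^2 / 2.
    Negative w_B^2 marks the forbidden (imaginary) branch."""
    w_q = 0.5 * q**2
    w2 = w_q * (w_q + 2.0 * (mu + U_D))
    return np.where(w2 > 0.0, np.sqrt(np.abs(w2)), np.nan)

q = np.linspace(1e-3, 200.0, 4000)
for U_D in (0.0, -1.2 * mu):               # U_D < -mu opens a forbidden region
    wB = bogoliubov(q, U_D)
    i0 = np.argmax(np.isfinite(wB))        # first q with a real propagating branch
    print(f"U_D/mu = {U_D / mu:+.1f}: propagation onset at q_c ~ {q[i0]:.1f}")
```

With U_D = 0 the spectrum is gapless and Feynman-like from q = 0; with U_D < −μ a finite onset q_c appears, below which propagation is forbidden.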
Dynamical evolution of the forward-generated field. The consequences of the Bogoliubov frequency red-shift can be further investigated and verified numerically by integrating the Maxwell-Bogoliubov equation (6) for the SFG field under the condition of total optical-matter wave phase-matching, i.e., ∆k − q = 0 and ∆ω − ω_B = 0. In Fig. 4 we show the transverse distribution of the intensity |Ω_M|² obtained by direct numerical integration of Eq. (6) using Eq. (7). The initial condensate wave function is assumed to have a transverse Thomas-Fermi distribution, i.e., Ψ_0 = [n_0 (1 − r²/r_0²)]^{1/2}, where n_0 ≈ 10^11 cm⁻³ is the peak condensate density and r_0 is the transverse Thomas-Fermi radius. The initial condition for the generated field is assumed to be Ω_M(0)/2π = 3 kHz (corresponding to one initial photon with a pulse duration of 200 μs traveling along the long axis of a condensate having a diameter of 10 μm). We take δ_2/2π = −1 GHz, μ = 600 Hz, k_L = 8.06 μm⁻¹, κ = 10⁷ n_0 (cm · s)⁻¹, γ/2π = 10 kHz, and L = 0.02 cm. The presence of the dynamic non-Feynman onset and cut-off can be clearly seen (plots in the right column of Fig. 4) when compared with the results where the small, non-adiabatic, dynamic effects arising from the internally-generated field are neglected (plots in the left column). Note that in this forward wave-generation direction, which is the most efficient wave-mixing and propagation direction in a normal gas, the generated field is suppressed much more strongly than the linear behavior predicted by the well-known Feynman relation. The dynamic non-Feynman onset and cut-off lead to a unique suppression in the structure factor and the coherent propagation gain of the quantum gas that has no correspondence in the nonlinear optical response of normal gases and solid-state materials.

Figure 4. Intensity distribution of the forward-generated field in the phonon regime, obtained by integrating Eq. (6) using Eq. (7). Left column (side view, center-cut view, and top view): the effect of U_D is neglected in Eq. (7); the middle plot clearly shows the linear behavior near the center, as expected from the well-known Feynman relation in the phonon regime. Right column (side view, center-cut view, and top view): the effect of U_D is included in Eq. (7); the presence of a non-Feynman onset and cut-off with a red-detuned pump and the SFG field propagating in the forward direction can be clearly seen in the middle plot.
Discussion and Conclusion
Nonlinear optics of quantum gases is a fascinating new research field in which many new unexpected effects occur that might otherwise be strictly forbidden in normal gases or solid-state materials. Fundamental changes to the single-spin Feynman relation and the nonlinear optical response shown in this work exemplify the novelty of this new research direction within the discipline of nonlinear optics 21 . The exotic new effects and features shown in this study significantly enrich our fundamental understanding of the nonlinear optical response of these intriguing materials referred to as quantum gases. Indeed, none of these novel effects can be obtained by the so-called "matter-wave grating" or "matter-wave superradiance" theory which is fundamentally incapable of explaining any requisite details of light-matter wave-mixing processes in quantum gases 21 .
What caused the unseasonal extreme dust storm in Uzbekistan during November 2021?
An unseasonal dust storm hit large parts of Central Asia on 4-5 November 2021, setting records for the column aerosol burden and fine particulate concentration in Tashkent, Uzbekistan. The dust event originated from an agropastoral region in southern Kazakhstan, where the soil erodibility was enhanced by a prolonged agricultural drought resulting from a La Niña-related precipitation deficit and persistent high atmospheric evaporative demand. The dust outbreak was triggered by sustained postfrontal northerly winds during an extreme cold air outbreak. The cold air and dust outbreaks were preceded by a chain of processes consisting of recurrent synoptic-scale transient Rossby wave packets over the North Pacific and North Atlantic, upper-level wave breaking and blocking over Greenland, followed by high-latitude blocking over Northern Europe and West Siberia, and the equatorward shift of a tropopause polar vortex and cold pool into southern Kazakhstan. Our study suggests that the historic dust storm in Uzbekistan was a compound weather event driven by a cold extreme, high winds, and a drought precondition.
Introduction
On 4-5 November 2021, an extreme dust storm hit large parts of Uzbekistan, Tajikistan, and Turkmenistan, causing property damage, socioeconomic disruption, and a surge of respiratory illnesses. The event has been described in the media as the worst dust storm ever recorded in Uzbekistan. Tashkent, the most populous city of Central Asia, reportedly suffered extremely high concentrations of fine particulates (PM2.5), resulting in an increase of acute respiratory problems and ambulance service calls [1]. Poor visibility and hazardous weather also caused automobile accidents and power outages in surrounding regions [1].
Climatologically, dust outbreaks in Central Asia are most common between late spring and early summer, due to frequent cold intrusions from high latitudes and sufficiently dry and exposed soils during the early growing season [2,3]. The November 2021 dust event was a rare occurrence during the boreal cold season, when dust emission is usually suppressed by seasonally high precipitation and a possible early onset of snow cover [4]. The event was reportedly triggered by a cold air outbreak (CAO) linked to the Siberian High, which was found to extend abnormally westward to the Caspian Sea from its typical center of action during winter 2021 [5]. The CAO reportedly triggered record snowfall and persistent cold extremes across China during 6-8 November 2021 [6,7]. In addition, Central Asia suffered severe drought and record high temperatures in 2021, which may have enhanced the soil erodibility and susceptibility to wind erosion [8].
While past studies shed some light on the meteorological aspect of the unseasonal dust storm in Uzbekistan, several key questions remain unanswered, including: (1) how intense was the dust storm from a climatological perspective? (2) What atmospheric processes triggered the cold air and dust outbreaks? And (3) how did the regional hydroclimate contribute to the dust outbreak? This study presents observational evidence of the record-breaking aerosol burden and particulate pollution following the dust outbreak (section 3.1), and investigates the atmospheric dynamics (section 3.2) and hydroclimate preconditions (section 3.3) associated with such an unseasonal extreme dust event in Central Asia.
Data and methods
Two long-term observations are used to evaluate the dust event intensity in Tashkent: hourly PM2.5 concentration and Air Quality Index reported from the U.S. Embassy in Tashkent, and daily average coarse-mode aerosol optical depth (AOD at 550 nm) derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) level 2 deep-blue and dark-target merged AOD and fine mode fraction [9]. Surface synoptic observations and ERA5 reanalysis are used to investigate the precursory atmospheric processes leading up to the dust outbreak. Specifically, we identify two upper-level dynamical features related to persistent high-impact surface weather: atmospheric blocking and recurrent synoptic-scale transient Rossby wave packets. Blocks are defined as regions with persistent negative anomalies of 500-150 hPa vertically averaged potential vorticity (PV) exceeding −1.3 PVU (1 PVU = 10⁻⁶ K m² s⁻¹ kg⁻¹) and a spatial overlap of at least 70% between successive 6-hourly time steps for at least 5 days [10,11]. Recurrent Rossby wave packets can repeatedly pass and amplify at the same longitude in the same phase, resulting in recurring ridging or troughing patterns and persistent cold or hot extremes [12,13]. The strength of the transient Rossby wave packets is described by an 'R' metric, which is a time- and wavenumber-filtered signal derived from the Hovmöller diagram of 250 hPa meridional winds averaged over 35°N-65°N [14].
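A schematic sketch of the blocking persistence criterion (our simplification, not the authors' code; grid shapes, the synthetic anomaly, and the overlap bookkeeping are reduced to the bare essentials):

```python
import numpy as np

def blocking_onsets(vapv_anom, pvu_thresh=-1.3, overlap=0.70, min_steps=20):
    """Flag persistent blocks in a (time, lat, lon) vertically averaged PV anomaly.

    A block is a run of 6-hourly candidate regions (anomaly < pvu_thresh) whose
    successive footprints overlap by at least `overlap`, lasting at least
    `min_steps` steps (20 x 6 h = 5 days).
    """
    masks = vapv_anom < pvu_thresh             # candidate region at each time step
    run, onsets = 0, []
    for t in range(1, masks.shape[0]):
        inter = np.logical_and(masks[t], masks[t - 1]).sum()
        area = max(masks[t - 1].sum(), 1)
        run = run + 1 if inter / area >= overlap else 0
        if run == min_steps - 1:               # persistence threshold just reached
            onsets.append(t)
    return onsets

# Synthetic demo: a stationary negative anomaly lasting 30 six-hourly steps.
field = np.zeros((60, 20, 40))
field[10:40, 5:12, 10:20] = -2.0
print(blocking_onsets(field))   # step index at which the 5-day criterion is met
```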
In addition, we examine the role of tropopause polar vortices (TPVs) in the CAO development using the TPVTrack method [15]. TPVs are subsynoptic-scale, coherent tropopause-based vortices with typical radii of 100 to 1000 km and lifetimes of days to months, characterized by a local minimum of dynamic tropopause potential temperature, a cyclonic PV anomaly in the Arctic lower stratosphere, and a lowered tropopause, often to the 500 hPa level or below [16-18]. The equatorward advection of TPVs and the associated cold pool can trigger intense CAOs in the midlatitudes [19-21]. Finally, we assess the drought severity and impact on the soil erodibility using three long-term datasets, including the Climate Research Unit Time Series monthly precipitation and potential evapotranspiration (PET) [22], the European Space Agency Climate Change Initiative blended passive-active microwave soil moisture product [23,24], and the MODIS Climate Modeling Grid global level 3 monthly Normalized Difference Vegetation Index (NDVI) product [25]. The microwave soil moisture product represents the top 2 cm of the soil and is thus closely related to the inter-particle cohesion and wind erodibility of the top soil [26].
Observed extreme dust burden in Tashkent
According to geostationary satellite observations, the initial dust outbreak began around 03:30Z (08:30 AM local time) on 4 November 2021 from an agropastoral area in the Ordabasy District of southern Kazakhstan (see figure S1 for details). The source area consisted of a mix of semiarid steppe (used as natural pasture) and rainfed croplands near the lower Arys River. Three hours later, MODIS onboard Terra detected a cone-shaped dust plume advancing south (figure 1(a)) and sweeping across several highly populated regions the next day (figure 1(b)), including the Fergana Valley, Gissar Valley, and the foothills of the Tien Shan-Pamirs mountains. Located 150 km downwind, Tashkent experienced record-breaking particulate concentrations and persistent unhealthy air quality following the dust outbreak (figure 1(c)). The U.S. Embassy at Tashkent observed a peak PM2.5 concentration of 978 µg m⁻³ on 5 November 2021 (figure 1(c)). It was the highest PM2.5 ever observed across all U.S. Embassy locations in Central Asia (figure S2). The Centre of Hydrometeorological Service of Uzbekistan (Uzhydromet) reported even higher PM2.5 levels in excess of 2000 µg m⁻³, or 30 times the permissible level, in Tashkent [27]. The extreme dust burden was also observed in the MODIS coarse-mode AOD record. In particular, the dust storm caused the highest AOD in Tashkent during the boreal cold season (November-April), as well as the second highest annual mean AOD since 2001 (figure 1(d)).
Atmospheric dynamics of cold air and dust outbreaks
On 4 November 2021, surface synoptic observations near the dust source revealed a reversal and rapid rise of sea level pressure, a sudden drop of temperature, and a shift from calm conditions to sustained northerly winds with persistent gusts (figure 2(a)), which collectively indicated the passage of a cold front.

The atmospheric blocking and CAO mechanisms are further examined using the dynamic tropopause potential temperature maps shown in figure 3. A high-amplitude ridge developed over northeastern Canada on 27 October 2021 (day −8), initiating RWB and blocking near Greenland (B1 in figure 3). A split flow pattern developed around the blocking ridge and the October 2021 nor'easter located equatorward off the Mid-Atlantic coast (W). The meridional PV gradient intensified downstream in association with the superposition of the polar and subtropical jets (circled area in figure 3(a); see figure S3 for a cross section along 45°W), driven by the simultaneous poleward transport of an anticyclonic PV anomaly in the warm sector of the nor'easter along the Gulf Stream and the equatorward incursion of a cyclonic PV anomaly from a TPV over the Lincoln Sea. A deep extratropical cyclone with a central pressure of 963 hPa (L) developed in the left exit region of the superposed jet, likely due to the strengthened ageostrophic transverse circulation in the jet exit and the polar cyclonic PV anomaly [29]. The warm sector of the extratropical low featured a poleward ascending flow of low PV air which favored upper-level ridging and block onset over Northern Europe (B2 in figure 3(b)) [30,31]. As the amplifying ridge built into the Barents Sea, the Ural and West Siberia region experienced RWB, blocking onset, and the formation of an elongated high PV trough (or PV streamer) downstream (figure 3(c)). The deepening trough was embedded with an equatorward shifted TPV which was situated near the Kara Sea two days earlier (see the TPV trajectory in figure 3(d)). The TPV experienced significant stretching and formed, via splitting, a new cyclonic coherent tropopause disturbance (CTD) over southern Kazakhstan (figure 3(d)).
The TPV and CTD were dynamically similar, the difference being that the TPV formed and spent much of its lifetime in polar regions, which allows a tropospheric-deep cold pool to form within and underneath the TPV [18,32]. Indeed, figure 3(e) shows that both the TPV and CTD featured a tropopause lowered to around 500 hPa and strong wind circulations surrounding them. The cold air triggering the CAO was associated with the TPV, as indicated by the isentropes bending upward into the TPV as well as the extremely low 1000-500 hPa thickness (figure 3(e)). The CTD formation in southern Kazakhstan, however, facilitated the deep penetration of the TPV-associated cold pool into the midlatitudes. At the ground level, the highest winds developed just behind the surface cold front (figures 3(e) and (f)), and ahead of rapid low-level anticyclogenesis (anticyclonic center denoted as H in figures 3(c) and (d)). Further analysis reveals that the anticyclonic center rose to 1050.6 hPa on 4 November at a rate of 8 hPa d⁻¹. The explosive anticyclogenesis and attendant sustained postfrontal northerly flows contributed to the intense dust outbreak in southern Kazakhstan and the subsequent rapid dispersion to large parts of Central Asia (figure 2(b)). The tropopause-level diagnosis suggests that the cold air and dust outbreaks were associated with blocking development over the Ural and West Siberia regions via a planetary wave train propagating from the North Atlantic. The wave propagation was supported by a strong meridional PV gradient at the Atlantic jet entrance region, but was followed by wave breaking directly downstream and poleward of the jet exit near Iceland (figure 3(a)). The upper-level wave breaking and blocking onset are closely related to changes in the Atlantic jet position, storm track, and leading patterns of the North Atlantic winter climate variability [34-37]. During the month prior to the dust storm (October 2021), frequent blocking accompanied by strong positive 500 hPa geopotential height anomalies occurred in the vicinity of Greenland, and to a lesser extent, over Northern Europe and West Siberia (figure 4(a)). Consistent with the high-latitude blocking, the eddy-driven polar jet shifted well southward into the subtropics over the central and eastern Atlantic (figure 4(a)). Based on the 1951-2020 climatology, extreme blocking conditions occurred near Greenland during October 2021, resulting in the second highest October Greenland Blocking Index (GBI) (2.3) since 1950 (figure 4(c)). The extreme positive GBI and associated equatorward shifted jet position corresponded to an exceptionally negative pattern of the North Atlantic Oscillation (NAO), as indicated by the second lowest October NAO score (−2.0) since 1950 (figure 4(c)). Extreme negative NAO and Greenland blocking favor enhanced meridional airmass transport, contributing to more frequent CAOs over Eurasia and North America [38-41].
Hydroclimate precondition and enhanced soil erodibility
While the CAO and attendant high winds triggered the dust outbreak onset, land surface conditions play an important role in modulating the location and intensity of dust emission, as indicated by the highly localized source activation despite the powerful frontal system (figure 1(a)). Notably, hydroclimate preconditions, including soil moisture and vegetation, affect the aeolian sediment availability by increasing the soil inter-particle cohesion, sheltering dry or exposed surfaces, and reducing the near-surface wind momentum through drag partition [4,42]. Drought is an important precondition for increasing wind erosion in semiarid areas, and has been linked to previous intense dust periods in Central Asia [42,43].
Central Asia suffered widespread precipitation declines in 2021, especially in the high mountains where most of the annual precipitation is received during the cold season and released as snowmelt runoff during the warm season (May-October) (figure 5(a)) [44,45]. As the headwater area for the Arys river and surrounding agropastoral regions in southern Kazakhstan, the western Tien Shan-Pamirs (boxed area in figure 5(a)) received the lowest precipitation since 1990 (figure 5(e)). Meanwhile, the PET anomaly revealed persistent above-average atmospheric water demand (figure 5(b)), which exacerbated the already limited surface water. As a result, both the soil moisture and NDVI anomalies indicate widespread agricultural drought during the 2021 warm season (figures 5(c) and (d)). The severe drought reportedly caused massive crop failure and livestock deaths in Central Asia [8]. A closer look at the dust source region (boxed area in figures 5(c) and (d)) reveals prolonged drought conditions since 2019, which could result in a cumulative risk of desertification and wind erosion (figure 5(e)). In addition to drought, land use intensification from agricultural and pastoral production may have further enhanced the sediment availability and localized dust emissions, as previously observed in southern Kazakhstan [43].
The recent prolonged drought of Central Asia occurred during a 'triple-dip' La Niña lasting for the three winters of 2020-2023 (figure 5(e)). This unprecedented event continued the predominant multi-year La Niña and La Niña-like conditions since the turn of the century. A recent study suggested that La Niña events are associated with below-average precipitation during the cold season, below-average soil moisture and vegetation cover during the following warm season, and consequently above-average dust burden over Central Asia [43]. Previous studies also showed an increasing ENSO influence on the hydroclimate in Central Asia since the 1990s, due to an increasing frequency of Central Pacific La Niña events characterized by an anomalously cold central Pacific and warm western Pacific, resulting in an enhanced zonal sea surface temperature (SST) gradient across the west Pacific (figure S4) [43, 46-51]. The seasonally persistent tropical thermal anomalies and zonal SST gradients foster coherent Rossby wave responses over East Africa and Central and Southwest Asia, often as part of a global zonal band of upper-level anticyclonic anomalies leading to widespread precipitation reductions across the Northern Hemisphere midlatitudes (figures S5 and S6). The growing oceanic forcing of precipitation modifications over Central Asia has been largely attributed to the rapid warming and expansion of the tropical Indo-Pacific Ocean [52,53].
Discussions and conclusions
The dust storm of 4-5 November 2021 in Uzbekistan, which originated from an agropastoral area of natural pasture and rainfed croplands in southern Kazakhstan, was an unseasonal extreme event with significant impact on regional air quality and socioeconomic activity. Our analysis suggests that this dust storm was a 'preconditioned' compound event caused by an extreme CAO and attendant postfrontal northerly winds (the driver) and a prolonged drought (the precondition) associated with a multi-year La Niña event (the driver of the precondition) [54]. The cold air and dust outbreaks were preceded by a succession of planetary- and synoptic-scale processes during the prior week, including recurrent transient synoptic-scale Rossby wave packets over the North Pacific, upper-level wave breaking and blocking near Greenland, a southward shifted polar jet and extratropical cyclogenesis over the North Atlantic, and high-latitude blocking over Northern Europe and West Siberia. This type of Atlantic-origin wave train has been previously identified as a primary mechanism of CAOs in Asia [55-57]. Apart from the planetary waviness, the extreme CAO event was associated with the equatorward advection of an Arctic TPV and cold pool. While this case study is not sufficient to establish a causal link between TPVs and CAOs, equatorward shifted TPVs have been frequently associated with intense CAO events in mid- and low-latitude regions. For example, 40% of the most intense CAOs in the Fram Strait were associated with Arctic TPVs [19], while TPVs were involved in 85% of CAOs in eastern North America [21].
A prolonged agricultural drought, characterized by persistent below-average precipitation and above-average PET, desiccated the drylands of Central Asia, thereby creating a favorable precondition for dust uplifting from the dry, exposed soils. The severe drought was linked to an unprecedented 'triple-dip' La Niña event through its teleconnection effect on the wintertime circulation and precipitation in Central Asia. Recent studies suggested that multi-year consecutive La Niña events, such as the recent occurrences of 2010-12, 2016-18, and 2020-23, may become more frequent under greenhouse warming [58]. The heightened risk of prolonged drought, combined with rising temperatures and atmospheric evaporative demand, may continue to aggravate the surface water availability, soil erodibility, and dust outbreaks in Central Asia.
Figure 1. (a), (b) MODIS/Terra true color images on 4 and 5 November 2021. Black triangles denote the weather station at Arys, Kazakhstan. Open squares denote the geographic domain for computing the regional mean AOD over Tashkent. (c) Hourly PM2.5 and Air Quality Index reported at the U.S. Embassy in Tashkent. Horizontal red lines are the unhealthy-for-sensitive-groups, unhealthy, very unhealthy, and hazardous levels. (d) Daily average coarse-mode AOD and PM2.5 in Tashkent. Observed values on 5 November 2021 are marked (AOD in black square; PM2.5 in red square).

Figure 3. (a)-(d) Potential temperature (shading) and winds on the 2 PVU surface (or dynamic tropopause) at 06 UTC on days preceding the dust storm. Major features are Greenland blocking (B1), Ural blocking (B2), the October 2021 nor'easter which eventually became Tropical Storm Wanda (W), jet superposition (black circle), extratropical cyclogenesis (L), rapid anticyclogenesis (H), tropopause polar vortex (TPV), and coherent tropopause disturbance (CTD). The white dotted line in (d) denotes the TPV trajectory at 06 UTC between 30 October and 4 November 2021. (e), (f) Vertical cross sections of wind speed (shading) and potential temperature (gray contours every 5 K) along the A-A' and B-B' black lines in (d). The thick black contour denotes the dynamic tropopause. The thick blue contour denotes the 1000-500 hPa thickness. The approximate locations of the surface cold front and TPV-associated cold pool are shown. The weather station at Arys, Kazakhstan is denoted as a black triangle in all panels.
Global fitting of single spin asymmetry: an attempt
We present an attempt at a global analysis of Semi-Inclusive Deep Inelastic Scattering (SIDIS) $\ell p^\uparrow \to \ell' \pi X$ data on single spin asymmetries and of data on the left-right asymmetry $A_N$ in $p^\uparrow p \to \pi X$, in order to simultaneously extract information on the Sivers function and the twist-three quark-gluon Efremov-Teryaev-Qiu-Sterman (ETQS) function. We explore different possibilities, such as a node of the Sivers function in $x$ or $k_\perp$, in order to explain the "sign mismatch" between these functions. We show that $\pi^\pm$ SIDIS data and $\pi^0$ STAR data can be well described in a combined TMD and twist-3 fit; however, the $\pi^\pm$ BRAHMS data are not described in a satisfactory way. This leaves open the question of the solution of the "sign mismatch". Possible explanations are then discussed.
I. INTRODUCTION
Single transverse spin asymmetries (SSAs) are a rich source of information on the internal partonic structure of the nucleon [1]. The exploration of the underlying mechanisms has led us to realize that the SSAs are sensitive probes of the parton's transverse motion. There are two different yet related QCD factorization formalisms to incorporate such transverse components of the parton's momentum and to describe the observed asymmetries: the transverse momentum dependent (TMD) factorization and the collinear twist-three factorization approaches.
For processes such as single inclusive hadron production in proton-proton collisions, $p^\uparrow p \to hX$, which exhibit only one characteristic hard scale, the transverse momentum $P_{h\perp}^2 \gg \Lambda_{\rm QCD}^2$ of the produced hadron, one can describe the SSAs in terms of twist-three quark-gluon correlation functions [2][3][4][5][6][7]. One of the well-known examples is the so-called Efremov-Teryaev-Qiu-Sterman (ETQS) function. Phenomenological extractions were performed in different papers [8,9]. On the other hand, for processes such as Semi-Inclusive Deep Inelastic Scattering (SIDIS), which possess two characteristic scales, the photon's virtuality $Q$ and the $P_{h\perp}$ of the produced hadron, one can use a TMD factorization formalism [10,11] in the region $\Lambda_{\rm QCD}^2 < P_{h\perp}^2 \ll Q^2$ and describe the asymmetries with TMD functions. One of the most important TMDs is the Sivers function $f_{1T}^{\perp q}$ [12,13], which describes the $\sin(\phi_h - \phi_s)$ modulation in SIDIS on a transversely polarized target [14]. Sivers functions have been extracted from SIDIS experimental data by various groups [15][16][17][18][19].
These two formalisms are closely related to each other, and have been shown to be equivalent in the overlap region where both can apply [20][21][22]. The relevant functions - the Sivers function and the ETQS twist-three function - are connected through the following relation [23][24][25]:
$$T_{q,F}(x, x) = -\int d^2 k_\perp\, \frac{k_\perp^2}{M}\, f_{1T}^{\perp q}(x, k_\perp^2)\Big|_{\rm SIDIS}, \qquad (1)$$
where $M$ is the nucleon mass and the subscript "SIDIS" emphasizes that the Sivers function is probed in the SIDIS process. The color gauge invariant nature of TMDs manifests itself in the fact that TMDs are process dependent, and an important consequence of this process dependence is the prediction [27]
$$f_{1T}^{\perp q}(x, k_\perp^2)\Big|_{\rm DY} = -f_{1T}^{\perp q}(x, k_\perp^2)\Big|_{\rm SIDIS},$$
i.e., the Sivers functions measured in SIDIS and Drell-Yan (DY) processes are exactly opposite to each other. Experiments are actively planning to measure and verify such a prediction. Some preliminary phenomenological estimates of the SSAs of DY production [28][29][30] and solid theoretical developments [31,32] have been achieved.
Recently it was found that the left-hand side (LHS) and right-hand side (RHS) of Eq. (1) have opposite signs if the corresponding functions are extracted from phenomenological studies of different experimental data [25], particularly the RHS $f_{1T}^{\perp q}(x, k_\perp^2)$ from the SIDIS data and the LHS $T_{q,F}(x, x)$ from $pp$ data. We will refer to this finding as the "sign puzzle" or "sign mismatch". Whether it reflects the incompatibility of SIDIS and $pp$ data within the current theoretical formalism, or an inconsistency of the formalism itself, is a very important question and needs to be further explored both theoretically and experimentally. On the experimental side, measurements of the SSAs of single inclusive jet and direct photon production [25], of single lepton production from a $W$-boson decay [33,34] in $pp$ collisions, and of single inclusive jet and hadron production in $\ell p$ collisions (without identifying the final-state lepton) [35][36][37] could be very helpful. The study of hadron distributions inside a jet could also be useful [38][39][40][41].
In this paper we make a first attempt on the theoretical (phenomenological) side: we perform a global fit of both SIDIS and $pp$ data with a more flexible functional form for the Sivers function (and the ETQS function), to see whether we are able to describe all the data within the current theoretical formalism. Our naive starting point is based on the observation that the SIDIS and $pp$ data typically cover slightly different kinematic regions, in either the momentum fraction $x$ and/or the transverse components. Thus a sign-changing functional form in this kinematic space might be just what is needed to cure the "sign mismatch". One such possibility, a node in the $x$ region, has already been indicated in Ref. [42]. We consider SIDIS data from HERMES and COMPASS, and proton-proton data from STAR and BRAHMS. Let us emphasize that this is a first attempt to use the TMD and collinear twist-three factorization formalisms simultaneously in a global analysis of the spin asymmetry.
The rest of our paper is organized as follows. In Sec. II we recall the basic formalisms needed to describe SIDIS data for semi-inclusive hadron production at low $P_{h\perp}$, and proton-proton data for inclusive hadron production at high $P_{h\perp}$. In Sec. III we introduce our more flexible parametrized functional form for the Sivers function and describe our fitting procedure. In particular, we explore the possibility of a node in the $x$ region and investigate whether it can help resolve the "sign mismatch" problem. At the end of that section we briefly comment on the possibility of a node in $k_\perp$ space. We conclude our paper in Sec. IV.
II. BASIC TMD AND COLLINEAR TWIST-3 FACTORIZATION FORMALISMS
In this section we review the basic formulas for the spin asymmetries in both SIDIS and proton-proton processes. We start with semi-inclusive hadron production at low $P_{h\perp}$ in SIDIS, $e(\ell) + A^\uparrow(P, s_\perp) \to e(\ell') + h(P_h) + X$, which can be described by the TMD factorization formalism. The differential cross section for the so-called Sivers effect reads [43]
$$\frac{d\sigma}{dx_B\, dy\, dz_h\, d\phi_h\, dP_{h\perp}^2} = \sigma_0 \left[ F_{UU} + \sin(\phi_h - \phi_s)\, F_{UT}^{\sin(\phi_h - \phi_s)} \right],$$
where $\sigma_0 = \frac{\alpha^2}{x_B y Q^2}\left[1 + (1-y)^2\right]$ with $\alpha$ the fine structure constant, $q = \ell - \ell'$ with $q^2 = -Q^2$, and the usual SIDIS variables are defined as
$$x_B = \frac{Q^2}{2P\cdot q}, \qquad y = \frac{P\cdot q}{P\cdot \ell}, \qquad z_h = \frac{P\cdot P_h}{P\cdot q}.$$
The Sivers asymmetry $A_{UT}^{\sin(\phi_h - \phi_s)}$ is defined as the $\sin(\phi_h - \phi_s)$ modulation of the cross section, where the subscript $U$ stands for the unpolarized lepton beam and $T$ for the transverse polarization of the target nucleon. In terms of structure functions one has
$$A_{UT}^{\sin(\phi_h - \phi_s)} = \frac{F_{UT}^{\sin(\phi_h - \phi_s)}}{F_{UU}}.$$
The structure functions depend on $x_B$, $Q^2$, $z_h$ and $P_{h\perp}^2$, and can be written as [43,44]
$$F_{UU} = x_B \sum_a e_a^2 \int d^2k_\perp\, d^2p_T\, \delta^2(z_h \vec{k}_\perp + \vec{p}_T - \vec{P}_{h\perp})\, f_{a/A}(x_B, k_\perp^2)\, D_{h/a}(z_h, p_T^2),$$
$$F_{UT}^{\sin(\phi_h - \phi_s)} = x_B \sum_a e_a^2 \int d^2k_\perp\, d^2p_T\, \delta^2(z_h \vec{k}_\perp + \vec{p}_T - \vec{P}_{h\perp}) \left(-\frac{\hat{h}\cdot \vec{k}_\perp}{M}\right) f_{1T}^{\perp a}(x_B, k_\perp^2)\, D_{h/a}(z_h, p_T^2),$$
where $\hat{h} \equiv \vec{P}_{h\perp}/|\vec{P}_{h\perp}|$, $f_{1T}^{\perp a}$ is the Sivers function, and $f_{a/A}$ and $D_{h/a}$ are the TMD parton distribution function (PDF) and fragmentation function (FF), respectively. All our definitions of the TMD functions and these expressions are consistent with the Trento convention [45], which has been used in the experiments [46,47].
On the other hand, for single inclusive hadron production at high $P_{h\perp}$ in $p^\uparrow p$ collisions, $A^\uparrow(P, s_\perp) + B(P') \to h(P_h) + X$, the spin-averaged cross section $d\sigma \equiv [d\sigma(s_\perp) + d\sigma(-s_\perp)]/2$ is usually written in the collinear factorization formalism as
$$E_h \frac{d\sigma}{d^3 P_h} = \frac{\alpha_s^2}{S} \sum_{a,b,c} \int \frac{dx}{x}\, \frac{dx'}{x'}\, \frac{dz}{z^2}\, f_{a/A}(x)\, f_{b/B}(x')\, D_{h/c}(z)\, H^U_{ab\to c}(\hat{s}, \hat{t}, \hat{u})\, \delta(\hat{s} + \hat{t} + \hat{u}), \qquad (9)$$
where $a$, $b$, $c$ run over all parton flavors, $S = (P + P')^2$, $f_{a/A}(x)$ and $f_{b/B}(x')$ are the collinear PDFs, and $D_{h/c}(z)$ is the collinear FF. $H^U_{ab\to c}$ are the well-known unpolarized hard-part functions for partonic scattering [48,49]. $\hat{s}$, $\hat{t}$, and $\hat{u}$ are the usual partonic Mandelstam variables; for a final hadron of transverse momentum $P_{h\perp}$ and rapidity $y$ we obtain
$$\hat{s} = x x' S, \qquad \hat{t} = -\frac{x}{z}\, P_{h\perp} \sqrt{S}\, e^{-y}, \qquad \hat{u} = -\frac{x'}{z}\, P_{h\perp} \sqrt{S}\, e^{y}.$$
The commonly used Feynman variable can be written as $x_F = \frac{2P_{h\perp}}{\sqrt{S}} \sinh(y)$. Note that the partonic $x$, $x'$ and $z$ are integrated over in Eq. (9).
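For illustration, these kinematic relations can be evaluated directly. The minimal sketch below (with arbitrary example values for $x$, $z$, $P_{h\perp}$ and $y$, not fit output) solves the momentum-conservation constraint $\hat{s} + \hat{t} + \hat{u} = 0$ for $x'$:

```python
import math

def partonic_mandelstams(x, xp, z, S, pT, y):
    """Partonic Mandelstam variables for A(x) + B(x') -> c, with the
    polarized hadron A along +z and a final hadron at (pT, y)."""
    s_hat = x * xp * S
    t_hat = -(x / z) * pT * math.sqrt(S) * math.exp(-y)
    u_hat = -(xp / z) * pT * math.sqrt(S) * math.exp(y)
    return s_hat, t_hat, u_hat

S = 200.0**2        # c.m. energy squared (GeV^2), RHIC-like
pT, y = 1.5, 3.3    # hadron transverse momentum (GeV) and rapidity
x, z = 0.4, 0.7     # illustrative momentum fractions

# momentum conservation s_hat + t_hat + u_hat = 0 fixes x' at this order
xp = (x * pT * math.exp(-y) / z) / (x * math.sqrt(S) - pT * math.exp(y) / z)

print(partonic_mandelstams(x, xp, z, S, pT, y))       # sums to ~0
print("x_F =", 2 * pT * math.sinh(y) / math.sqrt(S))  # Feynman-x of the hadron
```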
The spin-dependent cross section $d\Delta\sigma(s_\perp) \equiv [d\sigma(s_\perp) - d\sigma(-s_\perp)]/2$ is given by the collinear twist-three factorization formalism, Eq. (11), in which $\epsilon^{\alpha\beta}$ is a two-dimensional antisymmetric tensor with $\epsilon^{12} = 1$ (and $\epsilon^{21} = -1$), $T_{a,F}(x, x)$ is the twist-three ETQS function, and $H^{\rm Sivers}_{ab\to c}(\hat{s}, \hat{t}, \hat{u})$ are the relevant hard-part functions, which have been given in Refs. [8,50]. The subscript "Sivers" is a reminder that there are other types of contributions to the SSAs for inclusive hadron production. What is written in Eq. (11) is only the so-called soft gluon pole contribution [8]; there could also be a soft fermion pole contribution [6], as well as a contribution from the twist-three fragmentation function [7]. Nevertheless, the extensive phenomenological study of single inclusive hadron production has been performed for the soft gluon pole contribution [8,9], which indicates that the soft fermion pole contribution is relatively small, at least in the forward region where the asymmetry is largest [9]. Our study in the current paper will also concentrate on the soft gluon pole contribution, for which the relevant twist-three function - the ETQS function $T_{a,F}(x, x)$ - has a close relation to the Sivers function, Eq. (1), and thus we are able to perform a global analysis of both SIDIS and proton-proton data. We will comment on the contribution of the twist-three fragmentation function at the end of the next section.
The SSA, $A_N$, is given by the ratio of the spin-dependent and spin-averaged cross sections,
$$A_N = \frac{d\Delta\sigma(s_\perp)}{d\sigma}. \qquad (12)$$
The absolute sign of $A_N$ depends on the choice of frame and coordinate system. In the center-of-mass frame of the incoming hadrons $A$ and $B$, a convenient coordinate system (consistent with the experimental convention) is: the polarized nucleon $A$ moves along $+z$, the unpolarized nucleon $B$ along $-z$, the spin $s_\perp$ along $y$, and the transverse momentum $P_{h\perp}$ along the $x$-direction. This frame fixes the sign convention that should be used in Eq. (11).
III. GLOBAL FIT OF THE SPIN ASYMMETRY: AN ATTEMPT
So far, all the available phenomenological studies of the spin asymmetries have been separated into two isolated parts. On one side, the TMD factorization formalism is used to describe the SIDIS data for hadron production at low $P_{h\perp}$; these analyses concentrate solely on SIDIS data and do not include proton-proton data in the global fitting. The Sivers functions have been extracted as a result of such studies. On the other side, the collinear twist-3 factorization formalism is used to describe the proton-proton data for single inclusive hadron production at high $P_{h\perp}$, and only proton-proton data are analyzed, without inclusion of SIDIS data in the global fitting. The so-called ETQS functions have been extracted from such studies. However, as we emphasized in the introduction, the Sivers and ETQS functions are closely related. Thus, in this section we attempt a global analysis of both SIDIS and proton-proton data on the spin asymmetries. We use the TMD formalism to describe the SIDIS data in terms of the Sivers function. From the parametrization of the Sivers function, we obtain the functional form of the ETQS function through Eq. (1). Then we use the collinear twist-3 formalism to describe the proton-proton data in terms of the so-obtained ETQS function. In this way, a single parametrization of the Sivers function could allow us to achieve a global fit of both SIDIS and proton-proton data. We first introduce our parametrization of the Sivers and ETQS functions, then present and discuss the results of our global fit. We explore the possibility of a node in $x$ in detail, and briefly comment on the possibility of a node in $k_\perp$ at the end of this section.
A. Parametrization of the Sivers and ETQS functions
Following Refs. [17,19], we parametrize both the spin-averaged PDF $f_{a/A}(x, k_\perp^2)$ and the FF $D_{h/a}(z, p_T^2)$ with a Gaussian form for the transverse components,
$$f_{a/A}(x, k_\perp^2) = f_{a/A}(x)\, \frac{1}{\pi \langle k_\perp^2\rangle}\, e^{-k_\perp^2/\langle k_\perp^2\rangle}, \qquad D_{h/a}(z, p_T^2) = D_{h/a}(z)\, \frac{1}{\pi \langle p_T^2\rangle}\, e^{-p_T^2/\langle p_T^2\rangle},$$
such that they reduce to the usual collinear PDF $f_{a/A}(x)$ and FF $D_{h/a}(z)$ once integrated over the transverse momentum. The Gaussian widths are $\langle k_\perp^2\rangle = 0.25\ {\rm GeV}^2$ and $\langle p_T^2\rangle = 0.20\ {\rm GeV}^2$ [17]. The Sivers function $f_{1T}^{\perp q}(x, k_\perp^2)$ in the SIDIS process is parametrized in Eqs. (16) and (17) as the product of the unpolarized TMD PDF, an extra $k_\perp$-dependent factor $h(k_\perp)$, with $M$ the nucleon mass and $M_1$ a fitting parameter, and an $x$-dependent part $N_q(x)$, which is parametrized as
$$N_q(x) = N_q\, x^{\alpha_q} (1 - x)^{\beta_q}\, \frac{(\alpha_q + \beta_q)^{(\alpha_q + \beta_q)}}{\alpha_q^{\alpha_q} \beta_q^{\beta_q}}\, (1 - \eta_q x). \qquad (18)$$
Compared with the previous SIDIS fits in Refs. [17,19], the new ingredient is the factor $(1 - \eta_q x)$, inspired by the DSSV global fitting of the helicity PDFs [51,52]. This is the simplest form that allows a node in $x$ space: if $\eta_q > 1$, there is a node for $x \in [0, 1]$; on the contrary, if $\eta_q < 1$, there is no node in the region $x \in [0, 1]$. In our fit, to satisfy the positivity bound for the Sivers function, we have to require $|N_q(x)| < 1$. To achieve this, we make a substitution in Eq. (18) and allow $N_q$ to vary only inside the range $[-1, 1]$; this enforces the positivity of the Sivers function in $x \in [0, 1]$. Through the relation between $T_{q,F}(x, x)$ and the Sivers function $f_{1T}^{\perp q}(x, k_\perp^2)$ in Eq. (1), we thus obtain a parametrized form for $T_{q,F}(x, x)$. In other words, once a parametrization of the quark Sivers function is given, we automatically have a parametrized form for the ETQS matrix element $T_{q,F}(x, x)$. With this in hand, one is able to make a simultaneous fit of both the SIDIS data at low $P_{h\perp}$ and the $pp$ inclusive hadron production data at high $P_{h\perp}$. As a first attempt, we consider only the $u$ and $d$ quark flavors, and include only pion data ($\pi^{\pm,0}$) in our fit. Thus we have $N_q$, $\alpha_q$, $\beta_q$, and $\eta_q$ for both $u$ and $d$ quarks, and $M_1$: in total 9 parameters to be determined by fitting the experimental data.
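To make the node mechanics of Eq. (18) explicit, here is a minimal numerical sketch; $\eta$ is set to the $u$-quark value quoted in the next subsection, while $N$, $\alpha$, $\beta$ are illustrative placeholders rather than the fitted values:

```python
import numpy as np
from scipy.optimize import brentq

def N_q(x, N, alpha, beta, eta):
    """x-dependent part of the Sivers parametrization, Eq. (18):
    a normalized beta-like shape times the node-allowing factor (1 - eta*x)."""
    norm = (alpha + beta)**(alpha + beta) / (alpha**alpha * beta**beta)
    return N * x**alpha * (1 - x)**beta * norm * (1 - eta * x)

# eta > 1 puts a node inside (0, 1); eta = 2.8 is the u-quark fit value
pars = dict(N=0.3, alpha=0.7, beta=2.0, eta=2.8)

x_node = brentq(lambda x: N_q(x, **pars), 1e-6, 1 - 1e-6)
print(f"node at x = {x_node:.3f}")  # 1/eta = 0.357, independent of N, alpha, beta

# positivity bound check |N_q(x)| < 1 on a grid
x = np.linspace(1e-4, 1 - 1e-4, 2001)
print("max |N_q| =", np.abs(N_q(x, **pars)).max())
```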
B. Description of the data and discussion
As emphasized in the last subsection, the TMD factorization formula of Eq. (6) will be used to describe the HERMES proton-target data [46] and the COMPASS deuteron-target data [47]. The twist-3 factorization formulas, Eqs. (9), (11), and (12), will be used for the $A_N$ data from the STAR [53] and BRAHMS [54] experiments. We use GRV98LO for the unpolarized PDFs [55] and the DSS parametrization for the unpolarized FFs [56]. In our theoretical formalism we choose the factorization scale equal to the renormalization scale: $\mu = Q$ for SIDIS and $\mu = P_{h\perp}$ for the proton-proton data.
The results we obtain for the 9 free parameters, by fitting simultaneously the HERMES and COMPASS data sets on the Sivers asymmetry $A_{UT}^{\sin(\phi_h - \phi_s)}$ and the STAR and BRAHMS data sets on the SSAs $A_N$ for both charged and neutral pions, are presented in Table I. The extracted first moments of the Sivers functions for both $u$ and $d$ quark flavors are plotted in Fig. 1. One can see that $\eta_d = 0$, thus there is no node for the $d$-quark Sivers function. Since $\eta_u = 2.8 > 1$, there is a node for the $u$-quark Sivers function, located at $x_{\rm node} = 0.36$. This value is at the border of the region probed by SIDIS experiments, and thus in principle the node cannot be excluded. Let us now turn to the actual description of the experimental data. Our rather large $\chi^2$, with $\chi^2/{\rm d.o.f.} = 3.6$, already indicates that the overall quality of the fit is quite poor. In Fig. 2 the result of the fit is compared to the $\pi^+$ HERMES [46] and COMPASS [47] data as a function of $x_B$. For the $\pi^-$ asymmetry and the $z_h$ and $P_{h\perp}$ dependences, the description is comparable to that of Fig. 2. In other words, our description of the SIDIS data is satisfactory, with $\chi^2$ per data point $\sim 1.5$.
In Fig. 3, we compare the fit with the STAR $\pi^0$ data as a function of $x_F$ for $\sqrt{S} = 200$ GeV at $y = 3.3$ (a) and $y = 3.7$ (b), respectively. The solid curves correspond to the scale $\mu = P_{h\perp}$. The description is reasonably good, though slightly worse than for the SIDIS data. We have also explored the theoretical uncertainty coming from the scale $\mu$ through its variation by a factor of 2 up and down relative to the default value; the results are plotted as dashed and dotted curves in Fig. 3. This uncertainty is indeed quite large, as one might expect since we are using the leading-order formalism. An improvement could be achieved once next-to-leading-order calculations are performed [57][58][59][60][61]. In Fig. 4, we compare the fit with the BRAHMS $\pi^+$ and $\pi^-$ data at forward angle $\theta = 4^\circ$ at $\sqrt{S} = 200$ GeV. It is clear from these figures that our fitted parametrization of the Sivers functions (or the ETQS functions) is not compatible with the BRAHMS $\pi^+$ and $\pi^-$ data; even the signs of $A_N$ are opposite. It is worth pointing out that previous measurements of charged pion production in $pp$ collisions (e.g., those from E704 [62,63]) do have signs consistent with BRAHMS. This finding is consistent with the heart of the "sign mismatch" paper [25]. Our starting point for the possibility of a node in $x$ was based on the fact that the SIDIS and STAR data probe slightly different $x$ regions: $x = x_B < 0.3$ for SIDIS, while $x \gtrsim 0.3$ in the integration for $x_F \gtrsim 0.3$ for the STAR data. Thus a node in $x$ can describe both the SIDIS and STAR data rather well. However, BRAHMS covers an $x$ region $x \sim x_F \in [0.15, 0.3]$ which overlaps with the SIDIS data. Thus a node in $x$ cannot be the solution of the "sign mismatch" problem. Our failed attempt at a global fit of both SIDIS and proton-proton data has once more confirmed that these data are not compatible with each other if we consider only the twist-three contributions from the polarized nucleon. It indicates that there should be a sizable and more important contribution from twist-three fragmentation in the produced hadrons [7]. Even though we fail to cure the "sign mismatch" problem with the node-in-$x$ scenario, the concept that the Sivers function need not have the same sign in the whole kinematic region (in either $x$ or $k_\perp$) has important implications, especially when it comes to checking experimentally the sign change of the Sivers function from SIDIS to DY processes. In Fig. 5, we show the calculation of the DY asymmetry for RHIC kinematics at $\sqrt{S} = 200$ GeV as a function of $x_F$. The solid curve corresponds to the calculation using the Sivers function with a node in $x$ from Table I, and the dashed curve to the calculation based on the Sivers function from Ref. [19], which has no node in $x$. One can see that the prediction changes drastically when the node is present; however, in the region $0 < x_F < 0.25$ the sign of the asymmetry is consistent and dictated by the Sivers function constrained by the SIDIS measurements. Regardless of possible nodes, this region is safe for measurement. In future DY experiments the $Q^2$ range will also be different; to have a solid prediction, one of course also needs to include the effects of evolution [31,32,64].
Figure 5. Prediction of the Drell-Yan asymmetry for RHIC kinematics, $p^\uparrow p \to \ell^+ \ell^- X$, $0 < y < 3$. The solid line corresponds to the Sivers function with a node from this work, and the dashed line to the Sivers function without a node from Ref. [19]. The same convention for the hadronic frame and asymmetry is used as in Ref. [30].
C. Exploration of a node in $k_\perp$: the simplest study
The Sivers function with a node in $k_\perp$ has also been suggested as a solution of the "sign mismatch" problem in Ref. [25]. The main idea comes from the fact that the HERMES and COMPASS SIDIS data are mostly relevant for the extraction of the Sivers function $f_{1T}^{\perp q}(x, k_\perp^2)$ at relatively modest $Q^2 \sim 2.5-3.5\ {\rm GeV}^2$. Since the TMD factorization formalism is valid only for $k_\perp \ll Q$, the data constrain the function and its sign only at very low $k_\perp \sim \Lambda_{\rm QCD}$. However, to obtain the functional form of the ETQS function $T_{q,F}(x, x)$, one needs to integrate over the full range of $k_\perp$ in Eq. (1). Since we have so far assumed a Gaussian form for the $k_\perp$-dependence, which has the same sign over the whole $k_\perp$ region, the $k_\perp$-integration has the same sign as the low-$k_\perp$ part. However, if the high-$k_\perp$ region had the opposite sign to the low-$k_\perp$ part, this might alter the sign of the $k_\perp$-moment in the integration and thus lead to the correct sign of $T_{q,F}(x, x)$; see Fig. 6 for an illustration. In this subsection we explore such a possibility. Regarding the $k_\perp$-dependence, it is important to recall that the relation in Eq. (1) is subject to the ultraviolet (UV) subtraction and the adopted factorization scheme. To avoid such a problem, as a natural extension of the usual Gaussian form, we choose the $k_\perp$-dependence to be the difference of two Gaussian functions with slightly different widths. This is the simplest case that allows a node in $k_\perp$, and we explore whether this simple extension works in practice. The Sivers function is now parametrized with the usual $x$-dependence (without a node) for simplicity,
$$N_q(x) = N_q\, x^{\alpha_q} (1 - x)^{\beta_q}\, \frac{(\alpha_q + \beta_q)^{(\alpha_q + \beta_q)}}{\alpha_q^{\alpha_q} \beta_q^{\beta_q}},$$
while the $k_\perp$-dependence $h(k_\perp)$ is changed (from Eq. (17)) to a difference of two Gaussians with widths set by parameters $M_1$ and $M_2$. One has to choose $M_2 > M_1$, so that the low-$k_\perp$ part is positive, i.e., follows the same sign as the usual Sivers function in Eq. (16), and the $k_\perp$-shape follows Fig. 6. Using Eq. (1), we can derive the functional form of the ETQS function. In order that the sign of $T_{q,F}(x, x)$ be altered, the parameters must satisfy a first condition, Eq. (25). We can also derive the expression for the Sivers asymmetry in the SIDIS process, with widths $\langle P_{h\perp i}^2\rangle$ ($i = 1, 2$) defined accordingly. In order that the asymmetry keeps the same sign at low $P_{h\perp}$ as before, one requires a second condition, Eq. (28), where $\gamma(z_h)^{-1} = 1 + z_h^2 \langle k_\perp^2\rangle/\langle p_T^2\rangle$. Thus we have three requirements: Eqs. (25) and (28), plus $M_2 > M_1$. One also needs to take into account the fact that the Sivers asymmetries measured by both HERMES and COMPASS do not change sign up to $P_{h\perp} \sim 1$ GeV. All these requirements constrain the allowed parameter space for $M_1$ and $M_2$ to a very limited (small) region; for an illustration, see Fig. 7 for a typical $P_{h\perp} = 0.5$ GeV and $z_h = 0.5$. This region gets even smaller if $P_{h\perp}$ increases and/or $z_h$ decreases. From this simple study, we find that our simplest extension allowing a node in $k_\perp$ does not seem to be a natural solution of the "sign puzzle". Of course, other types of $k_\perp$-dependence which also have a node in $k_\perp$ might still be possible (we have also explored another $k_\perp$-dependent form, proportional to $(1 - \eta\, k_\perp^2)$, and found that the allowed parameter space for $M_1$ and $\eta$ is again very small). Finally, we emphasize again that there is the important UV regularization issue, which is outside the scope of our current study. To summarize this section: we have studied the possibility of a node in $x$ or a node in $k_\perp$ in the Sivers function, and the simplest extensions containing such a node do not seem to work in either case.
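How restrictive these sign conditions are can be checked numerically. The sketch below assumes, purely for illustration, an $h(k_\perp)$ built as the difference of two Gaussians with an arbitrary relative weight; the normalization and fitted widths of the actual parametrization are not reproduced here:

```python
import numpy as np
from scipy.integrate import quad

M1, M2 = 0.6, 0.9   # GeV; illustrative widths with M2 > M1

def h(k):
    """Illustrative k_perp shape: difference of two Gaussians,
    positive at low k_perp and changing sign at larger k_perp."""
    return np.exp(-k**2 / M1**2) - 0.5 * np.exp(-k**2 / M2**2)

# sign at low k_perp (the region SIDIS data constrain) ...
print("h(0.2 GeV) =", h(0.2))

# ... versus the sign of the k_perp^2-weighted moment entering Eq. (1),
# here with a Gaussian unpolarized TMD of width 0.25 GeV^2
moment, _ = quad(lambda k: k**3 * h(k) * np.exp(-k**2 / 0.25), 0.0, 10.0)
print("k_perp^2-weighted moment =", moment)
```

With these placeholder widths the weighted moment keeps the sign of the low-$k_\perp$ region, illustrating how small the parameter region that actually flips the sign of $T_{q,F}(x, x)$ is.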
This strongly suggests that there could be a sizable contribution from twist-three fragmentation in single inclusive hadron production [7], and we hope that future experimental studies will give us clear answers.
IV. CONCLUSIONS
In this paper, for the first time, we have attempted a global fit of both SIDIS and proton-proton data on the spin asymmetries. We use the TMD factorization formalism to describe the SIDIS Sivers asymmetry for hadron production at low $P_{h\perp}$, and the collinear twist-three factorization formalism for the proton-proton data on single inclusive hadron production at high $P_{h\perp}$. We adopt a more flexible functional form for the Sivers function in order to describe both the SIDIS and proton-proton data simultaneously. Including only the contribution from the so-called soft gluon pole ETQS function in the polarized nucleon, we find that we are not able to describe all the data well. While all the SIDIS data and the STAR data on $\pi^0$ production can be explained, a node in $x$ does not account for the BRAHMS proton-proton data on $\pi^+$ and $\pi^-$ production. We have explored the possibility of a node in $x$ or a node in $k_\perp$ for the Sivers function. Our simplest extensions allowing a node in both cases do not seem able to cure the "sign mismatch" problem. Leaving aside the UV regularization issue in the relation between TMDs and collinear functions, we conclude that there could be a sizable contribution from the twist-three fragmentation function to single inclusive hadron production in proton-proton collisions. We hope that future experiments will give us clear answers.
A side effect (lesson) learned from our study is that one should be very careful in extrapolating the Sivers function (or any other TMD which need not be positive) into regions where there are no experimental measurements or constraints. The size and sign of the functions in these regions should be carefully measured in future experiments. For example, a careful analysis of SIDIS data at large values of $x_B$ is needed in order to rule out or confirm a possible node of the Sivers function. A measurement of the $\pi^\pm$ $A_N$ at larger values of $x_F$ is also needed in order to confirm that the node (in $x$) is not compatible with the BRAHMS data. A careful analysis of the $k_\perp$ dependence of TMDs is also needed.
Nonergodicity and Central Limit Behavior for Long-range Hamiltonians
We present a molecular dynamics test of the Central Limit Theorem (CLT) in a paradigmatic long-range-interacting many-body classical Hamiltonian system, the HMF model. We calculate sums of velocities at equidistant times along deterministic trajectories for different sizes and energy densities. We show that, when the system is in a chaotic regime (specifically, at thermal equilibrium), ergodicity is essentially verified, and the Pdfs of the sums appear to be Gaussians, consistently with the standard CLT. When the system is, instead, only weakly chaotic (specifically, along longstanding metastable Quasi-Stationary States), nonergodicity (i.e., discrepant ensemble and time averages) is observed, and robust $q$-Gaussian attractors emerge, consistently with recently proved generalizations of the CLT.
Introduction. -During recent years there has been increasing interest in generalizations of the Central Limit Theorem (CLT). This theorem -so called because of its central position in probability theory -has ubiquitous and important applications in several fields. It essentially states that a (conveniently scaled) sum of $n \to \infty$ independent (or nearly independent) random variables with finite variance has a Gaussian distribution. Understandably, this theorem is not applicable to those complex systems where long-range correlations are the rule, such as those addressed by nonextensive statistical mechanics [1,2]. Therefore, several papers [3][4][5][6][7][8][9][10] have recently discussed extensions of the CLT and their corresponding attractors. In this paper, following [5,6], we present several numerical simulations for a long-range Hamiltonian system, namely the Hamiltonian Mean Field (HMF) model. This model is a paradigmatic one for classical Hamiltonian systems with long-range interactions and has been intensively studied in the last decade (see, for example, [6,[12][13][14][15][16][17][18][19][20][21][22], and references therein). In [5] it was shown that the probability density of rescaled sums of iterates of deterministic dynamical systems (e.g., the logistic map) at the edge of chaos (where the Lyapunov exponent vanishes) violates the CLT. Here we study rescaled sums of velocities along deterministic trajectories in the HMF model. It is well known that, in this model, a wide class of out-of-equilibrium initial conditions induce a violent relaxation followed by a metastable regime characterized by nearly vanishing (strictly vanishing in the thermodynamic limit) Lyapunov exponents and glassy dynamics [15][16][17]. We show that the correlations and nonergodicity created along these Quasi-Stationary States (QSS) can be so strong that, when summing the velocities along the deterministic trajectories of single rotors at fixed intervals of time, the standard CLT is no longer applicable. In fact, along the QSS, q-Gaussian Pdfs emerge as attractors instead of simple Gaussian Pdfs, consistently with the recently advanced q-generalized CLT [4,5,9], and ensemble averages differ from time averages.
Numerical simulations. -The HMF model describes a system of $N$ fully coupled classical inertial XY spins (rotors) $\vec{s}_i = (\cos\theta_i, \sin\theta_i)$, $i = 1, ..., N$, with unit modulus and mass [12,13]. These spins can also be thought of as particles rotating on the unit circle. The Hamiltonian is given by
$$H = \sum_{i=1}^{N} \frac{p_i^2}{2} + \frac{1}{2N} \sum_{i,j=1}^{N} \left[1 - \cos(\theta_i - \theta_j)\right], \qquad (1)$$
where $\theta_i$ ($0 < \theta_i \le 2\pi$) is the angle and $p_i$ the conjugate variable representing the rotational velocity of spin $i$. The equilibrium solution of the model in the canonical ensemble predicts a second-order phase transition from a high-temperature paramagnetic phase to a low-temperature ferromagnetic one [12]. The critical temperature is $T_c = 0.5$ and corresponds to a critical energy per particle $U_c = E_c/N = 0.75$. The order parameter of this phase transition is the modulus of the average magnetization per spin,
$$M = \frac{1}{N} \left| \sum_{i=1}^{N} \vec{s}_i \right|.$$
Above $T_c$, the spins point in different directions and $M \sim 0$. Below $T_c$, most spins are aligned (the rotators are trapped in a single cluster) and $M \neq 0$. The out-of-equilibrium dynamics of the model is also very interesting. In a range of energy densities $U \in [0.5, 0.75]$, special initial conditions called water-bag (characterized by initial magnetization $M_0 = 1$ and a uniform distribution of the momenta) drive the system, after a violent relaxation, towards metastable QSS. The latter slowly decay towards equilibrium with a lifetime that diverges as a power of the system size $N$ [14][15][16].
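The dynamics just described can be integrated with a simple symplectic scheme. Below is a minimal sketch (kick-drift-kick steps for the equations of motion that follow from Eq. (1), with water-bag M1 initial conditions); $N$, $U$, the time step, and the duration are illustrative, not those of the production runs:

```python
import numpy as np

def force(theta):
    """Mean-field force from Eq. (1): dp_i/dt = -Mx sin(theta_i) + My cos(theta_i)."""
    Mx, My = np.cos(theta).mean(), np.sin(theta).mean()
    return -Mx * np.sin(theta) + My * np.cos(theta)

def leapfrog(theta, p, dt, nsteps):
    for _ in range(nsteps):
        p = p + 0.5 * dt * force(theta)
        theta = (theta + dt * p) % (2 * np.pi)
        p = p + 0.5 * dt * force(theta)
    return theta, p

rng = np.random.default_rng(0)
N, U = 1000, 0.69
theta = np.zeros(N)              # M0 = 1: all spins aligned, potential energy zero
pmax = np.sqrt(6 * U)            # uniform p in [-pmax, pmax] gives <p^2>/2 = U
p = rng.uniform(-pmax, pmax, N)
p -= p.mean()                    # zero total momentum (tiny shift in U, fine here)

theta, p = leapfrog(theta, p, dt=0.1, nsteps=1000)
print("T = 2<K>/N =", (p**2).mean())
```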
In this section we simulate the dynamical evolution of several HMF systems with different sizes and at different energy densities, in order to explore their behavior either inside or outside the QSS regime. For each of them, following the prescription of the CLT, we construct probability density functions of quantities expressed as finite sums of stochastic variables. In this case, following the procedure adopted in Ref. [5] for the logistic map, we select these variables along the deterministic time evolutions of the $N$ rotors. More formally, we study the Pdf of the quantity $y$ defined as
$$y_j = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left[ p_j(i) - \langle p_j \rangle \right], \qquad (2)$$
where $p_j(i)$, with $i = 1, 2, ..., n$, are the velocities of the $j$th rotor taken at fixed intervals of time $\delta$ along the same trajectory. The latter are obtained by integrating the HMF equations of motion (see [15] for details about these equations and the integration algorithm adopted). The quantity $\langle p_j \rangle = (1/n) \sum_{i=1}^{n} p_j(i)$ is the average of the $p_j(i)$'s over the single trajectory. The product $\delta \times n$ gives the total simulation time. Note that the variables $y$ are proportional to the time averages of the velocities along the single-rotor trajectories. In the following we will distinguish this kind of average, i.e., the time average, from the standard ensemble average, where the average of the velocities of the $N$ rotators is calculated at a given fixed time and over many different realizations of the dynamics. The latter can also be obtained from Eq. (2) by considering the $y$ variables with $n = 1$ and $\langle p_j \rangle = 0$. In general, although the standard CLT predicts a Gaussian shape for the sum of $n$ independent stochastic variables strictly when $n \to \infty$, in practice a finite sum converges quite soon to the Gaussian shape and this, in the absence of correlations, is certainly true at least for the central part of the distribution [24]. Typically we will use in this section a sum of $n = 50$ values of the velocities along the deterministic trajectories for each of the $N$ rotors of the HMF system, though larger values of $n$ were also considered.
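In code, the construction of Eq. (2) takes a few lines. This minimal sketch assumes a record p_hist of shape (n, N) holding the velocities sampled every $\delta$ time units, e.g. from an integrator like the one above:

```python
import numpy as np

def y_variables(p_hist):
    """Eq. (2): rescaled, trajectory-centered time sums of velocities.
    p_hist[i, j] is the velocity of rotor j at the i-th sampling time."""
    n = p_hist.shape[0]
    y = (p_hist - p_hist.mean(axis=0)).sum(axis=0) / np.sqrt(n)
    # normalize to zero mean and unit variance before comparing with (q-)Gaussians
    return (y - y.mean()) / y.std()

# sanity check with uncorrelated Gaussian velocities: the Pdf of y is Gaussian
rng = np.random.default_rng(1)
y = y_variables(rng.normal(size=(50, 10000)))   # n = 50 sums, N = 10000 rotors
print(y.mean(), y.std())                        # ~0 and ~1 by construction
```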
In the following we show that, if the correlations among velocities are strong enough and the system is weakly chaotic, the CLT predictions are not verified and, consistently with recent generalizations of the CLT, q-Gaussians appear [3][4][5]. The latter are a generalization of Gaussians which emerge in the context of nonextensive statistical mechanics [1,2] and are defined as
$$G_q(x) = A \left[ 1 - (1 - q)\, \beta x^2 \right]^{\frac{1}{1-q}},$$
with $q$ the so-called entropic index (for $q = 1$ one recovers the usual Gaussian), $\beta$ another suitable parameter (characterizing the width of the distribution), and $A$ a normalization constant (see also Ref. [10] for a simple and general way to generate them). In particular we will show in this section that: (i) at equilibrium, when correlations are weak and the system is strongly chaotic (hence ergodic), the standard CLT is verified, and the time average coincides with the ensemble average (both corresponding Pdfs are Gaussians, either in the limit $n \to \infty$ or $\delta \to \infty$); (ii) in the QSS regime, where velocities are strongly correlated and the system is weakly chaotic and nonergodic, the standard CLT is no longer applicable, and q-Gaussian attractors replace the Gaussian ones; in this regime ensemble averages do not agree with time averages.
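For reference, a minimal implementation of this q-Gaussian shape, with the normalization constant $A$ computed numerically; $\beta = 1$ is an arbitrary width, and $q = 1.65$ matches one of the QSS fits below:

```python
import numpy as np
from scipy.integrate import quad

def q_gaussian(x, q, beta):
    """Unnormalized q-Gaussian; reduces to exp(-beta x^2) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-8:
        return np.exp(-beta * x**2)
    base = np.maximum(1.0 - (1.0 - q) * beta * x**2, 0.0)
    return base ** (1.0 / (1.0 - q))

def normalized_q_gaussian(x, q, beta):
    A, _ = quad(lambda t: float(q_gaussian(t, q, beta)), -np.inf, np.inf)
    return q_gaussian(x, q, beta) / A

x = np.linspace(-5.0, 5.0, 11)
print(normalized_q_gaussian(x, q=1.65, beta=1.0))  # power-law (fat) tails for q > 1
```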
For all the present simulations, water-bag initial conditions with initial magnetization $M_0 = 1$, usually referred to as M1, will be used. In general, several different realizations of the initial conditions will be performed also for the time-average Pdfs, but only in order to have good statistics for small values of $N$ (for $N = 50000$, on the contrary, only one realization has been used: see fig. 7(b)). Finally, to allow a correct comparison with standard Gaussians (represented as dashed lines in all the figures) and q-Gaussians (represented as full lines), the Pdf curves were always normalized to unit area and unit variance, by subtracting from the $y$'s their average $\langle y \rangle$ and dividing by the corresponding standard deviation $\sigma$ (hence, the traditional $\sqrt{n}$ scaling adopted in Eq. (2) is in fact irrelevant).
The case N=100. We start the discussion of the numerical simulations for the HMF model by considering a size $N = 100$ and two different energy densities, $U = 0.4$ and $U = 0.69$. In the first case no QSS exist, while in the second case QSS characterize the out-of-equilibrium dynamics and the correlations formed during the first part of the dynamics decay slowly while the system relaxes towards equilibrium [15,16]. With $N = 100$ this relaxation takes a reasonable number of time steps, so one can easily study the equilibrium regime as well. The situation is illustrated in fig. 1, where we show the time evolution of the temperature (calculated as twice the average kinetic energy per particle) for the two energy densities considered, starting from $M_0 = 1$ initial conditions. As expected, QSS are clearly visible only in the case $U = 0.69$, although a small transient regime exists also for the case $U = 0.4$ [15]. N=100 and U=0.4. Here we discuss numerical simulations for the HMF model with size $N = 100$ and $U = 0.4$. In this case it has been shown in the past that the equilibrium regime is reached quite fast and is characterized by very chaotic dynamics [12,13].
In fig. 2 a transient time of 40000 units was allowed before the calculations, so that equilibrium is fully reached (see fig. 1). In (a) we consider the ensemble average of the velocities, i.e., the $y$ variables defined as in Eq. (2) with $n = 1$, at $t = 40000$, taking 1000 different realizations of the initial conditions (events). The Pdf compares very well with the Gaussian curve (dashed line), as expected at equilibrium. On the other hand, we consider in (b), (c) and (d) the Pdfs of the variable $y$ with $n = 50$ and different time intervals $\delta$, over an increasing simulation time at equilibrium. As previously explained, this procedure corresponds to performing a time average along the trajectory for all the rotors of the system. In this case only the central part of the curve exhibits a Gaussian shape, while the Pdfs have long fat tails which can be very well reproduced by q-Gaussians (full lines). If one increases the time interval $\delta$, going from $\delta = 100$ (b) to $\delta = 200$ (c) and finally to $\delta = 1000$ (d), the tails tend to disappear, the entropic index $q$ of the q-Gaussians decreases from $q = 1.45 \pm 0.05$ towards $q = 1$, and the Pdf tends to the standard Gaussian. This means that, as expected, the summed velocities are less and less correlated as $\delta$ increases (see also Ref. [5]) and therefore the assumptions of the CLT are satisfied, as well as its prediction. Notice that $n = 50$ terms and a time interval $\delta = 1000$ are sufficiently large to reach a Gaussian-shaped Pdf. This situation is reminiscent of similar observations in the analysis of returns in financial markets [24], and in turbulence [25].
N=100 and U=0.69. Let us now consider numerical simulations for the HMF model with size $N = 100$ and $U = 0.69$. In this case a QSS regime exists, but its characteristic lifetime is quite short, since the noise induced by the finite size drives the system towards equilibration rapidly. However, strong correlations, created by the M1 initial conditions, exist, and their decay is slower than in the case $U = 0.4$. In fig. 3 we show in (a) the Pdf of the velocities calculated at $t = 100$ (i.e., at the beginning of the QSS regime). An ensemble average over 1000 realizations was considered. The Pdf shows a strange shape which remains constant in the QSS, as already observed in the past [14], and which differs from both the Gaussian and the q-Gaussian curves. On the other hand, we show in (b) the Pdf of the variable $y$ with $n = 50$ and $\delta = 40$, i.e., calculated over a total of 2000 time steps after a transient of 100 units, in order to stay inside the QSS temperature plateau (see fig. 1). In this case the system is weakly chaotic and nonergodic [15,16], and the numerical Pdf is reproduced very well by a q-Gaussian with $q = 1.65 \pm 0.05$.
Although in this case we used different initial conditions also for the time averages, these results provide a first indication that ensemble and time averages are inequivalent in the QSS regime. Note that, due to the shortness of the QSS plateau, for $N = 100$ it is not possible to use larger values of $\delta$ or $n$ in the numerical calculation of the $y$'s.
In fig. 4 we repeat the previous simulations for $N = 100$ and $U = 0.69$, but adopting a transient time of 40000 steps, in order to study the behavior of the system after the QSS regime. The ensemble average Pdf (over 1000 realizations) of the single-rotor velocities at time $t = 40000$ is shown in (a) and indicates that equilibrium seems to have been reached. In fact, the agreement with the standard Gaussian is almost perfect down to $10^{-4}$. In the other panels we plot the time-average Pdfs of the variable $y$ with $n = 50$ for different time intervals $\delta$, as done for $U = 0.4$: $\delta = 100$ in (b), $\delta = 1000$ in (c) and $\delta = 2000$ in (d). Again a strong dependence of the Pdf shapes on the time interval $\delta$ is evident. Initially (b) the Pdf is well fitted by a q-Gaussian with $q = 1.65 \pm 0.05$; however, increasing $\delta$, in (c) and (d), the central part of the Pdf becomes Gaussian while tails are still present and can be well fitted by q-Gaussians with values of $q$ that tend towards unity. However, at variance with the $U = 0.4$ case, here not even a time interval $\delta = 2000$ is sufficient to reach a completely Gaussian-shaped Pdf down to $10^{-4}$: evidently the strong correlations characterizing the QSS regime decay very slowly even after it, making the equilibrium shown by the ensemble average Pdf in (a) only apparent. This means that full ergodicity, i.e., full equivalence between ensemble and time averages, is reached, in this case, only asymptotically.
The last statements are confirmed by panels (e) and (f) of fig. 4, where the effect of increasing the number $n$ of summed velocities, keeping the value of $\delta$ fixed, has been investigated. More precisely, $\delta = 100$, with $n = 5000$ in (e) and $n = 50000$ in (f). As expected, increasing $n$ brings the Pdf closer to the Gaussian, essentially because the total time over which the sum is considered increases (for $n = 50000$ we cover a simulation time of $5 \times 10^6$) and therefore correlations become asymptotically weaker and weaker, thus finally satisfying the prediction of the standard CLT. In order to study in more detail the ensemble-time inequivalence along the QSS regime, in the next subsection we increase the system size and discuss numerical results for $N = 5000$ and $N = 50000$.
N=5000 and N=50000 at U=0.69. In fig. 5 we show the time evolution of the temperature for the cases $N = 5000$ and $N = 50000$ at $U = 0.69$, always starting (as usual) from the M1 initial conditions. It is evident that, for both systems, the length of the QSS plateau is much greater than for $N = 100$.
We first discuss numerical simulations performed inside the QSS for $N = 5000$ and $U = 0.69$. In fig. 6 we show in (a) the ensemble average Pdf of the velocities calculated over 1000 realizations at $t = 100$, i.e., at the beginning of the QSS regime. Its shape, constant along the entire QSS, is clearly not Gaussian and looks similar to that of fig. 3(a). In panels (b)-(d) we show the effect of increasing the number $n$ of velocity terms in the $y$ sum on the time-average Pdfs, calculated using a fixed value of $\delta = 100$. An average over 200 different realizations of the initial conditions was also considered in order to have good statistics. In this case only for $n = 1000$ does a q-Gaussian, with $q = 1.45 \pm 0.05$, emerge. This is most likely due not to the effective number $n$ used but, consistently with fig. 6, to the fact that choosing a large $n$ averages over a larger interval of time and thus samples in a more appropriate way the entire QSS regime. In any case, the observed behavior goes in the opposite direction to the prescriptions of the standard CLT and to the trend shown in panels (e)-(f) of fig. 4. Indeed, increasing $n$, the Pdf tails do not vanish but become more and more evident, thus supporting even further the claim about the existence of a non-Gaussian attractor for the nonergodic QSS regime of the HMF model. Moreover, the results of fig. 6 confirm the robustness of the q-Gaussian shape along the entire QSS plateau and the inequivalence between ensemble and time averages in the metastable regime.
Let us now definitively demonstrate this inequivalence by considering the case $N = 50000$ at $U = 0.69$. In fig. 7(a) we plot the ensemble average Pdf of the velocities calculated (over 100 different realizations) at $t = 200$, i.e., at the beginning of the QSS regime, and after a very long transient, at $t = 250000$ (full circles). In panel (b) we plot the time-average Pdf of the normalized variable $y$ with $n = 5000$ and $\delta = 100$, after a transient of 200 time units and over a simulation time of 500000 units along the QSS. It is important to stress that in this case only one single realization of the initial conditions has been performed, realizing in this way a pure time average. The shape of the time-average Pdf in (b) turns out to be again a robust q-Gaussian, in contrast with the ensemble average Pdf of fig. 7(a) (which is also very robust over the whole plateau), thus confirming definitively the inequivalence between the two kinds of averages and the existence of a q-Gaussian attractor in the QSS regime of the HMF model. These results indicate that standard statistical mechanics, based on the ergodic hypothesis, cannot be applied in this case, while a generalized version, like q-statistics [1,2], is likely more suitable [16].
Conclusions. -The numerical simulations presented in this paper strongly indicate that the dynamical correlations and ergodicity breaking induced in the HMF model by the initial out-of-equilibrium violent relaxation are present along the entire QSS metastable regime and decay very slowly even after it. In particular, considering finite sums of $n$ correlated variables (velocities in this case), selected with a constant time interval $\delta$ along single-rotor trajectories, allowed us to study this phenomenon in detail. Indeed, we showed numerically that, in the weakly chaotic QSS regime, (i) ensemble averages and time averages of the velocities are inequivalent, hence the ergodic hypothesis is violated, (ii) the standard CLT is violated, and (iii) robust q-Gaussian attractors emerge. On the contrary, when no QSS exist, or at very large times after equilibration, i.e., when the system is fully chaotic and ergodicity has been restored, the ensemble average of the velocities turns out to be equivalent to the time average and one observes convergence towards the standard Gaussian attractor. In this case, the predictions of the CLT are satisfied, even though we have only considered a finite sum of stochastic variables. How fast this happens depends on the size $N$, on the number $n$ of terms summed in the $y$ variables, and on the time interval $\delta$ considered. These results are consistent with the recent q-generalized forms of the CLT discussed in the literature [3][4][5][6][9], and pose severe questions for the often adopted procedure of using ensemble averages instead of time averages. Nonergodicity in coupled many-particle systems goes back to the famous FPU experiment [26], but in our case it is due to the long-range nature of the interaction. More recently, nonergodicity was found in deterministic iterative systems exhibiting subdiffusion [11], but also in real experiments on shear flows, with results that were fitted with Lorentzians, i.e., q-Gaussians with $q = 2$ [23]. The whole scenario is reminiscent of that found for the leptokurtic Pdf of returns in financial markets [24], or in turbulence [25], among many other systems, and could probably explain why q-Gaussians appear to be ubiquitous in complex systems. Finally, we would like to add that, although it is certainly nontrivial to prove analytically whether the attractor in the nonergodic QSS regime of the HMF model is precisely a q-Gaussian or not (analytical results, as well as numerical dangers, have been recently illustrated in Ref. [8] for various models), our numerical simulations unambiguously provide a very strong indication of the existence of a robust q-Gaussian attractor in the case considered. This opens new ways to the possible application of q-generalized statistics in long-range Hamiltonian systems, which will be explored in future papers. * * * We thank Marcello Iacono Manno for many technical discussions and help in the preparation of the scripts to run our codes on the GRID platform. The numerical calculations here presented were done within the TRIGRID project. A.P. and A.R. acknowledge financial support from the PRIN05-MIUR project "Dynamics and Thermodynamics of Systems with Long-Range Interactions". C.T. acknowledges financial support from the Brazilian Agencies Pronex/MCT, CNPq and Faperj.
Notating disfluencies and temporal deviations in music and arrhythmia
Expressive music performance and cardiac arrhythmia can be viewed as deformations of, or deviations from, an underlying pulse stream. I propose that the results of these pulse displacements can be treated as actual rhythms and represented accurately via a literal application of common music notation, which encodes proportional relations among duration categories, and figural and metric groupings. I apply the theory to recorded music containing extreme timing deviations and to electrocardiographic (ECG) recordings of cardiac arrhythmias. The rhythm transcriptions are based on rigorous computer-assisted quantitative measurements of onset timings and durations. The root-mean-square error ranges for the rhythm transcriptions were (19.1, 87.4) ms for the music samples and (24.8, 53.0) ms for the arrhythmia examples. For the performed music, the representation makes concrete the gap between the score and performance. For the arrhythmia ECGs, the transcriptions show rhythmic patterns evolving through time, progressions which are obscured by predominant individual beat morphology- and frequency-based representations. To make tangible the similarities between cardiac and music rhythms, I match the heart rhythms to music with similar rhythms to form assemblage pieces. The use of music notation leads to representations that enable formal comparisons and automated as well as human-readable analysis of the time structures of performed music and of arrhythmia ECG sequences beyond what is currently possible.
Introduction
Expressive music performance and cardiac arrhythmia share many common traits. An important one is that each can be viewed as the result of deviations from, or deformations of, an underlying pulse. Their origins suggest that the rhythms of such data streams can be encoded effectively using common music notation. I propose to treat these time displacements as bona fide rhythms and to represent them via a literal application of music notation. Music notation encodes abstract concepts of proportional relations among duration categories, and higher-level organizing constructs such as figural and metric groupings, and is typically used for abstract or conceptual, rather than literal, representations of time. The literal application (and consequent reading) of common music notation is counter to its common use by composers or performers. I co-opt the existing notation system to create symbolic representations that can enable formal comparisons and computer and human analysis of time sequences. This work was inspired by a transcription of sight-reading wherein the rhythm notation captured both musical and cognitive disfluencies. The transcription, created by ear with the aid of the original score and audio and Musical Instrument Digital Interface (MIDI) recordings, demonstrated the possibility of notating serendipitous rhythms. In this article, I will use computer tools to rigorously mark onset times and obtain quantitative measurements of durations. The tempi and quantizations are chosen to maximize transcription precision, and the accuracies of the rhythm transcriptions are evaluated quantitatively against the original timings. Figural and metric groupings and any metric modulations are chosen manually, although this too can be automated, as will be discussed.
The technique is first illustrated using contrasting music examples that contain extreme timing deviations introduced in performance. The results highlight the gap between the score and performance, made all the more obvious by the use of the same notation to communicate the difference between the rhythms. Next, I apply the same transcription process to electrocardiographic (ECG) recordings of cardiac arrhythmias. While such rhythm transcription can be applied to any arrhythmia, I demonstrate the method on three excerpts of ECG recordings of atrial fibrillation. The notation shows clearly the rhythmic groupings and patterns such as repetitions and transformations through time. These are patterns normally obscured in the predominant ECG analysis approaches, which mainly consider individual beat morphology or features aggregated over windows of time.
Similarities between cardiac and music rhythms are further made tangible by matching the heart rhythms to music with similar rhythms to form musical assemblages. I apply the notation and retrieval-and-assemblage process to the atrial fibrillation ECG recordings, leveraging the innate rhythmic similarities to music with mixed meters, a siciliane, and a tango to generate the mirror pieces.
The precise and symbolic representation of performers' idiosyncratic timings and of atrial fibrillation's capricious rhythms points to a host of new computational analysis approaches for characterizing and comparing such time sequences. Once a formal representation for time structure exists in symbolic form, any number of encoding schemes can be employed to transform the symbols to machinereadable formats. These encoding schemes facilitate fast searches for specific rhythms and repeated patterns, allowing for data summarization and comparisons between sequences. Further analyses of the transcribed rhythms could reveal hierarchical structure in the time series data. Quantitative techniques can be devised to measure the distance, for example, between transcriptions of different performances of the same work. Finally, the symbolic representation lends itself to large-scale pattern search and categorization, for inferring performance style, characterizing arrhythmia subtypes, or predicting diagnostic outcomes.
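As one concrete example of such an encoding scheme (the duration-to-symbol alphabet here is an arbitrary illustrative choice, not one proposed in this article), transcribed rhythms become strings, and repeated patterns can be found by substring counting:

```python
from collections import Counter

# map duration categories to single characters so rhythms become strings
SYMBOLS = {"eighth": "e", "quarter": "q", "dotted-quarter": "j", "half": "h"}

def encode(durations):
    return "".join(SYMBOLS[d] for d in durations)

def repeated_patterns(code, length, min_count=2):
    """All substrings of a given length occurring at least min_count times."""
    counts = Counter(code[i:i + length] for i in range(len(code) - length + 1))
    return {p: c for p, c in counts.items() if c >= min_count}

rhythm = ["quarter", "eighth", "eighth", "quarter", "eighth", "eighth",
          "dotted-quarter", "eighth", "half"]
code = encode(rhythm)
print(code)                        # 'qeeqeejeh'
print(repeated_patterns(code, 3))  # {'qee': 2}
```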
The remainder of the article is organized as follows: first I review some related work on transcription, music performance, and mappings of cardiac information to music; next, I briefly review the sight-reading transcription that inspired this work; then, I present the transcriptions of the three performed music cases, followed by the three arrhythmia ECG transcriptions, and conclusions and discussions. An appendix documents the evaluations of the various transcriptions.
Transcription
The practice of transcription has an illustrious history. Olivier Messiaen's (1956) use of birdsong in his compositions, such as Oiseaux Exotique, Vingt Regards, and others, is well known. The tradition of transcribing birdsong predates Messiaen by many centuries. Hold (1970) gives a detailed account of composers' and naturalists' attempts at music representations of birdsong from 1240 onwards-these include staff, orthochronic, and graph notation, and the sound spectrograph.
Transcription has been especially useful for the study and reproduction of extemporaneous performances. Early examples can be found in Vaclav Pichl's transcriptions of Luigi Marchesi's (1792) elaborate note embellishments in four performances of the aria "Cara negl'occhi tuoi" and the rondo "Mi dà consiglio" in Nicola Antonio Zingarelli's opera Pirrore d'Epiro, as described in Berger (2016). The Charlie Parker Omnibook (1978), a collection of transcriptions of the jazz saxophonist's compositions and improvised solos, remains a staple in jazz studies. Historic performances like Keith Jarrett's Köln Concert (1975) have been painstakingly notated for re-performance. Other transcriptions include those of Coltrane (1999), Duke Ellington (1971), Bill Evans (1967), and Ferdinand "Jelly Roll" Morton (1986), as reviewed in Tucker (1982).
Ethnomusicologists and composers transcribe field recordings of folk songs for analysis and as seed material for new compositions, respectively. Béla Bartók (1942) carefully converted to music notation the melodies of recorded songs in the Milman Parry Collection for re-use in compositions; see the discussion in Frigyesi (1985).
More recently, the push toward an empirical study of vernacular music has led to the transcription of traditional Chinese clapper music and rap music for systematic analysis; see Sborgi Lawson (2012) and Ohriner (2016), respectively.
This work was motivated in part by Practicing Haydn, in which a sight-reading of a Haydn sonata movement is transformed into a performable score via transcription, complete with all the repetitions, pauses, starts, and stops. The transcription of sight-reading raises the following question: If the blundering mishaps and accidental discontinuities of a first encounter with a score can be recorded using music notation, what else might be amenable to such treatment? Here, we show that expressive performance and arrhythmia sequences can also be subject to such transcription processes, although the goal is not to produce a performable score, even though that is a desirable side effect, and the transcription will use a computer-assisted process that aims to minimize discrepancies between original and transcribed sequences.
Related to transcription of expressive performance, there exists a long tradition of scholarly work on the representation of intonation, dynamics, and pacing in expressive speech, dating back to Steele (1775). Recent forays into the transcription of speech melody and timing have turned to the direct use of music notation in a literal fashion. Leveraging commonalities between expressive speech and music, Simões and Meireles (2016) and Meireles, Simões, Ribeiro, and de Medeiros (2017) explored the use of music notation to represent the melody and rhythms of spoken language. In this work, the transcriptions are treated as literal expressions of the spoken pitches and durations. They are uniformly presented in 4/4 time, without regard to meter, although the authors propose that they will obtain natural metrical groupings from the stresses in speech in future work.
This article applies music transcription to unconventional contexts of expressive performance and cardiac arrhythmia, with the goal of recording these temporal processes and experiences, which are not normally preserved in writing. Special attention will be given to precise representation of timing and durations, and to the figural and metrical groupings that may be inferred from the patterns.
Representing performed music
The first application addresses the representation of performed music. The increased emphasis on music as performance rather than music as writing or notation, combined with the growth of computer tools for analysis of performed music, brings to a head issues of representing performed music. Anyone who has tried to generate a sound file from a digital score soon discovers that a performance rendered from the digital score is vastly different from that realized in practice. Thus, the information encoded in the score is insufficient for creating or re-creating a convincing rendition of the music. This is because, as Frigyesi (1993, pp. 60-62) points out, notation was not conceived to be transcription; it represents only an abstraction of the temporal experience, and not the actual rhythms of a performance. It is worth noting that the view of transcription as literal and notation as abstract is not universal. For example, Busoni considers the transfer of a piece from one instrument to another, the transfer of music concept to notation (composition), and the transfer from notation to performance to be different forms of transcription, see Knyt (2010, p. 111). The transcriptions in this article will use notation literally to encode time and other structures.
Music as performed differs from the information denoted on the page for many reasons. In performed speech, the measure of syllables and words in a text falls far short of the timings of the delivery. In music playing that aspires to the rhetorical style of spoken language à la Adolph Kullak, see Cook (2013, p. 74), performed rhythms privilege the cadences and pacing of speech over the music script. Owing to the relative nature of loudness and other constraints, notated dynamics are also frequently not what they seem: Kosta, Bandtlow, and Chew (2014) showed that notes marked pianissimo can sound objectively louder than notes marked fortissimo, depending on context. Furthermore, music performance practice often requires that performers deviate from the written score in prescribed ways. For example, in the French tradition of notes inégales, notes of equal duration are deliberately lengthened or shortened in performance, see Houle (2000, p. 86). In many folk traditions, notated pitches may be lavishly ornamented in practice, as shown in a study of Yang, Chew, and Rajab (2013), which compares erhu and violin performances of a Chinese folk piece.
However, what if actual performed rhythms were transcribed literally and precisely using conventional notation so as to make the nuances of performance concrete through writing? The encoding would still be incomplete as the symbolic representation would not capture fine details, such as the exact shapes of note articulations, transitions, and within-note embellishments. Some degree of approximation of the precise timings will be inevitable to ensure the readability of the transcription; even if the transcription could provide notation to millisecond precision, there are limits to what the human ear can distinguish as two separate time instants, see Bartlette, Headlam, Bocko, and Velikic (2006), Chafe, Caceres, and Gurevich (2010), and Chew et al. (2004). Nonetheless, a faithful transcription would give a much more accurate (literal) representation of the temporal experience, and would allow for direct comparison with the original score as a measure of the distance between the abstract representation and actual experience.
Notation of and for performance has taken many forms. Philip (1992) has explored the notation of nuances in expressive timing in less quantitative ways. Moving away from conventional music notation, Bamberger's (2000) Impromptu software uses a number of representations, such as pitch contours, rhythm bars, and piano roll notation to allow users to manipulate properties of tune blocks. Many other graphical notations exist, including those of Farbood (2004) and Hope and Vickery (2015). As an intermediary between conventional notation and digital sound, OpenMusic (Bresson & Assayag, 2011), a programming environment designed for composition and analysis, allows notes to be positioned on staff lines in continuous locations indicating the times at which they sound, with impact on the readability of the score. A goal here will be to explore the limits of using conventional music notation to represent continuous time, bearing in mind that not all locations on the continuous time axis are equally likely, given the pulse-based origin of the input.
The first set of examples takes on this challenge to transcribe the actual timings of recorded expressive performances. In so doing, it provides, in a sense, a written record of the performers' creative work. The three cases span music from a variety of Western music traditions: the Vienna Philharmonic Orchestra's performances of Johann Strauss II's The Blue Danube, a traditional New Year's concert encore piece; Maria Callas' rendition of Giacomo Puccini's operatic aria "O Mio Babbino Caro," and Marilyn Monroe's sultry rendition of "Happy Birthday" on the occasion of John F. Kennedy's 45th birthday.
The body of scientific research on performance practice has expanded rapidly, aided by recent software tools, such as Sonic Visualiser by Cannam, Landone, and Sandler (2010). It is now possible for any motivated person with modest computer literacy to be able to extract beat and loudness information from a recorded performance. Because of the visual and compact nature of information portrayed on graphs, the scientific study of music performance and music expression predominantly uses graphs of timing, tempo, or loudness data extracted from audio recordings-see, for example, Todd (1992), Cheng and Chew (2008), and Chew (2016). There has scarcely been any attempt to notate these captured rhythms, owing partly to the gulf between event-based (notation) and signal-based (audio) representations, and partly to the difficulty of representing free rhythms, which will be discussed in upcoming paragraphs.
An exception can be found in the work of Beaudoin and Senn, described in Beaudoin and Kania (2012), in which the exact timings and intensity levels of Martha Argerich's recording of Chopin's "Prelude in E minor," Op. 28/4, are transcribed in standard notation as the framework for a series of pieces based on transformations on Chopin's original material called Études d'un Prélude. Another is the work of Grønli, Child, and Chew (2013), in which Chew's sight-reading of the finale movement of Haydn's Piano Sonata in E♭, Hob XVI:45 is meticulously transcribed by Child for re-performance. The resulting composition, Practicing Haydn, was created for and premiered at Grønli's solo art show at the grand opening of the Kunsthall Stavanger, and at other venues.
Musica humana
The second application lies in the domain of cardiac arrhythmia. The belief that music is inherent in the beating of the pulse was widely held in the Middle Ages. This was a specific instance of the more general idea that music is inherent in the rhythms of the human body-Boethius' Musica humana, which is complementary to Musica instrumentalis (music of sounding instruments) and Musica universalis (music of the spheres), see Chamberlain (1970). Siraisi (1975) provides a rich survey of academic physicians' detailed writings on the nature of the music of pulse in the 14th and 15th centuries. Since then, arts and medicine have diverged and developed along separate paths. Today, with the parallel development of annotation and visualization tools like WFDB by Silva and Moody (2014) and LightWAVE by Moody (2013) for ECG data, there is ample evidence that the human pulse would be amenable to modern musical treatment and analysis, as the fields are poised to collide again.
As a step toward facilitating this reconnection, the second set of examples applies transcription to represent, using conventional music notation, the rhythms of arrhythmia. Music representation of cardiac information is not new, although it has been used primarily to describe heart sounds. René Laennec (1826), the inventor of the stethoscope, used mainly onomatopoeic words to depict sounds he heard in the process of auscultation; however, on one occasion in 1824, he resorted to music notation to augment his word description of a venous hum. Segall (1962) points to this as the first symbolic representation of the sound of a heart murmur, with many more graphical notations to follow. More recently, Field (2010) used music notation to systematically transcribe signature heart sounds and murmur patterns in the teaching of cardiac auscultation to medical students to aid the diagnosis of heart valve disorders.
I will also use music notation to represent heart rhythms, but now focusing on the transcription of recorded rhythmic sequences of abnormal electrical activity in the heart. Conditions resulting from abnormal electrical conduction differ from those due to valvular disorders; the input will also be the ECG trace instead of sound.
Conventional ways to represent ECG data tend to focus on individual beats: their morphology (features of the waveform) or categorical labels, such as N (normal) and V (ventricular activity); or, frequency-domain characteristics like heart rate variability that aggregate features over larger windows of time. Counter to this trend, Bettermann, Amponsah, Cysarz, and van Leeuwen (1999) used a binary symbol sequence from African music theory to represent elementary rhythm patterns in heart period tachograms. Syed, Stultz, Kellis, Indyk, and Guttag (2010) consider motific patterns based on short strings of the categorical labels, and Qiu, Li, Hong, and Li (2016) have studied the semantic structure of symbols labeling parts of the waveform.
In the context of the transcription exercise, the notated rhythms are next matched to existing music with similar rhythms, and new compositions generated by collaging together appropriate parts of the selected piece. Since the chosen music already has the same or a very similar rhythmic structure, the collage gives pitch to the rhythms in ways that reinforce and make more readily perceptible the inherent time structures. If the pitch structures are in dialog with the time structures, the collage can add a layer of complementary structures. In either case, the temporal experience of arrhythmia can be made visceral through the performance of the resulting music.
Composing with rhythm templates is not new. Composer Cheryl Frances-Hoad's piano piece Stolen Rhythm (2009) takes the notated rhythms of the finale movement in Haydn's Piano Sonata in E♭, Hob XVI:45, and assigns new pitches to them. The computer program MorpheuS also takes rhythms from existing pieces and sets new notes to them in ways that preserve the repetition patterns and tonal tension profiles of the template piece, see Herremans and Chew (2017). Practicing Haydn, in effect, creates a collage by traversing Haydn's piece through a series of repetitions and pauses.
Heartbeat data have been used as a source for music composition or synthesis. The most common approach is to use heart rate variability indices, which are based on statistical aggregation over longer time spans. There is also a tendency toward direct data sonification. In the Heartsongs CD by Davids (1995), produced as part of ReyLab's Heartsongs Project, heartbeat intervals were averaged over 300 beats to remove local fluctuations and mapped to 18 notes on a diatonic scale to create a melody. Yokohama (2002) maps each heartbeat interval to MIDI notes, so that an intervallic change such as a premature beat triggers a more significant change in pitch. In Ballora, Pennycook, Ivanov, Glass, and Goldberger (2006), heart rate variability data is mapped to pitch, timbre, and pulses over a course of hours for medical diagnosis; in Orzessek and Falkner (2006), heartbeat intervals are passed through a bandpass filter and mapped to MIDI note onsets, pitch, or loudness. The Heart Chamber Orchestra (Votava & Berger, 2011) uses interpretations of its 12 musicians' heartbeats, detected through ECG monitors; relationships between them influence a real-time score that is then read and performed by the musicians from a computer screen. All but one of these studies-Yokohama (2002)-have focused on heartbeat data from non-arrhythmic hearts.
In this article, transcription serves as a means to represent the rhythms of arrhythmia using conventional music notation. Current analyses of ECG data predominantly use representations based on beat morphology and representations in the frequency domain. The music notation captures local rhythmic patterns that are lost in single-beat and frequency-based approaches. Three different excerpts, short summaries, showing a heart in different states of atrial fibrillation are chosen from a continuous 18-hour recording from a three-lead Holter monitor. The rhythms of the different states of the arrhythmia are made apparent in the extracted musical rhythms and collage pieces.
Transcription process
Conventional music notation is the representation of choice for the transcriptions in this article. The examples addressed in this article tend toward the practice of free rhythm, which is common in folk and art, religious and secular traditions alike. Free rhythm is "the rhythm of music without pulse-based periodic organization" (Clayton, 1996, p. 329). The analysis of free-rhythm music remains an open challenge, with a major hurdle being the difficulty of representing free rhythms in writing. Existing efforts to notate free rhythms typically avoid time signatures or bar lines, sometimes simply arranging note heads on a horizontal timeline so as to avoid the implications of pulse or meter in staff notation. The reason regular staff notation is adopted here is that the examples are all pulse-based; this includes both the performed music as well as the cardiac arrhythmia time series. While they generally lack periodic structure and do not possess ordinary metrical organization, owing to pauses and pattern repetitions, local grouping structures do emerge, which allow for the assignment of changing time signatures and indications of figural groupings through note beaming or phrase markings. In line with notations used in contemporary compositions, the transcription of Practicing Haydn, and indeed those for the performed durations and arrhythmia sequences, make copious use of changing meters (as in Stravinsky's compositions) and metric modulations (as in Elliott Carter's works).
A main objective in the transcription process is to minimize the difference between the recorded and the transcribed sequence. When a design goal is to minimize transcription error, there exists the possibility of making the notation enormously complex to achieve the highest possible accuracy. To counter this, an important and competing aim is to ensure that the proportional durations in the notation are readily readable by human eyes, so that the transcription could serve as a source for visual analysis or performance. In this way, the notation is suitable as input for computer analysis as well as for visual inspection. Metric modulations are kept to a minimum, and utilized only when they lead to simpler proportional durations. To satisfy the second goal, when a human performer plays the notated transcription, it should be possible to reproduce the original rhythms without the aid of a click track, as in https://vimeo.com/226516952, although clearly it would be easy to choose to deviate significantly from the score. Thus, the labyrinthine density of notation like that employed by Brian Ferneyhough as a conduit for his complex composition process is avoided in favor of simpler forms. In the case of Ferneyhough's intricate notation, designed to serve the purpose of lifting players beyond hackneyed readings of the score, some, such as Marsh (1994), have argued for simpler representations that directly reflect specific interpretations of the music. Distinct from Marsh's goals, even though a secondary constraint of the transcription process here is human readability, the transcriptions do tend to make the score more complex in order to incorporate recorded nuances.
Here, metric modulations that use irrational proportions, as in Conlon Nancarrow's Study 33, are strictly avoided. One might argue that even Nancarrow's irrational proportions can be closely approximated using conventional notation (rational durations) for playability, see Callender (2014).
Rhythmic disfluencies
The transcription of performed rhythms and the rhythms of cardiac arrhythmias draws inspiration from Practicing Haydn by Chew, Child, and Grønli (2013). Practicing Haydn originated as an idea by Grønli and Child to create a musical piece that sounds like musicians warming up and practicing before a concert. The result was a transcription of the serendipitous rhythms of a sight-reading of a Haydn sonata movement for re-performance, complete with all the repetitions, starts, and stops. The première of the piece took place concurrently at the grand opening of the Kunsthall Stavanger in Norway by Chew and at Performa13 in New York City by pianist Elaine Kang; see videos at https://kunsthallstavanger.no/en/exhibitions/practicing-haydn.
Three selections from the transcription are given in Figures 1 to 3. Each figure shows a snippet from the original Haydn score and a snippet from Child's transcription of Chew's sight-reading of the corresponding bars. The transcribed segments are inevitably longer than the original score, owing to repetitions, and to pauses and hesitations.
An interesting side effect of the exercise is that the transcription serves as a record of not only the musical but also the cognitive disfluencies. Unexpected events provide moments for pause. The transcribed sight-reading in Figure 1(b) shows a pause at the end of the second bar just before an unfamiliar turn in the 16th-note sequence; the excerpt in Figure 2(b) documents the hesitations just before the introduction of figural or directional changes; the pickup into the 2/4 bar re-starts an unexpected figure that was not fully apprehended on the first play in the preceding 3/8 bar; a similar re-start can be observed in Figure 3(b). Repetitions help refine harmonic direction: in Figure 1(b) the trill is repeated to reinforce its harmonic function; in Figure 2(b), the final tonic chord is elongated to balance the length (and emphasis) of the preceding dominant chord.
That sight-reading is associated with hesitation and fumbling is not particularly remarkable. What is interesting here is that these behaviors are clearly documented in the transcriptions. They may be obvious to a casual listener, but the fact that they show up clearly in the transcriptions means that it is possible to automate the detection process to enable large-scale analysis of disfluent or rhythmically irregular behavior.
Choreographed rhythms
When a score is interpreted by a human musician, the performed timings and durations are more often than not different, sometimes significantly so, from the notation in the score. Some of these deviations will be due to human inconsistency, but in skilled performance, the bulk of them can be ascribed to deliberate shaping of time, called rubato, either according to established convention or individual idiosyncrasy. Cook (1987) encodes rubato using note and bar durations, and percentage deviation from the norm. Repp (1992) represents rubato in melodies using eighth-note durations (longer-duration notes are subdivided equally into eighth notes) and shows that the durations frequently follow the shape of a quadratic curve. This method of representing tempo rubato persists today and can be found, for example, in the work of Spiro, Rink, and Gold (2016).
This section seeks to represent several different kinds of timing deviations in music performance. Curve fitting, where present, is done with the Matlab spline function and the precisely quantified durations transcribed to common music notation.
Viennese waltz
The Viennese waltz is a prototypical example of music in which there is systematic disparity between notated and performed rhythms. The social context and bodily movements (steps and twirls) behind this dance form are explored in McKee (2011). For the musicians performing, the three beats of a Viennese waltz are typically played unequally, normally with the first beat shortest followed by the third, and with the second beat longest, although exceptions exist. Assuming a steady pulse, this could be interpreted as the second beat being early, a deviation from its prescribed onset time, giving the impression that the third beat is late due to the resulting larger gap between the second and third beats. Figure 4 shows, using graphs and music notation, how the notated and performed rhythms differ in Johann Strauss II's The Blue Danube.
Figure 4(a) shows Strauss' original notated durations. To extract the performed durations from recorded performances, quarter-note beat onset times (in seconds), $\{o_1, o_2, \ldots, o_N\}$, where $N$ is the total number of onsets, were annotated using Sonic Visualiser (Cannam, Landone, & Sandler, 2010) and checked aurally as well as visually using the audio waveform and spectral information. Beat durations are derived from these onset times, $d_i = o_{i+1} - o_i$, and graphed in Matlab; data interpolation was done using the Matlab spline function. Figure 4(b) shows the resulting graphs generated from performances of The Blue Danube by the Vienna Philharmonic with Herbert von Karajan (1987) and with Georges Prêtre (2010). These particular recordings were chosen because they present two contrasting interpretations of The Blue Danube, which show as divergences in the graphs and the rhythm transcriptions. These differences are all the more interesting because they were created by the same orchestra, albeit 23 years apart, performing under different conductors. In the plot, Figure 4(b), arrows mark the downbeat of each bar. Note that if the music, the score shown in Figure 4(a), were performed literally, exactly as notated, the plot would show only horizontal lines, indicating that all quarter notes have identical durations. This is clearly not the case, as both lines show strong oscillations with varying amplitudes.
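Although the pipeline described above uses Matlab, the same steps are easy to sketch in code. The Python fragment below differences annotated onset times into beat durations and fits a cubic spline for plotting; the onset values are illustrative placeholders, not data from the Karajan or Prêtre recordings.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative quarter-note onset times in seconds (placeholders, not the
# actual annotations): roughly two waltz bars with a short-long feel.
onsets = np.array([0.00, 0.55, 1.30, 1.95, 2.48, 3.22, 3.90])

# Beat durations d_i = o_{i+1} - o_i, as in the text.
durations = np.diff(onsets)

# Interpolate durations against onset time for a smooth plot, analogous
# to the Matlab spline function used in the article.
spline = CubicSpline(onsets[:-1], durations)
grid = np.linspace(onsets[0], onsets[-2], 200)
smooth = spline(grid)

print(durations)    # raw beat durations
print(smooth[:5])   # first few interpolated values
```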
The transcription of the performed durations can be described in the form of a heuristic. Suppose the second beat is considered to be early; then the third beat is the one that most closely resembles a full beat duration. Thus, we obtain the baseline beat duration from the third beat, which is assumed to be two or three eighth notes in length. The duration of the third beat serves as the reference from which proportional relationships are derived for the preceding beats. Simple ratios, such as 3:2, 2.5:2, 1.5:2, and 2:3, are preferred over more complicated ones. Whether the third beat is two or three eighth notes in length is determined by which interpretation most closely approximates the simpler ratios. Suppose that $e_i$ is the duration of the eighth note when the third beat is $i$ eighth notes long, that is, $e_i = d_{b_3}/i$, where $b_j$ is the beat index of the $j$-th beat, $d_{b_j}$ is the duration of the $b_j$-th beat, and $n_{b_j}$ is the closest whole number or simple fraction when $d_{b_j}$ is divided by $e_i$. Whether the third beat is two or three eighth notes long is determined by $\arg\min_{i \in \{2,3\}} \sum_{j=1,2,3} \lVert d_{b_j}/e_i - n_{b_j} \rVert$. Note that this technique can be extended to consider other duration categories for the third beat, such as 2.5 eighth notes.
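The heuristic lends itself to a compact implementation. The sketch below is one possible reading of it, assuming the candidate categories for the third beat are two or three eighth notes and that quantization snaps to the nearest half eighth note; the input durations are invented for illustration.

```python
import numpy as np

def quantize_bar(d, candidates=(2, 3)):
    """Choose the eighth-note grid for one waltz bar of beat durations d,
    following the heuristic above: the third beat sets the baseline.

    d : array of three beat durations (seconds); d[2] is the third beat.
    Returns (i, n, e): i = eighth notes assigned to the third beat,
    n = quantized eighth-note counts per beat, e = implied eighth duration.
    """
    best = None
    for i in candidates:
        e = d[2] / i                       # e_i = d_{b_3} / i
        ratios = d / e
        # n_{b_j}: nearest whole number or simple (half) fraction.
        n = np.round(ratios * 2) / 2
        cost = np.sum(np.abs(ratios - n))  # sum_j |d_{b_j}/e_i - n_{b_j}|
        if best is None or cost < best[0]:
            best = (cost, i, n, e)
    _, i, n, e = best
    return i, n, e

# Example: a short-long-normal waltz bar (illustrative numbers).
i, n, e = quantize_bar(np.array([0.45, 0.75, 0.60]))
print(i, n, e)  # third beat = 2 eighths, beats = [1.5, 2.5, 2.0], e = 0.3 s
```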
After obtaining the duration ratios, the tempo for any contiguous set of beats, from i to j, is then computed from the total duration, $d = \sum_{k=i}^{j} d_k$, and the total quantized score duration, $n = \sum_{k=i}^{j} n_k$; thus, the tempo when a beat is two eighth notes long is given by $T = 30 \cdot n/d$ beats/min. The resulting notation is shown in Figures 4(c) and (d). The duration category for the third beat can also be chosen to keep the tempo as unchanging as possible from one bar to the next. For example, the third bar in Figure 4(d) could have been notated with {1.5, 3.5, 2.5} eighth-note durations at basically the same tempo as the preceding bars. However, a metric modulation (change in tempo) gives a simpler notation that keeps the upbeat a quarter note. This simpler notation, which is friendlier to the human eye and requires only a subtle tempo slow-down, is preferred. Any rhythm transcription necessarily requires some degree of quantization. The original rhythms exist on the real timeline, while the notation is categorical in nature. A transcription thus maps real numbers to duration categories. We are interested in the discrepancy between the real numbers and the categorical representation. We measure the accuracy of a transcription by the root-mean-square error (RMSE) between the transcribed inter-onset intervals and the inter-onset intervals in the recorded performances. If $\hat{e}_i$ is the duration of the eighth note given by the tempo at the $i$-th beat, the RMSE is given by $\sqrt{\tfrac{1}{N} \sum_{i=1}^{N} \left( d_i - n_i \hat{e}_i \right)^2}$. For the examples shown in Figure 4, the RMSE is 19.1 ms for the Karajan excerpt and 24.9 ms for the Prêtre excerpt, respectively. The numbers are given in Table 1 and graphed in Figure 18 in Appendix A. The small errors show the very slight degree of approximation introduced in the transcription process.
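The tempo and RMSE computations follow directly from these formulas. A minimal sketch, again with invented durations, mirrors T = 30·n/d and the RMSE between performed and implied durations under the assumption that a beat is two eighth notes long:

```python
import numpy as np

def tempo_bpm(durations, eighth_counts):
    """Tempo T = 30*n/d for a contiguous run of beats, where a beat is two
    eighth notes long: d is the total duration, n the total eighth notes."""
    return 30.0 * np.sum(eighth_counts) / np.sum(durations)

def transcription_rmse(durations, eighth_counts, tempo):
    """RMSE between performed durations and the durations implied by the
    transcription (eighth-note counts at the given tempo)."""
    e_hat = 30.0 / tempo                    # eighth duration (s), beat = 2 eighths
    implied = np.asarray(eighth_counts) * e_hat
    err = np.asarray(durations) - implied
    return np.sqrt(np.mean(err ** 2))

# Illustrative bar (not data from the recordings):
d = np.array([0.45, 0.75, 0.60])
n = np.array([1.5, 2.5, 2.0])
T = tempo_bpm(d, n)                         # 30 * 6 / 1.8 = 100 beats/min
print(T, transcription_rmse(d, n, T))       # RMSE is 0 for this exact fit
```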
The difference between the heuristic described here and a more conventional approach, such as transcribing the rhythms by ear, is that the heuristic can be readily translated into a computer program to automate the transcription process. Manual transcription is possible and often desirable for small examples, but scalability is an important consideration for rhythm transcription to be deployable on a large scale for analysis. The method also has the advantage of providing rigorous control over and monitoring of the quantization error.
Many automatic or semi-automatic methods for rhythm transcription exist, see Ycart, Jacquemard, Bresson, and Staworko (2016) and Nakamura, Yoshii, and Sagayama (2017), as well as algorithms for supporting tempo detection (Grosche & Müller, 2011) and tempo change (Quinton, 2017). However, frequency-based methods such as those described by Grosche and Müller (2011) and Quinton (2017) still lack the reactivity required for the frequent meter changes needed to encode free rhythms. The state-of-the-art methods described and evaluated in Nakamura et al. (2017) are designed to recover a score like the one originally written by the composer from the performance. They thus remove the temporal information that is inserted during performance. The work that comes closest to addressing the kind of transcription problem at hand is the interactive rhythm quantification software of Ycart et al. (2016), designed for use by composers as part of Open Music. At present, the software needs further optimization to make it comparable to human efficiency. The method given in this section has the advantage of having been optimized for the special case of the Viennese waltz, which, as it stands, lacks generality. Adaptations tailored to other music styles will be described in the following sections.
In Figure 4, apart from knowing at a glance that the performances are not metronomic, the information embedded in a transcription gives not only the degree of variation, but also the metric modulations, proportional time relations, the precise timing of each note, the distribution of time across the pulses in each bar, and patterns of stress. Figure 4(b) shows Karajan's more steady (from bar to bar) Viennese waltz rhythm compared with Prêtre's more variable rhythm, which has the short-long gesture growing larger then shrinking. This observation is reinforced in the notation of Figures 4 (c) and (d). Karajan's performance could be captured with a constant 5/8 meter after the initial slow start; Prêtre's performance had to be notated with many more metric modulations-with the beat rate slowing to make the larger gestures that then compress to speed up-and proportional duration category changes. In both cases, the performed durations show marked, and notatable, differences from the original score. The notation thus records the ways in which the score itself is changed and re-shaped by the performer; this is made more obvious by the fact that the original composition and the performed rendition are encoded using the same notational conventions.
Operatic aria
The performer has greatest latitude in creating free-sounding rhythms in solo performance. Among soloists, opera singers are well known for their flexible interpretation of notated rhythms. This second example examines the notation of extreme timing deviations, or pulse elasticity, in a solo performance of an operatic aria. Figure 5 shows the duration of each eighth note in an excerpt of "O Mio Babbino Caro" from Giacomo Puccini's opera, Gianni Schicchi, when performed by Kathleen Battle, Maria Callas, and Kiri te Kanawa. The performed timings of these three recordings have been graphed and analyzed in Chew (2016). The analysis is briefly described here before Callas' performance is singled out for transcription.
As before, the eighth-note beat onsets were annotated and overlaid on the audio signal for inspection in Sonic Visualiser and evaluated aurally for correctness; the eighth-note durations were then obtained from the annotated onsets. This was done for recordings by each of the three sopranos, and the durations plotted on the same graph. To allow for comparisons between the three recorded performances, the three sets of data were plotted in score time, that is, with eighth-note count as the x-axis. Again, interpolation between consecutive duration points was done using the Matlab spline function. The vertical dotted gridlines in the background mark the first eighth note of each bar. The corresponding solo melody is shown beneath the graph itself.
The first thing to note is the large degree of variation in eighth-note durations over the course of even this short excerpt. The baseline eighth-note duration hovers around 0.5 s, indicating an underlying tempo of approximately 120 eighth notes per minute; with three eighth notes to a beat, this translates to a languid pulse of 40 beats per minute. The longest eighth-note duration, corresponding to the highest point in Kiri te Kanawa's plot, exceeds 5 s, extending almost to 5.5 s. This is a remarkable, more than ten-fold increase from the baseline duration. It is such extreme timing deviations that challenge conductors and collaborative artists to virtuosic feats of prediction and adaptation.
While there is a fair degree of commonality in where each performer chooses to invoke the most significant of these excursions from the underlying pulse grid, the ways in which they navigate these and other transitions form unique and often recognizable signatures of the performer or a performance. These time perturbations are the result of practiced choreography to influence the perceived musical context and impose structure on the musical text, to create emphases, and to elicit the desired emotional response from the listener. Such timing variations form the core evidence of the work behind each performance. Thus, it is helpful to be able to see this work represented concretely using an encoding familiar to any musically literate viewer.
In the special case where the stretched pulses coincide with the music structures to elicit a feeling of a roller coaster at the crest of a hill, they are called tipping points, see Chew (2016) or Chew (2017). In Figure 5, cue balls are perched atop each tipping point in Maria Callas' performance. Not all extreme timing deviations are tipping points, and not all tipping points are signaled by a generous use of time. For an empirical study on what generates tipping points, see Naik and Chew (2017).

Figure 6 focuses on the excerpt within the rectangular box in Figure 5. Figure 6(a) shows the composer's original notated durations. Figure 6(b) shows Maria Callas' performed durations in performance (or real) time; listen to this excerpt at https://vimeo.com/127507105. Here, the durations of each eighth note are not plotted against an index of eighth-note counts, but are plotted at the time at which they occur to allow for synchronization with the audio. For a discussion on score time versus performance time, see Chew and Callender (2013). This plot is also interpolated using a spline function. Figure 6(c) shows a transcription of Callas' recorded performance. The transcription process is straightforward. Because the excerpt begins with a fairly steady (in performance) eighth-note sequence, an underlying pulse grid can be established quickly for the proportional durations. On the repeat of the phrase, the duration of the three-eighth-note sequence is longer. Hence, the notation was greatly simplified by invoking a metric modulation, from 86 eighth notes per minute to 60 eighth notes per minute, which provided a new unit pulse length rather than persisting with the same unit pulse. The RMSE between the performed and transcribed durations for the notes of the excerpt shown in Figure 6 is 87.4 ms; the precise details are shown in Table 2 and Figure 19 in Appendix A. From both the graph and the transcription, it is clear that many notes are elongated beyond their notated durations for emphasis and expressive effect. Additional time has been inserted for breaths and to segment the phrases and subphrases. Musically, the first big elongation (the tipping point marked by the leftmost red cue ball in Figure 5) is part of a big ritenuto at the end of the first four bars, the second tipping point stretches the octave leap up to the top A♭, and the third tipping point provides a pause (and breath) before the final ritardando at the last two bars. Unlike conventional ways of marking these expressive gestures, such as by using labels like ritenuto and ritardando, the graph (Figure 6(b)) and notation (Figure 6(c)) show the details of exactly which eighth notes are lengthened, by how much, and which ones are not. The notation additionally shows the glissandi, marked by distinct note pairs connected by small slurs in the second and fifth bars in Figure 6(b). As indicated by the metric modulation, the two subphrases are performed at different tempi, with the second one a step slower than the first. Visual inspection of Figures 6(c) and 6(a) makes apparent the marked difference between Callas' performance and the score.
"Happy Birthday"
To show that the literal notation of expressive timing is not confined to classical singing but extends to vernacular forms as well, we turn our attention to Marilyn Monroe's (1962) sultry rendition of "Happy Birthday," performed and recorded live in Madison Square Garden on the occasion of U.S. President John F. Kennedy's 45th birthday. Figure 7(a) shows the conventional notation for "Happy Birthday." Figure 7(b) shows a graph of the instantaneous tempo at each syllable in Monroe's rendition of the tune. For greatest precision, the onsets of every syllable were annotated using Sonic Visualiser and checked against the audio signal and spectrogram of the audio signal. Each "Happy" gave the approximate duration of an eighth note and the tempo for each new subphrase; sometimes the tempo had to be changed at "to you," depending on the rate at which the words were sung. The RMSE between the performed and transcribed durations for the notes shown in Figure 7 is 51.5 ms; the details are shown in Table 3 and Figure 20 in Appendix A. Because the syllables map to a variety of duration categories in the score, it is not straightforward to generate a graph of eighth-note durations in score or real time. Instead, the instantaneous tempo is plotted in real time (as opposed to score time) at the instance of the onset of each syllable. The instantaneous tempo at each syllable, $T_s$, is computed as a function of the onset times, $\{o_i\}$ (in seconds), and the corresponding syllable's notated duration, $\{s_i\}$ (in beats), and is given by $T_s = 60 \cdot s_i/(o_{i+1} - o_i)$. Note that here, local minima signal the pauses or momentary slow-downs, while the peaks mark the fastest-sung syllables. Because Marilyn's singing of "Happy Birthday" is almost speech-like in its flexible interpretation of time, the transcription led to the simplest notation when invoking multiple metric modulations, practically at every two-word subphrase.
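The per-syllable tempo computation is a one-liner once onsets and notated durations are aligned; the sketch below uses invented numbers rather than the annotated Monroe data.

```python
import numpy as np

# Placeholder syllable onsets (s) and notated durations (beats); these are
# illustrative values, not Monroe's actual timings.
onsets = np.array([0.0, 0.4, 0.8, 1.9, 2.6, 4.1])
score_beats = np.array([0.5, 0.5, 1.0, 1.0, 2.0])  # s_i for each gap

# Instantaneous tempo T_s = 60 * s_i / (o_{i+1} - o_i), in beats/min.
inst_tempo = 60.0 * score_beats / np.diff(onsets)
print(inst_tempo)  # local minima mark pauses, peaks the fastest syllables
```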
The breathlessness of Marilyn's singing is marked by the many pauses she takes, which show up as local minima in the plot in Figure 7(b). The many pauses break the usual flow of the melody as well as the phrases in the text. For example, there is a short breath break after almost every instance of the words "Happy birthday." These breath breaks register as rests in the notation of the performed durations. Portamenti in the sung melody are labeled with slurs in the third, sixth, and last bars. The tempo changes frequently, practically every time the voice re-enters following a breath break. The first "to you" pushes forward (poco accel.), the second "to you" holds back-the glissando arrives at the note before the "you" is vocalized, almost imperceptibly. In the next phrase, the octave leap builds to the climax, which is at a stately 60 beats/min but accelerates to the end of "Mister President" before the final "Happy birthday." These three examples demonstrate how performed durations, both carefully sculpted ones and those due to chance, can be captured through transcription using conventional music notation. The following section extends this practice to the transcription of heart rhythms in ECGs of cardiac arrhythmias.
Abnormal heart rhythms
This section considers the transcription of arrhythmic heartbeats using conventional music notation. When the normal electrical activity in the heart is disrupted or altered, arrhythmia results and the heart can beat irregularly, or excessively fast or slow. The ECGs of a heart in sinus (normal) rhythm can make for decidedly uninteresting transcriptions, but the abnormal heart rhythms of arrhythmia are much more varied, offering the potential for producing highly musical rhythm transcriptions.
Take, for example, the trigeminy rhythm, an abnormal heart rhythm in which every third beat is a premature ventricular contraction. Each premature ventricular contraction is followed by a full compensatory pause (a skipped beat) because the heart is still in its refractory period and cannot respond to a stimulus to initiate the next beat. Premature ventricular contractions tend to occur in repeated patterns, aptly named bigeminy (every other beat), trigeminy (every third beat), quadrigeminy (every fourth beat), and so on. Figure 8 shows a trigeminy rhythm and its transcription. One can imagine extensions to the other premature ventricular contraction rhythms, such as the bigeminy and the quadrigeminy. Note the resemblance of the trigeminy rhythm-regular beat, early beat followed by a compensatory pause, regular beat-to a prototypical Viennese waltz rhythm. The onset of each beat is given by the peak, also known as the R of each QRS complex, in the signal of the upper graph. Given that the standard chart speed is 25 mm/s and a three-beat period is 56 mm on the chart, a beat is $(56/25)/3$ s long and the tempo is given by $60/((56/25)/3) \approx 80$ beats/min. This demonstrates how the tempo is computed in the examples to follow. The RMSE between the R-R intervals and the transcribed durations for Figure 8 is 24.8 ms; the details are shown in Table 4 and Figure 21 in Appendix B.
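The chart-to-tempo arithmetic generalizes to any strip measured at the standard speed; a minimal sketch:

```python
CHART_SPEED_MM_S = 25.0  # standard ECG chart speed (mm/s)

def tempo_from_chart(span_mm, n_beats):
    """Tempo implied by n_beats spanning span_mm of ECG chart paper."""
    beat_s = (span_mm / CHART_SPEED_MM_S) / n_beats
    return 60.0 / beat_s

print(tempo_from_chart(56.0, 3))  # ~80.4 beats/min for the trigeminy strip
```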
The next transcription examples are also derived from surface ECGs; they comprise short summaries from a continuous 18-hour recording taken using a three-lead Holter monitor and show interesting states of atrial fibrillation. Figure 9 shows the first ECG excerpt and the corresponding transcription of the rhythm. In this example, beats that are slightly more prominent (having greater voltage change) in the ECG are given tenuto-staccato articulation markings; R-R intervals that are slightly short of the full value of a beat duration are marked staccato and R-R intervals that are slightly longer than the full beat value are marked tenuto. The meter changes are assigned to group beats with similar morphology, such as the six wide complex beats in the middle of the sequence, and repeated rhythm patterns, such as the 3 : 2 : 2 pattern. The RMSE between the R-R intervals in the ECG trace and the transcribed durations is 34.6 ms; the details are shown in Table 5 and Figure 22 in Appendix B.
Mixed Meters
The toggle between 7/8, subdivided as 3 : 2 : 2, and 5/8, subdivided as 3 : 2, is reminiscent of the third movement of Libby Larsen's Penta Metrics (2004). The composer describes the piece as a buoyant dance built around the 7/8 pattern: three beamed eighth notes, eighth note + eighth-note rest, eighth note + eighth-note rest. Highlighted in the transcription are the patterns of three onsets separated by three-eighth-note and two-eighth-note intervals that are part of the 3 : 2 : x rhythmic motif. Note that the notation makes these patterns readily discernible. Figure 10 shows the score of a short composition based (strictly) on the rhythm. It is made up of fragments cannibalized from Penta Metrics, movement III, sometimes transposed so as to fit with the local tonal context. For example, the first bar corresponds to the first bar of Penta Metrics III, the third bar corresponds to the second bar, and the ending chord is identical to that in Larsen's piece. In between, Bars 2, 4, 5, and 6 are a mix of material, chords and descending octaves, from the sequences in Bars 57 to 60 and Bars 42 to 44, shown in Figures 11(b) and 11(a), respectively.
A video comparing the ECG and rhythm transcription, and the collaged Mixed Meters can be viewed at https:// vimeo.com/257248109. Figure 12 shows the second ECG excerpt and the corresponding transcription of the rhythm. For this example, the most prominent peak in the ECG sequence is highlighted with an accent on the corresponding note, and the wide complex beat with a tenuto mark; other ways to differentiate these waveform details are also possible. The notation in the middle section is simplified by invoking a metric modulation from 94 beats/min to 126 beats/min. This makes the new quarter note 3/4 the value of the previous quarter note, a 25% reduction in time for a beat or a 33% increase in tempo or beat rate. A slight acceleration (marked poco accel.) indicates that the duration of the second beat in the penultimate bar is slightly shorter than the tempo might suggest; the acciaccatura tied to the final note prompts the early onset of the final note, achieving the effect of shortening the penultimate note. The changing meters are chosen to accommodate the different grouping structures. The RMSE between the R-R intervals in the ECG trace and the transcribed durations for Figure 12 is 53.0 ms; the details are given in Table 6 and Figure 23 in Appendix B.
Siciliane
This excerpt is slower than the first one, and the long half note in the second bar requires a melodic profile that will fit with this temporal structure. The melody that comes to mind is that of the "Siciliane" in Johann Sebastian Bach's Flute Sonata No. 2 in E♭ major, BWV 1031, and the piece provides the material for the short composition shown in Figure 13. The original rhythm of the "Siciliane" is lyrical and straightforward, and close to the atrial fibrillation rhythm, but not the same. The melodic profile fits the transcribed rhythm well. Small adjustments are made to the melody so that it fits the rhythm. Figure 14 shows the original melody and the one that has been tweaked to fit the transcribed rhythm. The new melody uses Bars 1, 2, and 4 of the original melody. A passing note was added in the third bar of the modified melody to fit the transcribed rhythm, and two notes from the last bar were inserted to provide a bridge to the concluding bar. An animation showing the correspondence between the ECG and the rhythm transcription, and between the modified Siciliane and the ECG, can be viewed at https://vimeo.com/221351463.

Figure 15 shows the third and final ECG excerpt and the corresponding transcription of its rhythm. As before, the most prominent peaks in the ECG are assigned accent marks; the wide complex beats are given tenuto marks, as are notes of duration slightly longer than their notated values. The RMSE between the R-R intervals in the ECG and the transcribed durations is 40.1 ms; the details are given in Table 7 and Figure 24 in Appendix B.
Tango
Immediately apparent in the transcription are the 3 : 3 : 2 rhythmic pattern, characteristic of the tango, and variations on this pattern, 2 : 3 : x. Capitalizing on the tango reference, the material for the short composition draws from a cadenza-like piano solo in Astor Piazzolla's Le Grand Tango for cello and piano (1982). The original excerpt from Piazzolla's piece that provided material for the short composition in Figure 16 is given in Figure 17. In the modified score, the third iteration of the descending sequence is reduced to fit the 7/8 bar by removing the triplet figure. A bridge bar is inserted before material from the first and third bars are combined to reach the concluding bar, which also draws from material in the third bar but with a different finish.
A video showing the ECG, rhythm transcription, and adapted Tango can be viewed at https://vimeo.com/257253528.
Conclusions and discussions
Having traversed a variety of transcription examples ranging from extreme rhythmic flexibility in performance and the natural flounderings of sight-reading (extreme in a different sense), to the dance-like rhythms of premature ventricular contractions and atrial fibrillation, it is time to reflect on what it means to be able to turn these rhythms accurately into music notation.
A symbolic representation can be used to encode knowledge that can serve as input to machine analysis of these time sequences, thus opening up new approaches for analyzing performed music and arrhythmia sequences. Further work needs to be done to gauge the stability of the transcriptions. Distance metrics can be devised to quantify distances between notations created by different transcribers for the same time sequence to determine consistency. Some key applications of the representation include large-scale deployment of motif detection, similarity assessment, and style classification. For example, after transforming heart period tachograms to elementary rhythm patterns, Bettermann et al. (1999) used a hierarchical pattern scheme to compute the predominance and stability of rhythm pattern classes. Further analyses of the transcribed rhythms could reveal hierarchical structure, like that in Lerdahl and Jackendoff (1996).
The main challenge, for both music performance and cardiac arrhythmia, lies in determining what it is we wish to represent. What are the essential structures of the information streams? What do they mean? Which of these structures are variable and subjective and which are fixed and invariant?
Ideally, transcription should reveal the essential background structure of the temporal experience ... In that sense, transcription is a form of analysis in itself. The difficulty of transcribing free rhythm may result from the inadequate nature of the notational system but, at the same time, it signals a deeper analytical problem. Graphic signs can be easily invented once it is clear what we want to represent. (Frigyesi, 1993, pp. 60-62)

The changing meters, metric modulations, and detailed note groupings and subgroupings-for example, through the beaming of notes-lead to a number of questions. What can be notated but is not? What is notated and why? Which structures are the result of subjective interpretation and which are fundamental to the temporal sequence? The reality of the metrical groupings implied in each transcription needs to be further tested, for example, by comparing them with expert annotations. More features can be incorporated and larger numbers of transcriptions made to better understand the kinds of patterns that emerge. The transcriptions of abnormal heart rhythms can also serve as new sources of natural-sounding musical rhythms.
The transcriptions of the atrial fibrillation excerpts reveal the vast differences between experiences of irregular heartbeats at different times of the day. Mixed Meters was recorded in the evening at 20:07:45, the Siciliane and the Tango in the late afternoon, at 16:52:59 and 17:39:26, respectively. The rhythms differ not only in rate but also in rhythmic content. Conventional ways of describing atrial fibrillation as simply a condition with irregular heartbeats due to fibrillation in the upper (atrial) chambers of the heart fail to capture the finer features of these time-varying rhythmic structures. It may be that, as for musical styles, information encoded in these rhythmic patterns can be used to distinguish between different forms or phenotypic subtypes of atrial fibrillation, which may be helpful for disease stratification with impact on medical diagnostics and therapeutics.
Acknowledgments
This article is inspired by personal experiences with music performance and cardiac arrhythmias. I am grateful to Dr. Edward Rowland and Professor Pier Lambiase and their respective clinical and catheterization laboratory teams for treating and curing my arrhythmias; Dr. Jem Lane for sharing the story of his Christmas party quiz where he made his colleagues guess arrhythmia types by playing them music of different tempi-this prompted me to create more precise and tangible connections between musical and abnormal cardiac rhythms; and Matron Carolyn Brennan who likens atrial fibrillation to free jazz. Dr. Zongbo Chen helped retrieve my data for the early transcription experiments. Last but not least, Professor Peter Child and Lina Viste Grønli concocted and included me in the Practicing Haydn project, which started me on this marvelous journey.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.

Figure 19. Difference between performed and transcribed durations for "O Mio Babbino Caro" as performed by Maria Callas.
Appendix B: Precision of ECG rhythm transcriptions
This section contains tables and graphs documenting the difference between the R-R intervals derived from the ECG traces and the transcribed durations. The squared errors between the two are given, as well as the RMSE, in the tables, and stem plots of the difference are given in the figures. Table 4 provides the numbers for the error calculations for the trigeminy example and Figure 21 the corresponding stem plot. Table 5 provides the numbers for the error calculations for the atrial fibrillation excerpt that formed the basis of Mixed Meters, with the corresponding stem plot in Figure 22. Table 6 gives the numbers for the atrial fibrillation excerpt that became Siciliane, with the corresponding stem plot in Figure 23. Table 7 gives the numbers for the atrial fibrillation excerpt for the Tango, with the corresponding stem plot in Figure 24. Not reflected in the numbers and graphs are the effects of the accents and articulation markings incorporated in the transcriptions that mark amplitude (voltage) changes, waveform morphology, or slightly elongated or shortened durations.

Figure 21. Difference between R-R intervals in ECG and transcribed durations for the trigeminy example.

Figure 23. Difference between R-R intervals in ECG and transcribed durations for the atrial fibrillation excerpt Thu 16-52-59, Couplet 563 ms (Summary of event), 1 min HR 83 beats/min (Siciliane).
Soft Biomimetic Fish Robot Made of Dielectric Elastomer Actuators
Abstract

This article presents the design, fabrication, and characterization of a soft biomimetic robotic fish based on dielectric elastomer actuators (DEAs) that swims by body and/or caudal fin (BCF) propulsion. BCF is a promising locomotion mechanism that potentially offers swimming at higher speeds and acceleration rates, and efficient locomotion. The robot consists of laminated silicone layers wherein two DEAs are used in an antagonistic configuration, generating undulating fish-like motion. The design of the robot is guided by a mathematical model based on the Euler–Bernoulli beam theory and takes account of the nonuniform geometry of the robot and of the hydrodynamic effect of water. The modeling results were compared with the experimental results obtained from the fish robot with a total length of 150 mm, a thickness of 0.75 mm, and a weight of 4.4 g. We observed that the frequency peaks in the measured thrust force produced by the robot are similar to the natural frequencies computed by the model. The peak swimming speed of the robot was 37.2 mm/s (0.25 body length/s) at 0.75 Hz. We also observed that the modal shape of the robot at this frequency corresponds to the first natural mode. The swimming of the robot resembles real fish and displays a Strouhal number very close to those of living fish. These results suggest the high potential of DEA-based underwater robots relying on BCF propulsion, and applicability of our design and fabrication methods.
Introduction
As an emerging field, soft robotics has been the focus of major research efforts. 1,2 Soft robots, that is, robots composed of compliant materials, offer important advantages over conventional rigid robots, such as simplified body structure and control, 3,4 together with high robustness and versatility. 5,6 One promising application of soft robotics is biomimetic underwater robots, wherein the high mobility and efficiency of aquatic animals could be achieved 7 by approximating their natural movements with the theoretically infinite number of degrees of freedom offered by soft-bodied robots. In addition to underwater applications such as inspection and environmental monitoring, biomimetic underwater robots could also serve as a platform to address biological questions related to the biomechanics and control of living fish. [8][9][10] Within this context, researchers have recently developed soft underwater robots based on different actuation technologies, such as ionic polymer-metal composites, lead zirconate titanate, shape memory alloys, fluidic elastomer actuators, and dielectric elastomer actuators (DEAs). [11][12][13][14][15][16] Among these soft actuation technologies, DEAs [17][18][19] show promising features for biomimetic underwater robots. DEAs are compliant (typical elastic modulus of ~1 MPa), fast (response time <200 µs with suitable material choice 20 ), efficient (a theoretical maximum of 90% electromechanical efficiency 17 ), and exhibit large actuation strokes (>85% of linear strain 21 ). When immersed in water, dielectric elastomers show very little water absorption (up to 3.5% of their own weight in 365 days 22 ). In DEA devices, it has been reported that a cell stretcher interfacing liquid can function for 24 h, 23 and an underwater robot can swim for >3 h. 14 The latter integrates its power source and controller, demonstrating the feasibility of self-contained DEA underwater robots. DEAs consist of a dielectric elastomer membrane sandwiched between two compliant electrodes. The application of high voltage (typically >1 kV) induces opposite charges on the electrodes, resulting in an electrostatic attractive force (Maxwell pressure), which squeezes the elastomer membrane in the thickness direction and generates an area expansion.
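The Maxwell pressure follows the standard DEA relation p = ε0·εr·(V/t)², where V is the applied voltage and t the membrane thickness. The sketch below evaluates it for placeholder values; the voltage, thickness, and permittivity are illustrative, not this robot's actual drive parameters.

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def maxwell_pressure(voltage_v, thickness_m, eps_r):
    """Electrostatic (Maxwell) pressure on a DEA membrane: p = eps0*eps_r*(V/t)^2."""
    return EPS0 * eps_r * (voltage_v / thickness_m) ** 2

# Placeholder values: 3 kV across a 30-um silicone membrane with
# relative permittivity ~2.8.
print(f"{maxwell_pressure(3e3, 30e-6, 2.8) / 1e3:.0f} kPa")  # ~248 kPa
```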
Based on DEAs, researchers developed a jellyfish robot, 13 a ray robot, 14 and a bimorph swimmer. 15 We focus in this article on a fish-shaped robot, consisting of a body and a caudal fin, as one morphology of DEA-based underwater robots. Fish swimming is mainly divided into two types: body and/or caudal fin (BCF) propulsion and median and/or paired fin (MPF) propulsion. 7 Although MPF locomotion offers maneuvering and stabilization, BCF locomotion enables swimming at higher speeds and acceleration rates with the most efficient movement (specifically, in the case of the thunniform mode). Therefore, employing BCF locomotion, that is, a body and a caudal fin, can be a promising design approach for DEA-based underwater robots wherein high mobility and efficiency are expected. Also, given the diverse morphologies of fish and the fact that most of them generate thrust by BCF propulsion, robots employing such a swimming mechanism could benefit from more design flexibility in terms of geometries and sizes. However, the development of DEA-based underwater robots with BCF propulsion has not been attempted yet. For this reason, their design principles, fabrication methods, and performance characteristics are missing.
In this article, we report a model, fabrication method, and characterization of a DEA-based BCF fish robot consisting of a body and a caudal fin. This work is an expansion of a preliminary conference article, 24 where the first DEA-based swimming robots were presented. In this article, we include a mathematical model, based on the Euler-Bernoulli beam theory, for predicting the natural frequencies of the robot in water, from which we can set the range of driving frequencies.
Since the beating amplitude of the robot is comparable with the width of its body, we used a model able to describe large deformations. To validate the model, its outputs are compared with the characterization results of the fabricated robot. The fabrication process, which consists in laminating the silicone layers enabling the insulation of the high-voltage electrodes, is based on the authors' preliminary results. 24 In the current robot design, we reduced the number of layers from 5 to 4 by shaping each layer with the same geometry. The robot is characterized with fixed-free boundary conditions (the head is clamped whereas the tail is free to move) as well as in a tethered free-swimming condition. In the fixed condition, the tail amplitude and thrust force are measured. As for the tethered free-swimming condition, the swimming speed is measured and the Strouhal number is estimated. We observe that the model yields natural frequencies similar to the peaks of the measured thrust force and swimming speed. We also observe that the swimming locomotion of our robot resembles that of real fish; a Strouhal number very close to that of living fish represents this quantitatively.
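The Strouhal number referred to here is St = f·A/U, where f is the tail-beat frequency, A the peak-to-peak tail-tip amplitude, and U the swimming speed; efficient swimmers typically fall in the 0.2-0.4 band. A minimal sketch, with the amplitude as an assumed placeholder since it is not restated here:

```python
def strouhal(freq_hz, amp_mm, speed_mm_s):
    """St = f * A / U, with A the peak-to-peak tail-beat amplitude."""
    return freq_hz * amp_mm / speed_mm_s

# 0.75 Hz and 37.2 mm/s are the reported peak swimming point; the 15 mm
# tail-beat amplitude is a placeholder, not a measured value.
print(strouhal(0.75, 15.0, 37.2))  # ~0.30, inside the 0.2-0.4 band of fish
```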
Materials and Methods
Structure and swimming mechanism of the robot

The soft fish robot consists of four laminated silicone elastomer layers: two uniaxially prestretched DEAs sandwiching a body made of two silicone layers, forming an antagonistic configuration as shown in Figure 1a. The head part of the robot is made of a poly(methyl methacrylate) (PMMA) plate and two polyethylene terephthalate (PET) films. In this configuration, the high-voltage DEA electrodes are encapsulated between the silicone layers and the DEA elastomers, and are electrically insulated. The DEA electrodes on the ground side are instead exposed to the surrounding water. Thanks to this feature, the robot structure and fabrication process have been simplified, while enabling the analytical modeling of its dynamics, as described in the next section. Figure 1b(i) shows a top view of the robot in the nonactivated state. The robot shape is straight due to the equal prestretch in the two DEAs. As also schematically represented in Figure 1a, the DEAs are placed only on the body while there is no actuation part on the caudal fin, so that the latter passively deforms like that of a real fish. When the DEA on one side is activated (Fig. 1b(ii)), it releases the internal stress of the prestretch and elongates, whereas the other one contracts, resulting in a global bending motion of the body. The body contraction moves the caudal fin, which is deformed by the reaction force of the surrounding water. The recoil forces on the body and the caudal fin lead to a net thrust pushing the robot forward. By actuating each DEA periodically and sequentially (Fig. 1b(ii, iii)), the robot continuously generates thrust, leading to steady swimming in the forward direction.
Model and design
Researchers have experimentally shown that efficient thrust performance occurs around the first resonant frequency. 25,26 Therefore, designing the soft fish robot so that its structural natural frequencies match the range of driving frequencies is a reasonable approach. Moreover, several works in robotics have exploited the natural vibration modes of the robotic structure to mimic fish-like swimming motions. 27,28 Specifically, the work presented by El Daou et al. 28 employed the second vibration mode.
In this context, the mathematical modeling of the fish robot serves to extract the natural frequencies of the structure as a function of the design parameters, so as to match the chosen driving frequency range. As shown in Figure 2, the fish is modeled as a beam of constant thickness h and variable width b(x), where x is the coordinate along the longitudinal axis of the body. The geometry of the robot is inspired by the profile of a trout, and we used a formula proposed in Ref. 29 to compute the target shape given the design parameters, where l is the total length of the robot. In this study, we set l = 150 mm. As for the target range of the structural natural frequencies, we considered a driving frequency range from 0 to 3 Hz, as trout of length similar to l show steady swimming in this range. 30 We describe the deformation of the structure through the Euler-Bernoulli beam theory, 31

[K(x) w''(x,t)]'' + τ(x,t) + ρ_s(x) ẅ(x,t) = H(x,t),   (1)

where K(x) is the bending stiffness, w(x,t) is the out-of-plane displacement, ρ_s(x) is the mass density per unit length, H(x,t) is the function of hydrodynamic forces, and τ(x,t) represents the structural damping of the body. Dots refer to differentiation with respect to time t, whereas primes refer to differentiation with respect to x. Equation (1) is a partial differential equation (PDE) with nonconstant coefficients, and it is valid to describe the deformation of the fish if L ≫ b(x) ≫ h, where L is the length of the structure. The coefficients ρ_s(x) and K(x) vary with x due to the nonconstant width b(x). As for the mass density, we define it as

ρ_s(x) = 2 ρ b(x) (h_BODY + h_DEA),   (2)

where ρ is the density of the solid material, in this case a silicone elastomer, whereas h_BODY and h_DEA are the thicknesses of the body silicone layers and the DEA layers, respectively. To simplify the model, we neglect the mechanical effect of the electrode layers, as their thickness is much smaller than that of the other layers. We also neglect the prestretch of the DEAs because of its low ratio and the equilibrium configuration. As for the stiffness, we accounted for the different Young's moduli of the body layers and the DEA layers using the method developed by Timoshenko, 32 which consists of using an equivalent cross section with a homogeneous Young's modulus, corresponding to the higher one (in our case that of the body layers), where the width of the layer with the lower Young's modulus is virtually reduced to account for its minor stiffness, according to

b_DEA,eq(x) = (E_DEA / E_BODY) b(x).   (3)

The resulting stiffness K(x) is then computed from the second moment of area of this equivalent cross section (Equation 4). We model the oscillations of the fish with its head clamped, so the beam becomes a cantilever with fixed-free boundary conditions, that is,

w(0,t) = w'(0,t) = 0,  w''(L,t) = w'''(L,t) = 0.   (5)

Following the methodology proposed by Aureli et al., 33 we can rewrite Equation (1) in the frequency domain (Equation 6), where M(x) is the nondimensional ratio between the mass densities of the fluid, in this case water, and the solid; Y is the complex nondimensional hydrodynamic function, which depends on the frequency parameter β and on the local Keulegan-Carpenter number κ_j; μ is the dynamic viscosity of the surrounding fluid, i.e., water.
For the scope of this work, the model only has to estimate the natural frequencies of the fish robot in water in the clamped mode. For this reason, we decided not to solve the full Equation (6) with damping effects, which would result in additional mathematical complexity that lies outside the scope of this article. Instead, we first extract the natural frequencies of the beam in vacuum. We then use the well-known inviscid approximation proposed by Sader to compute the corresponding natural frequencies in water 34 :

f_water,i = f_vacuum,i (1 + π ρ_f b² / (4 ρ_s))^(−1/2),   (7)

where ρ_f is the density of the fluid. Equation (7) is derived for beams with constant width b, so we approximate it by using a reference value b extracted from b(x). The modal analysis in vacuum for the undamped beam is conducted by taking Equation (6) with the boundary conditions of Equation (5) and dropping the hydrodynamic and damping terms, which yields Equation (8). Even with these assumptions, due to the variable width b(x), both ρ_s(x) and K(x) are nonconstant, and indeed nonlinear, functions of x, so the resulting PDE (8) cannot be solved analytically. We chose to apply the Galerkin method, projecting the solutions of Equation (8) onto a set of shape functions φ_i(x) with scaling factors a_i that guarantee normalization (Equation 9). Multiplying by φ_i(x) and integrating over the domain x ∈ [0, L] leads to a system that in matrix form can be written as Equation (12), where q_i are the weights corresponding to the ith mode φ_i(x). Equation (12) represents an eigenvalue problem. The eigenvalues ω are the natural frequencies of the undamped fish in vacuum, whereas the eigenvectors q(ω) are the corresponding sets of weights. We computed all the integrals numerically, choosing the number of shape functions for the projection of the solution as m = 10. The first six natural frequencies in vacuum obtained from the model are given in Table 1, and the specifications of the robot and the material parameters used are summarized in Table 2.
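As a numerical illustration of this modal analysis, the following Python sketch solves the generalized eigenvalue problem for a clamped-free beam of variable width using a Rayleigh-Ritz/Galerkin projection onto polynomial trial functions. The width profile, density, and modulus are made-up placeholders (the article's trout-profile formula and material data are not reproduced), so the printed frequencies illustrate the procedure only, not the robot.

import numpy as np
from scipy.linalg import eigh

# Rayleigh-Ritz / Galerkin sketch for the in-vacuum natural frequencies of a
# clamped-free beam with variable width b(x), mirroring Equation (8).
# Width profile and material values are illustrative placeholders.
L = 0.15                        # beam length (m)
h = 2 * (250e-6 + 100e-6)       # total thickness: two body + two DEA layers (m)
rho = 1100.0                    # assumed silicone density (kg/m^3)
E = 1.5e6                       # assumed equivalent Young's modulus (Pa)

x = np.linspace(0.0, L, 4000)
b = 0.030 * (1.0 - 0.7 * x / L) + 0.002    # placeholder tapering width (m)
rho_s = rho * b * h                        # mass per unit length
K = E * b * h**3 / 12.0                    # bending stiffness (homogeneous section)

def integrate(y):                          # trapezoidal rule along x
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

m = 8   # number of trial functions, kept small for numerical conditioning
# phi_i = (x/L)^(i+1) satisfy the clamped-end conditions w(0) = w'(0) = 0;
# the free-end conditions are natural in this weak formulation.
p = np.arange(2, m + 2)
phi = (x / L) ** p[:, None]
d2phi = p[:, None] * (p[:, None] - 1) * x ** (p[:, None] - 2) / L ** p[:, None]

M = integrate(rho_s * phi[:, None, :] * phi[None, :, :])    # mass matrix
S = integrate(K * d2phi[:, None, :] * d2phi[None, :, :])    # stiffness matrix

w2, _ = eigh(S, M)                         # generalized eigenvalue problem
print("first vacuum frequencies (Hz):", np.round(np.sqrt(w2[:3]) / (2 * np.pi), 2))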
As for the computation of the natural frequencies in water, from the theory of underwater vibrations of beams we expect the values of the frequencies to decrease due to the hydrodynamic added mass of water. As shown by Sader's formula (7), the ratio between the natural frequencies in fluid and in vacuum depends on the ratio between the density of the fluid and the density of the solid materials. Considering that in our case we used a silicone elastomer, whose density is very close to that of the fluid, we expected a large decrease of the natural frequencies in water for our fish robot. Sader's formula is an inviscid approximation, which is reliable when the oscillatory Reynolds number Re = π ρ_f f b² / (2μ) satisfies Re ≫ 1. Usually for beams in transverse vibration the reference length used in Re is the width, which in our case is variable; evaluating Re with a representative width, we can see that the hypothesis of inviscid fluid is satisfied. Therefore, we used Equation (7) to estimate the natural frequencies in water. From empirical observations, we chose to set the reference width b in Equation (7) to the minimum width, so b = b_min, and we obtained the natural frequencies in water given in Table 3.
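Continuing the sketch above, Sader's inviscid correction and the Re ≫ 1 check can be expressed in a few lines; the mass per unit length, reference width, and vacuum frequencies below are assumed round numbers, not the robot's actual values.

import numpy as np

# Sader-type inviscid added-mass correction (Equation 7) and the
# oscillatory Reynolds number check; all numbers are assumed values.
rho_f = 1000.0                 # water density (kg/m^3)
mu = 1.0e-3                    # water dynamic viscosity (Pa*s)
b_ref = 0.010                  # assumed reference width b = b_min (m)
rho_s = 7.7e-3                 # assumed mass per unit length (kg/m)
f_vac = np.array([0.6, 2.8, 7.8])   # hypothetical vacuum frequencies (Hz)

scale = 1.0 / np.sqrt(1.0 + np.pi * rho_f * b_ref**2 / (4.0 * rho_s))
f_water = f_vac * scale
print("frequencies in water (Hz):", np.round(f_water, 2))

Re = np.pi * rho_f * f_water * b_ref**2 / (2.0 * mu)
print("oscillatory Reynolds numbers (>> 1 expected):", np.round(Re, 0))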
Fabrication
The fabrication process of the robot is mainly divided into four steps: casting the silicone elastomer layers, patterning the electrodes, bonding the silicone layers, and wiring the electrical connections. Figure 3a-f shows the fabrication steps. In this study, two different silicone elastomers were used: Nusil CF19-2186 and Dow Corning Sylgard 184. The former was used for the DEAs and the latter for the robot body. First, the silicone CF19-2186 was mixed at the manufacturer-recommended ratio for 1 min at 2000 rpm using a planetary mixer (Thinky ARE-250). The uncured silicone mixture was blade-casted on a PET film using an applicator coater (Zehntner ZUA2000) and a variable-gap applicator (Zehntner ZAA2300), and cured in an oven at 80°C for 1 h. After curing, the DEA membrane (thickness of ~100 μm) was separated from the PET film and suspended in a PMMA frame with a silicone adhesive foil (Adhesives Research ARclear 8932EE) while being stretched uniaxially with a ratio of 1.25. Subsequently, electrodes made of a mixture of carbon black and soft silicone were patterned on both sides of the DEA membrane using the pad-printing method. The details of the electrode composition and the pad printing are available in the literature. 35 After the patterning of the electrodes, the robot body layer, with a thickness of ~250 μm, was chemically bonded to the DEA using oxygen plasma surface activation (Diener electronic Zepto plasma system); the insertion of ethanol droplets at the bonding interface helps in removing air bubbles. 36 Once the DEA was fully bonded to the body layer, a hole was punched for the electrical connection. This sample was then chemically bonded to another sample with a different electrode shape, so that the connections of the high-voltage electrodes of the two DEAs do not overlap each other. Figure 3g shows details of the alignment of the electrodes and the connections. The entire part was then cut out of the frame with the desired shape, followed by attaching the head parts consisting of a PMMA plate and the PET films. Finally, the wiring was made using a conductive silver epoxy (Amepox ELECTON 40AC), polyimide tape, and a liquid silicone adhesive (Dow Corning Sylgard RTV-734). The mass of the assembled robot is 4.4 g.
Experimental setup
The fabricated robot was characterized in both fixed and tethered swimming conditions. All characterizations were performed in a water tank with dimensions of 50 cm (L) × 40 cm (W) × 12 cm (H), filled with tap water. The robot was activated through a high-voltage converter (EMCO Q50) and a microcontroller board generating high-voltage sine waves. The ranges of voltage and frequency used in this study were 0-5 kV and 0-3 Hz, respectively. In the fixed swimming condition, the head part of the robot was mounted on a load cell (Applied Measurement Limited UF1) to measure the thrust force, and on a PMMA plate to observe the tail amplitude. The tail amplitude refers to the peak-to-peak displacement of the tip of the caudal fin in steady-state oscillation. A CMOS camera was used to record the actuated deformations of the robot to assess the tail amplitude by image processing. The thrust force was measured by averaging the sensor value for 10 s. In the tethered swimming condition, the swimming speed was measured using a CMOS camera and a scale. Each measurement was repeated three times at every driving frequency or voltage step, and the average value was reported. Thin copper wires with a diameter of 36 μm were used to drive the robot, to minimize the mechanical resistance during swimming.

Results and Discussion

The tail amplitude as a function of the applied voltage at the driving frequency of 0.25 Hz is presented in Figure 4a. The amplitude increases almost linearly with the voltage, and a maximum amplitude of 49.1 mm is observed at 5 kV. Figure 4b shows the plots of the tail amplitude and the thrust force as functions of the driving frequency at the applied voltage of 5 kV. While the amplitude decreases smoothly with the frequency, the force shows a similar trend but peaks at 1.25 and 2.75 Hz. These peaks suggest the presence of resonance modes, which are visible in the shape of the robot at the corresponding frequencies, as shown in Figure 4c. In this figure, the inset graphs show simulated resonance mode shapes. At 1.25 Hz, the deformation of the robot is analogous to the second mode shape; similarly, at 2.75 Hz, the third mode shape appears. The results also suggest that there would be a first mode, whose shape is similar to that at 0.25 Hz, at a frequency around this value. These peak frequencies (0.25, 1.25, and 2.75 Hz) are close to, but higher than, the model results (0.17, 0.83, and 2.31 Hz), as indicated in Figure 4b. The difference between the model and the experiments may have three main causes. The first is the presence of the electrode layers, which can increase the bending stiffness of the structure and, therefore, the natural frequencies. The second is the stiffening of the silicone elastomers due to oxidation by the oxygen plasma surface activation, 33 which should again result in higher values of the frequencies. The third is that the use of Sader's formula (7) to map the natural frequencies of the robot in vacuum to those in water introduces an additional source of error, since the formula was derived for beams with constant width.
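A simple way to locate such resonance peaks in a measured thrust-frequency sweep is a discrete peak search; in the Python sketch below the thrust values are invented solely to illustrate the procedure (they are not the measured data), with the peaks placed at 1.25 and 2.75 Hz to mirror the observation above.

import numpy as np
from scipy.signal import find_peaks

# Locating resonance peaks in a thrust-vs-frequency sweep.
# The thrust array is made-up illustrative data, not measurements.
freq = np.arange(0.25, 3.01, 0.25)                    # driving frequencies (Hz)
thrust_mN = np.array([2.4, 2.1, 1.9, 1.8, 2.3, 1.7,
                      1.4, 1.2, 1.1, 1.3, 1.6, 1.0])  # hypothetical thrust (mN)

peaks, _ = find_peaks(thrust_mN)
print("resonance peaks at (Hz):", freq[peaks])        # -> [1.25 2.75]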
The tail amplitude does not show peaks at those frequencies. This suggests that the generation of the thrust force does not depend only on the amplitude of the tail beat, but rather on the whole-body deformation, which shows a large excitation in correspondence with the resonance frequencies. Observations of real fish support this view: subcarangiform swimmers, that is, trout, from which we obtained the robot geometry, use half of their body to generate thrust and not only the tail. 7 The measured thrust force also shows a decreasing trend as the frequency is increased. A possible reason is the reduction of the tail amplitude toward higher frequencies. One potential solution for compensating the force reduction is to implement a variable-stiffness element made of phase-change materials in the robot. Thanks to this element, the body stiffness could be modulated to shift the resonance frequencies, leading to larger tail amplitudes and thrust forces at higher frequencies. In the shapes of the robot presented in Figure 4c, especially that corresponding to 1.25 Hz, the body shows a drumhead shape, due to the nature of the DEAs, which elongate also in the in-plane direction perpendicular to the longitudinal (head-tail) axis. This phenomenon may be an additional reason for the discrepancy between the experimental data and the model, which does not include this effect. The drumhead shape may also have a negative influence on the thrust force. If so, one solution to prevent this effect would be to adjust the prestretch ratio of the DEAs. It is known that DEAs deform perpendicularly with respect to the direction of the prestretch. 38 Therefore, prestretching the DEAs also in the width direction can be beneficial. Figure 5a shows a sequence of the robot swimming under the tethered condition at the driving frequency of 0.75 Hz with an applied voltage of 5 kV (see also Supplementary Video S1; Supplementary Data are available online at www.liebertpub.com/soro). We observed that the swimming motion exhibited by the robot resembles that of real fish. Figure 5b presents the swimming speed at 0.75 Hz as a function of the applied voltage. The swimming speed increases with the applied voltage. During swimming, the head of the robot moves due to the recoil forces that create a moment about its center of mass. Therefore, unlike our assumption, the robot structure can no longer be considered a perfect cantilever in the tethered swimming condition. This is evident in Figure 5a, where the head of the robot is rotating. The power consumption of the robot was measured to be 0.92 W. However, this could be greatly reduced by using a powering strategy wherein the electric charges on the DEA capacitors are collected at each cycle. Throughout the experiments, the robot did not experience dielectric breakdown. Yet, breakdown failure of the device would appear when applying a voltage beyond its breakdown strength or as a consequence of fabrication errors. Figure 5c shows the swimming speed as a function of the driving frequency at the applied voltage of 5 kV. The swimming speed has a peak value of 37.2 mm/s (0.25 body length/s) at 0.75 Hz, and shows a trend different from that of the thrust force, which has peaks at 1.25 and 2.75 Hz. The difference in the peak positions results from the change of boundary conditions, which shifts the resonance frequencies.
We assume that the first mode appears at 0.75 Hz, given the shape shown in the Figure 5c inset, which is the same as that observed for the first natural frequency in the clamped configuration (Fig. 4c). Interestingly, in Figure 5c the swimming speed takes a negative value at 3 Hz and the robot swims backward. This effect may also result from the boundary conditions, since the head assumes an amplitude larger than the tail at the corresponding vibration mode.
To compare the swimming of our robot with that of real fish, we estimate the Strouhal number, defined as

St = f A / U,

where f is the driving frequency, A is the tail amplitude, and U is the swimming speed. It is known that the swimming of various species of fish (thunniform, subcarangiform, and carangiform) corresponds to a Strouhal number in the specific range 0.25 < St < 0.40. 7 We found the tail amplitude in the tethered swimming condition at 0.75 Hz to be 23.5 mm, estimated from the Figure 5c inset, resulting in a Strouhal number for the robot of St = 0.47, which is very close to the range mentioned above for real fish. However, it is fair to mention that this range of St is known to be valid for Reynolds numbers Re between 10⁴ and 10⁶ (Re = LU/ν, where L is a characteristic length and ν is the kinematic viscosity of water). Our robot has Re of 5.6 × 10³, slightly below this range, so it is uncertain whether the St criterion still applies.
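The Strouhal and Reynolds estimates quoted above can be reproduced directly from the reported quantities; the only assumption in the snippet below is a kinematic viscosity of water of 1.0 × 10⁻⁶ m²/s.

# Reproducing the Strouhal and Reynolds numbers from the reported values:
# f = 0.75 Hz, A = 23.5 mm, U = 37.2 mm/s, L = 150 mm; nu is assumed.
f, A, U, L = 0.75, 23.5e-3, 37.2e-3, 150e-3
nu = 1.0e-6                    # kinematic viscosity of water (m^2/s)

St = f * A / U                 # Strouhal number
Re = L * U / nu                # Reynolds number
print(f"St = {St:.2f}")        # ~0.47, just above the 0.25-0.40 fish range
print(f"Re = {Re:.1e}")        # ~5.6e+03, slightly below the 1e4-1e6 range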
Conclusion and Future Work
We have presented the modeling, design, fabrication, and characterization of a fish-type DEA-based soft biomimetic underwater robot that swims by BCF propulsion. The mathematical model used to compute the natural frequencies of the structure yielded values similar to the experimental results. The robot exhibited a swimming motion resembling that of real fish, as also quantitatively estimated by the Strouhal number. These results suggest the high potential of DEA-based underwater robots relying on BCF propulsion and the applicability of our design and fabrication methods. Our future work will consist of expanding the mathematical model to the tethered swimming condition. Specifically, the model should not consider the robot head as a fixed boundary, but should represent it as a point mass with a free boundary condition. In this future model, the stiffening effects from the presence of the electrode layers and from the oxidation due to oxygen plasma bonding will also be included. Subsequently, we will work on characterizing robots of different size scales and swimming modes to understand how far our model and building method are applicable.
Comparing two strategies of dynamic intensity modulated radiation therapy (dIMRT) with 3-dimensional conformal radiation therapy (3DCRT) in the hypofractionated treatment of high-risk prostate cancer
Background To compare two strategies of dynamic intensity modulated radiation therapy (dIMRT) with 3-dimensional conformal radiation therapy (3DCRT) in the setting of hypofractionated high-risk prostate cancer treatment. Methods 3DCRT and dIMRT/Helical Tomotherapy (HT) planning with 10 CT datasets was undertaken to deliver 68 Gy in 25 fractions (prostate) while simultaneously delivering 45 Gy in 25 fractions (pelvic lymph node targets) in a single phase. The paradigms of pelvic vessel targeting (iliac vessels with margin are used to target pelvic nodes) and conformal normal tissue avoidance (treating the soft tissues of the pelvis while limiting dose to identified pelvic critical structures) were assessed against 3DCRT controls. Both dIMRT/HT and 3DCRT solutions were compared to each other using repeated measures ANOVA and post-hoc paired t-tests. Results When compared to conformal pelvic vessel targeting, conformal normal tissue avoidance delivered more homogeneous PTV coverage (2/2 t-test comparisons; p < 0.001), similar nodal coverage (8/8 t-test comparisons; p = ns), higher and more homogeneous pelvic tissue dose (6/6 t-test comparisons; p < 0.03), at the cost of slightly higher critical structure dose (Δdose, 1–3 Gy over 5/10 dose points; p < 0.03). The dIMRT/HT approaches were superior to 3DCRT in sparing organs at risk (22/24 t-test comparisons; p < 0.05). Conclusion dIMRT/HT nodal and pelvic targeting is superior to 3DCRT in dose delivery and critical structure sparing in the setting of hypofractionation for high-risk prostate cancer. The pelvic targeting paradigm is a potential solution for delivering highly conformal pelvic radiation treatment in the setting of nodal location uncertainty in prostate cancer and other pelvic malignancies.
Background
Prostate cancer is the most common malignancy afflicting the Canadian male population. It is estimated that approximately 20,700 men were diagnosed with prostate cancer in 2006 and that approximately 4,200 will die of this disease [1]. Standard curative treatment for high-risk prostate cancer [2] is a radical course of radiation treatment with long-term androgen suppression therapy [3,4]. A recently completed RTOG (Radiation Therapy Oncology Group) prospective randomized phase III trial shows that whole-pelvic nodal irradiation improves biochemical disease-free survival in patients with a high risk (>15%) of positive pelvic lymph nodes from prostate cancer, based on tumour stage, PSA, and Gleason grade [5].
This radiation treatment usually consists of sequential phases using shrinking fields. Traditionally, the first phase consists of five daily fractions each week to the whole pelvis, including the prostate gland and pelvic lymph nodes at risk, using a four-field box technique. The usual prescribed doses range from 44 to 50.4 Gy in 1.8-2.0 Gy fractions. The remainder of the radiation treatment is given to a reduced boost volume targeting the prostate gland (± seminal vesicles) using the same fractionation schedule to a radical total dose. Androgen suppression therapy can be given in neo-adjuvant, concurrent, and/or adjuvant form with the radiation [3,4]. Unfortunately, the use of conventionally planned whole-pelvic radiotherapy results in toxicity to normal structures such as the small bowel, rectum, and bladder.
Recent studies have illustrated a steep dose-response relationship when escalating the total dose to approximately 80 Gy (1.8-2.0 Gy per fraction) in intermediate- and high-risk prostate cancer patients. The increasingly higher doses also intensify toxicities to the organs at risk (OARs), which can be partially overcome by using advanced planning techniques such as IMRT or a concomitant boost approach [6-14]. However, dose escalation has not typically been performed in conjunction with pelvic nodal radiation. The pelvic dose bath may make it difficult to safely dose escalate the prostate gland while respecting normal tissue constraints to the OARs.
Recent literature suggests that prostate cancer may differ from other malignancies in terms of its slow proliferation rate. Labeling indexes can be extraordinarily low, with most reports suggesting levels below 1%, and potential doubling times are long, with a median T_pot value of 40 days (range 15 to 170) [15]. Traditionally, an alpha:beta ratio of 10 Gy is used to calculate the biologically equivalent dose (BED) for acute toxicity and tumour response. Current studies predict an alpha:beta ratio of 1.5 Gy (range 0.8-2.2) for prostate carcinoma, below the classic alpha:beta ratio of 3 to 4 Gy for rectal late radiation effects [16-22]. This gives a potential therapeutic advantage to hypofractionated RT schedules over conventional fractionation, by escalating the biologically equivalent dose in a shorter period of treatment time with better tumour control and reduced rectal toxicity [18,23-25]. Biologically equivalent hypofractionated treatment schedules for prostate cancer have been proposed in the literature [18-20,24].
The aim of this comparative dosimetric analysis is to evaluate two pelvic treatment paradigms: either pelvic vessel contouring plus margin expansion (pelvic vessel targeting paradigm) or full pelvic content treatment excluding identified critical structures (normal tissue avoidance paradigm), in the setting of hypofractionated treatment of high-risk prostate cancer. Helical tomotherapy will be used as the dynamic intensity modulated radiation therapy solution for both treatment approaches. 3DCRT plans will be used as control comparisons.
Methods and materials
Patients and target/normal tissue contours

A sample of ten patients was scanned on a helical CT scanner (Philips 5000) with 3 mm slice thickness, with a comfortably full bladder and no bowel preparation prior to simulation. The prostate and seminal vesicles were identified and contoured for each patient (by JY) and reviewed by two clinicians (GR, GB) in order to generate consensus-based contours. The PTV1 was defined as prostate + 7.5 mm (Figure 1). The nodal target was defined by a method proposed by Shih et al [26]. The distal common iliac (2 cm superior to the common iliac bifurcation), internal iliac (4 cm distal to the bifurcation of the common iliac), and external iliac vessels (to the top of the superior pubic symphysis) were outlined from L5-S1 to the top of the symphysis pubis.
The conformal pelvic vessel targeting paradigm was assessed by generating a lymph node planning target volume, defined by a 20 mm radial expansion of the contoured vessels and tailored to respect the muscle and bony pelvis normal tissue boundaries up to 10 mm. Therefore, the final PTVcpvt for conformal pelvic vessel targeting included both PTV1 and the lymph node planning target volume (Figure 1). The conformal pelvic normal tissue avoidance paradigm was assessed by generating a pelvic soft tissue target, defined as the pelvic soft tissue volume within a standard four-field box. This volume extends beyond the previously defined lymph node planning target volume, respects the normal tissue boundaries of muscle and bone, and excludes all other identified structures such as small bowel, bladder, rectum, and femora. Therefore, the PTVcnta for conformal normal tissue avoidance was the PTV1 + lymph node planning target volume + pelvic soft tissue target (Figure 1). In both planning cases, the simultaneous in-field boost (SIB) volume was PTV1.
Rectum, bladder, and femoral heads were outlined using the guidelines provided by the RTOG P-0126 protocol. Specifically, the entire outer wall of the bladder is contoured, and the rectum is contoured from the anus (at the level of the ischial tuberosities) for a length of 15 cm or to where the rectosigmoid flexure is identified. Femurs include the femoral head and extend inferiorly to the level of the ischial tuberosity. Small bowel was contoured on all slices where the nodal target or pelvic target was identified. All critical structures were contoured as single volumetric structures and considered to be solid organs for dosimetric calculations. A prescription dose of 68 Gy was prescribed to 95% of the PTV1 in 25 fractions. PTVcpvt and PTVcnta were prescribed 45 Gy in the same 25 fractions for the conformal pelvic vessel targeting and conformal normal tissue avoidance strategies, respectively.
Helical tomotherapy planning
The dynamic IMRT solution chosen for this dosimetric feasibility study was helical tomotherapy (TomoTherapy Inc., Madison, WI, USA). CT datasets and structures were transferred to the TomoTherapy planning workstation using the DICOM RT protocol. The TomoTherapy station re-sampled the CT datasets to 256 × 256 voxels, with the slice thickness re-sampled to the smallest slice separation in the original CT dataset. The planning system used an inverse treatment planning process based on iterative least-squares minimization of an objective function [27]. Initial precedence, importance, and penalty factors were set (Table 1) to obtain a preliminary helical tomotherapy plan. Subsequent optimization was based on an assessment of target and OAR dose-volume parameters that had not been achieved, altering the penalty factors associated with the target/OAR to drive the plan optimization. The solutions had to result in deliverable treatments and could not exceed 30 minutes of total treatment delivery. The dose was calculated using a superposition/convolution approach [28,29]. Helical delivery is emulated by calculating 51 projections per rotation, and the dose calculation uses a total of 24 different angles for the dose spread array of the incident 6 MV beam. The optimization algorithm is deterministic, which allowed for the direct comparison of different strategies. A standardized class solution with a fan beam width of 11 mm, a pitch of 0.5, a modulation factor of 3, and a dose calculation grid of approximately 4 × 4 × 3 mm³ was used [30].
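As a schematic illustration of this kind of penalty-weighted least-squares optimization (and emphatically not the TomoTherapy algorithm itself), the Python sketch below fits nonnegative beamlet weights to a toy dose-influence matrix; the matrix, prescription, and penalty factors are all invented.

import numpy as np

# Toy inverse-planning sketch: minimize a penalty-weighted least-squares
# objective over target and OAR voxels with dose = A @ w, w >= 0.
# The influence matrix A, prescription, and penalties are invented.
rng = np.random.default_rng(0)
n_voxels, n_beamlets = 60, 12
A = rng.uniform(0.0, 1.0, (n_voxels, n_beamlets))   # dose-influence matrix
is_target = np.arange(n_voxels) < 30                # first half: target voxels

d_presc = np.where(is_target, 68.0, 0.0)            # prescription (Gy)
penalty = np.where(is_target, 10.0, 1.0)            # importance/penalty factors

w = np.zeros(n_beamlets)
lr = 1.0 / (np.linalg.norm(A, 2) ** 2 * penalty.max())  # safe step size
for _ in range(20000):                              # projected gradient descent
    grad = A.T @ (penalty * (A @ w - d_presc))
    w = np.maximum(w - lr * grad, 0.0)              # keep weights nonnegative

dose = A @ w
print("mean target dose (Gy):", round(dose[is_target].mean(), 1))
print("mean OAR dose (Gy):", round(dose[~is_target].mean(), 1))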
Three-dimensional conventional planning

3DCRT plans with 18 MV photons were generated using a commercial treatment planning system, Pinnacle DCM7.6c (Philips, Amsterdam, The Netherlands). The plans used a four-field technique to treat the pelvis and served as the control arm for this dosimetric study. For the anterior/posterior fields, the superior border was at L5-S1, the lateral borders were 2 cm lateral to the widest point of the bony pelvic inlet, and the inferior border was 1.5 cm below the prostate on CT images. For the lateral fields, the anterior border was the anterior surface of the pubic symphysis and the posterior border was the middle of the sacrum, including at least a 0.75 cm posterior margin on the prostate and seminal vesicles. The superior and inferior margins were identical to those of the anterior/posterior fields. The simultaneous in-field boost (SIB) to the prostate was treated with a six-field coplanar technique targeting the prostate and proximal seminal vesicles with a 1 cm margin. Shielding using 120-leaf multi-leaf collimation (MLC) was used to shape the fields.
Statistical methodology
The dIMRT/HT plans were compared to each other and to the 3DCRT plans in terms of a priori defined target and normal tissue dose volume histogram (DVH) and dose metric outcome characteristics (Table 2). The a priori null hypothesis, for all comparisons, was that the mean values of the DVH parameters/metrics between all three paradigms were not different. The alternate hypothesis was that the mean DVH parameters/metrics between all three paradigms were different. All main comparisons were performed using repeated measures analysis of variance (ANOVA). All two-way (between any two paradigms) post-hoc comparisons were performed using paired Bonferroni-adjusted Student's t-tests.
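To make the comparison pipeline concrete, the sketch below applies Bonferroni-adjusted paired t-tests to invented repeated-measures data for the ten plans under the three paradigms (a repeated-measures ANOVA could precede this, e.g. via statsmodels' AnovaRM); none of the numbers are study data.

import numpy as np
from scipy import stats

# Bonferroni-adjusted paired t-tests across the three paradigms for the
# same ten CT datasets. All dose values below are invented placeholders.
rng = np.random.default_rng(1)
base = rng.normal(45.0, 2.0, 10)             # a DVH metric per CT dataset (Gy)
dcrt = base + rng.normal(3.0, 0.5, 10)       # 3DCRT
cpvt = base + rng.normal(0.0, 0.5, 10)       # conformal pelvic vessel targeting
cnta = base + rng.normal(1.0, 0.5, 10)       # conformal normal tissue avoidance

pairs = {"3DCRT vs cpvt": (dcrt, cpvt),
         "3DCRT vs cnta": (dcrt, cnta),
         "cpvt  vs cnta": (cpvt, cnta)}
n_tests = len(pairs)
for name, (u, v) in pairs.items():
    t, p = stats.ttest_rel(u, v)
    p_adj = min(p * n_tests, 1.0)            # Bonferroni adjustment
    print(f"{name}: t = {t:5.2f}, adjusted p = {p_adj:.4f}")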
Target structures
The ten CT planning studies represent a wide range of potential target and normal tissue volumes (Table 3). All three planning strategies were able to cover 95% of the PTV1 with the prescription dose. Comparing one planning process to another, there are statistically significant differences in the delivery of dose to PTV1 (Table 4). When assessing dose homogeneity, defined as both D99-D1 and D95-D5, the conformal normal tissue avoidance solution showed the most homogeneous dose distribution of the three strategies. 3DCRT delivered a higher absolute dose to the nodal target volume at all dose points (Table 5). However, both dIMRT/HT plans were able to deliver the prescription dose to the nodal target while being significantly more homogeneous. The pelvic soft tissue target volume looks specifically at the soft tissues within the pelvic field, excluding the nodal target and the organs at risk (Table 6). Given the highly conformal nature of tomotherapy, the conformal pelvic vessel targeting approach delivered a significantly lower dose to the pelvic soft tissues, as they were not specifically targeted. As expected, the 3DCRT and conformal normal tissue avoidance strategies delivered the highest dose to the pelvic soft tissue target volume. The conformal normal tissue avoidance technique had better homogeneity of dose compared to the 3DCRT control, due to the IMRT delivery of helical tomotherapy.
Organs at risk

DVH characteristics were compared for the rectum, bladder, femoral heads, and small bowel (Table 7). The 3DCRT plan generated the highest dose to all the organs at risk. The dIMRT/HT techniques were both able to spare the critical structures significantly better than the non-conformal control. Between the two dIMRT/HT approaches, conformal pelvic vessel targeting delivered a lower dose at most dose points in comparison to conformal normal tissue avoidance.
Discussion
Intensity modulated radiation therapy (IMRT) uses an advanced planning technique that creates complex dose distributions, which can deliver a radical dose of radiation to the prostate gland and treat the pelvic nodes at risk while reducing the irradiated volume of small bowel and rectum [31]. In addition, IMRT can be used to deliver dose to the primary prostate volume while simultaneously treating the regional lymph nodes at risk to a lower dose in a single phase. This strategy, called an SIB technique, has many clinical, dosimetric, and economic advantages and has been incorporated into several different anatomic sites [32-39]. Integrating the whole pelvis and prostate boost into the plan optimization from the outset may, in theory, improve the likelihood that the resulting solution will be able to meet the constraints for safe prostate dose escalation in the setting of whole-pelvis treatment. By using an SIB scheme, the prostate gland can be irradiated with a higher dose per fraction than the regional nodal volumes [40].
Using IMRT, a conformal pelvic vessel targeting solution can be achieved to treat the prostate gland while also treating the pelvic node-bearing regions, provided the physician can reliably identify these treatment volumes. In the area of head and neck radiotherapy, standardized and reliable anatomic maps for contouring lymph node regions are available [41,42]. However, no consensus exists for a standardized identification of pelvic lymph node anatomy. Currently, contouring of the pelvic vessels has been used as a surrogate for pelvic nodal regions and used to generate clinical target volumes. This is usually done by adding a 1.5 to 2 cm margin around the vessel itself to approximate the region of the perivascular lymph nodes [26]. Several potential difficulties exist with this conformal pelvic vessel targeting approach. Firstly, there is uncertainty as to the optimal margin of normal tissue around the vessels to adequately cover the lymph node-bearing regions. Secondly, there can be difficulty in tracking and visualizing the internal iliac vasculature. Finally, there is an inability to target smaller lymphatic vessels and lymph node regions "in transit" to the larger nodal stations along the visible vessels.
An alternate strategy proposed in relation to this study is conformal normal tissue avoidance. In this solution, the goal is to identify the organs at risk (bladder, small bowel, rectum, and femoral heads) and subtract them from the pelvic target volume. The remaining volume is identified as the target for regional nodal irradiation, which contains the soft tissues of the pelvis (corresponding to the pelvis at risk that would be treated by a standard non-conformal pelvic radiation field). Inverse or forward planned optimization can then be designed to treat the pelvic soft tissue target volume to a microscopic dose while limiting dose to the identified critical structures and dose escalating the prostate gland. This approach carries the advantage that the critical structures are typically easier to identify as avoidance volumes than the nodal target regions (which rely on vessels as a surrogate marker). The conformal normal tissue avoidance strategy would also allow treatment of smaller lymphatic vessels and lymph nodes within the pelvic soft tissues, with a lower risk of under-treating important nodal regions. Problems with this approach include a modest increase in dose to the organs at risk compared to the conformal pelvic vessel targeting approach, and the effect of inter-fraction organ movement. Multiple CT simulations or daily image guidance with adaptive therapy may be required to clinically implement a pelvic conformal avoidance strategy. However, it is important to note that doses to the OARs compare favorably to the calculated and expected doses delivered with 3DCRT four-field pelvic radiation.
In this paper, we attempt to incorporate hypofractionation, dose escalation, and nodal basin irradiation within a single-phase dynamic IMRT helical tomotherapy (dIMRT/HT) solution. Two opposing strategies were studied, conformal pelvic vessel targeting and conformal normal tissue avoidance, using the unique capabilities of the TomoTherapy treatment planning, image-guidance, and IMRT radiation delivery system. Even though the two strategies differ in their approach to the nodal basin, both solutions delivered the prescribed dose to the prostate and the vessel-defined node-bearing regions. The major difference lies in the dose to the pelvic soft tissues that lie between the expanded nodal target volume and the organs at risk. Conformal pelvic vessel targeting does not specifically address these tissues, and consequently the planning system algorithm cannot use this information in developing a dosimetric plan. The dose is driven into the defined nodal target, and this area essentially becomes a buffer zone where a dose gradient exists between the vessel targets and the organs at risk. As such, the planned dose is significantly less than in the conformal normal tissue avoidance paradigm, where this area is specifically defined as a target. The planning system optimizes based on the importance, precedence, and penalty factors to deliver dose to the pelvic soft tissue target, with no such buffer zone between it and the organs at risk. Therefore, the conformal normal tissue avoidance technique was able to deliver the microscopic dose to the pelvic tissues while having the benefit of not having to define a nodal target region based on potentially ill-defined pelvic vasculature. In addition, the concern of geometric miss associated with many conformal treatments (due to issues such as motion of the target) is minimized.
Because conformal normal tissue avoidance targets all the tissue within the pelvis aside from the organs at risk, it necessarily delivers a higher dose to the organs at risk when compared to conformal pelvic vessel targeting, unless they are specifically excluded as critical structures. We can see this from the data in Table 7, which shows statistically significantly higher doses to these organs at 8/12 dose points. The absolute differences were about 1-4 Gy over the entire course of treatment, which may be of limited or no clinical significance in terms of differences in possible late toxicity. This potential cost to the normal tissues is necessary to deliver the prescribed dose to the rest of the pelvis. The clinical impact of this difference in terms of acute and late effects is currently unknown.
Unfortunately, there are no defined dose limits for OARs in the setting of hypofractionated treatment of the pelvis. However, using the linear-quadratic concept to calculate the biologically effective doses of different fractionation protocols, we can compare our planned doses with the dose limits given for a large RTOG dose escalation trial (Table 8).
The regimen proposed here for hypofractionated, dose-escalated treatment of the prostate gland is based on currently available data. The reliability of the underlying radiobiologic models limits the accuracy of our BED estimates. However, even if the α/β of the prostate is 3 instead of 1.5, our planned dose will still deliver a BED (2 Gy) of 78 Gy. We can see that the planned doses using both dIMRT/HT strategies are within the dose constraints given by RTOG P0126. Even so, the impact on normal tissues of a hypofractionated protocol, where the overall treatment time is significantly shorter, will need to be defined in current and future clinical trials. In Canada, a clinical trial is underway evaluating linac-based IMRT and helical tomotherapy, clinically assessing a dose regimen of 68 Gy in 25 fractions to the prostate while simultaneously delivering 45 Gy in 25 fractions to pelvic tissues.
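The linear-quadratic arithmetic behind these statements can be checked in a few lines; the sketch below converts the 68 Gy / 25-fraction prescription into an equivalent dose in 2-Gy fractions for the two α/β values discussed in the text.

# Linear-quadratic check of the 68 Gy in 25 fractions prescription,
# expressed as an equivalent dose in 2-Gy fractions (EQD2).
def eqd2(total_dose, n_fractions, alpha_beta):
    """EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with d = D/n."""
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

for ab in (1.5, 3.0):
    print(f"alpha/beta = {ab}: EQD2 = {eqd2(68.0, 25, ab):.0f} Gy")
# alpha/beta = 1.5 -> 82 Gy; alpha/beta = 3.0 -> 78 Gy, matching the text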
The effects of normal tissue movement are not taken into account here. While daily MVCT localization of the prostate is an inherent benefit of tomotherapy treatment, it currently does not take into account the daily movement of normal tissues. Ideally, a planning system powerful enough to develop a solution daily, within the time constraints of a busy treatment facility, would be the ultimate solution. However, as an interim step, the concept of adding a margin for tissue movement can also be used, as suggested by the ICRU. We expect that planning with a more realistic OAR volume will result in a plan that lies between the extremes of conformal pelvic vessel targeting and conformal normal tissue avoidance presented here. Clinical investigations into the appropriate definition of the nodal targets are also under evaluation. For instance, ultra-small super-paramagnetic iron oxide particles, known generically as ferumoxtran-10, have been successfully evaluated for detection of sentinel lymph nodes in various clinical trials [43-45]. Anatomic nodal information derived from these studies may better define the regions at risk within the pelvis for identification in our treatment planning systems, and subsequently drive the planning system optimization to better cover the intended targets while continuing to spare the OARs.
The techniques developed here extend beyond the treatment of prostate cancer. Similar approaches can be used for other disease sites within the pelvis (cervix, endometrium, etc.). Also, the concept of conformal normal tissue avoidance can be generalized to wherever there is concern over uncertainties regarding pelvic nodal target delineation and nearby organs at risk. This technical dosimetric feasibility study offers evidence that conformal avoidance, as an advanced treatment planning strategy, is a potential solution for delivering highly conformal pelvic radiation in the setting of nodal location uncertainty due to incomplete nodal mapping or aberrant nodal drainage.
Conclusion
This research study has demonstrated that dIMRT/HT nodal and pelvic targeting is superior to 3DCRT in dose delivery and critical structure sparing in the setting of hypofractionation for high-risk prostate cancer. This technical dosimetric feasibility study offers evidence that conformal avoidance, as an advanced treatment planning strategy, is a potential solution for delivering highly conformal pelvic radiation in the setting of nodal location uncertainty due to incomplete nodal mapping or complex nodal drainage.
Authors' contributions

All authors have read and approved the final manuscript. Specifically, JY completed all contours, supervised treatment planning, performed interpretation of the statistical analysis, and drafted/approved the manuscript. GR was responsible for the initial research idea, supervision of the project, and the statistical analysis, and assisted in the preparation and approval of the manuscript. TC and KT performed treatment planning and assisted in the preparation and approval of the final manuscript. SY, ML, DD, and GB co-supervised the project, assisted in the interpretation of the statistical analysis, and assisted in the preparation and approval of the manuscript.
Dzyaloshinskii-Moriya interaction and chiral magnetism in 3$d$-5$d$ zig-zag chains: Tight-binding model and ab initio calculations
We investigate the chiral magnetic order in free-standing planar 3$d$-5$d$ bi-atomic metallic chains (3$d$: Fe, Co; 5$d$: Ir, Pt, Au) using first-principles calculations based on density functional theory. We found that the antisymmetric exchange interaction, commonly known as the Dzyaloshinskii-Moriya interaction (DMI), contributes significantly to the energetics of the magnetic structure. We used the full-potential linearized augmented plane wave method and performed self-consistent calculations of homogeneous spin spirals, calculating the DMI by treating the effect of spin-orbit interaction (SOI) in the basis of the spin-spiral states in first-order perturbation theory. To gain insight into the DMI results of our ab initio calculations, we develop a minimal tight-binding model of three atoms and four orbitals that contains all essential features: the spin-canting between the magnetic $3d$ atoms, the spin-orbit interaction at the $5d$ atoms, and the structure inversion asymmetry facilitated by the triangular geometry. We found that spin-canting can lead to spin-orbit active eigenstates that split in energy due to the spin-orbit interaction at the $5d$ atom. We show that the sign and strength of the hybridization, the bonding or antibonding character between $d$-orbitals of the magnetic and non-magnetic sites, the bandwidth, and the energy difference between occupied and unoccupied states of different spin projection determine the sign and strength of the DMI. The key features observed in the trimer model are also found in the first-principles results.
I. INTRODUCTION
The recent discovery of chiral magnetism in low-dimensional metals 1 has opened a new vista in the research of magnetism. For a two-dimensional Mn monolayer film on W(110), it was shown that the magnetic structure is not the two-dimensional checkerboard antiferromagnetic one 2 as thought for a long time; instead, by combining spin-polarized scanning tunneling microscopy and ab initio theory, a left-rotating cycloidal spin spiral was found. A right-rotating one, which would have the same energy in a conventional achiral magnet, does not exist. Since then, chiral magnetism has been found in several other thin-film systems, e.g. Mn/W(100) 3 , and in bi-atomic Fe chains on the (5 × 1)-Ir(001) surface 4 . Chiral magnetism was recently also found in domain walls, e.g. in Fe/W(110), 5,6 Ni/Fe/Cu(001), 7 Co/Pt(111), 8 Co/Pt, 9 FeCo/Pt 10 and in the magnon dispersion of Fe/W(110) 11 . In most cases the chirality is imprinted in one-dimensional chiral spin spirals, but under certain conditions chirality can also appear in the form of two-dimensional objects known as skyrmions, e.g. in the case of Fe/Ir(111) [12][13][14] and Pd/Fe/Ir(111). 15 The chirality in these low-dimensional magnets opens completely new perspectives in domain-wall motion, spin torques, and spin transport that together have a real impact on the further development of spintronics.
The origin of the chirality in low-dimensional itinerant magnets is caused by the presence of the spin-orbit interaction (SOI) in combination with a structure inversion asymmetry provided by a substrate on which the film is deposited. This leads to an antisymmetric exchange interaction, postulated first by Dzyaloshinskii 16 and frequently referred to as the Dzyaloshinskii-Moriya-type interaction (DMI), because Moriya 17 provided the first microscopic understanding on the basis of a model relevant to insulators. Although the microscopic models for metals are naturally different and go back to Smith 18 , Fert and Levy 19,20 and Kataoka et al. 21 , the functional form of the antisymmetric exchange remains unchanged. If the Dzyaloshinskii-Moriya-type interaction is sufficiently strong, it can compete with the conventional isotropic exchange interaction of spins and the magneto-crystalline anisotropy, and the conventional ferromagnetic or antiferromagnetic phase is destabilized in favor of a chiral one. The isotropic exchange interaction goes back to the Coulomb interaction in combination with the antisymmetric nature of the many-electron wave function and the hopping of electrons. It is typically captured by the Heisenberg model. The Heisenberg interaction is strictly achiral, and any spiral state produced by the Heisenberg interaction is symmetric with respect to left or right chirality. Whether the Dzyaloshinskii-Moriya-type interaction is strong enough to stabilize a chiral spiral, and which sign the interaction will take on, determining the chirality of the rotating structure (right- or left-rotating), is a priori unknown and depends on the details of the electronic structure.
Homogeneous and inhomogeneous chiral spirals have been investigated by Dzyaloshinskii 22 on a model level.
Surprisingly, little is known quantitatively about the Dzyaloshinskii-Moriya-type interaction in low-dimensional metallic magnets. Practically no systematic theoretical or computational results exist. Obviously, it is a chiral interaction based on the spin-orbit interaction, and it requires the treatment of non-collinear magnetism in a broken-symmetry environment, which typically necessitates the computation of small quantities in a complex geometry. In particular, this interaction is small compared to the Heisenberg exchange, and therefore we expect chiral spirals of long wavelengths that deviate little from the ferromagnetic state. In terms of ab initio calculations, this means an accurate treatment requires precise calculations of gigantic unit cells that are unattainable even with modern supercomputers. All in all, this makes the treatment rather non-trivial.
In this paper we shed light on the DMI by applying calculations based on density functional theory to a well-chosen set of model systems, namely planar free-standing zigzag bi-atomic chains of 3d and 5d transition-metal atoms in a structure-inversion-asymmetric geometry. That is, we have chosen a combination of 3d elements (Fe or Co) exhibiting strong magnetism and heavy 5d elements (Ir, Pt or Au) as a source of strong SOI. The asymmetric chain can be considered a minimal model describing a film of 3d atoms on a non-magnetic substrate with large spin-orbit interaction, or a chain of 3d metals at the step edge of a 5d substrate. But it is also a system in its own right. Recently, the magnetic properties of various bi-metallic 3d-5d chains of linear and zigzag shape have been investigated. [23][24][25] The calculations are carried out within the full-potential linearized augmented plane wave method (FLAPW) 26,27 as implemented in the FLEUR code 28 . In order to deal with the large unit cell anticipated for chiral magnetic spirals, we treat the magnetic structure in reciprocal space by making use of the generalized Bloch theorem [29][30][31] in the absence of the spin-orbit interaction, which allows the calculation of incommensurate magnetic spirals in the chemical unit cell. The spin-orbit interaction is then treated in first-order perturbation theory in the basis of the spin-spiral solutions. The MAE is determined by separate calculations, and all results are discussed in terms of model Hamiltonians for the different spin interactions (viz., Heisenberg, DMI, and MAE).
Our findings show that without SOI all systems are ferromagnets, with the exception of the Fe-Pt and Co-Pt bi-atomic chains. For these two chains, we expect a magnetic exchange spiral that is degenerate with respect to the right- or left-rotational sense. Including the spin-orbit interaction, we find that the hard magnetization axis is normal to the plane of the zigzag chain, and thus any spiral should be of cycloidal nature, with the magnetization rotating in the plane of the zigzag chain. The DMI depends critically on the substrate, i.e. the 5d atom of the bi-atomic zigzag chain: the sign of the DMI flips each time when moving from Ir to Pt and then to Au. Among all chains, for the Fe-Pt and Co-Pt chains the DMI is sufficiently strong to stabilize a chiral magnetic ground state of left-rotational sense.
Surprisingly little is known about the relation between the DMI interaction and the electronic orbitals that contribute to it. The rather clear nature of the electronic structure of the bi-atomic zig-zag chain invites the development of a minimal tight-binding model consisting of four relevant d-orbitals located at two 3d atoms and one 5d atom arranged in a triangular geometry. In this paper, we present the results of this simple tight-binding model, which represents the essential features of the problem elucidating the factors controlling the sign and strength of DMI in these 3d-5d transition-metal zigzag chains.
The paper is organized as follows: Sec. II describes the computational methodology required for determination of the DMI and MAE from first-principles calculations. In Sec. III, the results for the 3d-5d bi-atomic chains are presented. In Sec. IV we describe the tight-binding model for the trimer in detail and from the results we draw analogies to the infinite 3d-5d chains. Finally, we conclude our findings in Sec. V.
A. Structural optimization
We have modeled free-standing planar zigzag bi-atomic chains of 3d-5d elements, as shown in Fig. 1. For the calculations, we have used the film version of the full-potential linearized augmented plane-wave (FLAPW) method as implemented in the Jülich DFT code FLEUR. 28 For our one-dimensional structures, we choose a large rectangular two-dimensional unit cell of 20 a.u. along the y-direction to minimize the interaction between periodically repeated images of the one-dimensional infinite chains containing one 3d and one 5d atom. Then, we optimize the lattice parameter a of the magnetic chains, corresponding to the unit-cell length in the x-direction, and the bond length d, by carrying out spin-polarized calculations applying the revised Perdew-Burke-Ernzerhof (rPBE) 32 exchange-correlation functional within the Generalized Gradient Approximation (GGA). Our unit cell is embedded in two semi-infinite vacua in the ±z-directions. The muffin-tin sphere radius around each atom was chosen to be 2.2 a.u. for all chains. A careful convergence analysis shows that a plane-wave cutoff of 3.8 a.u.⁻¹ and 48 k-points along the positive half of the one-dimensional Brillouin zone are sufficient to obtain converged structural parameters a and d in non-magnetic calculations. For completeness, we mention that in our set-up the inversion symmetry is broken due to the lack of reflection symmetry along the xz-plane.

FIG. 1: Structure of the 3d-5d transition-metal chains. The lattice parameter a denotes the equilibrium bond length between two consecutive 3d (5d) atoms, d represents the distance between the 3d and 5d atoms, and α is the angle spanned by the 5d-3d-5d atoms.
B. Collinear magnetic calculations
Using the optimized geometry, we calculated the energy difference between the collinear (ferromagnetic and antiferromagnetic) states with 48 k-points, using the GGA-rPBE exchange-correlation functional on the one hand, and the Vosko-Wilk-Nusair (VWN) functional 33 within the local density approximation (LDA) on the other. All magnetic interaction energies reported below are those calculated with the LDA functional, since experience has shown that it gives a more realistic description of the magnetic interaction energies.
C. Spin-spiral calculations
We consider flat, homogeneous spin spirals, which are defined by two quantities: the spin-spiral wave vector q and a rotation axis. The former has three properties: (i) the direction of q corresponds to the propagation direction of the spin spiral (in our case it is limited to the x-direction due to the one-dimensional nature of the chains, and we omit the vector character of q in the following), (ii) its magnitude determines the wavelength λ = 2π/|q| of the spin spiral, and finally (iii) the sign of q determines the rotational sense of the spin spiral. If q > 0 (q < 0) we refer to a counter-clockwise (clockwise), or left (right), rotating spiral. To finalize the definition of the spin spirals, we comment that for 'flat' spin spirals all magnetic moments rotate in one plane perpendicular to the rotation axis. There are two special q-points in the one-dimensional Brillouin zone that deserve mentioning: q = 0, which represents the ferromagnetic alignment, and q = ±0.5 · 2π/a, which represents the antiferromagnetic alignment.
We have performed self-consistent total-energy calculations of spin spirals within the scalar-relativistic approximation (i.e. without SOI), using both the GGA-rPBE and LDA-VWN exchange-correlation functionals. In this case, we can without loss of generality choose the rotation axis along the z-direction and exploit the generalized Bloch theorem, 34-36 which allows for a calculation of spin spirals in the chemical unit cell rather than in a possibly large supercell and thus reduces the computational effort considerably. In a second step, we have estimated the effect of SOI on the spin-spiral energies in first-order perturbation theory (cf. Sec. II D). For all spin-spiral calculations, a dense mesh of 384 k-points has been used.
Let us first look at the case without SOI: the corresponding interaction energy between magnetic moments can be described in terms of a Heisenberg model,

E_0 = \sum_{i<j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j ,

where the direction vector of the magnetic moment, S_j, at lattice site j is parameterized by the magnetic spiral S_j = (cos(j a q), sin(j a q), 0)^T, and the sign of the isotropic exchange integrals J_ij determines whether the magnetic interaction between the sites i and j is ferromagnetic (J < 0) or antiferromagnetic (J > 0). Non-trivial spin-spiral ground states can be formed if the interactions between different neighbors compete in sign and strength in such a way that the mutual exchange interaction is frustrated. Such spirals are exchange spirals, as opposed to chiral spirals induced by the DMI. Exchange spirals are achiral in the sense that the energies are degenerate with respect to q and −q, which is reflected by the dot product of the Heisenberg model.
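To make the frustration mechanism concrete, the sketch below (our own illustration, with made-up exchange constants) evaluates the per-site Heisenberg dispersion E_0(q) = Σ_j J_{0j} cos(j a q) for a ferromagnetic nearest-neighbor coupling competing with a weaker antiferromagnetic third-neighbor coupling, and locates the resulting incommensurate minimum:

```python
import numpy as np

a = 1.0                         # lattice constant (arbitrary units)
J = {1: -20.0, 3: 3.5}          # illustrative exchange constants in meV:
                                # ferromagnetic J1 < 0 competing with J3 > 0

q = np.linspace(-0.5, 0.5, 2001) * 2 * np.pi / a          # 1D Brillouin zone
E0 = sum(Jj * np.cos(j * a * q) for j, Jj in J.items())   # per-site dispersion

i_min = np.argmin(E0)
print(f"minimum at q = {q[i_min] * a / (2 * np.pi):+.3f} (2*pi/a), "
      f"E0(q_min) - E0(0) = {E0[i_min] - sum(J.values()):.2f} meV")
```

With these values the minimum sits near q ≈ ±0.09 (in units of 2π/a), a shallow exchange spiral of the kind found for the Pt-based chains below; since the dispersion contains only cosines, +q and −q remain degenerate.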
D. Calculation of Dzyaloshinskii-Moriya interaction
When considering the SOI for a spin-spiral state, two more energy contributions appear: the magnetocrystalline anisotropy energy (MAE, cf. Sec. II E) and the Dzyaloshinskii-Moriya interaction, which in terms of a spin model is of the form

E_{DM} = \sum_{i<j} \mathbf{D}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j) .

Here, the antisymmetric exchange constants D_ij are called Dzyaloshinskii-Moriya (DM) vectors; they determine the strength and sign of the DMI. Due to the cross product between the two magnetic moments, canted spin structures of a particular handedness are favored by this energy term. The type of handedness depends on the sign of the DM vectors with respect to the spin-rotation axis. As a result, the degeneracy of spin spirals with respect to the direction of the rotation axis is lifted: E_DM becomes extremal for a rotation axis parallel to the DM vector. For the zigzag chains investigated in this work, the xy-plane is a global mirror plane (M : (S_x, S_y, S_z) → (−S_x, −S_y, S_z)), and by plain symmetry arguments the DM vector D = (0, 0, D) points along the ±z-direction, thus preferring flat spin spirals with a rotation in the xy-plane. Within our geometry, we define the chirality index C = e_z · (S_i × S_{i+1}) and call the magnetic structure left-handed (right-handed) for C = +1 (C = −1).
For the calculation of the energy of spin spirals including the SOI, the generalized Bloch theorem cannot be applied any more, because atoms with different directions of the magnetization can be distinguished by their spin-orbit interaction energy. One possible way out would be to use large supercells in which the magnetic structure is commensurate, at large computational cost. However, since the SOI energy is much smaller than the total energy of the spin spiral, 37,38 we treat the SOI as a perturbation to the system. This allows us to find the energy levels and the wave functions of the unperturbed system, ε^0_kν(q) and ψ^0_kν(q, r), for the one-dimensional Bloch vector k and band index ν, using the chemical unit cell only. Then we estimate the shift δ_kν of these levels due to the SOI Hamiltonian H_SO, in the basis of spin-up and spin-down states, as

\delta_{k\nu}(q) = \langle \psi^0_{k\nu}(q) | H_{SO} | \psi^0_{k\nu}(q) \rangle . (3)

Summing up these energy shifts over all occupied states of the unperturbed system, we find an energy correction corresponding to the Dzyaloshinskii-Moriya interaction,

E_{DM}(q) = \sum_{k\nu}^{occ} \delta_{k\nu}(q) . (4)

Because each level exhibits the symmetry δ_kν(−q) = −δ_kν(q), this antisymmetric behavior is inherited by the sum, E_DM(−q) = −E_DM(q), and only spin spirals with positive q must be computed. Obviously, E_DM(q) is an odd function of q, and for small |q|, E_DM(q) ≈ Dq, where D takes the role of an effective DM vector in the z-direction and is a measure of the strength of the DMI. The Dzyaloshinskii-Moriya interaction in the 3d-5d chains was calculated using the LDA-VWN functional with a dense mesh of 384 k-points.
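For the flat-spiral parameterization S_j = (cos(jaq), sin(jaq), 0)^T introduced above, the antisymmetry of E_DM(q) can be made explicit; the short derivation below is our restatement of the argument, assuming DM vectors D_ij = D_ij e_z as dictated by the mirror symmetry:

```latex
\mathbf{S}_i \times \mathbf{S}_j = \sin\!\bigl((j-i)\,aq\bigr)\,\hat{\mathbf{e}}_z
\quad\Longrightarrow\quad
E_{\mathrm{DM}}(q) = \sum_{j\ge 1} D_{0j}\,\sin(jaq) \approx D\,q
\quad (\text{small } q).
```

A sine series is manifestly odd in q, and any deviation of E_DM(q) from a single sin(aq) shape directly signals DM vectors beyond the nearest neighbors.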
E. Magneto-crystalline anisotropy
The second energy contribution due to SOI is the magneto-crystalline anisotropy energy (MAE). It generally favors a collinear alignment of the magnetic moments along the easy axis of the system, and thus competes against the DMI. To be more precise, it competes against any non-collinear magnetic structure, since then there are always magnetic moments pointing away from the easy axis, which increases the energy of the system. Based on our results (as discussed in Sec. III), we can focus the following discussion on an easy axis (i.e. the direction with lowest energy) that lies either along the x- or the y-axis. Then, let K_1 be the difference between the energies of these two directions, and let K_2 be the difference between the energies along the z-axis and the easy axis. Any homogeneous, flat spin spiral rotating in the xy-plane will then have an average MAE per atom of

\bar{E}_{MAE} = K_1/2 .

In order to compute the MAE, we have performed collinear (i.e. ferromagnetic) calculations, where we chose the magnetic moments to be fixed along the three crystal axes x, y and z, respectively. The spin-orbit interaction was included self-consistently in these calculations, using 192 k-points in the whole Brillouin zone. By comparing the total energies of the three calculations, we obtain values for K_1 and K_2.
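The K_1/2 average follows in one line; in our restatement, ϕ is the in-plane angle of the moment measured from the easy axis:

```latex
\bar{E}_{\mathrm{MAE}}
  = \frac{1}{2\pi}\int_{0}^{2\pi} K_1 \sin^2\!\varphi \,\mathrm{d}\varphi
  = \frac{K_1}{2}.
```

The out-of-plane constant K_2 does not enter, since a flat xy-spiral never tilts the moments toward the hard z-axis.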
F. Ground state formation and inhomogeneity
Considering these three energy contributions, a chiral homogeneous spin spiral with wave vector q will be established by the DMI out of the ferromagnetic state, q = 0, only if their energy sum is lower than that of the ferromagnetic state, i.e. E_0(q) + E_DM(q) + K_1/2 < E_0(q = 0).
Although the ab initio calculations impose homogeneous spin spirals, the possible formation of inhomogeneous spirals can be analysed on the basis of a micromagnetic model of one-dimensionally spiralling magnetic structures developed by Dzyaloshinskii, 22 with micromagnetic parameters deduced from the homogeneous calculations. In homogeneous spin spirals the angle of the magnetization direction changes by the same amount from atom to atom. If the magnetic anisotropy is strong, it seems natural that the magnetization direction along the easy axis is preferred, and we expect small angles of rotation in the vicinity of the easy axis, accompanied by fast rotations through the hard axes and back. Within this micromagnetic theory the degree of inhomogeneity is quantified by a parameter 39

κ = (16/π²) · A K_1 / D² ,

where the micromagnetic parameters are taken from a fit of the quadratic energy form E = A q² + D q + K_1/2 to the ab initio energy dispersion in the vicinity of the energy minimum. If κ = 0 the spiral is perfectly homogeneous; for κ = 1 the spiral separates into two collinear domains, separated by a chiral domain wall.
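A sketch of this micromagnetic fit, with synthetic dispersion values standing in for the ab initio data (A_true, D_true, K1 and the noise level below are invented for illustration):

```python
import numpy as np

np.random.seed(0)
# synthetic stand-in for E0(q) + E_DM(q) near the minimum (meV; q in 2*pi/a)
q = np.linspace(-0.15, 0.15, 61)
A_true, D_true, K1 = 900.0, -120.0, 1.0      # stiffness, DMI, in-plane MAE
E = A_true * q**2 + D_true * q + K1 / 2 + 0.05 * np.random.randn(q.size)

A, D, _ = np.polyfit(q, E, 2)                # fit E = A q^2 + D q + const
kappa = 16 / np.pi**2 * A * K1 / D**2
print(f"A = {A:.0f}, D = {D:.1f}, kappa = {kappa:.3f}  "
      "(kappa << 1: nearly homogeneous; kappa -> 1: domain-wall limit)")
```

The minimum of the fitted parabola lies at q = −D/(2A), and for strong DMI the resulting κ is small, i.e. the spiral is close to homogeneous, as found for Fe-Pt and Co-Pt below.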
TABLE I: Optimized structural parameters of the chains: a_0 is the lattice constant, d_0 represents the 3d-5d distance, and α_0 is the angle spanned by the 5d-3d-5d atoms (cf. Fig. 1). The magnetic moments M on the 3d and 5d atoms are also listed.

III. RESULTS

A. Structural properties and magnetic moments
In the present paper, we have investigated zigzag 3d-5d bi-metallic chains as shown in Fig. 1. We observed that all chains give well-defined, unique minima in the total energy with respect to the lattice constant a (upper panel of Fig. 2). Table I shows the optimized geometrical properties as well as the magnetic moments of the two kinds of atoms. The values of a and d indicate that isosceles triangles are formed, similar to gold 40 and nickel 41 zigzag chains. The magnetic moments of the 3d atoms are larger in the Fe-5d than in the Co-5d chains. The 5d atoms show relatively small induced magnetic moments, which depend only weakly on the choice of the 3d atom. The induced magnetization is larger for 5d atoms with smaller atomic number (Z).
We have also investigated the variation of the total magnetic moment in the unit cell 50 as a function of the lattice constant a (lower panel of Fig. 2). As before, the magnetic moments of the Fe-5d chains are larger than those of the Co-5d chains over a large range of a. With increasing a, the magnetic moment increases, as the electron wave functions become more atomic-like. This variation of the magnetic moment is larger in the Fe-5d chains than in the Co-5d chains, and in all 3d-5d chains the variation weakens as the lattice constant is increased. As the lattice constant decreases towards 2 Å, the magnetic moment of the Fe-5d chains drops sharply, indicating the possibility of a magnetic transition under compression.
B. Isotropic exchange interaction
In Fig. 3, we present the calculated energies of flat, homogeneous spin spirals of the 3d-5d chains in the scalar-relativistic approximation (not considering SOI). In this case, the dispersion energy is an even function of the spin-spiral vector (i.e., E_0(q) = E_0(−q)). Our results demonstrate that the ferromagnetic state is energetically most stable in most 3d-5d chains, except for Fe-Pt and Co-Pt, which show a (non-collinear) spin-spiral ground state. In Co-Pt, the energy at q = ±0.07 (henceforth the values of q will be given in units of 2π/a; this value corresponds to an angle of 25.20° between adjacent unit cells) is lower than that of the ferromagnetic state by 4.4 meV/f.u. In Fe-Pt, the energy minimum at q = ±0.03 (corresponding to an angle of 10.79°) is only 1.4 meV/f.u. lower than the ferromagnetic state. In the Co-Ir and Co-Au chains, the spin-spiral dispersion energies presented in Fig. 3 show a typical parabolic behavior around the ferromagnetic (q = 0) and antiferromagnetic (q = 0.5) states. For the Fe-based chains and the Co-Pt chain, the shape of the dispersion deviates from the pure cosine behavior with a minimum at q = 0 and a maximum at the zone boundary. Obviously, exchange interactions J_ij between more distant neighbors become important. For example, a dip around q ≈ ±0.3 is observed in the Fe-5d chains, most pronounced for Fe-Ir. However, these longer-ranged interactions do not influence the magnetic ground state of these chains. In contrast, the further-distant interactions in the Pt-based chains do influence the magnetic ground state: they compete with the ferromagnetic nearest-neighbor interaction between the 3d atoms, in total leading to a local minimum at small q-values. As a result, we obtain for these systems an incommensurable spin-spiral ground state at q = ±0.035 for Fe-Pt and q = ±0.07 for Co-Pt.

TABLE II: (Color online) Energy difference in meV/f.u. between the ferromagnetic and antiferromagnetic states of the 3d-5d chains. The collinear results were obtained using a supercell approach, whereas the spin-spiral results were obtained exploiting the generalized Bloch theorem. Some of the GGA spin-spiral calculations did not converge (marked by a dash).

         GGA coll.   GGA s.s.   LDA coll.   LDA s.s.
Fe-Ir        92         105        117         121
Fe-Pt       112         126        162         170
Fe-Au       119          -         185         181
Co-Ir       154          -         294         296
Co-Pt       146         155        182         186
Co-Au       171         168        212         213
For completeness, we compared for all chains the energy difference between the ferromagnetic (FM) and antiferromagnetic (AFM) configurations evaluated using the GGA-rPBE exchange-correlation functional with that obtained using the LDA-VWN functional, for two types of calculations: one carried out with the spin-spiral formalism and one with collinear calculations. All energy differences were evaluated for the ground-state geometry obtained with the GGA functional. From Table II it can be seen that the LDA-VWN functional gives significantly larger (25% to 50%) energy differences. The spin-spiral and collinear calculations of the antiferromagnetic configuration differ in one respect: the quantization axes of the 5d atoms are rotated by 90° with respect to those of the 3d atoms for the spin spiral, but are parallel in the collinear calculations. However, the magnetic moment of the 5d atom is much reduced in the antiferromagnetic state due to frustration, and in turn the direction of the quantization axis has little influence on the total energy of the AFM state. The frustration occurs because the moment of the 5d atom couples ferromagnetically to the moments of the 3d atoms (cf. Table I); for an antiferromagnetic configuration, any finite moment of the 5d atom would be parallel to the moment of one 3d atom and antiparallel to the moment of the other. The difference between the spin-spiral and collinear values is in general larger for the GGA results (14 meV/f.u. for Fe-Pt and 9 meV/f.u. for Co-Pt) than for the LDA results (8 meV/f.u. for Fe-Pt and 4 meV/f.u. for Co-Pt). Thus, the total energy depends only very little on the choice of the quantization axis of the 5d atoms, and a further optimization of the direction of this axis is not necessary.
C. Effect of spin-orbit interaction on magnetism
Magneto-crystalline anisotropy energy
The magneto-crystalline anisotropy energy (MAE) is extracted from total-energy calculations of ferromagnetic states with the magnetic moments pointing along the three high-symmetry directions. We find that for all investigated chains the z-axis is the hard axis, K_1 < K_2 (cf. Sec. II E), and consequently the easy axis of all chains lies in the xy-plane (cf. Fig. 1) of the bi-atomic chains (Table III). The Fe-5d chains, except for Fe-Au, prefer a uniaxial magnetization along the x-axis; in contrast, the Co-5d chains prefer the y-axis as the easy axis, except for Co-Ir. The 3d-Pt chains exhibit the smallest in-plane anisotropy K_1, whereas the 3d-Ir chains have a very large K_1. The 3d-Au chains have the smallest out-of-plane anisotropy. This is a consequence of the hybridization between the spin-split transition-metal 3d states and the spin-orbit-interaction-carrying 5d states: this hybridization is smaller for Au than for Pt or Ir, because the 5d states of Au lie 3 eV below the Fermi energy, while for Pt and Ir the 5d states cross the Fermi energy, and then the interaction of the magnetism with the SOI is much stronger. In general, the magneto-crystalline anisotropy energies found here are large compared to the values found for typical bulk structures, 42-44 as expected for systems of reduced dimensions. We also calculated the shape anisotropy due to the classical magnetic dipole-dipole interaction, using the magnetic moments listed in Table I. For all chains, this contribution favors a direction of the magnetic moments along the wire axis. However, the magneto-crystalline anisotropy due to spin-orbit coupling is 2-3 orders of magnitude larger than the shape anisotropy and hence dominates the magnetic anisotropy contribution to the energy of the system.
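The bookkeeping behind K_1 and K_2 is simple enough to spell out. The sketch below uses placeholder energies, not the paper's values, and assumes the easy axis lies in-plane, as found for all chains here:

```python
# total energies (meV/f.u.) from self-consistent FM calculations with the
# moments fixed along x, y, z -- placeholder numbers for illustration only
E = {"x": 0.0, "y": 2.1, "z": 5.4}

easy = min(("x", "y"), key=E.get)          # in-plane easy axis: lowest energy
hard_inplane = "y" if easy == "x" else "x"
K1 = E[hard_inplane] - E[easy]             # in-plane anisotropy
K2 = E["z"] - E[easy]                      # out-of-plane anisotropy
print(f"easy axis: {easy}, K1 = {K1:.1f} meV, K2 = {K2:.1f} meV")
```

K_1 < K_2 then encodes the statement that z is the hard axis.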
Dzyaloshinskii-Moriya interaction energy
In order to investigate the DMI, we made use of the generalized Bloch theorem applied to the magnetic state of a flat homogeneous spin spiral and included the SOI within first-order perturbation theory, as explained in Sec. II D. The calculated energy contribution due to the DMI, E_DM(q), is shown in Fig. 4 for all 3d-5d chains, once as plain values and once added to the exchange energy E_0(q). We find that for all wave vectors with |q| ≲ 0.08 (recall that all wave vectors are given in units of 2π/a), the DMI energy is linear in the wave vector, E_DM(q) ≈ Dq, around the ferromagnetic state, q = 0, and the sign of D, which determines the potential handedness of the magnetic structure, changes from plus to minus and back to plus when changing the 5d atom from Ir to Pt and then to Au. E_DM varies on a scale of 5-15 meV and changes sign several times over one half of the Brillouin zone, e.g. for Fe-Pt at q-values of 0.25 and 0.4. Obviously, E_DM(q) does not follow a simple sin q behavior for 0 ≤ |q| ≤ 0.5, but contains additional oscillations, indicating that DM vectors D_ij beyond the nearest-neighbor interaction contribute significantly at larger wave vectors. Now we concentrate on the effect of the DMI on the ground state. To this end, we compare the energy minimum of E(q) = E_0(q) + E_DM(q) to the average MAE, where E_0(q) is the isotropic spin-spiral dispersion energy (cf. insets in Fig. 4).

a. Fe-5d chains: Let us recall the results from Sec. III B: in the absence of SOI, the Fe-Ir and Fe-Au chains are ferromagnetic, whereas the Fe-Pt chain shows a degenerate non-collinear ground state at q = ±0.03. For the Fe-Ir chain, the DMI lowers the energy of right-handed spin spirals around the ferromagnetic state, with an energy minimum of E(q) at q = −0.02 (cf. Fig. 4a). This energy minimum is 0.5 meV/f.u. lower than the ferromagnetic state, and thus the DMI is too weak to compete against the average MAE of 1.2 meV/f.u. (cf. Tab. IV). Similarly, in the Fe-Au chain the DMI prefers right-handed spin spirals (cf. Fig. 4c), but the energy gain of 0.1 meV/f.u. with respect to the ferromagnetic state is too small to compete against the average MAE (cf. Tab. IV). However, in the Fe-Pt chain the strong DMI lifts the degeneracy of the spin-spiral ground state in favor of the left-handed spin spiral, with a significant energy gain of 7.1 meV/f.u. compared to the minimum of E_0(q) (cf. Fig. 4b). The energy minimum, 8.5 meV/f.u. below the ferromagnetic state, is an order of magnitude larger than the average MAE, leading to a left-rotating spin-spiral ground state at q = +0.05, corresponding to a wavelength of 51 Å or 20 lattice constants.

b. Co-5d chains: In the absence of SOI, the Co-Ir and Co-Au chains exhibit a ferromagnetic ground state (cf. Sec. III B), and Co-Pt a degenerate non-collinear ground state. This picture does not change when including the SOI. In the Co-Ir and Co-Au chains, the DMI is too weak to compete against the average MAE (cf. Tab. IV). Interestingly, E_DM vanishes for Co-Au over a relatively large region |q| < 0.08 (cf. Fig. 4f). In contrast, the effect of the DMI on the ground state of the Co-Pt chain is the strongest among the systems investigated in this paper (cf. Fig. 4e) and lifts the degeneracy in favor of a left-handed spin spiral at q = +0.07, corresponding to a wavelength of 36 Å or 14 lattice constants. The large additional energy gain of 15.3 meV/f.u. leads to an energy minimum of E(q) that is 19.7 meV/f.u. lower than the ferromagnetic state (compared to an average MAE of only 0.1 meV/f.u.).
We estimate the inhomogeneity of the spin spirals in Fe-Pt and Co-Pt by extracting the micromagnetic parameters A and D from fits to the energy dispersions E_0(q) (for |q| < 0.2) and E_DM(q) (for |q| < 0.05), respectively. The DMI in these chains is so strong that the inhomogeneity parameter κ is tiny, κ < 0.04 for Fe-Pt and 0.004 for Co-Pt, i.e. the spirals are to a very good approximation homogeneous.
In order to investigate the effect of the SOI on the strength of the DMI, we have decomposed the DMI into contributions from the 3d and 5d transition-metal chain atoms, collected in Table IV. We find interesting trends across the atomic species considered in our calculations: the contributions to D from a specific atomic species always have the same sign; e.g., Fe atoms always yield a positive contribution to D, independent of the 5d atom. The same holds for a specific 5d atom. Furthermore, we find that for the Ir and Pt chains, with their large induced 5d magnetic moments and spin-polarized 5d states, the Dzyaloshinskii-Moriya strength is determined almost solely by the 5d metals: the 5d atoms contribute to the effective DM vector D about one order of magnitude more than the 3d atoms. This can be different for the Au chains. Au atoms exhibit a rather small spin polarization, of basically s and p electrons, and their contribution to the D vector can be of the same order as that of the 3d metals, as our calculations show. For the Co-Au chain, moreover, the contributions of the Co and Au atoms to the DM vector are of similar size but opposite sign, and they cancel, resulting in a D vector of size close to zero, at least at the verge of our numerical resolution.
It is worth noticing that the sign of D in the 3d-Ir and 3d-Pt zigzag chains follows exactly the sign found in the respective 3d films: for Fe on Ir(111), 14 the right-rotating D leads to the nanoskyrmion structure of this system, while for Co/Pt(111) 8 a left-handed D was calculated, and for Co/Pt 9 and FeCo/Pt 10 left-handed chiral domain walls were observed.
The SOI affects different parts of the Brillouin zone, different bands, and even different parts of a single band differently. To provide an understanding of how the electronic structure of a chain is affected by the SOI, we present in the first three panels of Figs. 5(a) and 5(b) the one-dimensional relativistic band structure along the high-symmetry line Γ-X for the bi-atomic Fe and Co zigzag chains, respectively, for the same spiral magnetic state with a spin-spiral vector chosen to be q = 0.15. The effect of the SOI is a change of the energy dispersion of the Bloch states: the energy of a state (kν) is shifted with respect to the scalar-relativistic (SR) treatment, i.e. neglecting the spin-orbit interaction, by an amount δ_kν = ε^SOI_kν − ε^SR_kν. These shifts are highlighted by dots whose size is proportional to |δ_kν|; a shift to higher (lower) binding energy, δ_kν < 0 (> 0), is indicated by red (blue) dots. At first glance, we see that the topologies of the six band structures are very similar. They are determined by the exchange-split 3d states and the 5d states of Ir, Pt, and Au, and differ in the band widths and the positions of those states with respect to the Fermi energy. The Au d-states lie entirely below the Fermi energy. Pt and Ir have one and two electrons less, respectively, and their d states at the edge of the 5d valence band move through the Fermi energy. For Fe and Co, the majority 3d states are all below the Fermi energy, while the minority d-states cross the Fermi energy. For more details we refer to the discussion of Fig. 6.
Considering now the shifts δ_kν, we find dots with significantly larger radii, as compared to the rest of the Brillouin zone, basically located in the bands related to the 5d states. Therefore, their energy position with respect to the Fermi energy depends only on the 5d atom of the zigzag chain and not on the 3d one. We have highlighted this region of the band structure by enclosing it within a rectangle. This region of large shifts moves up towards the Fermi energy when changing from Au to Ir atoms, just as the 5d states move upwards. The actual size of the shifts depends on the hybridization between the 3d majority states and the 5d states, and this hybridization becomes smaller as the 5d states move up when replacing the Au atom by Pt or Ir, while the 3d states remain at the same energy. This energy is the same for both the Co and Fe majority states, and therefore the size and position of the shifts depend only on the 5d metal atom of the chain.
The fourth panel of Figs. 5(a) and 5(b) shows the energy-resolved DMI contribution of the positive and negative shifts δ_kν, integrated over the Brillouin zone,

e_{DM}(\epsilon, q) = \sum_\nu \int \delta_{k\nu}\, \delta(\epsilon - \epsilon_{k\nu})\, dk ,

smoothened by a Lorentzian function (1/π)(Γ/2)/[(ε − ε_kν)² + (Γ/2)²] with a full width at half maximum of Γ = 0.2 eV, and plotted as a function of the binding energy. We have calculated the DMI distribution for all chains; however, in Figs. 5(a) and 5(b) only the results for the Fe-Au and Co-Au chains are shown. The last panel of Fig. 5 shows the energy-integrated DMI energy,

E_{DM}(\epsilon, q) = \int^{\epsilon} e_{DM}(\epsilon', q)\, d\epsilon' ,

calculated from the positive and negative SOI shifts of the band structure. We observe that for both chains, Fe-Au and Co-Au, the energy-resolved DMI has its largest contribution at a binding energy of around 3.5 eV. From what was said above, it is no surprise that the maximum occurs at about the same energy for both chains, as its position depends basically on the 5d atom. In detail, e_DM(ε) differs slightly between the two chains due to the difference in the hybridization of the Fe and Co 3d electrons with the Au 5d ones. Since all 5d states of Au are below the Fermi energy, the integrals of the energy-resolved DMI contributions up to the Fermi energy for positive and negative shifts are nearly the same, and E_DM(E_F, q) is very small. The energy-resolved DMI contributions of the positive and negative shifts are nearly identical and can, to a first approximation, be thought of as rigidly shifted against each other by about 0.6 eV. Due to this finite shift of e_DM(ε) between positively and negatively shifted states, E_DM(ε) oscillates as a function of the band filling. We observe a rapidly oscillating function of large Dzyaloshinskii-Moriya energies of alternating sign, particularly in the center of the Au 5d bands; for example, the first significant peak is found at about −1 eV and then large peaks around −4 eV. When the Au atom is replaced by a 5d metal atom with fewer d electrons, E_DM(ε) moves relative to E_F. Assuming a rigid-band model in which the 5d band does not change upon changing the 5d metal, we can adapt the 5d electron number such that the Fermi energy is placed in one of those peaks. This is actually what happens for the Pt and Ir chains: the Fermi energy moves into the regime of the large peaks, which explains the large contributions of Pt and Ir to the DM vector, as discussed in Table IV, and explains the sign change of D between the Pt and Ir chains upon moving the Fermi energy by about 0.4 eV.
In this sense, E_DM(ε) allows a design of the strength and the sign of the Dzyaloshinskii-Moriya interaction by selecting the number of 5d electrons such that the Fermi energy E_F lies in the right ballpark of a peak. To realize a chain with the optimal number of 5d electrons, one may resort to an alloyed zigzag chain, where the 5d site is occupied randomly by different 5d atoms with a particular concentration. Then, additional ab initio calculations might be necessary for a fine-optimization of the composition, going beyond the assumptions of the rigid-band model.
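The rigid-band bookkeeping can be mimicked numerically: integrate a given e_DM(ε) up to a shifted Fermi level. The profile below is entirely synthetic (a damped oscillation centered in a fictitious 5d band) and only illustrates how a downward shift of E_F samples different points of the oscillating E_DM(ε):

```python
import numpy as np

eps = np.linspace(-8.0, 2.0, 2001)                 # binding energy grid (eV)
# synthetic e_DM(eps): oscillations centered in the 5d band, as in Fig. 5
e_dm = np.exp(-((eps + 3.5) / 1.5)**2) * np.sin(4.0 * (eps + 3.5))

E_dm = np.cumsum(e_dm) * (eps[1] - eps[0])         # E_DM(eps): running integral
for e_f in (0.0, -1.0, -1.4):                      # rigid-band Fermi levels:
    print(f"E_F = {e_f:+.1f} eV -> E_DM(E_F) = "   # Au-like vs Pt/Ir-like filling
          f"{np.interp(e_f, eps, E_dm):+.3f} (arb. units)")
```

With E_F above the band (Au-like), positive and negative contributions largely cancel; lowering E_F into the oscillatory region (Pt/Ir-like) picks up large values of either sign.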
In the discussion of Fig. 5, we identified regions in the band structure where the SOI effects are large. To get a better understanding of the underlying microscopic mechanism, we have performed a site-, orbital- and spin-resolved analysis of the scalar-relativistic band structure. In a spin-spiral calculation, the up- and down-states are calculated with respect to the local spin-quantization axis in each muffin-tin sphere. The resulting contributions are shown in Fig. 6 for the Fe-Au chain at q = 0.15. The energy bands showing the largest SOI effect are mainly the Au d_xy, d_xz and d_yz states hybridizing with the Fe majority states. It can be inferred that the effective DMI contribution obtained from the positive and negative shifts is maximal where the SOI effect as well as the orbital hybridization is maximal.
IV. TIGHT-BINDING MODEL

A. The model
The minimal model exhibiting a non-vanishing Dzyaloshinskii-Moriya interaction that can be associated with the bi-atomic zig-zag chains discussed above is an isosceles trimer made of two identical 3d-metal sites carrying no spin-orbit interaction (SOI) and one non-magnetic 5d-metal site having a large SOI. In this context, the word non-magnetic stands for zero intrinsic on-site exchange splitting; hybridization with the magnetic sites will nevertheless lead to a small induced spin polarization at the non-magnetic site and thus to a small magnetic moment in the calculation. The magnetic sites will henceforth be denoted as A and B and the non-magnetic site as C. Without loss of generality, the trimer is arranged within the x-y plane (Fig. 7).
The model is based on a tight-binding description restricted to the two energetically degenerate d_xz- and d_yz-orbitals at the non-magnetic site and only the d_xz-orbital, with the same on-site energy, at each magnetic site. According to our analysis in Sec. III C 2, these orbitals are those yielding the main contributions to the Dzyaloshinskii-Moriya interaction for this specific geometry. In the representation of the basis set (d^A_xz, d^B_xz, d^C_xz, d^C_yz), with the superscripts denoting the site index and with the x-axis chosen as the global spin-quantization axis, this 8-state model is reflected by the 8×8 Hamiltonian

H = H_0 + H_mag + H_SO , (5)

where E_A (= E_B) and E_C are the on-site energies, t_1 and t_2 are the hopping parameters between atoms A, B and atom C, ϕ is the angle of the magnetic moments relative to the quantization axis, I is the Stoner parameter of the magnetic sites, m the corresponding magnetic moment, and ξ is the spin-orbit strength. The separation into 4×4 sub-blocks highlights the ↑↑, ↑↓, ↓↑ and ↓↓ spin-blocks of H. In the following, the model Hamiltonian (5) will be discussed in detail.
The Hamiltonian H comprises three contributions: H_0 contains the spin-independent hopping elements and the on-site energies of the system, H_mag incorporates the magnetism, and H_SO introduces the spin-orbit interaction. The hopping matrix elements of H_0, t_1 and t_2 in Eq. (5), describe the electron transition between the d_xz orbital at the magnetic sites and the d_xz or d_yz orbital, respectively, at the non-magnetic site. We employed the Slater-Koster parametrization, 45 requiring two Slater-Koster parameters V_ddπ and V_ddδ 51 that determine the matrix elements as

t_1 = \hat{R}_x^2\, V_{dd\pi} + \hat{R}_y^2\, V_{dd\delta} , \qquad t_2 = \hat{R}_x \hat{R}_y\, (V_{dd\pi} - V_{dd\delta}) ,

where R̂_x and R̂_y are the direction cosines of the bonding vector between the sites involved in the hopping. This follows from the Slater-Koster transformations 45 for our specific geometry and choice of orbitals. Since direct hopping between the magnetic sites is not necessary to obtain a non-vanishing Dzyaloshinskii-Moriya interaction, this minimal model is restricted to t_1 and t_2 only. Obviously t_2 ∝ R̂_y, and thus t_2 scales with the structure-inversion asymmetry of our trimer model; i.e., t_2 becomes zero if the trimer changes from a triangular to a chain geometry. The on-site energies are denoted as E_A for both magnetic sites and E_C for the non-magnetic site. Note that, to simplify our model, they depend only on the site and not on the type of orbital.
To investigate the Dzyaloshinskii-Moriya interaction (DMI), magnetism is incorporated within the Stoner model, 46,47 extended to the description of non-collinear magnetic systems: 48,49

H_{mag} = -\frac{I}{2} \sum_{i \in \{A,B\}} \mathbf{m}_i \cdot \boldsymbol{\sigma} ,

where σ is the vector of Pauli matrices. The exchange splitting of the electronic structure depends only on the Stoner parameter I and the magnetic moments m_i of the magnetic sites, whereas no intrinsic exchange splitting is assumed at the non-magnetic site. Due to symmetry, only the rotation of the magnetic moments within the x-y plane is of interest for the determination of the DMI, as discussed in Sec. II D. Therefore, the site-dependent magnetic moment is

\mathbf{m}_{A/B} = m\,(\cos\varphi, \pm\sin\varphi, 0)^T ,

with the plus sign for site A and the minus sign for site B, respectively; ϕ is the angle of the magnetic moment within the x-y plane with respect to the x-axis (see Fig. 7). Since the DMI is a consequence of the spin-orbit interaction (SOI), the SOI has to be implemented into the tight-binding model by expressing the term σ·L in the atomic-orbital representation. Introducing a SOI parameter ξ for the non-magnetic site and taking into account that the interaction is on-site, the SOI matrix elements read

(H_{SO})^{\sigma\sigma'}_{\mu\nu} = \xi\, \langle \mu\,\sigma | \boldsymbol{\sigma}\cdot\mathbf{L} | \nu\,\sigma' \rangle \quad \text{for orbitals } \mu,\nu \text{ at site C},

where µ, ν indicate the orbitals and σ, σ' are the spin indices. In this model, the only non-zero matrix element of H_SO is the spin-flip element between the d_xz and d_yz orbitals at the non-magnetic site, ⟨d^C_xz ↑|σ·L|d^C_yz ↓⟩ = i. Typically, the spin-orbit interaction is a small contribution to the entire Hamiltonian; hence it is common to calculate the SOI energy contribution, and therefore also the Dzyaloshinskii-Moriya interaction, within first-order perturbation theory. 37 The simple 8-state model could easily be solved by diagonalizing the full Hamiltonian H of Eq. (5), which contains the SOI; however, to allow for a qualitative comparison with the zig-zag chain results presented above, the SOI is treated within first-order perturbation theory. That means the Hamiltonian H_0 + H_mag is diagonalized, and the eigenvalues ε_n and eigenvectors |n⟩ are used to determine the contributions

\delta\varepsilon_n = \langle n | H_{SO} | n \rangle (12)

and

E_{DMI} = \sum_n f(\varepsilon_n)\, \delta\varepsilon_n , (13)

where f(ε_n) is the Fermi-Dirac occupation function. Equation (13) corresponds to Eq. (4) in the case of a finite system. Since the only non-zero matrix element in H_SO is the transition between d^↑_xz → d^↓_yz and vice versa, it is the only transition that is ultimately responsible for δε_n and E_DMI.
B. Results
For the calculations, the following parameters have been used, chosen as reasonable values for 3d and 5d transition-metal systems: the Slater-Koster parameters V_ddπ = 0.8 eV and V_ddδ = −0.07 eV lead to the hopping parameters t_1 = 0.148 eV and t_2 = −0.377 eV. The on-site energies are E_A = E_B = 0 eV for the magnetic sites and E_C = 1 eV for the non-magnetic site. A Stoner parameter of I = 0.96 eV and magnetic moments of m_A = m_B = 1.2 µ_B lead to an exchange splitting of 1.152 eV. The spin-orbit interaction parameter ξ of the non-magnetic site is 0.6 eV. The system is occupied by 6 electrons.
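The model is compact enough to re-implement in a few lines. The sketch below is our own illustration, not the authors' code: the bond direction cosines are an assumed geometry chosen so that the (reconstructed) Slater-Koster relations reproduce the quoted t_1 and t_2, the spin-quantization axis is taken along z for convenience (total energies do not depend on this choice), and H_SO is restricted to the ξ L_z σ_z term that survives in the {d_xz, d_yz} subspace:

```python
import numpy as np

# ---- model parameters from Sec. IV B ----
V_ddpi, V_dddelta = 0.8, -0.07       # Slater-Koster parameters (eV)
E_A, E_C = 0.0, 1.0                  # on-site energies (eV)
I_st, m_mom = 0.96, 1.2              # Stoner parameter (eV) and moment (mu_B)
xi = 0.6                             # SOI strength at the 5d site C (eV)
n_el = 6                             # electron filling

# assumed bond geometry: direction cosines chosen so that the Slater-Koster
# relations reproduce the quoted hoppings t1 = 0.148 eV, t2 = -0.377 eV
lA, mA = -0.5, np.sqrt(3.0) / 2.0    # A -> C bond
lB, mB = 0.5, np.sqrt(3.0) / 2.0     # B -> C bond (mirror image)

def sk(l, m):
    """In-plane d_xz-d_xz and d_xz-d_yz Slater-Koster hoppings (n = 0)."""
    return l * l * V_ddpi + m * m * V_dddelta, l * m * (V_ddpi - V_dddelta)

t1A, t2A = sk(lA, mA)
t1B, t2B = sk(lB, mB)                # t1B = t1A, t2B = -t2A by symmetry

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

# basis: (d_xz^A, d_xz^B, d_xz^C, d_yz^C) x (spin up, spin down along z)
def h0_hmag(phi):
    """8x8 Hamiltonian without SOI; moments on A/B canted by +phi/-phi."""
    h0 = np.diag([E_A, E_A, E_C, E_C]).astype(complex)
    h0[0, 2] = h0[2, 0] = t1A        # xz-xz hoppings to site C
    h0[1, 2] = h0[2, 1] = t1B
    h0[0, 3] = h0[3, 0] = t2A        # xz-yz hoppings to site C
    h0[1, 3] = h0[3, 1] = t2B
    H = np.kron(h0, np.eye(2))
    for site, sgn in ((0, +1), (1, -1)):             # Stoner field on A and B
        nx, ny = np.cos(phi), sgn * np.sin(phi)      # in-plane moment direction
        proj = np.zeros((4, 4)); proj[site, site] = 1.0
        H -= 0.5 * I_st * m_mom * np.kron(proj, nx * sx + ny * sy)
    return H

# on-site SOI at C: only xi * L_z sigma_z survives in the {d_xz, d_yz} subspace
Lz = np.zeros((4, 4), complex)
Lz[2, 3], Lz[3, 2] = -1j, 1j
H_so = xi * np.kron(Lz, sz)

def e_dmi(phi):
    """DMI energy: first-order SOI shifts summed over the occupied states."""
    eps, vec = np.linalg.eigh(h0_hmag(phi))
    shifts = np.real(np.einsum('in,ij,jn->n', vec.conj(), H_so, vec))
    return shifts[:n_el].sum()       # eigh sorts ascending; fill 6 lowest states

print(f"t1 = {t1A:+.3f} eV, t2 = {t2A:+.3f} eV (quoted: +0.148, -0.377)")
for phi in (np.pi / 4, -np.pi / 4):  # opposite canting = opposite chirality
    print(f"phi = {np.degrees(phi):+5.1f} deg : E_DMI = {e_dmi(phi):+.4f} eV")
```

At ϕ = 0 all first-order shifts vanish, as argued below, and reversing the canting angle reverses the chirality and hence the sign of the DMI contribution.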
First, the role of magnetism for the Dzyaloshinskii-Moriya interaction (DMI) is discussed by comparing the density of states (DOS) of the ferromagnetic case (ϕ = 0) with the maximally canted case of ϕ = 45°. The sign of ϕ as defined in Fig. 7 is chosen such that the chiral magnetic structure is stable, i.e. the DMI energy is negative. Both results are displayed in Fig. 8, and the analysis is conducted by comparing the site-, orbital- and spin-resolved DOS of the unperturbed system H_0 + H_mag to gain more insight into e_DM, which is the sum of all contributions δε_n due to the spin-orbit interaction, broadened by Lorentzian functions (see also Sec. III C 2). For both the DOS and e_DM, a broadening of 25 meV full width at half maximum was used. In addition, the DMI energy E_DM(E) is plotted, which is the integral of e_DM up to an energy E, so that E_DM(E_F) corresponds to the definition of E_DMI in Eq. (13).
The DOS of Fig. 8(a) is easily understood in terms of our 8×8 model (5). The majority and minority channels consist of 4 states each. The energy distribution of the 4 states results from the hybridization between the d_xz states at the 3d-metal sites and the d_xz and d_yz states at the 5d-metal site, with the bonding states at low energies and the antibonding states around the Fermi energy. The energy splitting among the states results from the different hybridizations of the d_xz-d_xz and d_xz-d_yz orbital pairs. The majority and minority states of the 3d-metal sites are shifted against each other by the exchange splitting Im. Since the minority states are closer in energy to the states of the 5d-metal site, their hybridization is larger; this hybridization also leads to a small exchange splitting of the states at the 5d-metal site.
If the magnetic moments of the magnetic sites are ferromagnetically aligned, as in Fig. 8(a), no DMI can be observed 52 and e_DM vanishes, since an eigenstate has either pure d_xz- or pure d_yz-character at the non-magnetic site, but not both, and the matrix element of Eq. (12) is zero. Due to the Lorentzian broadening, the eigenenergies 1 and 2 around the Fermi energy in Fig. 8(a) seem to contribute largely to the DMI; however, eigenenergy 1 exhibits only d↓_xz- and eigenenergy 2 only d↑_yz-character, hence their eigenfunctions cannot contribute to the DMI. In contrast, the case of ϕ = 45° in Fig. 8(b) shows that the non-collinearity of the magnetic sites is crucial to obtain a non-vanishing DMI: the d_xz-orbitals of the magnetic sites hybridize differently with the orbitals of the non-magnetic site in the different spin channels and induce a spin polarization there. As a consequence, the eigenstates acquire both d_xz- and d_yz-character of different spin, leading to a non-zero e_DM.
The quantity e_DM shows an interesting characteristic behavior: each peak-like contribution comes along with an energetically slightly shifted contribution of opposite sign. This leads to sign changes in E_DM if the Fermi energy lies in the middle of such a feature, and it explains the sensitivity of the magnitude and the sign of the antisymmetric exchange to the substrate, as presented in Sec. III C 2. Since E_DM(E_F) corresponds to the DMI energy E_DMI, an electron filling of 6 electrons leads to a non-vanishing E_DMI in Fig. 8(b), whereas E_DMI vanishes for the maximum occupation number of 8 electrons in our finite system.
Besides the non-collinearity, the breaking of inversion symmetry is also crucial for the appearance of the DMI. Contrary to the non-inversion-symmetric trimer displayed in Fig. 7, the inversion-symmetric trimer 53 exhibits no DMI (not shown): there, no hybridization between the d_yz-orbital of the non-magnetic site and the d_xz-orbitals of the magnetic sites occurs, since t_2 = 0. This again shows that the hybridization between the orbitals of the magnetic sites and those of the non-magnetic site is crucial, as can also be observed in the ab initio results for the zig-zag chains presented in Fig. 6.
It is interesting to look at the DMI as a function of the difference between the on-site energies, E_C − E_A, which controls the degree of hybridization between the magnetic and non-magnetic sites. The results are summarized in Fig. 9. The DMI becomes larger in magnitude for smaller on-site energy differences, as can be seen by comparing the magnitude of e_DM in the lower panels of Figs. 9(a) and 9(b). In case (a), the orbitals of the non-magnetic site and the magnetic sites hardly overlap due to the large on-site energy difference of 3 eV; hence, the DMI is much smaller than in case (b), with an on-site energy difference of just 0.5 eV, for which the orbitals hybridize strongly with each other. Fig. 9 demonstrates again the sensitivity of the magnitude and sign of E_DMI to the details of the hybridization between the strongly magnetic 3d and the heavy 5d transition-metal atom. In a simplified picture, the main difference between, e.g., an Fe-Pt and a Co-Ir zig-zag chain can be seen in the difference of the on-site energies, since the total number of electrons is the same for both systems. The sensitivity of the DMI to the substrate, as presented in Table IV, can thus be rationalized within this simple model.
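Within the numerical sketch of Sec. IV B above (our illustration), this trend can be probed directly by rebinding the on-site energy of site C and re-evaluating; the snippet assumes it runs in the same module as that sketch, since h0_hmag reads E_C at call time:

```python
import numpy as np
for ec in (0.5, 1.0, 3.0):           # on-site energy E_C of site C (eV); E_A = 0
    E_C = ec                         # rebind the module-level parameter
    print(f"E_C - E_A = {ec:.1f} eV : E_DMI(45 deg) = {e_dmi(np.pi/4):+.4f} eV")
```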
All results of this section have been calculated by adding the spin-orbit interaction (SOI) within first-order perturbation theory. Although the results obtained by diagonalizing the full Hamiltonian of Eq. (5) might differ numerically from those obtained within perturbation theory, the same conclusions on the behavior of the DMI can be drawn. The difference between the two treatments is expected to be even smaller for the zig-zag chains of Sec. III C 2, since there the energy shifts within first-order perturbation theory are an order of magnitude smaller than for the trimer.

C. Perturbation-theory analysis

In the following, we draw conclusions about the sign and the magnitude of the DMI by treating the spin-orbit interaction and the non-collinear alignment of the quantization axes at the different atoms A and B within first-order perturbation theory. We start from the unperturbed system, which is the ferromagnetically (ϕ = 0) aligned trimer with magnetic moments pointing along the x-axis, and without spin-orbit interaction. Under these conditions, the Hamiltonian (5) block-diagonalizes into two 4×4 Hamiltonians for majority and minority states, i.e. H_0 + H_mag(ϕ = 0) = H^↑↑ ⊕ H^↓↓. The eigenstates of this magnetic system are denoted as |n⟩ = |n^σ⟩ ⊕ 0 for the first four eigenstates, n = 1, ..., 4, which correspond to the majority states (σ = ↑ or σ = 1), and |n⟩ = 0 ⊕ |n^σ⟩ for the second set of four eigenstates, n = 5, ..., 8, which correspond to the four minority states (σ = ↓ or σ = −1). The four site- and orbital-dependent components of |n^σ⟩ are denoted by n^σ_{µ_i}, where µ_i is the atomic orbital µ at site i. The matrix elements of the unperturbed Hamiltonian, H_0 + H_mag(ϕ = 0), are real numbers, and thus also the eigenstates |n^σ⟩ can be chosen to be real. The analytical solution for the eigenenergies reads

\varepsilon^\sigma_{\tau,i} = \tfrac{1}{2}\,(E^\sigma_A + E_C) + \tfrac{\tau}{2}\, W^\sigma_i ,
\qquad
W^\sigma_i = \sqrt{(E^\sigma_A - E_C)^2 + 8\, t_i^2} ,
\qquad
E^\sigma_A = E_A - \sigma\, \tfrac{1}{2} I m , (14)

and the corresponding eigenvectors are the bonding and antibonding combinations of (|d^A_xz⟩ ± |d^B_xz⟩)/√2 with the respective orbital at site C. The 8 eigenvalues and eigenvectors, characterized by n = (τ, i, σ), result from the hybridization of majority (σ = 1) or minority (σ = −1) states at sites A and B with either the d_xz (i = 1) or d_yz orbital (i = 2) at site C, corresponding to the hopping parameter t_i = t_1 or t_2, leading to bonding (τ = −1) or antibonding (τ = +1) states. Notice that the eigenvectors have either d_xz or d_yz character at site C, but not both. Although we deal here with a discrete eigenvalue spectrum, in a more abstract sense we interpret the first term of the eigenenergies as a band center and the second term as half of a bandwidth W^σ_n, which is the energy difference between the corresponding bonding and antibonding states. These quantities will be used later when displaying the energy correction δε_n.
First, let us evaluate the perturbed eigenvector |n⟩ under the perturbation of a small exchange field B = γ e_y along the y-direction, leading to a slightly canted magnetic configuration with angle ϕ at magnetic site A. The parameter γ is directly related to the angle ϕ and therefore to the degree of non-collinearity. Note that this canted configuration differs from that depicted in Fig. 7 in that we keep the spin on site B along the x-axis to simplify the notation. The corresponding perturbation of the Hamiltonian in spin space reads

\delta H = \tfrac{\gamma}{2}\, \Theta_A\, \sigma_y = \tfrac{\gamma}{2}\, \delta_{AA}\, \sigma_y ,

where Θ_A is a step function which is zero outside atom A, and δ_AA is a projection onto orbitals localized at site A; the first form of this equality is basis-independent, while the second is given in the representation of the localized atomic orbitals. Henceforth, all equations will be given both in basis-independent form and in the atomic-orbital representation. In the representation with the x-axis as spin-quantization axis, σ_y contains only spin-flip elements, leading to changes of the unperturbed eigenstate |n^σ⟩ that are purely of the opposite spin contribution σ'. Hence, the following expression for |n⟩ in first-order perturbation theory, with σ' = −σ, is obtained:

|n\rangle = |n^\sigma\rangle + \gamma \sum_{n'} \frac{\delta S_{n'n}}{\varepsilon^\sigma_n - \varepsilon^{\sigma'}_{n'}}\, |n'^{\sigma'}\rangle , (18)

where ε^σ_n and ε^{σ'}_{n'} are the eigenvalues corresponding to the unperturbed eigenstates |n^σ⟩ and |n'^{σ'}⟩, and the real quantity

\delta S_{n'n} = \tfrac{\sigma}{2}\, \langle n'^{\sigma'} | \sigma_y | n^\sigma \rangle_A ,

where by ⟨n'^{σ'}|σ_y|n^σ⟩_A we introduce a short-hand notation for the evaluation of the term ⟨n'^{σ'}|σ_y|n^σ⟩ at site A. The prefactor σ is 1 (−1) depending on whether n^σ belongs to the majority (minority) spin channel.
Next, we evaluate within first-order perturbation theory the correction to the energy of state |n⟩ due to the spin-orbit interaction by substituting the perturbed state |n⟩ of Eq. (18) into Eq. (12). Neglecting higher-order terms in ϕ and taking into account that ⟨n^σ|H_SO|n^σ⟩ = 0 leads to the following expression for the energy shift:

\delta\varepsilon_n = 2\gamma\xi \sum_{n'} \frac{\delta S_{n'n}\, \delta L_{n'n}}{\varepsilon^\sigma_n - \varepsilon^{\sigma'}_{n'}} (22)

with

\delta L_{n'n} = \mathrm{Im}\, \langle n'^{\sigma'} | \Theta_C^\dagger (L_y \sigma_y + L_z \sigma_z)\, \Theta_C | n^\sigma \rangle , (23)

where in the rewritten form, Eq. (24), L_∓ are the angular-momentum ladder operators and σ_± the spin ladder operators, with L_− σ_+ corresponding to the case σ' = ↑ and L_+ σ_− to σ' = ↓, respectively. For our particular situation, the L_y operator vanishes in the subspace of the considered local orbitals, which allows us to write the contribution to the energy shift due to the SOI as

\delta\varepsilon_n = \gamma\,\xi \sum_{n'} \sigma'\, (-1)^{i}\, \tau\tau'\, \frac{4\, t_1 t_2}{W^\sigma_n\, W^{\sigma'}_{n'}\,\bigl(\varepsilon^\sigma_n - \varepsilon^{\sigma'}_{n'}\bigr)} . (26)

The sum runs over the 8 states n' = (τ', i', σ'), but has non-zero summands only if the orbital type i' at site C and the spin direction σ' of state n' differ from those of the state n = (τ, i, σ). σ' = 1 (−1) stands for electrons of state n'^{σ'} taken from the majority (minority) spin channel. According to Eq. (14), i = 1 (2) corresponds to a state with a d_xz (d_yz) orbital component. The product ττ' = 1 (−1) if the bonding characters, labelled by τ, of both states are the same (different). The quantities W^σ_n and W^{σ'}_{n'} play the role of bandwidths, since they are the energy differences between the corresponding bonding and antibonding states.
We remark that, in general, the energy shifts due to spin rotations away from the collinear configuration at the different sites A and B are independent and should be added up to obtain the total shift δε_n. For example, upon a simultaneous ϕ and −ϕ rotation of the spins at the A and B sites, respectively, as shown in Fig. 7, the δε_n from Eq. (26), corresponding to a staggered B-field on both sites, should simply be multiplied by a factor of two, owing to symmetry. We confirmed that Eq. (26) reproduces well the energy shifts δε_n obtained by diagonalizing H_0 + H_mag and then including the SOI through first-order perturbation theory, as presented in Fig. 8. The error for a canting angle of ϕ = 45° is about 10%.
The DMI energy E_DMI of Eq. (13) is then obtained by summing the energy shifts δε_n over all occupied states n. Thereby, the sum over n' in Eq. (26) turns into a double sum over n and n', and all those combinations of states n and n' that are both occupied cancel out identically. Thus, in the end only those combinations of states contribute to the DMI energy for which the initial and final states, n and n', refer to spin-flip transitions between occupied and unoccupied states that in addition involve a transition between the spin-orbit active states. Generalizing this thought, for half-metallic chains, i.e. chains that have a band gap around the Fermi energy in one spin channel, a rather small DMI energy should be expected. It also explains the small DM vectors D of the 3d-Au chains recorded in Table IV: in the case of the Au chains, the Au d-orbitals responsible for the spin-orbit matrix elements are all occupied and do not contribute to the DMI energy, while for the Ir and Pt chains both occupied and unoccupied 5d states exist, which make essential contributions to D.
The magnitude of δε_n, and thus of E_DMI, increases with increasing spin-canting angle ϕ (through γ), increasing spin-orbit interaction ξ, and increasing hopping matrix elements t_1 and t_2, where in particular t_2 is proportional to the degree of structural asymmetry of the trimer. Exactly this asymmetry, together with the two spin-mixing processes, is responsible for the DMI. Due to the spin canting, the eigenstates are no longer of pure spin character but contain a spin mixture of basis components, contributing to non-zero spin-flip matrix elements at sites A and B, and to a superposition of d_xz and d_yz orbital character of different spin character at site C, and thus to the spin-orbit-induced spin flip at site C. Hence, the DMI is a non-local phenomenon, since the state |n⟩ requires hybridization between the orbitals at the magnetic sites A and B and the non-magnetic site C carrying the spin-orbit interaction. Since t/W is about 2t/|E_A − σ(1/2)Im − E_C| for t small relative to the on-site energy difference and (1/2)Im, we can conclude that the larger this hybridization, either due to large hopping matrix elements t or due to a small difference between the on-site energies at sites A and C, the larger is δε_n, which explains the strong dependence on the on-site energy difference in Fig. 9.
Regarding the sign of δε_n, Eq. (26) gives some insight into the intricate relationship between the sign of the DMI and the underlying electronic structure, even for this simple model. Apparently, the sign of the canting angle and the sign of the structural asymmetry of the trimer, entering through the hopping matrix element t_2, directly control the sign of the DMI energy. Furthermore, the nature of the electronic structure, in terms of the spin projection σ of the occupied states, the signs of the hopping parameters t_1 and t_2, and the orbital character of the involved eigenstates |n^σ⟩, is crucial. In addition, the energetic positions of ε^σ_n and ε^{σ'}_{n'} and their bonding characters influence the sign, but the magnitude of each term in the sum matters as well, making it not straightforward to relate the sign of δε_n or E_DMI to simple physical properties of a system. Moreover, since the DMI energy is a quantity integrated over the values δε_n of all occupied states, the sign and the magnitude of the DMI depend on the magnitudes and signs of all δε_n.
To get a better understanding of the sign of the DMI on the basis of Eq. (26), we now focus on the sign of the energy shift δε in terms of the DMI energy density e_DM and the DMI energy E_DMI. Namely, we attempt to understand the behavior of e_DM and E_DMI for ϕ = 45°, displayed in Fig. 8(b), in terms of the perturbation-theory expression Eq. (26) applied to the unperturbed states for ϕ = 0°, shown in Fig. 8(a). We concentrate first on the two low-lying pairs of occupied states n in Fig. 8(a): the two majority states (σ = 1) around −1.75 eV and the two minority states (σ = −1) around −0.75 eV. Both pairs are bonding states (τ = −1) with d_xz character at atoms A and B, resulting from the hybridization with the orbitals at C. The lower (upper) peak of each pair results from the hybridization with d_yz, i.e. i = 2 (d_xz, i = 1). Since the states of each pair are of the same spin and exhibit a similar contribution of both orbitals d^A_xz and d^B_xz, the quantity δS_{n'n} is approximately the same. The energy differences ε^σ_n − ε^{σ'}_{n'} of these two states to all other states |n'^{σ'}⟩ are also almost the same, since each pair of states is well separated from the other states. Hence, the only major difference turns out to be the sign of δL_{n'n}, which is determined by the orbital character of the states at the non-magnetic site and manifests itself as (−1)^i in Eq. (26), resulting in a sign change of e_DM when passing through these peaks in energy. Now we focus on the pair of states n and n' around the Fermi energy (denoted as 1 and 2 in Fig. 8(a)), for which the energy denominator |ε^σ_n − ε^{σ'}_{n'}| is smallest and whose contribution consequently determines the DMI energy. Here n = (+1, 1, ↓) is the highest occupied minority (σ = −1) state, with d_xz (i = 1) character, and n' = (+1, 2, ↑) is the lowest unoccupied majority state, with d_yz character (i' = 2). Both states lie at the upper end of the eigenvalue spectrum and are therefore antibonding states (τ = τ' = +1). Recalling that t_1 > 0, t_2 < 0 and ε^σ_n < ε^{σ'}_{n'}, we understand through Eq. (26) that δε is negative. A second pair of states of similar size but opposite spin (σ = 1), i.e. of opposite sign, n = (+1, 1, ↑) and n' = (+1, 2, ↓), also contributes to the DMI energy, but its energy difference |ε^σ_n − ε^{σ'}_{n'}| is slightly larger than that of the previous pair, and thus the overall DMI energy is negative. Since the canting angle ϕ = 45° produces a right-handed magnetic structure with a chirality vector c = S_A × S_B = −e_z, D in this example is positive.
To conclude, we developed a minimal model that carries the general features of the DMI and is able to successfully reproduce and explain the DMI of the trimer regarding its symmetries, magnitude and sign. DMI can only occur in the presence of spin-orbit interaction in inversion-asymmetric, non-collinear magnetic systems, and its driving force is the hybridization between the orbitals of the magnetic and non-magnetic sites.
V. CONCLUSIONS
In the present paper, we have systematically investigated the non-collinear magnetic properties of infinitely long 3d-5d bi-atomic zigzag chains. Our investigations show that the 3d-5d chains exhibit an induced spin polarization on the 5d atoms, which decreases with increasing atomic number of the 5d element. In comparison to the Co-5d chains, the magnetic moments of the Fe-5d chains show large variations as a function of the lattice constant. We find a parabolic behavior of the energy dispersion in the limit of large wave vectors q for spin-spiral calculations without spin-orbit interaction. The ferromagnetic (q = 0) and antiferromagnetic (q = 0.5) calculations, performed as special cases of the computational model based on the spin-spiral concept, are in good agreement with the conventional collinear ferromagnetic and antiferromagnetic calculations. Without inclusion of the spin-orbit interaction, the Fe-Pt and Co-Pt chains exhibit degenerate spin-spiral ground states at q ≈ ±0.03 and ±0.07, respectively.
Including the spin-orbit interaction, all 3d-5d chains exhibit a non-vanishing DMI, with a sign that depends on the choice of the 5d metal; but only for the Fe-Pt and Co-Pt chains is the DMI sufficiently strong to compete with the MAE and the Heisenberg exchange and to arrive at a non-collinear ground state. Since this non-collinear state is driven by the DMI, the magnetic structure is chiral in nature, exhibiting a homogeneous, left-rotating cycloidal spin spiral. The magnetic ground states of the Fe-Ir and Fe-Au chains remain unaffected by the DMI and stay ferromagnetic.
We analyzed the behavior and strength of the DMI on the basis of the electronic structure by means of the single-particle energies. We observe strong shifts of the single-particle energies due to the spin-orbit interaction in the energy regime where the 3d minority states hybridize with the d states of the 5d metal. Finite positive shifts of the energy eigenvalues followed by negative ones lead to positive and negative contributions to the Dzyaloshinskii-Moriya interaction energy, which exhibits an oscillating behavior across the center of the 5d band. Changing the 5d metal from Ir to Au moves the Fermi energy across the 5d band, which explains the oscillatory sign of the DMI with the choice of the 5d metal.
In order to provide a deeper understanding of the factors that influence the sign and strength of the DMI in low-dimensional systems, at the level of the hybridization between the relevant d-orbitals of the 3d and 5d atoms, we developed a minimal tight-binding model of a cluster of two magnetic 3d-metal atoms and one non-magnetic 5d atom carrying the spin-orbit interaction, arranged in a triangular geometry. The model captures the main features of the ab initio results. The tight-binding calculations show that the breaking of structural inversion symmetry and the non-collinearity of the magnetic sites are crucial to obtain a non-vanishing DMI. The strength of the DMI is linear in the strength of the spin-orbit interaction of the 5d atom. Furthermore, the sign and strength of the DMI are proportional to the sign and strength of the hybridization between the magnetic and non-magnetic sites and inversely proportional to the energy difference between those states.
Network Data Security for the Detection System in the Internet of Things with Deep Learning Approach
We set out to build an interconnected system that allows devices to share data over a communication network without human intervention. An Internet of Things (IoT) system allows many devices to stay connected for long periods without human intervention, but with limited data storage and reduced data-processing capacity, which earlier data-security solutions (for example, conventional cyber-attack defenses) did not account for. Approaches such as artificial intelligence, machine learning and deep learning have demonstrated their ability to process heterogeneous data of different sizes, and many researchers have built on them. In this work we used deep learning to achieve a lightweight security solution for interconnected data; we also rely on the TCP/IP protocol for data transmission control and on classification algorithms. First, we designed a model for anomaly detection in the Internet of Things and considered improvements to existing IoT architectures, proposing a lightweight and, above all, multi-layer solution for an IoT network. Second, we analyzed existing applications of machine learning and deep learning to IoT and cybersecurity. The recent hacks of the 2014 Jeep Cherokee, the iStan pacemaker, and a German steel plant are a few notable security breaches. Finally, from the evaluated metrics, we propose the neural network design best suited for an IoT Intrusion Detection System. With an accuracy of 98.91% and a False Alarm Rate of 0.76%, this research outperformed existing methods on the KDD Cup '99 dataset. To our knowledge, this is the first time in IoT research that the concepts of Gated Recurrent Neural Networks have been applied to IoT security.
I. INTRODUCTION
By definition, networked or interconnected systems handle long streams of data shared in parallel; this type of system facilitates the rapid sharing of data because the data are distributed, and errors are more easily contained per connected device. Such networks also behave like a regression that is not linear: the relationship between the input variables and the produced output is often captured by nonlinear relations, and the final solution found from the variables is represented by a set of many hidden layers, each computing a predefined function. Our study analyzes the new domain of the Internet of Things, presenting an architecture and neural networks. We then compare security and privacy issues at the intersection of deep learning and the Internet of Things. Deep learning is applicable here because IoT generates a huge amount of heterogeneous data. This research combines a multilayer architecture with Internet of Things technology into a single system. The algorithms we apply monitor the interconnected network data and classify activities and attacks at the multiple layers of the architecture. For our research we use the KDD Cup '99 intrusion detection dataset, which many researchers working on network data security consider a reference benchmark. Related work is covered in section (2), the methodology used in our work is in section (3), and the last part focuses on the implementation and outcome of our work (4).
Motivation
In the world of IoT, the datasets are high-dimensional, temporal and multi-modal. Deep learning algorithms with robust computational power are better suited to complex IoT datasets than legacy machine learning techniques. The application of deep learning to the IoT domain, particularly in IoT security, is still in the initial stages of research and has great potential for finding insights in IoT data. With smart use of deep learning algorithms, we believe that IoT solutions can be optimized. For example, recurrent neural networks in deep learning have the capability to learn from previous time-steps of the input data. The data at each time-step is processed, stored, and given as input to the next time-step, so the algorithm at the next time-step utilizes the previously stored information. Though the neural network structures are complex, the hyperparameters can be tuned to obtain lightweight functionality for IoT solutions. This hypothesis motivated us to apply deep learning concepts to IoT network security. A minimal sketch of one such gated time-step is given below.
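The following NumPy sketch illustrates the recurrence just described for a single GRU time-step; the dimensions and random weights are placeholders, not the network trained in this work:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    """One GRU time-step: the update gate z and reset gate r decide how
    much of the previous hidden state h_prev is carried forward, which is
    how the network 'remembers' earlier records in a traffic sequence."""
    z = sigmoid(W['z'] @ x_t + U['z'] @ h_prev + b['z'])   # update gate
    r = sigmoid(W['r'] @ x_t + U['r'] @ h_prev + b['r'])   # reset gate
    h_tilde = np.tanh(W['h'] @ x_t + U['h'] @ (r * h_prev) + b['h'])
    return (1.0 - z) * h_prev + z * h_tilde                # new state

rng = np.random.default_rng(0)
n_features, n_hidden, time_steps = 8, 4, 5
W = {k: rng.normal(size=(n_hidden, n_features)) for k in 'zrh'}
U = {k: rng.normal(size=(n_hidden, n_hidden)) for k in 'zrh'}
b = {k: np.zeros(n_hidden) for k in 'zrh'}
h = np.zeros(n_hidden)
for t in range(time_steps):            # scan a short feature sequence
    h = gru_step(rng.normal(size=n_features), h, W, U, b)
print(h)
```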
Problem statement
The goal of this thesis is to analyze and answer the following research questions: What are the security and privacy issues relevant to the IoT environment? Does a GRU perform better than other machine learning approaches for intrusion detection on the IoT? Does a separate GRU-based IDS for each network layer perform better than a single all-layers GRU?
Contribution
This research can be extended by applying the algorithms in a GPU environment on real-time IoT data. Though there are various deep learning algorithms, such as deep neural networks, autoencoders, convolutional neural networks and recurrent neural networks, the research problem requires an algorithm that can learn from historical data. Therefore, we selected the family of recurrent neural networks for this research. Considering the need for smart and lightweight solutions for the IoT network, we performed the experiments with only the Gated Recurrent Unit (GRU) algorithm, leaving aside the vanilla RNN and LSTM. We partitioned the data into layers so that the same procedure can be applied in an IoT network.
II. RELATED WORKS
There are several existing works in this area. In this section, we discuss the most recent work using related methods and architectures. We were motivated and inspired by the work "Cyber-Physical-Social Based Security Architecture for Future Internet of Things": after studying it closely, we found substantial benefits in pursuing our research in this area. Alrawashdeh and Purdy [18] proposed using an RBM with one hidden layer to perform unsupervised feature reduction. The weights are passed to another RBM to produce a DBN. The pre-trained weights are passed into a fine-tuning layer consisting of a Logistic Regression classifier (trained with 10 epochs) with multiclass softmax. The proposed solution was evaluated using the KDD Cup '99 dataset. The authors claimed a detection rate of 97.90% and a false negative rate of 2.47%, an improvement over results claimed by authors of similar papers. Similarly, Tang et al. [19] proposed a method to monitor network flow data. The paper lacked details about its exact algorithms but presents an evaluation using the NSL-KDD dataset, for which the authors claim an accuracy of 75.75% using six basic features. Kang and Kang [20] proposed the use of an unsupervised DBN to train parameters that initialise a DNN, which yielded improved classification results (exact details of the approach are not clear); their evaluation shows improved performance in terms of classification errors. You et al. [16] propose an automatic security auditing tool for short messages (SMS) based on the RNN model. The authors claimed an accuracy rate of 92.7%, improving on existing classification methods (e.g. SVM and Naive Bayes). In addition, there is other relevant work, including the DDoS detection system proposed by Niyaz et al. [21] for software-defined networks (SDN). Evaluation was performed using custom generated traffic traces; the authors claim a binary classification accuracy of 99.82% and an 8-class classification accuracy of 95.65%. However, we feel that drawing comparisons with this paper would be unfair due to the contextual difference of the dataset: benchmark KDD datasets cover different distinct categories of attack, whereas the dataset used in that paper focuses on subcategories of the same attack.
III. METHODOLOGY
We designed an innovative architecture for an IoT home network that reduces the size of the datasets for the IDS classifier. We selected the KDD Cup 1999 Intrusion Detection Dataset for the experiments and propose an intelligent solution that satisfies the key requirements of IoT deployments. We performed feature engineering using a Random Forest classifier and selected the features with high importance. We performed a rigorous data analysis and prepared the data in the required format before it was used as input to the model.

Proposed Multi-Layer architecture for the IoT network. Out of the various security measures, we selected network security as the use case to show that the defined features are apt for an IoT network. In a regular wireless system, the Intrusion Detection System (IDS) monitors the network data using either a "signature-based approach" or an "anomaly-based approach". An IDS mounted at a point in the network obtains all the network data and classifies it as "normal" or "attack". Beyond these traditional approaches, machine learning (ML) algorithms are applied to a dataset and classification is performed through supervised learning. However, this legacy approach may not suit smart IoT network systems because of their heterogeneity. Security solutions for intrusion detection should be lightweight, multi-layered, and long-lived. Hence, we developed a multi-layered architecture and applied lightweight machine learning algorithms that maintain good performance over long periods of time. An IoT system contains various devices placed at different locations with long distances between them, and the number of devices involved is higher than in a regular wireless or wired system. A single IDS would need the memory capacity to process the network data of all the devices and remain responsive in a short amount of time; its performance would therefore be poor in an IoT network with many, widely separated devices. Instead, an IDS placed at each TCP/IP layer monitors only the data obtained from the devices that belong to that layer. We chose this as the main architecture of our work because it has many advantages and uses a multilayer design well suited to today's systems.
Feature Selection
As explained step by step in the preceding chapters, a Random Forest classification algorithm was used to select the most important features for each classifier, one by one, and the intersecting graphical results for each classifier's features are presented. The "Protocol Type" feature was selected in all intrusion detection layers. Data preparation proceeds as follows (a sketch is given below): dividing the dataset into a feature set and a label set; converting categorical data into numerical data; encoding normal as '1' and abnormal as '0'; converting the input data to float, which is required at the later stage of building the network; and adding another column to the label set as a one-hot encoding, i.e. normal = '1' is represented as '1 0' and abnormal = '0' as '0 1', so that the softmax cross-entropy function can efficiently calculate the accuracy. After loading the data into the system, we apply this preprocessing and feature selection to the dataset and divide it into train and test sets; these operations are required before training and testing the model.
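The sketch below illustrates this preprocessing and feature-selection pipeline with pandas and scikit-learn; the file name and column handling are hypothetical placeholders for the KDD Cup '99 data, not the exact scripts used in this work:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# df is assumed to hold KDD Cup '99 records with a 'label' column.
df = pd.read_csv("kddcup99.csv")

# 1. split into features and labels; binary-encode the label
y = (df["label"] == "normal").astype(int)          # normal=1, abnormal=0
X = df.drop(columns=["label"])

# 2. convert categorical fields (e.g. protocol_type) to numeric codes
for col in X.select_dtypes(include="object").columns:
    X[col] = X[col].astype("category").cat.codes

# 3. float conversion, required later when building the network
X = X.astype(np.float32)

# 4. one-hot labels: normal -> [1, 0], abnormal -> [0, 1]; this is the
#    target format for a softmax cross-entropy head
y_onehot = np.eye(2, dtype=np.float32)[1 - y.to_numpy()]

# 5. train/test split, then feature selection by Random Forest importance
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=42)
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_tr, y_tr)
top10 = X.columns[np.argsort(rf.feature_importances_)[::-1][:10]]
print(list(top10))   # e.g. 'protocol_type' ranks highly across layers
```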
Normalizing the Input Features and Hyperparameters: Here we do not restrict the input size, so batch_size is given as "None"; weights and biases are initialized at random using the tf.random_normal function. Sizes are defined appropriately according to the logic, and the output layer emits either '1 0' or '0 1'.

Building the Model: Before building the model, we have to reshape the inputs from 2D tensors into 3D tensors. We can specify a loss function just as easily: the loss indicates how bad the model's prediction was on a single example, and we try to minimize it while training across all the examples. Here, our loss function is the cross-entropy between the target and the softmax activation function applied to the model's prediction. A sketch of such a model is given below.
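A minimal Keras sketch of this kind of model follows. The hidden size is an assumption; the 40 time steps and 0.01 learning rate follow the optimized settings reported for the Application-Layer IDS below, and in practice the 2D feature matrix would first be reshaped to (samples, time_steps, features):

```python
import tensorflow as tf

time_steps, n_features = 40, 10   # e.g. the 10 selected features
n_hidden = 64                     # hidden size is an assumption

model = tf.keras.Sequential([
    # inputs are 3D tensors (samples, time_steps, features), obtained by
    # reshaping the 2D preprocessed matrix before training
    tf.keras.layers.Input(shape=(time_steps, n_features)),
    tf.keras.layers.GRU(n_hidden),
    tf.keras.layers.Dense(2, activation="softmax"),  # '1 0' vs '0 1'
])

# cross-entropy between the one-hot targets and the softmax prediction,
# minimised across all training examples
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```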
Evaluation Metrics
In this part we compare the following values: training accuracy and learning rates, in order to understand the behavior of the model as these hyper-parameters change with the number of time steps, after testing the performance of each IDS classifier with the hyper-parameters of the GRU algorithm. We performed the same type of experiment for all IDS classifiers (the all-layers classifier and the individual classifiers for the application layer, the transport layer, and the network layer), and we report the results of these classifications together with their performance. The headline metrics are computed from the confusion matrix, as sketched below.
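For reference, accuracy and false alarm rate can be computed from the confusion matrix as in this sketch, using one common definition of the false alarm rate, FAR = FP / (FP + TN); the label arrays are placeholders:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# here 1 = attack and 0 = normal, so that "false alarm" means normal
# traffic flagged as an attack (placeholder arrays)
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 1, 1, 0, 1, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
false_alarm_rate = fp / (fp + tn)   # normal records predicted as attack
print(f"accuracy = {accuracy:.4f}, FAR = {false_alarm_rate:.4f}")
```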
Classifier performance results with application layer:
In this experiment we achieved the best training accuracy with 40 time steps, which is our final result for this layer. The confusion matrix of the optimized classifier (time steps = 40, learning rate = 0.01) and the corresponding plot for the Application-Layer IDS can be analyzed in the corresponding table and figure.
Comparing the results of all classifiers and their layers
The optimized results of all the IDS classifiers were compared, and the All-Layers IDS classifier was found to be inferior to the individual-layer IDS classifiers in terms of training accuracy and training time. The lightweight algorithms, when used in a multilayer architecture, perform better, which suits an IoT system. The comparison of the results can be found in the corresponding table.
V. CONCLUSIONS
Our studies analyze the new domain of the Internet of Things, presenting an architecture and neural networks. This research focused on IoT elements where processing power is low and data sizes are modest. This interdisciplinary research is novel in applying deep learning methods to IoT security. We have proposed a lightweight architecture for an Intrusion Detection System (IDS) in an IoT network. Based on the TCP/IP layer architecture and the attack types at each layer, we suggested placing IDS classifiers at each layer. This reduced the dataset size at each classifier and improved the performance in terms of accuracy, recall, training time and false alarm rate. We applied deep learning algorithms to classify the data at each IDS classifier. This approach achieved better results than existing work in the literature. Moreover, we used the full KDD Cup '99 22% dataset for the experiments. One could also build a hybrid network using convolutional neural networks and recurrent neural networks to deal with multi-modal data. In closing, this research focused on data security, and our goal was achieved given the satisfactory end result. Our research shows positive results that exceed those of existing work. We will continue to deepen our knowledge, and everyone's suggestions are welcome.
ACKNOWLEDGEMENT: In gratitude, I first thank my God and my family (my father and mother) for the life they gave me. This research is the result of enormous support from my dear teacher MAYAO. I am thankful for his humble and simple personality, and I thank him with all my heart for all the sacrifices, direction, understanding and advice; despite the language barrier that kept us from communicating easily, my teacher was always there for me. I thank all the teachers of my department and those who taught me the Chinese language for their support and encouragement. I am also grateful to all the friends who helped me a lot and motivated me to reach my goal; in particular, I thank Jean Marie Cimula and Miguel Kakanakou for their love, motivation and encouragement. I believe one must have a resilient spirit to face the realities of life and the determination to accomplish one's goals.
Mucormycosis: the clinical spectrum
Mucormycosis is an opportunistic fungal infection caused by saprophytic fungal elements commonly found in soil, decaying materials and food. The respiratory route is the most common mode of entry to the body. The fungi have filamentous, nonseptate hyphae with right-angled branching. These fungal elements are widely distributed in nature and are commonly found in decaying matter. Though mucormycosis can affect any part of the body, it has a predilection for certain organ systems. It can present as disseminated, cutaneous, rhinocerebral, pulmonary, gastrointestinal or central nervous system involvement. The predisposing disease condition determines the predilection for one of these presentations; diabetic patients most often develop rhinocerebral mucormycosis. The host response to mucorales is predominantly by neutrophils. Fungal growth and proliferation are promoted by certain metabolic states such as hyperglycemia and acidosis. The increased release of iron from ferritin, which occurs because of acidosis, causes enhanced fungal and hyphal growth. Mucormycosis is not commonly encountered in day-to-day practice, and hence a high index of suspicion is needed to avoid delay in diagnosis and treatment.
INTRODUCTION
Mucormycosis is an opportunistic fungal infection caused by saprophytic fungal elements commonly found in soil, decaying materials and food. The respiratory route is the most common mode of entry to the body. The fungi have filamentous, nonseptate hyphae with right-angled branching. 1 These fungal elements are widely distributed in nature and are commonly found in decaying matter. Though mucormycosis can affect any part of the body, it has a predilection for certain organ systems. It can present as disseminated, cutaneous, rhino-cerebral, pulmonary, gastrointestinal or central nervous system involvement. The predisposing disease condition determines the predilection for one of these presentations; diabetic patients most often develop rhino-cerebral mucormycosis. The host response to mucorales is predominantly by neutrophils. 2 Fungal growth and proliferation are promoted by certain metabolic states such as hyperglycemia and acidosis. 3 The increased release of iron from ferritin, which occurs because of acidosis, causes enhanced fungal and hyphal growth. 4 Mucormycosis is not commonly encountered in day-to-day practice, and hence a high index of suspicion is needed to avoid delay in diagnosis and treatment. Management of mucormycosis still represents a big challenge and is based on different strategies which envisage rapid diagnosis, removal or reduction of risk factors, and rapid, aggressive antifungal therapy, with or without surgical treatment. 5 Symptoms and signs of rhinocerebral mucormycosis include fever, headache, proptosis, dark nasal eschar, redness of skin overlying the sinuses, and sinus pain or congestion. Symptoms of pulmonary mucormycosis include cough, hemoptysis, dyspnoea and fever. Symptoms of gastrointestinal mucormycosis include abdominal pain, hematemesis and diarrhea. Symptoms of renal mucormycosis include flank pain and fever.
The aim of the study is to assess the different modes of presentation, risk factors, management and prognosis of patients with mucormycosis.
METHODS
This is a retrospective study conducted at Fr Muller Medical College Hospital between January 2016 and October 2017, including all patients hospitalized for mucormycosis confirmed by mycological and/or histological findings. The study was approved by the institutional research and ethics committee. All case records with a diagnosis of mucormycosis were identified from the inpatient medical records department over the study period. For each case, the clinical information was recorded from the case sheet.
Inclusion criteria
All cases admitted to the authors' hospital between January 2016 and October 2017 with a histological/mycological diagnosis of mucormycosis affecting any organ.
Exclusion criteria
• All cases diagnosed with mucormycosis with any hematological malignancy • Post-chemotherapy status • Post-radiotherapy status or any solid organ carcinoma • Post-organ transplant recipients.
For each patient identified from the case records, clinical information was recorded in a pre-formed table including name, age, sex, clinical features, premorbid illness if any, involved sites, mode of diagnosis, radiology, predisposing risk factors, predisposing local conditions, treatment given, surgical debridement if any, and outcome. Mucormycosis was diagnosed by mycological/histopathological findings and fungal isolation. Laboratory-based diagnostics included conventional procedures such as fungal culture, direct fungal stain and histopathology. Computerized tomography of the involved site was done in all patients to determine the extent of the lesion.
Statistical analysis
Qualitative data such as gender, morbidities, and outcome were analysed as frequencies and percentages. Quantitative data such as age, laboratory values and duration of stay are presented as mean and standard deviation with 95% confidence intervals (a computational sketch is given below).
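As an illustration of this analysis plan, here is a minimal sketch of the mean, standard deviation and t-based 95% confidence interval; the age values are hypothetical, chosen only to match the reported range, and are not the study data:

```python
import numpy as np
from scipy import stats

def describe(values, confidence=0.95):
    """Mean, sample standard deviation and t-based confidence interval,
    suitable for small samples like this 7-patient series."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    sem = sd / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    return mean, sd, (mean - t_crit * sem, mean + t_crit * sem)

# hypothetical ages for 7 patients (illustrative only)
ages = [45, 50, 54, 56, 58, 62, 65]
mean, sd, ci = describe(ages)
print(f"mean={mean:.1f}, SD={sd:.1f}, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```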
RESULTS
During the eighteen-month study period, a total of 7 cases of mucormycosis were identified. The patient characteristics are shown in Table 1. The mean age at presentation was 55.7 years. The study group consisted of 4 male and 3 female patients aged between 45 and 65 years. The most common clinical presentation was facial pain, headache, nasal obstruction and swelling. Hemoptysis was the main clinical presentation in patients with pulmonary mucormycosis (100%). The majority of the patients were immunocompromised, having diabetes mellitus (71.4%); of the remainder, one patient had chronic local sinusitis (14.2%) and one was immunocompetent (14.2%).
Mucormycosis occurs sporadically in patients with uncontrolled diabetes or previous trauma and may involve any area of the body. The main site of involvement in this case series was the rhinocerebral/rhino-orbital region (71.4%), as depicted in Figure 1. Pulmonary involvement was seen in 28.6% of cases.
Hemoptysis was the initial presentation in pulmonary mucormycosis, while hemifacial pain with nasal obstruction was seen in the rest of the cases.
All the patients had neutrophilic leukocytosis on the complete blood picture, with elevated ESR. The mean ESR in this study was 79.
The main risk factor in all these cases was uncontrolled diabetes mellitus (71.4%). The mean HbA1c level was 13. The comorbid illnesses seen in the patients are shown in Table 2. Other comorbid illnesses included chronic kidney disease, hypertension, rheumatoid arthritis, HBsAg-positive status with high viral load, old pulmonary tuberculosis and chronic local sinusitis. One patient was immunocompetent with no comorbid illness. Urine ketones were positive in all diabetic patients at the time of initial presentation of rhino-orbital mucormycosis. In diabetic ketoacidosis there is a high incidence of mucormycosis caused by Rhizopus oryzae, also known as Rhizopus arrhizus, because these fungi produce the enzyme ketoreductase, which allows them to utilize the patients' ketone bodies.
The presence of broad, aseptate fungal hyphae, with right-angle branching varying from 45° to 90°, was demonstrated on histopathology. All the cases underwent radiological imaging for diagnosis, with surgical debridement done in 5 cases. Fungal mycology was positive in all the cases. Two cases received conventional injectable amphotericin B at a dose of 1 mg/kg alone for two weeks without surgical intervention, while the remaining 4 cases received surgical debridement followed by itraconazole. One case of pulmonary mucormycosis died just before the initiation of treatment.
DISCUSSION
The mean age in the present study was 55.7 years, consistent with the study published by Gupta et al., which had a mean age of 50 years. 6 Mucormycosis has shown an equal sex distribution, but in this study there was a slight male predominance, with a male:female ratio of 1.3:1. The most common clinical presentation was rhino-orbital mucormycosis. Inhalation appears to be the most common route of infection, with subsequent involvement of the respiratory tract resulting in both rhino-orbito-cerebral and pulmonary forms; the literature review has shown that the most common clinical presentation of mucormycosis is rhino-orbital. 8 We have therefore hypothesized that a chronic local insult, such as chronic sinusitis, might act as a predisposing factor for the development of mucor infection in immunocompetent, otherwise healthy individuals.
This speculation seems to be supported by the evidence that chronic sinusitis may be caused by an alteration of the first-line barrier defense of the upper airway (sinusal mucosa) due to impaired mucociliary clearance. An impairment or loss of immune defense at the sinusal mucosa would render individuals more vulnerable to fungal colonization.
The most common predisposing factor was uncontrolled diabetes mellitus, as also shown by Chakrabarti et al. in their study. 9 The mean ESR in the present study was 79; a study by Ghafur et al. showed a significantly elevated mean ESR of 118 and also predominant neutrophilic leukocytosis in their case series. 10 Positive urinary ketones also favour mucormycosis, as shown in a study by Baldin et al.: the unique interactions of GRP78 and CotH proteins and their enhanced expression under hyperglycemia and ketoacidosis explain the specific susceptibility of DKA patients to mucormycosis. 11 Hyperglycemia causes a reduction in chemotaxis and phagocytic efficiency.
One patient in the present study died even before initiation of treatment, due to massive hemoptysis. Pulmonary mucormycosis has a higher mortality rate, as shown in a study by He et al. 12
CONCLUSION
Mucormycosis is a rare opportunistic fungal infection with a rapidly progressive and fulminant course and an often fatal outcome. Uncontrolled diabetes mellitus is a strong predisposing factor for mucormycosis. Elevated ESR with leukocytosis may be seen in most patients with suspected mucormycosis.
Landscape-typological mapping of the Baikal region (within the boundaries of the Irkutsk region)
The aim of the work is to create a landscape-typological map of the territory to study the territorial differentiation of recreational activities. The variety of landscapes of the Baikal region is determined by a large number of landscape-forming factors, as well as the heterogeneity of the conditions in which geocomplexes are formed. A landscape-typological map of the territory was created at a scale of 1:500 000. As input data for creating the map, electronic topographic maps of the territory, Landsat 5, 7 and 8 space images for different seasons and years (including the MrSID mosaic for 2000 and the Hansen mosaic for 2016), digital elevation models, and landscape, geological, soil and geobotanical maps of different scales were used. A total of 56 landscape units were identified: 13 highland (goltsy, subgoltsy, reduced-development mountain-taiga), 6 middle-mountain (limited-development taiga), 14 low-mountain (optimal-development mountain-taiga, steppe), 13 foothill (limited-development taiga, optimal-development taiga, subtaiga, steppe), and 10 in intermontane depressions and valleys (limited-development taiga, optimal-development taiga, steppe). The conclusions drawn about the landscape structure and recreational properties of the territory are generalized. More detailed landscape research is needed, including comprehensive field work, to clarify the landscape map and assess the recreational significance of landscapes.
Introduction
The territory of the Baikal region, which is, on the one hand, promising for the development of various types of economic activity and, on the other hand, unique and highly valuable from an ecological point of view, needs scientifically based planning of nature management. The information basis for such planning should be a reliable medium-scale landscape map reflecting the potential and current state of the landscapes of the territory. Landscape maps of different scales for the territory of the Baikal region, created by employees of various laboratories of the Institute of Geography SB RAS in different years, are mostly fragmentary and require generalization and reduction to a single map [1].
The aim of the work is to create a landscape-typological map of the territory to study the territorial differentiation of recreational activities. Different landscapes, with their particular properties (vegetation, relief, humidification conditions, aesthetic appeal, comfort, resistance to recreational loads), are appropriate for the development of different types of recreational activity.
The themes of planning recreational activities on a landscape basis, sustainable tourism development, and the assessment and mapping of cultural ecosystem services are widely represented in both Russian and foreign studies.
Objects
As the object of the study, the territory of the Baikal region within the boundaries of the central ecological zone of the Baikal natural territory within the Irkutsk region was chosen.
The variety of landscapes of the Baikal region is determined by a large number of landscape-forming factors. The geological structure of the territory is very contrasting. Precambrian acid magmatic and metamorphic rocks (granites, gneisses, stannites, argillites and conglomerates) prevail in the area. On the east coast, granitoids with metamorphic (crystalline schists, biotite-garnet plagioclases, amphibolites) and sedimentary (sandstone, gravel) rocks predominate, while on the western side Proterozoic sedimentary and metamorphic deposits (sandstones, siltstones, various shales, gneisses and tuffs), as well as igneous rocks (granites, granosyenites, porphyries, granodiorites), prevail [3].
The climate of the Baikal region is determined by the location of the territory in the belt of temperate latitudes, its considerable distance from the oceans, and the features of the mountain-hollow relief. Average annual air temperatures are negative throughout the territory. The lowest temperatures are recorded in January (−16 °C above the water area of Lake Baikal, down to −28 °C on the periphery), the highest in July (from 8 °C above the water area of Lake Baikal to 20 °C in the basins of the Baikal region). Western transport of air masses prevails, with possible inflows of cold air from the north and of warm, moist air from the south. The annual amount of precipitation varies from 200 mm or less in Priolkhonye to 1000 mm and more on the Khamar-Daban ridge [4].
The complexity of the geological history and the modern conditions of the Baikal region determine the contrast of the soils of the territory. In the goltsy belt, stony, low-power organogenic-gravelly soils, substructures and podzols predominate; in the forest zone, soddy taiga soils with accompanying podzolic, soddy-carbonate and brown soils; in the steppes of Priolkhonye, chestnut and sod-steppe soils. Meadow, meadow-marsh and marsh soils prevail in valleys, and saline soils occur near saline lakes in Priolkhonye under saline meadows [3].
The distribution of vegetation in the area is determined by high-altitude zones, complicated by the bolson effect, aspect, lithology, the microclimate of Lake Baikal, and latitudinal and meridional zoning. The alpine, mountain-tundra, subalpine, subgoltsy, light coniferous taiga, dark coniferous taiga, steppe and shrub-meadow-marsh plant complexes are distinguished in the area [5].
The landscape structure of the Baikal region is characterized by high complexity and contrast. Two large regions of North Asia, the Baikal-Dzhugdzhursky and South-Siberian physico-geographical regions, are in contact here; three types of natural environment are combined (tundra, taiga and steppe), along with a wide range of landscapes: goltsy, subgoltsy, mountain-taiga, mountain-forest, mountain-forest-steppe (subtaiga) and mountain-steppe [6].
Data, methods, results and discussion
A landscape-typological map (figure 1) of the territory was created at a scale of 1:500 000. As input data for creating the map, electronic topographic maps of the territory, Landsat 5, 7 and 8 space images for different seasons and years (including the MrSID mosaic for 2000 and the Hansen mosaic for 2016), digital elevation models, and landscape, geological, soil and geobotanical maps of different scales were used [7]. With the use of a digital elevation model, maps of slope and aspect were constructed (a sketch of this derivation is given below). When creating the map, methods of automatic processing of space images were used. Using visual interpretation of Landsat images of different seasons, together with the maps of slope and aspect, the territory was differentiated into homogeneous landscape units and their typological affiliation was determined. In the legend (table 1) of the landscape map, landscapes are grouped into clusters by high-altitude belt (high-mountain, middle-mountain, low-mountain, piedmont, intermountain depressions and valleys), developmental conditions (optimal, reduced and limited), and features of mesorelief and vegetation. A total of 56 landscape units were identified: 13 highland (goltsy, subgoltsy, reduced-development mountain-taiga), 6 middle-mountain (limited-development taiga), 14 low-mountain (optimal-development mountain-taiga, steppe), 13 foothill (limited-development taiga, optimal-development taiga, subtaiga, steppe), and 10 in intermontane depressions and valleys (limited-development taiga, optimal-development taiga, steppe).

A fragment of the map legend (table 1) is recoverable here (landscape unit: mesorelief and vegetation):
11. (mesorelief not recovered) Siberian pine and fir-Siberian pine (with larch on northern slopes) sparse woods, subshrub-true moss with short grass (with Siberian dwarf pine and golden rhododendron on northern slopes, with bergenia on steep slopes, with golden rhododendron on gentle slopes and planate surfaces).
12. Planate surfaces of watersheds, upper and lower slopes: fir and Siberian pine-fir (with spruce and Siberian pine on northern slopes), subshrub-true moss (with bergenia on steep slopes, with Siberian dwarf pine on northern slopes), in combination with subshrub-herbaceous and herbaceous cover on southern slopes.
Middle-mountain, mountain-taiga of limited development:
13. Flat surfaces: larch with an admixture of Siberian pine, cowberry-bergenia.
14. Slopes: larch with an admixture of Siberian pine and spruce, with mixed undergrowth.
15. Larch with pine.
16. Flat watersheds and southern slopes: fir-Siberian pine and Siberian pine-fir, subshrub-short grass-true moss (short grass-subshrub-true moss in places).
17. Largely gentle slopes: Siberian pine and fir-Siberian pine (with an admixture of spruce and larch in places), subshrub-true moss with short grass (subshrub-true moss with marsh tea on northern gentle slopes).
18. Slopes of different light aspects and steepness: fir and spruce-fir (on steep slopes, Siberian pine-fir), subshrub-true moss (with bergenia on steep slopes, with marsh tea on northern gentle slopes, short grass-large grass-small reed and cowberry-forb with bracken on southern gentle slopes, with sphagnum moss on the lower parts of slopes).
Low-mountain, mountain-taiga of optimal development, Baikal-Dzhugdzhurski:
19. Slopes: larch with shrub undergrowth.
20. Larch with an admixture of pine, and pine-larch forb, cowberry-forb and grass-moss forests.
21. Gentle slopes: larch rare forests with rare shrub undergrowth, gramineous-forb, combined with steppes.

The study area has a high landscape diversity, which makes it possible to develop a wide range of tourism activities [8]. Landscapes of the goltsy belt (3% of the research area) are represented by near-watershed, slope, rocky and rockfall-talus moorlands with lichen tundra complexes and alpine meadows. In the subgoltsy belt (11% of the study area), moss-lichen, yernik, Siberian dwarf pine and shrub-lichen tundras, as well as larch, Siberian pine and fir sparse forests, are widespread. They are confined mainly to the peaks and slopes of the Khamar-Daban and Baikal ridges and, to a lesser extent, the Primorsky ridge.
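For the slope and aspect derivation mentioned in the methods above, a minimal sketch follows; the grid values and cell size are illustrative, and GIS packages typically use Horn's method rather than the plain central differences used here:

```python
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    """Slope (degrees) and aspect (degrees, one common clockwise-from-north
    convention) from a DEM array, via central differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # rows = y, columns = x
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# toy 5x5 elevation grid in metres; the real input is the study-area DEM
dem = np.array([[500, 510, 520, 530, 540],
                [505, 515, 525, 535, 545],
                [510, 520, 530, 540, 550],
                [515, 525, 535, 545, 555],
                [520, 530, 540, 550, 560]], dtype=float)
slope, aspect = slope_aspect(dem, cell_size=30.0)
print(slope.round(2))
print(aspect.round(1))
```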
Mountain-taiga landscapes occupy 62% of the investigated territory. On the Khamar-Daban ridge (Southern-Siberian region) they are mainly represented by dark-coniferous Siberian pine and spruce-fir-Siberian pine forests. The mountain-taiga belt of the Primorsky range is represented mainly by larch and pine, as well as Siberian pine with an admixture of spruce, and larch forests. The Baikal ridge is dominated by fir-Siberian pine forests. On the Olkha plateau, fir-Siberian pine forests, as well as areas of their light-coniferous and small-leaved succession stages, predominate.
Submountain and subtaiga complexes (9% of the study area) are characterized by moderately rough relief and high species diversity. On the Olkha plateau they are mainly represented by light-coniferous and small-leaved forests on the slopes facing Lake Baikal; on the Khamar-Daban ridge, by Siberian pine-fir forests of the foothills; on the Baikal ridge, by larch with Siberian pine forests; and on the Primorsky ridge, in Priolkhonye and on Olkhon island, by subtaiga pine and larch forests.
Steppe landscapes of Priolkhonye and Olkhon island, occupying piedmont locations in the "rain shadow" of the Primorsky ridge, are spatially differentiated depending on the relief and the lithomorphic factor, and are characterized by the predominance of large-grained feather-grass steppes on the bottoms of the basins (9%) and small-grained lithotrophic steppes on terraces (6%).
About 30% of the Baikal ridge, 13% of the Primorsky ridge, 9% of Priolkhonye and Olkhon, and 7% of the Olkha plateau within the central ecological zone of Lake Baikal are occupied by new and regenerating burnt areas.
Conclusions
Thus, within the territory under study, a number of belts (goltsy, subgoltsy, mountain-taiga, piedmont and subtaiga, and steppe) are identified, each of which offers unique conditions for tourism development.
The conclusions drawn about the landscape structure of the territory are generalized. More detailed landscape research is needed, including comprehensive field work to clarify the landscape map and assess the recreational significance of landscapes.
School of Mathematical Sciences
Introduction and main results
The six-vertex model and the related spin-1/2 XXZ chain play a central role in the theory of exactly solved lattice models [1]. Typically the six-vertex model is 'solved' by diagonalising the row-to-row transfer matrix with periodic boundary conditions. Several methods have evolved for doing this, including the co-ordinate Bethe ansatz [1,2], the algebraic Bethe ansatz [3,4], and the analytic ansatz [5]. All of these methods rely heavily on the conservation of arrow flux from row to row of the lattice.
In terms of the vertex weights (see figure 1), the transfer matrix eigenvalues on a strip of width N are given by [1] the expressions (1.1)-(1.5). The Bethe ansatz equations follow from (1.2) as (1.6), for j = 1, . . . , n. The integer n labels the sectors of the transfer matrix.
Here we consider the same six-vertex model with anti-periodic boundary conditions. That such boundary conditions should preserve integrability is known through the existence of commuting transfer matrices [6]. However, the solution itself has not been found previously.
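As a numerical sanity check of this integrability statement, the following sketch builds the anti-periodic transfer matrix for a small chain and verifies that [T(u), T(v)] = 0. The symmetric-gauge weights a = sinh(λ + v), b = sinh(v), c = sinh(λ) are a standard choice and may differ from the normalisation of figure 1; commutation does not depend on that choice:

```python
import numpy as np

def six_vertex_R(v, lam):
    """Six-vertex R-matrix in a symmetric gauge on C^2 (x) C^2."""
    a, b, c = np.sinh(lam + v), np.sinh(v), np.sinh(lam)
    return np.array([[a, 0, 0, 0],
                     [0, b, c, 0],
                     [0, c, b, 0],
                     [0, 0, 0, a]])

def transfer_matrix(v, lam, N, antiperiodic=True):
    """Row-to-row transfer matrix on N sites: trace over the auxiliary
    space of sigma^x (anti-periodic) or the identity (periodic) times the
    monodromy matrix, built as a 2x2 block matrix of chain operators."""
    dim = 2 ** N
    R = six_vertex_R(v, lam).reshape(2, 2, 2, 2)  # (aux', site', aux, site)
    # 2x2 blocks of the L-operator, each acting on one site
    L = [[R[i, :, j, :] for j in range(2)] for i in range(2)]
    # monodromy M = L_N ... L_1 as 2x2 blocks of chain operators
    M = [[np.eye(dim) * (i == j) for j in range(2)] for i in range(2)]
    for n in range(N):
        def embed(X):  # embed a one-site operator at site n of the chain
            return np.kron(np.kron(np.eye(2 ** n), X),
                           np.eye(2 ** (N - n - 1)))
        M = [[sum(embed(L[i][k]) @ M[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
    if antiperiodic:        # Tr(sigma^x . M) = M_01 + M_10
        return M[0][1] + M[1][0]
    return M[0][0] + M[1][1]

N, lam = 3, 0.7
T_u = transfer_matrix(0.25, lam, N)
T_v = transfer_matrix(0.60, lam, N)
comm = T_u @ T_v - T_v @ T_u
print("||[T(u), T(v)]|| =", np.linalg.norm(comm))  # ~ 0, machine precision
```

The twisted trace still yields a commuting family because the symmetric six-vertex R-matrix commutes with the arrow-reversal operator σ^x ⊗ σ^x.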
In section 2 we solve the anti-periodic six-vertex model by the 'commuting transfer matrices' method [1]. This approach has its origin in the solution of the more general 8-vertex model [7], which, like the present problem, no longer enjoys arrow conservation. We find the transfer matrix eigenvalues to be given by the functional relation (1.7), where now Q(v) = ∏_{k=1}^{N} sinh ¼(v − v_k). (1.8) In this case the Bethe ansatz equations are given by (1.9), for j = 1, . . . , N. In contrast with the periodic case, the number of roots is fixed at N.
In section 3 we use this solution to derive the interfacial tension s of the six-vertex model in the anti-ferroelectric regime. Defining x = e^{−λ}, our final result is e^{−s/k_B T} = 2x, (1.10) in agreement with the result obtained from the asymptotic degeneracy of the two largest eigenvalues [1,8].
With anti-periodic boundary conditions on the vertex model, the related XXZ Hamiltonian is written in terms of the usual Pauli matrices σ^x, σ^y and σ^z, with twisted (anti-periodic) boundary conditions on the boundary spins. This boundary condition has appeared previously and is amongst the class of toroidal boundary conditions for which the operator content of the XXZ chain has been determined by finite-size studies [9]. It is thus an integrable boundary condition, with the eigenvalues of the Hamiltonian following from (1.7) in the usual way [1]. We anticipate that the approach adopted here may also be successful in solving other models without arrow conservation. The solution given here can be extended, for example, to the spin-S generalisation of the six-vertex model/XXZ chain [10].
Exact solution
The row-to-row transfer matrix T has elements given by (2.1)-(2.2), where α = {α_1, . . . , α_N}, β = {β_1, . . . , β_N}, and the anti-periodic boundary condition is such that µ_{N+1} = −µ_1. Now consider an eigenvector y of the product form y = g_1(α_1) ⊗ · · · ⊗ g_N(α_N), where the g_i(α_i) are two-dimensional vectors. From (2.2) the product Ty can be written in terms of 2 × 2 matrices G_i(±) whose elements follow from the vertex weights; the spin-reversal operator S appears in this expression. In particular, there still exist the same 2 × 2 matrices P_1, . . . , P_N such that (2.8) holds, where P_i and H_i are of the form (2.9). As for the periodic case, (2.8) follows from the local 'pair-propagation through a vertex' property, i.e. the existence of suitable two-dimensional vectors for α, µ = ±1. The available parameters are [1] g_i(+) = 1, g_i(−) = r_i e^{(λ+v)σ_i/2}. However, p_{N+1} needs to be different from the periodic case (where p_{N+1} = p_1): the anti-periodicity suggests that we require p_{N+1} to be related to p_1 through the spin reversal, up to some scalar h. Since we already require p_i(+) = 1 and p_i(−) = r_i, we must have (2.14) and, in addition, a further condition on h. To proceed further, we write P_1 and P_{N+1} in full; putting the pieces together, and using the relation which follows from (2.7) to (2.9) as for the periodic case, we arrive at (2.20). At this point it is more convenient to change notation, after which the result (2.20) can be written more conveniently. To proceed further, let Q_R^±(v) be a matrix whose columns are linear combinations of the vectors y_σ^± with different choices of σ (2^N altogether). It follows from (2.25) that the corresponding matrix relation holds for Q_R^±(v). One can also show that the transpose of the transfer matrix has the property T(−v) = T^t(v).
Then we can show that the "commutation relations" hold for arbitrary u and v. This result follows if we can prove that the function F_{σσ′}(u, v) is a symmetric function of (u, v) for all choices of σ, σ′. Using (2.24), (2.21) and (2.22), this function can be written out explicitly as in (2.29). Suppose that σ and σ′ differ at p sites i_1, . . . , i_p. The terms in F_{σσ′} which involve these σ_{i_k} (in the prefactor and in the j = i_k terms) are manifestly symmetric in (u, v). The remaining terms are exactly of the form (2.29) with N → N − p after relabelling of sites. We can thus restrict ourselves to the case σ = σ′. To prove this case we proceed inductively. From (2.28) we have (2.30). Let us now denote F_{σσ} = F_N(σ_1, . . . , σ_N). By inspection, F_1(σ_1) and F_2(σ_1, σ_2) are symmetric. Suppose that F_{N−2} is symmetric in (u, v), and furthermore that σ_k + σ_{k+1} = 0 for some k. Then from (2.30) we have F_N(σ_1, . . . , σ_k, −σ_k, . . . , σ_N) = F_{N−2}(σ_1, . . . , σ̂_k, σ̂_{k+1}, . . . , σ_N) times a symmetric function of (u, v) (the hats denoting omitted arguments), which is therefore symmetric in (u, v). This is true for all 1 ≤ k ≤ N − 1. The only case left to consider is therefore σ_1 = · · · = σ_N, for which F_N reduces to a symmetric function of (u, v), which is again symmetric. Thus by induction on N, the assertion (2.28) follows.
As in the periodic case, we assume that Q_R(v) is non-singular at some point, so that a matrix Q(v) with the required properties can be defined from it. Then from (2.27) and (2.28) we obtain the functional relation (1.7) for the eigenvalues Λ(v).
Interfacial tension
In this section we derive the interfacial tension by solving the functional relation (1.7) and integrating over the band of largest eigenvalues of the transfer matrix [11]. We consider the case where N, the number of columns in the lattice, is even. The partition function of the model is expressed in terms of the eigenvalues Λ(v) of the row-to-row transfer matrix T(v) as Z = Σ_Λ Λ(v)^M, (3.1) where the sum is over all 2^N eigenvalues and M is the number of rows.
The interfacial tension is defined as follows. Consider a single row of the lattice. For a system with periodic boundary conditions, in the λ → ∞ limit we see from (1.1) that the vertex weight c is much greater than the weights a and b, so in this limit the row can be in one of two possible anti-ferroelectrically ordered ground states. These are made up entirely of vertices with Boltzmann weight c, and are related to one another by arrow reversal.
When we impose anti-periodic boundary conditions, this ground-state configuration is no longer consistent with N even. To ensure the anti-periodic boundary condition, vertices with Boltzmann weight c must occur an odd number of times in each row. Thus the lowest-energy configuration for the row in the λ → ∞ limit will consist of N − 1 vertices with weight c, and one vertex of either type a or b. This different vertex can occur anywhere in the row.
As we add rows to form the lattice, the a or b vertex in each row forms a "seam" running approximately vertically down the lattice; it can jump from left to right but the mean direction is downwards.† A typical lowest-energy configuration is shown in figure 2. The extra free energy due to this seam is called the interfacial tension. This will grow with the height M of the lattice, so we expect that for large N and M the partition function of the lattice will be of the form Z ≈ exp(−NMf/k_B T) exp(−Ms/k_B T), (3.2) where f is the normal bulk free energy, and s is the interfacial tension. † This is the analogue of the anti-ferromagnetic seam in the Ising model [12].
We introduce the variables x = e^{−λ} and z = e^{−v/2}. Expressing the Boltzmann weights in terms of z and x, from (1.1) the model is physical when z and x are real and z lies in the interval (3.4). We consider λ ≥ 0 in order that the Boltzmann weights are non-negative, so we must have 0 < x ≤ 1. Here z_j = e^{−v_j/2}, j = 1, . . . , N, and in terms of these variables the functional relation (1.7) becomes (3.7), relating the polynomials Q̃(z) and V(z). Both terms on the right-hand side of (3.7) are polynomials in z of degree 3N, but the coefficients of 1 and z^{3N} vanish, so z^{−1}V(z) is a polynomial in z of degree 2N − 2. We know how to solve equations of this form for both V(z) and Q̃(z) using Wiener-Hopf factorisations (see references [7,8] and [13]).
We shall need some idea of where the zeros of the polynomials Q̃(z) and V(z) lie in order to construct the Wiener-Hopf factorisations. From the anti-periodicity of T(v) we see that V(z) is an odd function of z, so its zeros and poles must occur in plus-minus pairs. To locate the zeros in the z-plane, we consider z to be a free variable and vary the parameter x, in particular looking at the limit x → 0. We find the following: in the x → 0 limit, N − 2 of the N zeros of Q̃(z) lie on the unit circle, the other two lying at distances proportional to x^{1/2} and x^{−1/2}. For V(z), there is the simple zero at the origin, and two zeros on the unit circle. The remaining 2N − 4 zeros of V(z) are divided into two sets, with N − 2 of them approaching the origin and N − 2 approaching ∞ as x → 0. The N zeros of the two polynomials that lie on the unit circle are spaced evenly around the circle.
As x is increased, the zeros of Q̃(z) and z^{−1}V(z) will all shift. We assume that the distribution of the zeros mentioned above does not change significantly as x increases. Thus the zeros that lie at the origin in the x → 0 limit move out from the origin as x increases, but not so far out as the unit circle, and similarly for the zeros that lie at ∞. Also, the zeros that lie on the unit circle are assumed to stay in some neighbourhood of the unit circle as x increases (we will show that these zeros remain exactly on the unit circle as x increases, which is what happens in the periodic boundary condition case).
Bearing in mind the above comments, we write Q̃(z) = (z − α)(z − β^{−1}) Q̃_1(z), where Q̃_1(z) is a polynomial of degree N − 2 whose zeros are O(1) as x → 0, and α, β = O(x^{1/2}), so that α lies inside the unit circle, β^{−1} outside. Define r(z) as the quotient of the two terms in the RHS of the functional relation (3.7); r(z) has no zeros or poles on or between the curves C_+ and C_−. Then in the x → 0 limit we see that |r(z)| ~ 1/z^N, so when |z| > 1, |r(z)| < 1. Thus ln[1 + r(z)] can be chosen to be single-valued and analytic when z lies in the annulus between C_− and C_+. We can therefore make a Wiener-Hopf factorisation of 1 + r(z) by defining the functions P_+(z) and P_−(z) as contour integrals over this annulus. Then P_+(z) is an analytic and non-zero (ANZ) function of z for z inside C_+, and P_−(z) is an ANZ function of z for z outside C_−. As |z| → ∞, we note that P_−(z) → 1. When z is inside the annulus between C_− and C_+ we have, by Cauchy's integral formula, the factorisation 1 + r(z) = P_+(z)P_−(z). We then define the functions V_±(z) (3.15), where V_+(z) is an ANZ function of z for z inside C_+ and V_−(z) an ANZ function of z for z outside C_−. The LHS (RHS) of the resulting relation is an ANZ function of z inside C_+ (outside C_−) which is bounded as |z| → ∞, and so the function must be a constant, c_1 say. When |z| < 1, we proceed the same way. Draw the curves C′_+ and C′_−, with C′_+ inside the unit circle, C′_− inside C′_+, and with α and all the zeros of A(z) inside C′_−. In the limit x → 0, |1/r(z)| ~ z^N, so |1/r(z)| < 1. Thus ln[1 + 1/r(z)] can be chosen to be single-valued and analytic between and on C′_+ and C′_−. We can then Wiener-Hopf factorise 1 + 1/r(z) by defining the functions P′_+(z) and P′_−(z) analogously. When z is in the annulus between C′_+ and C′_−, Cauchy's integral formula now implies 1 + 1/r(z) = P′_+(z)P′_−(z). Define V′_+(z) and V′_−(z) as in (3.21)-(3.22): we have now factorised V(z) into two factors, V′_+(z), which is ANZ for z inside C′_+, and V′_−(z), which is ANZ for z outside C′_−. When z is in the annulus between C′_+ and C′_−, we have the equality V(z) = V′_+(z)V′_−(z). When z is inside this annulus, we equate (3.10) with this factorised form, where now the LHS (RHS) is an ANZ function of z for z inside C′_+ (outside C′_−). Thus both sides of the equation are constant, c_2 say, and we arrive at equations (3.26) and (3.27). To evaluate the constant c_1/c_2, consider (3.27) in the limit z → ∞; we noted earlier that P_−(z), P′_−(z) → 1 as z → ∞, so from (3.5), (3.15) and (3.22) we deduce that c_1/c_2 = 1. (3.28) We may use equations (3.26) and (3.27) to derive recurrence relations satisfied by Q̃(z), which we can solve explicitly in the N → ∞ limit.
This yields an expression for Q̃(z) which is valid for z outside C_−. Taking the limit N → ∞ once more, so that the functions P_−(z) and P′_−(z) → 1, we obtain the explicit expressions (3.30) and (3.33) for Q̃(z). To derive an expression for V(z) valid between C_+ and C′_−, we use equation (3.27). Substituting this in, the infinite products involving α and β cancel, and we obtain, from (3.6) and (3.34), the expression (3.36) for the eigenvalue. This expression for the eigenvalue is still dependent on the parameter t, different values of t corresponding to different eigenvalues of the transfer matrix. All we know about t so far is that it is bounded as x → 0, and that it lies on the unit circle in the x → 0 limit. We shall now show that it in fact remains exactly on the unit circle as x increases.
We substitute into the functional relation (3.7), using equations (3.30) and (3.33), to get an expression for the product Q̃(z)V(z) which is valid when z is in the annulus between C_+ and C′_−. Substituting into (3.7), the function on the right-hand side is equal to zero when z is one of the N − 2 zeros of Q̃_1(z), or when z = ±t. For the latter case, substituting z = t and −t and dividing the resulting equations, we arrive at the relation (3.37) between α, x and t, which means that t must satisfy a quantisation condition involving a function φ(t). This implies that t lies on the unit circle for all x, there being 2N possible choices for t. The partition function depends on t only via t², so there are only N distinct eigenvalues. The right-hand side of (3.7) also vanishes when z is a zero of Q̃_1(z), so in the same way we show that the zeros of Q̃_1(z) lie exactly on the unit circle for all x. As the zeros lie exactly on the unit circle, we may shift the curves C_− and C′_+ so that they just surround the unit circle. Hence our expressions for Q̃(z) are valid all the way up to the unit circle: (3.30) is valid for |z| < 1, and (3.33) is valid for |z| > 1.
We now evaluate the partition function, as defined in (3.1), in the large-lattice limit.
When v is real, the eigenvalues (3.36) are complex, so as N → ∞, the partition function, a sum over the N eigenvalues defined by (3.39), becomes an integral over all the allowed values of t, where the integral is taken around the unit circle, and ρ(t) is some distribution function, independent of N and M . Substituting (3.34) into (3.41) then gives an expression for Z.
(The number of rows M is even to ensure periodic boundary conditions vertically, and so the ± sign in (3.37) is irrelevant.) The eigenvalue (3.36) contains two distinct types of factors: those that are raised to the power N, and those that are not. The terms that increase exponentially with N contribute to the bulk part of the partition function, the free energy per site in the thermodynamic limit. This factor is also independent of t, and can be taken out of the integral (3.41). The integral is then independent of N, so from (3.2) we obtain the free energy per site in the thermodynamic limit. This result agrees with the result for periodic boundary conditions (equations (8.9.9) and (8.9.10) of Ref. [1]). As z is arbitrary, this point may lie off the unit circle; it will, however, lie inside the annulus between C_+ and C′_− because of the restriction (3.4), and so we will be able to deform the contour to pass through this saddle point.
Analysis of Mortality in Africanized Honey Bee Colonies with High Levels of Infestation by Varroa destructor
The mite Varroa destructor (Anderson & Trueman 2000) is one of the world's most important pests of apiculture. In Brazil this mite does not encounter good conditions for parasitism, because weather conditions are not ideal for its maintenance and some strains of Africanized honey bees are resistant to the parasite. This status is reflected in the low number of colony deaths caused by varroatosis and in the stability of infestation levels. The aim of this study was to evaluate the damage caused by mite infestations in hives with higher levels of infestation than those considered normal for Brazilian apiaries. The level of infestation in each colony was determined, and the mortality rates of parasitized bees during development were periodically recorded. The G test of independence and a test of proportions were used to compare the data. The rates of mortality of pupae and larvae were mostly proportional to the level of infestation in each colony. All colonies showed mortality rates significantly higher than the control colony. In Africanized honeybee colonies with high rates of infestation by Varroa destructor, mortality rates varied from 19.27% to 23.28% in pupae (mean 21.27%) and from 15.71% to 16.15% in larvae (mean 15.93%), against 3.85% and 3.74% in the control colony, respectively. In the parasitized colonies the average rates of mortality caused by the harmful effects of the mite were, respectively, 5.52 and 4.26 times greater in those two developmental stages. Thus it can be concluded that, even in tropical regions like Brazil, it is necessary to give special attention to the levels of mite infestation (IR), particularly where the IR tends to be higher.
INTRODUCTION
Varroa jacobsoni Oudemans was described as an ectoparasitic mite of the Eastern honeybee, Apis cerana Fabricius. This parasite is incapable of reproducing on Apis mellifera brood (Anderson 1994, Anderson & Sukarsih 1996). The mite that parasitizes the Western honeybee (Apis mellifera Linnaeus) presents size variations and is a different species, renamed Varroa destructor (Anderson & Trueman 2000).
This new species started causing serious damage to these bees and consequently to apiculture (Ifantidis & Rosenkranz 1988, Erickson et al. 1994, Donzé & Guerin 1994, Anderson & Trueman 2000). The extensive damage caused by Varroa destructor is often related to the brief period of coevolution between this parasite and its new host since, except for African and Africanized honeybees, the other races of Apis mellifera did not develop an effective defense behavior against the mite (De Jong 1984, De Jong et al. 1984, Moretto et al. 1993, De Jong 1997, Boecking & Genersch 2008).
Varroatosis, as the parasitism is called, causes serious harm to developing bees as well as to adults (Delfinado-Baker et al. 1992, De Jong 1997, Beestma et al. 1999). As a consequence of parasitism during development, newly emerged honeybee workers present reduced weight, wing deformations and changes in several other appendages (mainly the legs), besides a significant decrease in the size of the abdomen. Other signs, such as hypopharyngeal gland malformation and decreased life span (in adults), are also commonly found (De Jong et al. 1982, Schneider & Drescher 1987, Schatton-Gadelmayer & Engels 1988, Beestma et al. 1989, Bowen-Walker & Gunn 2001, Romero-Vera & Otero-Colina 2002, Garedew et al. 2004, Genersch 2005, Kralj & Fuchs 2006). Furthermore, the mites act as vectors for the transmission of some viruses, for example DWV (Deformed Wing Virus) (Ball 1988, Martin 1998, Bowen-Walker et al. 1999, Martin 2001, Tentcheva et al. 2006).
Some researchers relate the Varroa mite to CCD (Colony Collapse Disorder), and van Engelsdorp et al. (2009) suggested that this syndrome originates from an interaction between pathogens and other factors that cause stress to the colony, whereby the mite could suppress some immune responses of its host. According to Le Conte et al. (2010) the hypothesis that CCD is due to the parasitic mite is feasible and, indeed, is reinforced by the studies carried out by van Engelsdorp et al. (2009). CCD was first reported in colonies of A. mellifera in the U.S.A. Interestingly, at the time of collapse, the infestation levels of Varroa had not reached levels known to cause economic damage or declining populations (van Engelsdorp et al. 2009). Considering this information, we may conclude that Varroa can play an important role as a cause of CCD even in Brazil, where infestation rates by the mite are not commonly high.
Considering this worldwide scenario of damage, controlling the mite has become necessary. But the use of acaricides has not shown satisfactory results, because of important drawbacks such as contamination of bee products and resistant mite lineages (Milani 1995, 1999, Hillesheim et al. 1996, Jacobs et al. 1997, Elzen et al. 1998, 2000, Wallner 1999, Bogdanov 2006, Martel et al. 2007).
In Brazil, where AHB are commonly used, the levels of infestation have remained stable since 1978 (between 2% and 3%, reaching 5% in some apiaries) (Gonçalves 1986, Rocha & Almeida-Lara 1994). However, as shown by Mattos & Chaud-Netto (2011), even at those levels of infestation the death rates of developing bees can be 2.28 times (for pupae) to 2.65 times (for larvae) greater than those found in less-infested colonies.
It is therefore apparent that new information and research on the relationship between the mite and its host, including the harm caused by the parasite, are very important for the development of new methods of controlling and preventing varroatosis.
The aim of this study was to quantify the damage caused by mites in colonies with levels of infestation higher than those considered common in tropical areas of Brazil (2-5%), by recording the loss of individuals during development.
MATERIAL AND METHODS
To quantify the level of infestation in each colony, the method of Stort et al. (1981) was used: frames covered with bees were removed from colonies of the apiary at the UNESP Campus in Rio Claro, and the workers were swept with a brush into containers loaded with 150 ml of 96% alcohol (a volume corresponding to about 300 bees). Next, the samples were placed in a shaker for 30 minutes so that the mites still attached to the bees could be separated. The samples were then transferred to plastic containers fitted internally with a fine metallic net, which separates bees and mites. All bees (NB) and mites (NM) were counted and the Infestation Rate (IR) was calculated: IR = (NM/NB) x 100. This experimental procedure was repeated three times, with a one-week interval between samples. The average of the three values obtained was then calculated in order to determine the final degree of parasitism in each colony, making possible the establishment of an infestation ranking.
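As a concrete illustration of this calculation (a minimal sketch; the sample counts below are invented placeholders, not data from the study):

```python
# Sketch of the IR calculation described above (Stort et al. 1981).
# The (NM, NB) counts are illustrative placeholders, not measured data.

def infestation_rate(n_mites: int, n_bees: int) -> float:
    """IR = (NM / NB) x 100: mites found per 100 bees sampled."""
    return 100.0 * n_mites / n_bees

# Three weekly samples per colony; the final IR is their mean.
weekly_samples = [(48, 305), (46, 298), (50, 312)]
weekly_irs = [infestation_rate(nm, nb) for nm, nb in weekly_samples]
final_ir = sum(weekly_irs) / len(weekly_irs)

print("weekly IRs (%):", [round(ir, 2) for ir in weekly_irs])
print(f"final IR = {final_ir:.2f}%")
```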
Three colonies were chosen from the infestation ranking, two of which had levels of infestation higher than the values usually found in Brazilian apiaries (between 2% and 5%). The colony named C1 presented IR = 15.91%, while colony C11 had IR = 10.96%. The colony that showed the lowest degree of infestation (C20: 0.20%) was used as a control.
The experiment began with the introduction of an empty comb with a demarcated area into each colony. That area contained approximately 500 brood cells. After the queens had laid eggs in the demarcated area, the combs were periodically inspected in order to follow the bees' development. The technique described by Garófalo (1977) was used to quantify the loss of developing bees.
The results were analyzed using the software BioStat 5.0 (Ayres et al. 2007).The binomial statistical test (Two Proportions) and the G Test of Independence were used to compare the data.
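The Two Proportions test can be reproduced with a standard pooled two-proportion z statistic; a minimal sketch follows (assuming BioStat's test is the usual pooled z test — an assumption, though it is consistent with the Z values reported in the Results below):

```python
from math import sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Pupal mortality, infested colony vs control (counts from the Results):
print(f"C11 vs C20: Z = {two_proportion_z(123, 638, 47, 1220):.2f}")   # ~10.95
print(f"C1  vs C20: Z = {two_proportion_z(285, 1224, 47, 1220):.2f}")  # ~14.02
```

Run as-is, this reproduces the Z = 10.95 and Z = 14.02 values quoted in the Results and Discussion.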
RESULTS AND DISCUSSION
The results showed that the rate of mortality of pupae was proportional to the degree of infestation in each colony, i.e., colonies with higher rates of infestation presented higher frequencies of individuals that died in the pupal stage (Fig. 1). The G Test of Independence indicated a significant interaction between the mortality of pupae and the infestation rates (G = 117.33, p < 0.0001; G (Williams) = 117.04, p < 0.0001).
In the colony used as control (C20) the experiment was carried out over three complete development cycles of Apis mellifera workers (Table 1; Fig. 3). In this period 1220 pupae were observed, of which 47 did not complete their development, representing a 3.85% loss.
Colony C11 was studied over a period of two worker-brood cycles (Table 2; Fig. 4). Of the 638 pupae observed in the experiment, 123 did not survive (19.27%). The test of proportions indicated a significant difference between the mortality rates of pupae obtained for the control colony (C20) and colony C11 (Z = 10.95; p < 0.0001).
In colony C1, with the highest infestation rate, a proportionally higher death rate for pupae was observed (Table 3; Fig. 5). During three cycles of development 1224 pupae were observed, and 285 of them did not complete the cycle (23.28%). A significant difference between the mortality rates of pupae obtained for the control colony (C20) and colony C1 (Z = 14.02; p < 0.0001) was detected.
The tests concerning the death of larvae showed a trend similar to that of the pupae, i.e., the colony with the higher rate of infestation showed the higher frequency of dead larvae (Fig. 2). The G Test of Independence confirmed the existence of a significant interaction between the mortality of larvae and the infestation rates. In colony C20 (control group), 1229 worker larvae were followed over three cycles of development (Table 1; Fig. 3). In this period, 46 dead larvae were recorded, corresponding to a mortality rate of 3.74%.
Survival tables were constructed based on the total number of eggs laid in the observation area (Garófalo 1977). Tables 1, 2 and 3 show the percentage of survivors during the stages of development in each colony. In those tables, days 3-4 were included in the egg stage and days 5-6 in the larval stage in order to show the mortality in these periods (considering that the morphological alterations to the next stage had not yet occurred) (Garófalo 1977). The duration of the pupal stage was considered to be 12 days, the most common duration according to Garófalo (1977). In all tables the frequency of survival in each 24-hour period was calculated, from which an overall survival frequency was derived for each stage of development. Finally, the number of adults produced was compared with the number of eggs laid and the final frequency of survival was calculated. The G Test of Independence was used to verify whether there was any relationship between the total number of dead larvae and the number of larval deaths that occurred between the 3rd and 4th days of larval life, an interval characterized by Laidlaw et al. (1956) as the period of the main effects of inbreeding. Laidlaw et al. (1956) observed that in colonies with low genetic variability (greater inbreeding) the developing bees show higher rates of mortality on the 3rd and 4th days of larval life, and Mattos & Chaud-Netto (2011) showed a similar correlation in data obtained for lower IR in AHB colonies (between 2% and 5%). In the present research, the G Test of Independence did not indicate a significant difference between the number of dead larvae and the number of larvae that died on the 3rd and 4th days of life (inbreeding effect). It can therefore be concluded that deaths occurring during the period cited by Laidlaw et al. (1956) did not contribute significantly to the total number of larval deaths.
CONCLUSION
The results obtained in this research revealed that in AHB colonies parasitized by Varroa destructor, mortality rates of larvae under conditions of high infestation ranged from 15.71% to 16.15% (mean = 15.93%). In the case of pupae, the frequencies of dead bees varied from 19.27% to 23.28% (mean = 21.27%). Considering that in the control group the rate of dead bees was 3.74% in the larval stage and 3.85% in the pupal stage, it can be deduced that in the infested colonies the average rates of mortality caused by the harmful effects of the mite were, respectively, 4.26 and 5.52 times greater in those two developmental stages. This implies a significant loss of developing bees in the colony and consequently a lower number of adults produced, which is reflected directly in hive productivity. It shows that a high IR can be significantly detrimental for beekeeping.
Thus it can be concluded that even in tropical regions, like Brazil, it is necessary to devote special attention to the levels of mite infestation (IR), particularly when the IR tends to be higher, such as in winter, or when colonies are especially susceptible to the mite or weakened for any reason.
Fig 1. Number of dead pupae (grey) recorded in Africanized honeybee colonies infested by Varroa destructor.
Figs. 3, 4 and 5 show the survival curves during those stages in each colony.
Table 1. Percentage survival during each immature stage in colony C20.
Table 2. Percentage survival during each immature stage in colony C11.
Table 3. Percentage survival during each immature stage in colony C1.
Final state interaction and \Delta I=1/2 rule
Contrary to the widespread opinion that the final state interaction (FSI) enhances the amplitude <2\pi; I=0|K^0>, we argue that FSI does not increase the absolute value of this amplitude.
Essential progress in understanding the nature of the ∆I = 1/2 rule in K → 2π decays was achieved in the paper [1], where the authors found a considerable enhancement of the contribution of operators containing a product of left-handed and right-handed quark currents, generated by the diagrams later called penguin diagrams. But for quantitative agreement with the experimental data, a search for some additional enhancement of the ⟨2π; I = 0|K⁰⟩ amplitude produced by long-distance effects was highly desirable. The necessity of additional enhancement of this amplitude due to long-distance strong interactions was also noted later in [2].
The attempts to take into account the long-distance effects were undertaken in [3] - [14].
In [3], the necessary increase of the amplitude < 2π; I = 0|K 0 > was associated with 1/N corrections calculated within the large-N approach (N being the number of colours).
One more mechanism of enhancement of the ⟨2π; I = 0|K⁰⟩ amplitude was ascribed to the final state interaction of the pions [6] - [14]. But as will be shown in the present paper, unitarization of the K → 2π amplitude in the presence of FSI leads to the opposite effect: a decrease of the ⟨2π; I = 0|K⁰⟩ amplitude.
We exploit the technique based on the effective ∆S = 1 non-leptonic Lagrangian [1]. Here the O_i are the four-quark operators and the c_i are the Wilson coefficients, calculated taking into account the renormalization effects produced by strong quark-gluon interactions at short distances. Using also the recipe for bosonization of the diquark compositions proposed in [2], one obtains the tree amplitude (2), where κ is a function of G_F, F_π, θ_C and some combination of the c_i. The numerical values of κ obtained in [1] and [2] turned out to be insufficient to reproduce the observed magnitude of the ⟨2π; I = 0|K⁰⟩ amplitude.
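The Lagrangian itself did not survive extraction here; its standard form (an assumption on our part, based on the operators and couplings named in the surrounding text rather than on the excerpt itself) is:

```latex
% Standard form of the effective \Delta S = 1 non-leptonic Lagrangian;
% assumed, not copied from the source excerpt.
\begin{equation}
  \mathcal{L}_{\Delta S = 1}
    \;=\; \sqrt{2}\, G_F \sin\theta_C \cos\theta_C \sum_i c_i\, O_i ,
\end{equation}
% G_F: Fermi constant; \theta_C: Cabibbo angle; O_i: four-quark operators;
% c_i: short-distance Wilson coefficients, as described in the text.
```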
Could rescattering of the final pions, occurring at long distances, change the situation? To answer this question, we first consider elastic ππ scattering itself.

The elastic ππ scattering. The general form of the amplitude of elastic ππ scattering is

T_{kl,ij} = A δ_{kl}δ_{ij} + B δ_{ki}δ_{lj} + C δ_{kj}δ_{li},

where k, l, i, j are the isotopic indices and A, B, C are functions of the Mandelstam variables s, t and u. The amplitudes with fixed isospin I = 0, 1, 2 are

T^0 = 3A + B + C,   T^1 = B − C,   T^2 = B + C.

To understand the problems arising in the description of ππ scattering in the framework of field theory, let us consider the simplest chiral σ model, in which A is given by (5) and B and C are obtained from A by the replacements s → t and s → u, respectively.
It follows from Eqs. (4) and (5) that the isosinglet tree amplitude is a sum of the resonance part, A^tree_Res = 3A^tree (6), and the potential part (7). The resonance part must be unitarized by summing up the chains of pion loops, that is, by taking into account the repeated rescattering of the final pions. At the one-loop order this gives (8), where ℜΠ_R is the renormalized real part of the closed pion loop [15], ℜΠ_R(s). The last two terms on the r.h.s. of this equation are absorbed in the renormalization of the resonance mass and of the coupling constant g_σππ. Though ℜΠ_R(s) can be calculated to leading order in g_σππ [16], in view of the very large value of this constant such a calculation does not give a proper estimate of ℜΠ_R(s). It will be explained below how to obtain a reliable magnitude of ℜΠ_R(s). The unitarized expression for A_Res is given in (10)², and Eq. (10) may be rewritten in a form leading to the cross section.

² Strictly speaking, the 4π intermediate state brings a correction to Eq. (10), but its contribution to ℑΠ(s) is equal to zero because 4m_π > m_K. As for ℜΠ_R(s), in our approach all separate contributions to it will be taken into account phenomenologically by introducing a form factor; see Eq. (18) below.
Of course, the amplitude T^(0) must be unitarized including the potential part B + C as well. But if this potential part is considerably smaller than the resonance one, the effect of FSI can be estimated roughly from A^unitar_Res. To understand what the unitarization of A^tree_Pot gives, we use the form of the S matrix of elastic scattering in which the total phase shift is a sum of the phase shifts produced by the separate mechanisms of scattering [17]. In other words, if there are a number of resonances and, in addition, potential scattering, the S matrix takes the form S = exp[2i(δ_Res + δ_Pot)], and the unitarized amplitude A^unitar can be written in terms of this total phase shift. The phase shifts δ_Res and δ_Pot can be taken from [18], where the Resonance Chiral Theory of ππ scattering was elaborated. This model incorporates two σ mesons, f_0(980), ρ(750) and f_2(1270). In addition, some phenomenological form factors were introduced in the σππ, ρππ and f_2ππ vertices. Their appearance follows in field theory from the result (10), according to which the effect of ℜΠ_R(s) may be incorporated in g²_σππ(s), as in Eq. (18). The model gives a quite satisfactory description of the observed behavior of the phase shifts δ^0_0(s), δ^2_0(s) and δ^1_1(s) in the range 4m²_π ≤ s ≤ 1 GeV². The phase shifts δ^0_2(s) and δ^2_2(s) turn out to be consistent with the results obtained using the Roy dispersion relations.
Using the parameters found in [18], one obtains the result (26). This result agrees with the Cabibbo-Gell-Mann theorem [20], according to which the K → 2π amplitude vanishes in the limit of exact SU(3) symmetry. From Eq. (26), in the leading order of perturbation theory one has (27). But, as noted above, perturbation theory does not give a reliable value of ℜΠ_R(s), and for its estimate the more elaborate procedure described above must be applied. The unitarization of the amplitude (26), done in accordance with the prescription (10), leads to the result (28). This part yields 0.61 of the value of the initial amplitude (2), and the part connected with the potential rescattering, being negative, cannot change the conclusion that FSI diminishes the tree amplitude. The influence of FSI on the K⁰ → 2π decay was studied in the framework of the σ model in the papers [21]. In these papers the authors, however, put ℜΠ = 0; then A^unitar = A^tree/(1 − iℑΠ), and this formula was used by them to estimate the FSI effects in the K⁰ → 2π decay. But earlier the same authors had found that ℜΠ ≠ 0 [16]. In this case, the unitarization leads to the A^unitar for elastic ππ scattering given in Eq. (10) and to A^unitar_Res(K → 2π; I = 0) in Eq. (28). As is seen from Eq. (28), FSI could increase or diminish the K → 2π amplitude depending on the relative magnitudes of cos δ and (1 − ℜΠ_R). We have shown that cos δ/(1 − ℜΠ_R) < 1, which allows us to affirm that FSI diminishes the isosinglet part of the K → 2π amplitude.
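The algebra behind the final inequality can be reconstructed from the quantities named in the text (a sketch under the stated assumption that the unitarized amplitude takes the form used in [21] with ℜΠ_R restored):

```latex
% Sketch: unitarized resonance amplitude with \Re\Pi_R restored.
\begin{equation}
  A^{\mathrm{unitar}}
    \;=\; \frac{A^{\mathrm{tree}}}{\,1 - \Re\Pi_R(s) - i\,\Im\Pi(s)\,},
  \qquad
  \tan\delta \;=\; \frac{\Im\Pi(s)}{\,1 - \Re\Pi_R(s)\,}.
\end{equation}
Writing the denominator as
$\bigl(1-\Re\Pi_R\bigr)\bigl(1 - i\tan\delta\bigr)
  = \frac{1-\Re\Pi_R}{\cos\delta}\, e^{-i\delta}$ gives
\begin{equation}
  \bigl|A^{\mathrm{unitar}}\bigr|
    \;=\; \bigl|A^{\mathrm{tree}}\bigr|\,
          \frac{\cos\delta}{\,1-\Re\Pi_R\,},
\end{equation}
which is the suppression factor $\cos\delta/(1-\Re\Pi_R) < 1$ quoted above;
setting $\Re\Pi_R = 0$ recovers
$A^{\mathrm{unitar}} = A^{\mathrm{tree}}/(1 - i\,\Im\Pi)$, the formula used in [21].
```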
Conclusion.
We have not found an enhancement of the amplitude ⟨ππ; I = 0|K⁰⟩ due to the final state interaction of the pions. On the contrary, our analysis has shown that FSI diminishes this amplitude. Hence, FSI is not the mechanism that brings us nearer to an explanation of the ∆I = 1/2 rule in K → 2π decay. As for the results of [3] - [5], obtained without unitarization of the K → 2π amplitude, they ought to be reconsidered.
Total Synthesis and Prediction of Ulodione Natural Products Guided by DFT Calculations
Abstract A biomimetic synthetic strategy has resulted in a two‐step total synthesis of (±)‐ulodione A and the prediction of two potential natural products, (±)‐ulodiones C and D. This work was guided by computational investigations into the selectivity of a proposed biosynthetic Diels–Alder dimerization, which was then utilized in the chemical synthesis. This work highlights how biosynthetic considerations can both guide the design of efficient synthetic strategies and lead to the anticipation of new natural products.
1.1 Overview of previous total synthesis of ulodione A (1)
• 9 steps
• 5.2-5.5% overall yield (9-pot approach), 4.5-4.7% overall yield (7-pot approach)
• 2.6 mg of 1 prepared

Subjecting alcohol 3 to conditions which mimic the isolation conditions

Alcohol 3 (20 mg, 167 µmol) was dissolved in EtOAc (0.5 mL) in a sample vial equipped with a magnetic stirring bar. The vial was sealed and the contents were stirred at room temperature for 7 days, after which time the solvent was removed under reduced pressure and the crude reaction mixture was redissolved in CDCl3 and analysed by 1H NMR spectroscopy. No change was observed.
Subjecting alcohol 3 to aqueous reaction conditions

Alcohol 3 (20 mg, 167 µmol) was dissolved in deionised water (0.5 mL) in a sample vial equipped with a magnetic stirring bar. The vial was sealed and the contents were heated to 65 °C for 7 days, after which time the solvent was removed under reduced pressure and the crude reaction mixture was redissolved in CDCl3 and analysed by 1H NMR spectroscopy. No change was observed.

Subjecting ulodione B (2) to conditions which mimic the isolation conditions [3]

Ulodione B (20 mg, 120 µmol) was dissolved in EtOAc (0.5 mL) in a sample vial equipped with a magnetic stirring bar. The vial was sealed and the contents were stirred at room temperature for 7 days, after which time the solvent was removed under reduced pressure and the crude reaction mixture was redissolved in CDCl3 and analysed by 1H NMR spectroscopy. No change was observed.
Subjecting ulodione B (2) to aqueous reaction conditions
Ulodione B (20 mg, 120 µmol) was dissolved in deionised water (0.5 mL) in a sample vial equipped with a magnetic stirring bar. The vial was sealed and the contents were heated to 65 °C for 7 days, after which time the solvent was removed under reduced pressure and the crude reaction mixture was redissolved in CDCl3 and analysed by 1H NMR spectroscopy. The composition of the crude reaction mixture was approximately 65% ulodione B (2), 32% alcohol 3, 1.5% ulodione A (1) and 0.7% ulodione D (9). See the 1H NMR spectrum below (Figure S1) and see section 4.5 for spectroscopic data of ulodione A (1) and ulodione D (9).

Figure S1: 1H NMR spectrum (600 MHz, CDCl3) of the crude reaction mixture obtained when ulodione B (2) was subjected to aqueous reaction conditions (65 °C for 7 days). Representative signals used to determine the quantities of ulodione D (9), A (1) and B (2) and alcohol 3 are shown.
Synthesis of ulodione A using 2,4-dinitrobenzenesulfenyl chloride
Based on conditions reported by Reich and co-workers [4]. To a stirred solution of alcohol 3 (200 mg, 1.59 mmol, 1 equiv.) in dichloroethane (2 mL
Synthesis of ulodione A using PEP-K
Based on conditions reported by Kanai and co-workers [5]. To a stirred solution of alcohol 3 (20 mg, 160 µmol, 1 equiv.) and PEP-K (147 mg, 711 µmol, 4.4 equiv.) in anhydrous DMF (4 mL) was added TBAHS (35 mg, 95 µmol, 0.6 equiv.) and the reaction was stirred at 100 °C for 21 h. The reaction mixture was concentrated under reduced pressure to give an orange oil. The crude mixture was analysed by 1H NMR spectroscopy, using benzyl benzoate as an internal standard. Ulodione A was formed in an NMR yield of 22%.
See page 11 for spectroscopic data for ulodione A (1).

Table S1 - Comparative NMR data for synthetic and natural ulodione A (1) [3], and assignments of synthetic ulodione C (8) and D (9).
Procedure 2: Large scale synthesis of ulodione A using T3P with optimised purification
To a stirred solution of alcohol 3 (1.00 g, 7.93 mmol, 1 equiv.) and i-Pr2NEt (2.00 mL, 11.5 mmol, 1.5 equiv.) in EtOAc (6 mL) at 0 °C was added T3P (50% w/w in EtOAc, 6.00 mL, 10.

To a stirred solution of a mixture of ulodione A (1), ulodione C (8) and ulodione D (9) (50 mg, 0.23 mmol, 1 equiv.) in MeCN-d3 (1.8 mL), in a 5 mL round-bottom flask, was added durene (17.2 mg, 0.13 mmol). An aliquot from this mixture was analysed by 1H NMR spectroscopy (400 MHz, t = 0). The aliquot was returned to the reaction mixture and DBU (60 mg, 0.39 mmol, 1.7 equiv.) was added in a single portion. The resulting solution was heated at 80 °C for 1 hour. A second aliquot was then analysed by 1H NMR spectroscopy (400 MHz, t = 1 h). Consumption of ulodione D (9) is evident from the decrease in the integral of the peak at 5.77 ppm, while the formation of bis-enone 10 (d.r. 4:1) is evident from the emergence of peaks at 6.05 ppm and 6.01 ppm, representing the major and minor diastereomers of 10, respectively.
Computational methods
All density functional theory (DFT) calculations were performed using the PW6B95D3 functional, [1] which includes the empirical D3 dispersion correction [2] and was demonstrated to be among the most accurate hybrid functionals for main-group thermochemistry, kinetics and non-covalent interactions. [3]

Initial calculations were primarily focused on obtaining the minimum-energy paths for the key expected products with the nudged elastic band (NEB) approach. [4] NEB calculations were performed using the def2-SVP basis set, the geometrical counterpoise correction gCP [5] to account for the basis set superposition error, and the chain-of-spheres (RIJCOSX) approximation for two-electron integrals, [6] as implemented in the Orca 4.2.1 program. [7,8] The highest-energy geometries along each path were then used as starting points for geometry optimizations of transition state (TS) structures. All the TS geometries, as well as the structures of substrate complexes and products, were optimized using the PW6B95D3 functional and the def2-TZVPP basis set. The character of all stationary points was confirmed by analytical calculation of the Hessian, and for each transition state we identified only one imaginary frequency, associated with the expected reaction coordinate. Gibbs free energy differences were calculated assuming the rigid-rotor and harmonic-oscillator approximations and a temperature of 298.15 K. All values reported in the main article were obtained in the gas phase, which should be representative of the experimental conditions, which included a non-polar solvent, namely ethyl acetate. Solvation effects exerted by bulk water were calculated with the conductor-like continuum solvation model CPCM for TSA, TSA/G and TSB/D and the corresponding substrate complexes, to estimate the influence of a polar solvent on the energy barriers. Energies including the effect of CPCM were calculated for the geometries previously optimized in the gas phase. The distortion-interaction analysis was performed using the protocol proposed by Ess and Houk. [9,10]

Quasi-classical molecular dynamics (MD) simulations initiated from the optimized structure of TSB/D were performed employing the Atom-Centered Density Matrix Propagation (ADMP) formalism [11] and the PW6B95D3/def2-TZVP level of theory. For this purpose the geometry of TSB/D was reoptimized with the slightly smaller def2-TZVP basis set and used as the starting point for the MD simulations. 70 initial conditions for the MD simulations were generated based on randomized velocity vectors for all atoms, for which the initial total nuclear kinetic energy was set to the zero-point vibrational energy obtained from the frequency calculations. Temperature was kept constant during the simulations (353 K) by applying a velocity-rescaling scheme with the rescaling occurring every five steps. The time step for the integration of Newton's equations of motion for the nuclei was set to 1 fs, and the Kohn-Sham self-consistent field was converged at each step during the dynamics. All trajectories were propagated for 200 fs, and all 70 trajectories were then analysed to identify the products. The optimizations of stationary points and the MD simulations were performed with Revision C.01 of the Gaussian 16 program. [12]

The analysis of noncovalent interactions between the two reacting dienone molecules was performed for all investigated [4+2] transition states using the NCIPlot program. [13,14] Intrinsic bond orbital analysis was carried out using the IboView program. [15]
6.2 Bifurcating reaction that could enable the formation of products B and I

TSB/I is another bifurcating ambimodal TS that could lead to the formation of cyclohexene B and dihydropyran I. This transition state has a slightly higher ΔG‡ (13.1 kcal mol-1) than the primary transition states presented and discussed in the main article. Considering that TSB/D is associated with a barrier lower by merely 0.9 kcal mol-1, TSB/I could be an alternative route for the formation of product B. Similar to TSB/D, the second forming bond leading to product B is shorter than the second forming bond leading to product I (2.81 vs 3.02 Å), thus indicating strong dynamic control leading to cyclohexene B as the major/sole product. However, even if dihydropyran I forms, it can undergo the reverse reaction to dienone 4 (23.4 kcal mol-1). Cyclohexene B could also form through a [3,3]-Claisen rearrangement of product I, which has a barrier of 14.1 kcal mol-1.
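To put the 0.9 kcal mol-1 barrier difference in kinetic terms, transition-state theory implies only a modest rate preference; a minimal back-of-the-envelope check on our part, not part of the reported calculations:

```python
from math import exp

R_KCAL = 1.987204e-3  # gas constant in kcal mol^-1 K^-1

def rate_ratio(ddg_dagger_kcal: float, temperature_k: float) -> float:
    """Relative rate k1/k2 implied by a barrier difference ΔΔG‡ (TST)."""
    return exp(ddg_dagger_kcal / (R_KCAL * temperature_k))

# 0.9 kcal mol^-1 difference between TSB/D and TSB/I quoted above
print(f"298.15 K: {rate_ratio(0.9, 298.15):.1f}-fold")  # ~4.6-fold
print(f"353.15 K: {rate_ratio(0.9, 353.15):.1f}-fold")  # ~3.6-fold (MD temperature)
```

A roughly 4- to 5-fold preference is consistent with treating TSB/I as a plausible secondary route rather than a negligible one.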
Figure S5 - Calculated (PW6B95D3/def2-TZVPP) free energy profile diagram and transition state structures for the formation of products B and I.

6.3 Minimum energy path for the formation of ulodione A via TSA and analysis of intrinsic bond orbitals

Fig. S6 shows a minimum energy path calculated using the NEB approach (see the Computational methods section) with single-point energies calculated at the PW6B95D3/def2-TZVPP level of theory. The insets show intrinsic bond orbitals (IBOs) for the most important geometries along the path, which allow one to analyse the flow of valence electrons involved in the formation of new bonds. The selected valence orbitals correspond to the three double bonds involved in the [4+2] reaction. The IBOs plotted for the TSA geometry demonstrate a symmetric flow of electrons from both reacting terminal α-alkene carbon atoms of the approaching dienone 4 monomers. This indicates that the formation of the first C-C single bond involves one electron from each of the involved π (green and blue) orbitals. Once the first C-C bond is formed (first structure after TSA), both α carbon atoms possess an equally sized fragment of an IBO (green), which is then involved in the formation of the second C-C single bond (red/orange; also with a symmetric flow of electrons) and the C=C π bond (green). This indicates that once the first C-C single bond is formed, the system adopts biradical character, leading to barrierless formation of the second C-C single bond.

Figure S6 - The minimum energy path for the most energetically favorable reaction path leading to ulodione A. The structures above include the analysis of intrinsic bond orbitals for the most crucial structures along the reaction path.
We base our reasoning on the fact that TSA is C2-symmetric and the electronic density right after the formation of the first C-C bond (green) is evenly distributed between the two α carbon atoms, which would not be the case if the mechanism had ionic character. In particular, an ionic-type mechanism would involve movement of whole electron pairs, localization of the electronic density only on one of the two α carbon atoms and most likely an asymmetric geometry of the corresponding transition state. Furthermore, a concerted mechanism with radical-like character, albeit completely synchronous, has been proposed for the [4+2] cycloaddition of ethylene and 1,3-butadiene. [16] Our analysis demonstrates that despite the highly asynchronous nature of the [4+2] cycloaddition of dienone 4, the basic electronic characteristics of the reaction are analogous to that for the [4+2] cycloaddition of ethylene and butadiene.
Effects of polar solvent on Gibbs free energy barriers.
Since the biosynthesis of ulodione A could occur in an aqueous environment, we additionally considered how bulk water could affect the ΔG‡ of TSA, TSA/G and TSB/D. For this purpose, we used the substrate complexes and transition state structures optimized in the gas phase and recalculated the potential energies including the conductor-like continuum solvation model CPCM with the epsilon value for bulk water. In particular, we see that the polar environment stabilizes TSB/D by 1.9 kcal mol-1 (the resulting ΔG‡ amounts to 10.3 kcal mol-1), which indicates that cyclohexene B might also be formed alongside ulodione A in aqueous environments. In contrast, TSA and TSA/G are stabilized by 0.6 and 0.7 kcal mol-1 (with resulting ΔG‡ of 7.9 and 10.4 kcal mol-1), respectively. Even though these differences are well below the accuracy of the method, it is clear that all of the primary products could readily form in polar environments via the proposed mechanisms and with analogous kinetics.
Analysis of noncovalent interactions
To rationalize the difference in Gibbs free energy barriers for all the considered transition states that could enable the formation of the most stable products (i.e., cyclohexenes A, B, C or D), we performed an analysis of non-covalent interactions with the NCIPlot program. [13,14] Figure S7 shows the comprehensive analysis, part of which was shown in the main article in Scheme 3. The surfaces plotted between the reacting monomers qualitatively demonstrate the extent of attractive interactions, which correlates with the calculated interaction energies. The green colour of these surfaces indicates that these transition states are primarily stabilized by dispersion interactions. The strongest stabilization can be observed for TSA and TSB/D, which enable the formation of the two experimentally observed [4+2] products, namely cyclohexenes A and B. In contrast, the weakest stabilization by dispersion can be observed for the transition states in which the dienophile is the β-alkene of the dienone, namely TSC, TSC' and TSD. These three latter transition states are also characterized by more pronounced structural distortion from the equilibrium geometries, which results in high Gibbs free energy barriers.
Analysis of the 5W Model of Uniqlo Brand Communication in the Era of New Media
With the rapid development of new media, the communication and marketing methods of many clothing brands have moved to the Internet. As a giant in the apparel industry, Uniqlo's communication concept has also changed under the influence of new media. In this paper, the 5W model is used to analyze Uniqlo's brand communication in depth, striving to present a detailed picture of Uniqlo brand communication in the new media environment. Keywords—new media; Uniqlo; brand communication
A. Uniqlo Leader
In the process of shaping a corporate brand image, the image of the leader affects the business performance of the company. On the one hand, the leader is the representative of the whole enterprise, and his image affects the consumers who buy its products and services. On the other hand, consumers develop a deeper understanding of the brand through their associations with business leaders. As leader, the formulation of brand communication strategies and the choice of communication routes require his behind-the-scenes participation. The communication of Uniqlo's founder and CEO, Tadashi Yanai, can be said to be exemplary.
When Tadashi Yanai founded Uniqlo, he hoped to make it a "great warehouse where clothes can be selected at any time." When Uniqlo opened its first store, it paid close attention to brand communication: "If you want to do business in an unfamiliar area, customers will not know you or come without promoting the brand and product through advertisements." Even now that it is well known, Uniqlo does this every time it opens a store. Starting from Uniqlo's first store, Tadashi Yanai insisted on publicizing on radio and television. He believes that Uniqlo's advertising is not a unilateral transmission of information, but should allow the audience to remember the brand and the characteristics of the brand's goods after seeing it.
B. Uniqlo Sales Staff
Salespeople are the first face of the company directly facing consumers, and the first impression they give consumers is very important. The spirit and image of the brand are reflected in the sales staff.
All sales personnel working in Uniqlo stores, regardless of position, must undergo multi-faceted training to ensure professional work behavior. The sales staff know Uniqlo's product information; they are responsible for receiving customers and introducing the materials and performance of the products, and sometimes they need to handle after-sales issues and complaints. Every store salesperson must smile and use polite expressions when communicating with customers who need help. They also carry a notebook with them and record everything the manager tells them, as well as suggestions from customers. Uniqlo has a unique concept for strengthening the training of its employees: they are trained as truly international operators who can take responsibility for running stores and businesses, not merely as individuals. This ensures not only the quality of the service but also that employees' behavior upholds the brand image.
C. Consumers
In Uniqlo's brand communication, consumers, like leaders and employees, can spread the Uniqlo brand through relevant channels. Unlike leaders and employees, however, consumers not only play the role of disseminators of brand information but also act as its recipients.
Uniqlo's summer UT line always cooperates with various cartoon franchises or famous designers to launch jointly designed styles, such as the One Piece and Sesame Street series, and classic series with Hermès designers, which are deeply loved by consumers; even celebrities are no exception. Taking this opportunity to spread the brand is a good idea. Taking the One Piece series of UT as an example, passers-by photographed Liu Haoran and Xu Weizhou wearing the jointly designed One Piece UT at the airport. After the photos were posted to the Internet, fans rushed to Uniqlo stores to purchase the same style, resulting in a scene where it was "hard to find the same style". Some consumers also like to style Uniqlo clothing and upload it to new media apps such as Xiaohongshu and Bilibili. They share their tips with netizens to create resonance, leading their purchase behavior and ultimately achieving the effect of promoting the brand. Therefore, the author believes that it is crucial to turn consumers into communicators.
A. Uniqlo's Brand Vision
Brand visual identity is the most intuitive part of brand communication. It can reflect the culture and spirit of the company and brand, and is the appearance of the brand. The closest thing to consumers is brand visual design. Brand vision mainly includes brand logo, product design, store design and so on.
The Uniqlo brand logo and all its products carry the English word Uniqlo, displayed in red and white. The white contrasts strongly with the red background, which has a strong visual impact. The corporate logo is the main idea passed to consumers: "Clothes are supporting roles, and the people who wear clothes are the protagonists", which emphasizes Uniqlo's people-oriented concept. In store design, the store becomes an "environment that allows customers to choose freely." When a shop is fitted out, the main passage in the store must be straight and spacious, and the ceiling is left unsuspended as far as possible; exposed cement framing does not matter, as it looks spacious and gives a sense of space. The windows and lighting are mainly ordinary lighting; the overall lighting is clean and bright, creating an easy, open atmosphere in line with the characteristics of a popular brand.
B. Uniqlo's Brand Concept
The brand concept is a key part of brand communication and represents the brand's thoughts and soul. Brand values, visions, slogans, and emotional appeals all belong to the brand concepts.
Uniqlo's brand slogan is "LifeWear Applicable Life", and LifeWear refers to a new concept of clothing: high-quality fabrics, stylish and precise design aesthetics, and comfortable clothing. Instead of pursuing fashion like other similar brands, Uniqlo is committed to developing innovative functional clothing and high-quality clothing that emphasizes comfort, maximizing the comfort of wearing clothing while adding natural beauty.
C. Uniqlo Brand Behavior
Brand behavior is the method and means of putting the brand concept into practice. It is mainly divided into two parts: brand behavior within the company and external brand behavior. Internal brand behavior includes employee training, employee benefits, company systems, etc. External brand behavior includes employee service levels and public relations.
Taking employee training as an example, Uniqlo is one of the few companies that attach great importance to employee training, covering, for example, smile practice, the seven hospitality phrases, and actively handing over shopping baskets. Uniqlo employees always treat each customer in the most polite and attentive manner. Uniqlo has a very clear philosophy for training employees: it requires employees' smiles to be sufficiently appealing to make customers feel at home. Employees are also asked to call out "Welcome" every time they meet a guest, and to be polite and kind. To this end, employees practice by "biting chopsticks". This rigorous approach allows each Uniqlo employee's smile to reach the customer's heart. In addition, Uniqlo employees must master the skill of folding clothes quickly and neatly. Uniqlo employees are highly motivated to serve every consumer: in rainy weather, every consumer's paper shopping bag is thoughtfully covered with a transparent plastic bag to prevent the clothes from getting wet. These are the brand behaviors for which Uniqlo is talked about by consumers, and consumers who have experienced them have become loyal Uniqlo customers.
A. Uniqlo Sales Channels
By holding promotions, it is possible to stimulate consumers to buy products. Uniqlo spends little on fashion shows, celebrity endorsements and the like; its brand and product promotion runs mainly through its stores, that is, the strategy of "sales as advertising". In Uniqlo stores, the outfits on the mannequins and the models on the posters play a big role in sales promotion. The outfits on these models are put together by Uniqlo employees, giving consumers a simple and stylish impression when entering the store. In addition, Uniqlo promotes sales by creating a sense of scarcity. To create this scarcity, staff allocate different products to different stores when new products are launched, so that each store has its own special products. Such a marketing approach can be very attractive to consumers and promote purchases. This is, in effect, a hunger marketing strategy adopted by Uniqlo.
Uniqlo also launches limited-time promotions in stores, which on the one hand attract potential customers and on the other reduce inventory. Overall, Uniqlo's short-term price strategy is more likely to achieve scale benefits than a significant price cut after a season or a few months.
B. Uniqlo's Advertising Channels
Tadashi Yanai has said that publicity advertisements are love letters that companies write to customers. Uniqlo's advertising not only benefits from the design of a professional public relations team, but also carries the mission of brand communication, with strong creativity and the emotional input of the brand. In the Chinese market in 2017, Uniqlo adopted the rap format in order to attract young people, inviting consumers from different regions to record dialect versions of a rap dubbing. This series of dialect advertisements, covering Cantonese, Shanghai dialect, Dongbei dialect and others, carried out localized advertising for the light down jacket series. At the time it attracted a great deal of attention on social networks. This was a localization attempt by the Chinese team to cater to the tastes of young people based on Japanese creative materials.
A bigger breakthrough came with the latest HEATTECH series of advertisements in 2018. For this series, Uniqlo invited a group of well-known Japanese electronic musicians, famous for their musical style: they had appeared in advertisements for the Apple mobile phone series, and their arrangements guided the music of "Eight Minutes" at the closing ceremony of the Rio Olympics. The HEATTECH advertisements of Uniqlo therefore also use this fantasy music style. In the video, the audience sees three girls, unafraid of the cold, floating on the ice, expressing the warmth of HEATTECH technology in an avant-garde way. Uniqlo no longer relies on its usual approach of print advertisements showing a model standing in thermal underwear.
In recent years, Uniqlo has increased its efforts in advertising and made great breakthroughs in creativity. Through individual dialect advertisements and cool advertising images, it has refreshed the conservative, unchanging image it once had in the hearts of consumers, making their eyes shine.
C. Uniqlo's Public Relations Channels
The purpose of the company's public relations activities is to establish a good relationship with the society, so as to leave a good brand impression in the hearts of consumers, thereby enhancing the brand image.
Beginning in July 2017, Uniqlo launched a whole-product recycling campaign with the theme of "one piece of clothing delivering thousands of loves" in stores nationwide, appealing to caring people to recycle unused clothes and donate them to stores. In August, Uniqlo teamed up with the China Soong Ching Ling Foundation to travel to Ningxia and hand over the recycled clothes to local children and their families. While passing on love, the children feel the warmth of society, which helps them grow up healthy and confident and allows the world to develop in a better direction. Through these recycling activities it is possible to pass on love and help consumers establish a green lifestyle, realizing the full value of the clothing.
The author believes that Uniqlo's public relations activities stop at promoting its social responsibility and product quality; there is a slight lack of awareness of the core value of the brand. More important for public relations activities is to use the power of new media, taking advantage of this rapidly evolving information age to promote the brand value of the company. Therefore, on the basis of promoting its social responsibility and product quality, Uniqlo should focus on using new media to showcase the brand's high quality and low price, thus enhancing the brand's reputation.
D. Uniqlo's Marketing Channels
New media marketing is a marketing approach that uses new media channels as a vehicle, using modern marketing theories and the overall environment of the Internet. Brand marketing with new media channels can not only expand the brand's visibility and influence, but also close the distance with consumers.
1) Cross-border marketing:
In terms of cross-border marketing, Uniqlo collaborated with the magazine Weekly Shonen Jump to launch the JUMP 50th Anniversary Series UT in the summer of 2018. The 57 original-print UT designs, covering 22 classic anime such as One Piece, Naruto and Silver Soul, were quickly snapped up by consumers. Uniqlo took advantage of this cross-border marketing and successfully evoked consumers' memories of their youth. Some people think that Uniqlo's cross-border marketing merely borrowed the JUMP 50th anniversary, that it is only a kind of nostalgia and that this marketing method is difficult to repeat, but the author believes that Uniqlo has transformed from a former bargain seller into the current "fashion brand", and its development is inseparable from cross-border marketing. For Uniqlo, such cross-border cooperation can not only be repeated but has become part of its core competitiveness. The pattern on a UT not only shows the consumer's preference but also conveys a feeling to the people around them. If the people around them share the same feelings, they too will be prompted to buy the Uniqlo brand.
2) Experience marketing: In terms of experience marketing, Uniqlo consumers can choose to pick up goods in an offline store after placing an order online. After picking up the goods, they can try them on immediately; if the size is not suitable, the size and color can be changed. At the same time, stores provide free alteration of trouser length, as well as services such as scanning a code to check the inventory of relevant products, saving unnecessary time in the purchase process. Since Uniqlo's stores support picking up goods in different locations, consumers can also purchase goods for family and friends who are not nearby. On the eve of the Spring Festival in 2019, Uniqlo also focused on promoting this experiential marketing model, aiming to attract more people working away from home, who could not go back for the New Year, to purchase Uniqlo products for their families.
A. Uniqlo's Internal Audience
For Uniqlo, its main practitioners constitute its internal audience. When Uniqlo selects employees and management, it focuses on whether they truly understand and love the company. Most Uniqlo employees are loyal fans of the brand, which enables them to absorb Uniqlo's management philosophy and corporate culture and to better publicize and implement corporate marketing strategies. Uniqlo believes that the audience within the company directly affects the development and image of the company. It uses loyal employees as a driving force to establish an elite image in the industry, creating a two-way relationship in which employees also benefit from the development of the Uniqlo brand. At Uniqlo, it is not only leaders who can become shareholders: as long as employees are hardworking and loyal, they can become shareholders of the company. As shareholders, they are supporters of the Uniqlo brand and have an inherent advantage in communication.
B. Uniqlo's External Audience
Consumers, the media, the government and others all belong to Uniqlo's external audience. They have no direct interest relationship with the enterprise, but they play an important role in its development and marketing. The media are the main intermediary for the brand's cultural communication and development, and can guide public opinion and shape the brand image. The government can directly influence the development of the company and plays a pivotal role in branding. Consumers, the group at the core of the brand relationship, can directly communicate the brand's image and provide feedback on the brand. Enterprises must maintain a good and stable relationship with all major interest groups in order to successfully shape the social image of the brand. Uniqlo can be proud that it is a reputable brand in the eyes of consumers, governments and the media.
VI. COMMUNICATION EFFECT (WITH WHAT EFFECT)
The effect of communication is the response caused by the information at the level of cognition, emotion, and behavior after it reaches the audience. It is an important criterion for testing the success of communication.
In terms of sales, Fast Retailing Group released the 2018 Uniqlo financial report in October 2018: the company achieved total revenue of 2.13 trillion yen, a year-on-year increase of 14.5%.¹ The Group's outstanding performance was mainly due to the sales contribution of Uniqlo in China; the financial report shows that China has become the main driving force of Uniqlo's overseas market. Uniqlo also highlighted in its earnings report the help of new media channels in increasing sales of its brands in the Chinese market. According to the financial report, Uniqlo (China) spreads brand information online very quickly, and online store sales performance is even stronger.

¹ Quoted from the 2018 Annual Report of the Fast Retailing Group. https://www.qianzhan.com/analyst/detail/220/181015-e9c3cedb.html

In terms of popularity and reputation, the 2018 Uniqlo retail market research report shows a new trend in consumer demand: more than 50% of people consult friends or online buyers when they shop, and the socialized promotion of brand awareness and reputation through new media channels has become an important decision-making reference.² Both online and offline, customers want to get the latest and most comprehensive information through brand communication. More than half of customers search for the latest brand and price information on the Internet before shopping, and most customers trust and continue to buy Uniqlo products because of the improvement in its brand awareness and reputation.
In terms of consumer feedback, Uniqlo's assessment of the effectiveness of its brand communication mainly comes from its own questionnaire surveys. After becoming a WeChat member, Uniqlo consumers receive a message from the WeChat public account after each purchase and fill out a questionnaire. Through these surveys, the company can learn about loyalty to, trust in and satisfaction with the brand, as well as the main types of products purchased and the reasons for purchase, consumers' own preferences, and their opinions and suggestions for the brand. Such research benefits the development of the brand, and the brand can improve and innovate according to the feedback.
Uniqlo has been adjusting its products, brand communication channels and disseminated content through data indicators and consumer feedback, so as to achieve the most appropriate communication posture. From these data indicators and consumer feedback, it can also be seen that Uniqlo's brand communication has rapidly improved its popularity and reputation through new media channels, and its sales have shown a steady growth trend.
VII. CONCLUSION
Given the development trend of the Internet, adopting new media to spread its own brand is the right choice. As e-commerce platforms become more and more mature, Uniqlo has also chosen media formats such as Weibo and WeChat that are closer to consumers. Uniqlo keeps its brand in consumers' daily life through promotion, advertising and We-Media marketing on various new media platforms, which brings the brand and consumers closer together and ultimately achieves the effect of expanding the brand.

² Quoted from "New retail era: why does Uniqlo grow worldwide?" http://www.sohu.com/a/276052991_168180
Advancing Key Gaps in the Knowledge of Plasmodium vivax Cryptic Infections Using Humanized Mouse Models and Organs-on-Chips
Plasmodium vivax is the most widely distributed human malaria parasite, representing 36.3% of the disease burden in the South-East Asia region, and is the most predominant species in the region of the Americas. Recent estimates indicate that 3.3 billion people are at risk of infection, with circa 7 million clinical cases reported each year. This burden is certainly underestimated, as the vast majority of chronic infections are asymptomatic. For centuries, it has been widely accepted that the only source of cryptic parasites is the dormant liver stages known as hypnozoites. However, recent evidence indicates that niches outside the liver, in particular in the spleen and the bone marrow, can represent a major source of cryptic chronic erythrocytic infections. The origin of such chronic infections is highly controversial, as many key knowledge gaps remain unanswered. Yet, as parasites in these niches seem to be sheltered from the immune response and from antimalarial drugs, research in this area should be reinforced if elimination of malaria is to be achieved. Due to ethical and technical considerations, working with the liver, bone marrow and spleen from natural infections is very difficult. Recent advances in the development of humanized mouse models and organ-on-a-chip models offer novel technological frontiers to study human diseases, vaccine validation and drug discovery. Here, we review current data on these frontier technologies in malaria, highlighting major challenges ahead in the study of P. vivax cryptic niches, which perpetuate transmission and burden.
BACKGROUND
Human malaria is a disease caused by five species of plasmodia, of which Plasmodium falciparum and Plasmodium vivax represent the vast majority of the burden, whereas Plasmodium ovale, Plasmodium malariae and the recent zoonosis caused by Plasmodium knowlesi in South East Asia likely contribute less than 5% of that burden (Battle et al., 2019). In the particular case of P. vivax, its burden has dramatically decreased from 11.9-22 million cases in 2013 to 7 million clinical cases in 2020 (WHO, 2020). This panorama would indicate that elimination of P. vivax can be achieved with the control tools presently available. However, in low-endemic countries that have been targeting vivax malaria for elimination, an increasing number of asymptomatic infections capable of transmitting to mosquitoes are being reported (Cheng et al., 2015). In fact, numerous epidemiological field studies support that, during chronic infections, >90% of infections are sub-microscopic and asymptomatic (Angrisano and Robinson, 2022). Thus, the global burden of this species is certainly underestimated. Moreover, experts agree that P. vivax will be the last human malaria parasite species to be eliminated due to its unique biology, including (i) the presence of hypnozoites, a latent (dormant) form of the parasite that develops in the liver (Krotoski, 1985) and can reactivate weeks or months after the primary infection, causing clinical relapses (White, 2011); (ii) the limitation of using primaquine and tafenoquine to kill hypnozoites in pregnant women and G6PD-deficient individuals due to the risk of acute hemolytic anemia (Baird, 2019); (iii) the risk of primaquine treatment failure in patients with particular cytochrome P450 isozyme 2D6 (CYP2D6) polymorphisms (Suarez-Kurtz, 2021); (iv) the recent findings that invasion of merozoites into reticulocytes is not limited to the Duffy binding protein (Ménard et al., 2010) and the detection of P. vivax in sub-Saharan Africa, where this blood group is nearly absent (Zimmerman, 2017); and (v) the outdoor biting behaviour of vectors transmitting P. vivax, which lowers the efficacy of control measures based on impregnated bed nets. Together, these unique aspects of the biology of P. vivax strongly support the generalized view that this species will be the last human malaria parasite to be eliminated (Mueller et al., 2009).
Cryptic stages can be defined as parasites that are difficult to detect with currently available tools and persist in different host tissues during chronic asymptomatic infections. For centuries, it was widely accepted that the only source of cryptic stages in P. vivax was hypnozoites inside infected hepatocytes (Krotoski, 1985). As this liver stage cannot be detected using currently available diagnostic methods, it constitutes a silent reservoir of the disease and a major threat to malaria elimination. However, as recently reviewed (Fernandez-Becerra et al., 2022), new evidence indicates that cryptic niches outside the liver, in particular in the bone marrow (BM) and the spleen, represent a major source of hypnozoite-unrelated recrudescence. Early studies of a clinical case of spontaneous spleen rupture revealed the presence of large numbers of intact parasites in the red pulp (Machado Siqueira et al., 2012). This observation supported a long-standing hypothesis of spleen cytoadherence by reticulocyte-prone malaria parasites (del Portillo et al., 2004). Remarkably, recent studies of spleen rupture in Timika, Indonesia, have unequivocally demonstrated that the largest parasite biomass of P. vivax during chronic asymptomatic infections is found in the reticulocyte-rich spleen (Kho et al., 2021a; Kho et al., 2021b). P. vivax parasites have also been suspected to reside in the BM during chronic asymptomatic infections, as originally observed in the late 19th century (Bignami, 1894). Morphological and molecular evidence of the parasite in this hematopoietic tissue has recently been obtained in patients (Baro et al., 2017; Brito et al., 2020). Noticeably, all patients presented defects in erythropoiesis, and global transcriptional analysis corroborated these morphological observations (Baro et al., 2017; Brito et al., 2020). Together, these data call for a new paradigm in P. vivax research, which should now incorporate these cryptic infections into the parasite's life cycle (Figure 1). This will clearly require a renewed emphasis on understanding the origin and significance of these cryptic erythrocytic niches, if elimination of malaria is to be achieved.
For ethical and technical reasons, work with the human spleen has been mostly limited to post-mortem analysis (Imbert et al., 2009), and studies on bone marrow aspirates are limited because aspiration is considered a highly invasive clinical procedure. Here, we review new enabling technologies based on humanized mouse models and organs-on-a-chip, promising approaches to unravel the mechanistic particularities of these largely unexplored cryptic infections of P. vivax.
HUMANIZED MOUSE MODELS
The interest in engrafting human immune systems into immunocompromised mice started with the discovery of the nude mouse in the laboratory of Dr. N.R. Grist (Flanagan, 1966). However, this early mouse strain was unable to sustain the engraftment of human bone marrow cells, thus failing to establish their growth (Ganick et al., 1980; Macchiarini et al., 2005). A breakthrough came with the discovery of immunodeficient mice with the Prkdc-scid mutation, which in humans causes Severe Combined Immunodeficiency (SCID), resulting in the C.B-17-Prkdc scid mouse (Pearson et al., 2008). SCID is a loss-of-function mutation affecting the protein kinase, DNA-activated, catalytic polypeptide (PRKDC), leading to inefficient DNA non-homologous end joining repair, which is required for T cell and B cell receptor rearrangement, and hence to a lack of mature T and B cells (Bosma et al., 1983; Mosier et al., 1988; Blunt et al., 1995; Kirchgessner et al., 1995). Compared with nude mice, SCID mice show a clearer reduction of immune responses, as nude mice retain higher residual immune activity. Thus, SCID mice have been extensively used as recipients for human cell and tissue xenografts in vivo.
Another genetic deficiency that leads to a reduction of the adaptive immune response is a mutation in one of the recombination-activating genes Rag1 and Rag2, which are responsible for T cell and B cell receptor rearrangements (VDJ recombination); this likewise leads to a lack of B and T cells (Shinkai et al., 1992). Rag-knockout (Rag KO) strains of mice are also used as recipients for human cell xenografts. These immunodeficient mouse strains are very useful as human cell transplant recipients, but they have several limitations, including a small production of T and B cells in older SCID mice and an increased susceptibility of SCID mice to preconditioning by radiation due to inefficient DNA repair. Moreover, they retain high levels of innate immune cells, including substantial NK cells, which complicates their use in long-term studies and in systemic reconstitution with human cells (Murphy et al., 1987).
An important advancement in the development of more immunodeficient mouse strains was the production of murine strains carrying a mutation or deletion of the interleukin (IL)-2 receptor common γ chain (Il2rg or γc). The IL-2 receptor common gamma chain is an important cytokine receptor subunit for IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21 and is indispensable for high-affinity binding and signaling of these cytokines (Cao et al., 1995; DiSanto et al., 1995; Ohbo et al., 1996). This gene disruption abolishes cytokine signaling networks that are required for both adaptive and innate immune responses, leading to the lack of NK cells owing to the absence of IL-15 signaling (Kennedy et al., 2000; Kerre et al., 2002; Ranson et al., 2003; Traggiai et al., 2004; Walsh et al., 2017). However, some phagocytic activity remains, which triggers the development of graft-versus-host disease (GVHD) after the engraftment of human cells (Greenblatt et al., 2012).
The Cluster of Differentiation 47 (CD47) acts as a marker of self (Oldenborg et al., 2000), and its interaction with the inhibitory receptor signal regulatory protein α (SIRPα) on macrophages delivers a "don't eat me" signal preventing phagocytosis of self-cells by macrophages. Therefore, if this interaction is not preserved in the murine environment, mouse SIRPα receptors will not recognize human CD47 and engrafted human cells will be phagocytized by murine macrophages. Fortunately, a polymorphism in the mouse Sirpa gene found in the non-obese diabetic (NOD) genetic background resembles the human SIRPA gene, thus achieving phagocytic tolerance through the CD47-SIRPα interaction (Yamauchi et al., 2013). Phagocytic tolerance can also be achieved by the transgenic expression of mouse CD47 on human hematopoietic cells (Takenaka et al., 2007; Legrand et al., 2011). Alternatively, phagocytic tolerance can be achieved temporarily by eliminating recipient phagocytic cells using clodronate-containing liposomes (Clo-lip); clodronate is a small hydrophilic molecule that is ingested by macrophages in liposome-encapsulated form and accumulates within the cell once the liposomes are digested by lysosomal phospholipases. At a certain intracellular clodronate concentration, the macrophage is eliminated by apoptosis. In addition, if phagocytic cells develop in an environment without CD47, they become tolerized to cells that lack CD47. Natural killer cells also have limited activity in NOD mice because of a defect in the NKG2D receptor (Ogasawara et al., 2003). All these properties have made the NOD background preferred for the development of models that receive xenotransplantation of cells and tissues of human origin.
The combination of the Il2rg knockout with SCID or Rag KO mutations leads to highly immunodeficient strains that have neither mature T and B cells nor NK cells, with severely debilitated monocyte/macrophage function. Reported strains carrying the Il2rg mutation include the NSG (Blunt et al., 1995; Kirchgessner et al., 1995), NOG (Blunt et al., 1995; Kirchgessner et al., 1995), NRG, and BRG (Shultz et al., 2007; Shultz et al., 2012; Akkina et al., 2016) models. Owing to this mutation, these strains can support more efficient, long-term, stable, and systemic engraftment with human cells and tissues. Additional mutations in cytokine genes, or the transgenic expression of human cytokines through exogenous plasmid injection, recombinant protein injection or expression of an additional transgene, can help in the development of a specific cell type of interest. Yet, most strains support humanization of lymphoid but not myeloid-erythropoietic lineages, thus limiting their use for studies on asexual blood stages in human malaria.

FIGURE 1 | Life cycle of Plasmodium vivax highlighting new cryptic erythrocytic stages. During a blood meal, malaria-infected mosquitoes inject sporozoites which, after reaching the bloodstream, enter hepatocytes, initiating the pre-erythrocytic cycle. Within the liver, P. vivax differentiates either (i) into a dormant stage called a hypnozoite which, upon reactivation, causes clinical relapses, or (ii) into tissue schizonts, which, after thousands of mitotic replications, release merozoites into the bloodstream in membranous sacks known as merosomes, initiating the erythrocytic cycle. In this cycle, P. vivax merozoites predominantly, if not exclusively, invade reticulocytes, starting asexual blood-stage differentiation into rings, trophozoites and schizonts, with egress to invade new red blood cells. This cyclical developmental process takes about 48 h. In addition, P. vivax produces specific proteins to create caveola-vesicle complexes that appear as profuse speckling in Giemsa-stained blood smears, known as Schüffner's dots. Moreover, some P. vivax parasites can differentiate into mature gametocytes before a clinical infection and illness develop, thus having the advantage of continued transmission to the insect vector before the appearance of clinical symptoms and subsequent treatment. Remarkably, the presence of parasites in the spleen and the bone marrow represents novel cryptic erythrocytic infections that need to be incorporated in the life cycle of this species (boxed). Infection of these organs can occur either directly, by invasion of merozoites into the reticulocyte-rich bone marrow and spleen, or via infected reticulocytes in peripheral blood. Circulating gametocytes are rounded in shape and, upon uptake in the blood meal of Anopheles mosquitoes, begin the sexual cycle. This includes release of the male and female gametes, fertilisation, and formation of a motile ookinete that crosses the midgut epithelium. Differentiation into a new replicative form known as the oocyst, release of sporozoites, and migration to and invasion of the salivary glands end this complex life cycle, in which the parasite undergoes more than ten stages of cellular differentiation and invades at least four types of cells within two different hosts. (Created with BioRender.com by Carmen Fernandez-Becerra).
Noticeably, the NSGW41 and NBSGW strains were created to overcome the lack of erythropoiesis and megakaryopoiesis in humanized mouse models. These strains support long-term engraftment of human hematopoietic stem cells (HSCs) without prior pre-conditioning therapy, owing to the loss of endogenous Kit function in the Kit W-41J allele (Yurino et al., 2016). These KIT-deficient mice demonstrated improved erythropoiesis on the NOD/SCID/Il2rg-/- (NSG) background (Cosgun et al., 2014; McIntosh et al., 2015; Rahmig et al., 2016). After reconstitution, significant numbers of mature thrombocytes were present in the peripheral blood, while human erythroblasts were seen in the bone marrow (BM). In addition, the morphology, composition, and enucleation ability of de novo generated human erythroblasts were similar to those in human BM (Rahmig et al., 2016). After humanization of unconditioned NSGW41 or NBSGW mice, no or only low numbers of human red blood cells (huRBCs) are detected in circulation, whereas the BM is highly repopulated with human erythroid progenitor cells, suggesting that human HSC engraftment supports increased erythroid lineage production. All differentiation stages of human erythroid precursors have been detected, and increased numbers of huRBC progenitors are present, suggesting that human erythropoiesis and differentiation up to the nucleated erythroid progenitor stages are supported by the murine microenvironment in these mouse strains (McIntosh et al., 2015; Rahmig et al., 2016). The enucleation frequency in vivo differs markedly between humanized mice and human BM due to the formation of erythroblastic islands, which probably requires factors that are incompatible between the two species (Dzierzak and Philipsen, 2013). Analysis of transcripts encoding adult-type α-, β-, γ- and δ-globin in the NSGW41 model showed no block in human erythrocyte maturation, attributing the paucity of huRBCs to either insufficient in vivo enucleation or SIRPα-independent phagocytosis (Rahmig et al., 2016). The NSGW41 and NBSGW strains are thus interesting models to study P. vivax blood stages in the bone marrow during infection, since they can sustain human erythrocytic precursors after engraftment with HSCs and can be maintained for longer periods of time if clodronate liposomes are administered to deplete the remaining murine macrophages.
Pre-Erythrocytic Infections
The pre-erythrocytic cycle of Plasmodium spp. infection begins when salivary gland sporozoites enter the human body through the bite of infected female Anopheles spp. mosquitoes during blood meals. Sporozoites then cross the skin barrier to enter the bloodstream before homing to the liver, where they initiate asexual differentiation to form liver-stage schizonts. Upon completion of liver-stage development, thousands of merozoites are released into the blood circulation, starting erythrocytic infections. Noticeably, in the case of P. vivax and P. ovale, a proportion of sporozoites enter a quiescent stage, the hypnozoite, after hepatocyte invasion. Without the inclusion of an 8-aminoquinoline hypnozoitocidal drug in the standard antimalarial treatment, the hypnozoite remains in the liver, causing relapses of the disease. These relapses account for over 70% of clinical cases in endemic areas (Commons et al., 2020), causing significant morbidity and sustaining the transmission cycle. Primaquine and tafenoquine have been proven to effectively eradicate hypnozoites (John et al., 2012; Commons et al., 2018; Llanos-Cuentas et al., 2019), but the liabilities of the 14-day treatment regimen (primaquine) and the risk of hemolytic anemia in G6PD-deficient patients have to be overcome. Therefore, there remains a major need for new drugs.
The lack of in vivo models to study the liver stage has been an obstacle to studying the biology of hypnozoite formation and to drug development. Indeed, as detailed below, this only became possible through the recent development of humanized mouse models engrafted with human hepatocytes. Interestingly, repopulation of the mouse liver with human hepatocytes requires two premises: (i) an immunocompromised recipient mouse and (ii) the induction of liver injury that depletes mouse hepatocytes, thus creating a niche that allows human hepatocytes to repopulate the mouse liver. Currently, there are three humanized mouse models that can be repopulated with human hepatocytes and have mostly been used to study Plasmodium infections. The first liver-humanized model described was the albumin-urokinase-type plasminogen activator (Alb-uPA) transgenic SCID mouse, in which the urokinase transgene is linked to an albumin promoter (Mercer et al., 2001). This results in elevated plasma uPA levels, hypofibrinogenemia and accelerated hepatocyte death, causing sub-acute liver failure. Transplantation of human hepatocytes into 7-12-day-old Alb-uPA mice allows a high repopulation with human hepatocytes, resulting in a human/mouse chimeric liver with 60-70% observable human hepatocytes. The Alb-uPA human liver chimeric SCID mouse model (Alb-uPA huHep mouse) has been shown to support P. falciparum infection (Sacci et al., 2006). One disadvantage of this humanized mouse model is the continuous expression of the uPA transgene, which causes progressive damage to the liver parenchyma, probably via activation of plasminogen, which regulates the activity of matrix metalloproteinases that are critical for liver cell growth. Therefore, Alb-uPA/uPA-dependent models have limited utility for many applications due to this disadvantage, as well as a few others such as very poor breeding efficiency, renal disease, and a very narrow time window for transplantation before the mice succumb to bleeding (Heckel et al., 1990).
After the Alb-uPA huHep mouse model, the FRG KO mouse model, carrying a triple knockout of the tyrosine catabolic enzyme fumarylacetoacetate hydrolase (FAH), Rag2-/- and Il2rg-null, was developed. The FAH knockout leads to the toxic accumulation of fumarylacetoacetate, an intermediate of tyrosine catabolism (Azuma et al., 2007), causing liver injury. As in the Alb-uPA model, the FRG KO model can be transplanted with human hepatocytes (and is then termed FRG KO huHep) (Azuma et al., 2007). These FAH KO mice maintain their hepatocytes only in the presence of 2-(2-nitro-4-trifluoromethylbenzoyl)-1,3-cyclohexanedione (NTBC) and lose them when the drug is withdrawn (Grompe et al., 1995). This model is also able to support complete development of P. falciparum liver-stage (LS) infections (Vaughan et al., 2012). Vaughan and collaborators observed that backcrossing the FRG KO huHep model onto the NOD background (FRG NOD mouse) allows the transition of exo-erythrocytic merozoites to blood-stage infection (Vaughan et al., 2012). This model also supports the successful transition of recombinant P. falciparum parasites from various experimental genetic crosses (Vendrely et al., 2020). FAH KO mice also show some disadvantages, including the development of liver carcinomas as a result of their type I tyrosinemia, and they require continued or intermittent drug treatment after humanization to suppress the development of liver cancer (Bissig et al., 2010). The FRG KO huHep mouse model can also support complete development of the P. vivax liver stage and, importantly, hypnozoite formation (Mikolajczak et al., 2015; Schafer et al., 2020).
The third liver chimeric humanized mouse model expresses the herpes simplex virus thymidine kinase (HSVtk) transgene under the control of a mouse albumin enhancer/promoter in the liver of NOG mice (TK-NOG) (Hasegawa et al., 2011). In this model, the HSVtk mRNA is selectively expressed, causing severe parenchymal liver damage after ganciclovir (GCV) treatment, which allows repopulation with human hepatocytes (Hasegawa et al., 2011). The TK-NOG huHep mouse model has demonstrated normal systemic and metabolic function and can be maintained without administration of exogenous drugs. Noticeably, the humanized TK-NOG huHep mouse model maintains its synthetic function for a prolonged period of time (over 8 months), as well as very high plasma human albumin levels. Moreover, administration of additional GCV doses after engraftment with human hepatocytes enables the depletion of residual mouse hepatocytes after human cell reconstitution (Hasegawa et al., 2011). This model enables the study of P. falciparum and P. ovale liver stages. Therefore, the TK-NOG huHep mouse model may also serve as a suitable model for P. vivax liver-stage infection.
Erythrocytic Infections
Once sporozoites invade and establish themselves in the liver, parasites undergo asexual multiplication and develop into schizont stages that finally generate exo-erythrocytic merozoites. Upon release, thousands of merozoites enter the bloodstream and initiate the erythrocytic cycle. In the bloodstream, some parasites undergo gametocytogenesis, thereby developing sexual stages that can be transmitted to the mosquito vector during the blood meal. The in vivo study of Plasmodium infection is most advanced in the FRG huHep mouse model. Injection of human reticulocytes/erythrocytes into the FRG huHep mouse resulted in successful transplantation of human blood, generating the FRG huHep-blood mouse model. The FRG KO huHep-blood mouse model has been shown to support the P. vivax liver stage and its transition to the blood stage, enabling in vivo drug efficacy testing (Mikolajczak et al., 2015). When human reticulocytes were injected on days 9 and 10 post-sporozoite injection, blood stages of P. vivax were observed as early as 4 hours after reticulocyte injection. Thus, the model can be used for studying the liver stage-to-blood stage transition of P. vivax (Mikolajczak et al., 2015). The model has also been used to test the efficacy of a pre-erythrocytic vaccine against P. vivax by passive immunization with anti-PvCSP antibodies prior to sporozoite injection, demonstrating that the vaccine could be of high benefit by reducing the hypnozoite reservoir and thereby the number of relapses (Schafer et al., 2021).
The FRG huHep mouse model has recently been improved (Schafer et al., 2020) by combining it with an immunomodulatory treatment in which clodronate liposomes and cyclophosphamide are co-administered to deplete murine macrophages and neutrophils, respectively (Foquet et al., 2018). This helps to increase the lifespan of the infused human reticulocytes in the so-called FRGNKOhuHep/huRetic mouse model. Blood chimerism reached 30% after the second infusion of reticulocytes. Liver-stage development in this mouse model resulted in the release of merozoites, which were able to invade the infused human reticulocytes from day 9 onwards, with the highest blood parasitemia on day 10 post-sporozoite injection. This procedure allows efficient and reproducible transition of the P. vivax liver stage to the blood stage and, especially, to gametocytes. The FRGNKOhuHep/huRetic model allows the study of vaccine candidates for the blockage of blood stages (Schafer et al., 2020). Moreover, in the FRGNKOhuHep/huRetic model, a subset of exo-erythrocytic schizonts expressing the sexual-stage marker Pvs16 was detected as early as 2 days after the beginning of blood-stage infection, indicating that exo-erythrocytic merozoites might be pre-programmed to become gametocytes in the first cycle of blood-stage infection. However, the mature gametocyte marker Pvs230 was not detected. Therefore, the FRGNKOhuHep/huRetic model allows the natural route of infection, liver-stage development and transition to the blood stage, providing a valuable system to test liver- and blood-stage vaccines and drug candidates (see Figure 2).
Perspectives and Challenges of Humanized Mouse Models for Studying P. vivax Malaria
Advances in humanized mouse models are leading to a better understanding of malaria parasite biology, pathogenesis, and immunology, as well as allowing the testing, discovery and validation of new drugs and antigens for vaccination. Humanized mouse models can therefore be seen as the link between rodent models and human infections, translating knowledge from both. In the case of Plasmodium vivax, this is of utmost importance, as this species lacks a continuous in vitro culture system for blood stages (Noulin et al., 2013) and, in addition to the liver, the bone marrow and spleen have recently been shown to harbor a large biomass of hidden parasites in natural infections. In that sense, the generation of the FRGNKOhuHep/huRetic model represented a major breakthrough for the study of P. vivax liver stages and their transition to blood stages, since this model can sustain both stages and offers the possibility of studying gametocytogenesis, given that the sexual-stage marker Pvs16 was identified early in the onset of infection (Schafer et al., 2020). However, next-generation humanized mouse models for vivax malaria research will require humanization of the liver, bone marrow and spleen for studies on cryptic pre-erythrocytic and erythrocytic infections. Unfortunately, limited access to human fetal tissues and restricted availability to the research community will constrain their use. Therefore, novel, widely available enabling technologies that reduce the use of animal experimentation are also needed to advance knowledge of cryptic pre-erythrocytic and erythrocytic infections.
ORGANS-ON-A-CHIP
The development and implementation of two-dimensional (2D) cultures in cell biology has revolutionized our knowledge since the early 20th-century description of frog embryos in hanging drops of coagulated frog lymph (Harrison et al., 1907) and the use of Petri dishes, named after their inventor (Fischer, 1887). In 2D cultures, cells are grown as a monolayer under controlled physicochemical parameters such as oxygen, pH and temperature, with a suitable growth medium, to recapitulate the cellular microenvironment as closely as possible. 2D cell culture systems have significantly advanced our knowledge of cell biology, and for many simple applications they will continue to provide new insights. However, as cells grow, their morphology changes by flattening and distorting, and they form a simple monolayer with forced, artificial polarization. Moreover, cells are subjected to excessive nutrition, molecular gradients cannot be reproduced, and the characteristics of the extracellular matrix (ECM), including its architecture and stiffness, are also altered (Pampaloni et al., 2007). The simplicity of these traditional 2D culture systems, usually consisting of a single cell type, makes them robust for high-throughput experiments; yet they provide little information about complex systems such as tissues or organs, where cell-cell and cell-matrix communication is essential and where in vitro mimicry of the micro-geometry of native in vivo environments is needed.
Microfluidics Models
Back in the 1970s, the application of microfabrication methods from industrial microelectronics to other materials, such as glass and polymers, led to the development of miniaturized electromechanical components with sensors and actuators called Micro-Electro-Mechanical Systems (MEMS). Noticeably, when MEMS were used to handle fluids, the term microfluidics was introduced (Whitesides, 2006). The main advantage of these devices lies in their ability to handle small fluid volumes; for instance, a microchannel of hundreds of microns can hold fluid volumes in the nanoliter range (a 100 µm × 100 µm × 100 µm volume equals 1 nL). This capability to generate controlled environments with low volumes increased processing speed (Whitesides, 2006). Afterwards, microfluidics research centered on the development of analytical devices called Lab-on-a-Chip (LOC) and micro-Total Analysis Systems (µTAS) in order to produce point-of-care (POC) devices (Dittrich et al., 2006; Yager et al., 2006). With these POC devices, researchers sought to develop fast, reliable and low-cost diagnostics capable of processing and analyzing proteins, enzymes or cells (Yager et al., 2006). Nowadays, it has been demonstrated that cell viability can be maintained in these microsystems with appropriate ECM coatings, culture media and flow conditions. As in conventional in vitro models, these cultured cells can be induced to express and maintain specific tissue functions in a controllable environment, with the capacity to replicate tissue/organ microstructures. These devices, now known as "organs-on-chips" (OOC) (Huh et al., 2011; Moraes et al., 2012; Bhatia and Ingber, 2014; Hammel et al., 2021), are essentially microstructured microreactors containing microchannels, created in polymeric materials or glass, that contain and compartmentalize cultured cells. The greatest promise of these microsystems lies in their accuracy in recreating the physical and biochemical microenvironments of specific, key compartments of living organs that are crucial for organ-level functions.
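As a quick check of that figure, the unit conversion works out as follows:

$$V = (100\ \mu\mathrm{m})^{3} = 10^{6}\ \mu\mathrm{m}^{3} = 10^{-12}\ \mathrm{m}^{3} = 10^{-9}\ \mathrm{L} = 1\ \mathrm{nL}.$$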
Platform Fabrication
Soft lithography with PDMS is the most widely used technique for fabricating organ-on-a-chip platforms (Huh et al., 2011). This elastomeric material is cheap, transparent, gas permeable and biocompatible. However, PDMS absorbs hydrophobic molecules, which could affect analytical processes (Toepke and Beebe, 2006) important for the final purpose of the OOC model. This possibility should therefore be taken into consideration and, if needed, addressed by surface modification (Herland and Yoon, 2020) or by using alternative materials (Campbell et al., 2021).
Adding Dimensionality
Over the last decades, microfluidic-based cell culture platforms have been shown to be superior to static 2D culture with regard to cell growth and specific physiological functions (Jang et al., 2015; Terrell et al., 2020; Chou et al., 2020). Indeed, microfluidic-based culture platforms allow a continuous supply of nutrients and oxygen while removing the cells' metabolic waste, maintaining a stable microenvironment for optimal cell growth and function. Moreover, microfluidics allows cells to experience stable molecular gradients and shear stress, mimicking in vivo blood physics in microcapillaries. Interestingly, this approach has the inherent flexibility of allowing multiple human cell types cultured in distinct compartments to be connected for inter-tissue modelling, obviating the inter-species discrepancies of animal models (Maschmeyer et al., 2015; Picollet-D'hahan et al., 2021). Nevertheless, first-generation microfluidic approaches frequently cultured cells as a 2D monolayer. In this sense, these cell-on-a-chip approaches failed to recapitulate the complex 3D cell-cell and cell-ECM interactions observed in in vivo microenvironments. Accumulating evidence clearly highlights the importance of tridimensionality for in vitro cell culture. Indeed, cells in vivo are often surrounded by ECM, which is pivotal for maintaining in vivo-like cell behavior, including cell polarity, proliferation, migration and gene expression (Malinen et al., 2014; Fontoura et al., 2020). Thus, the successful development of organ-on-a-chip models requires the culture of multiple cell types in a highly controlled 3D microenvironment. This highlights the importance of developing fully customizable 3D matrices that resemble the native environment of each cell type. In this sense, hydrogels emerge as an important tool to take microfluidic models to the organ level.
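For intuition on the flow conditions involved, the wall shear stress in a wide, shallow rectangular microchannel is commonly approximated as $\tau_w \approx 6\mu Q / (w h^{2})$. The dimensions and flow rate below are illustrative values of our choosing, not figures from the cited studies:

$$\tau_w \approx \frac{6 \times (10^{-3}\ \mathrm{Pa\,s}) \times (1.67\times10^{-11}\ \mathrm{m^{3}/s})}{(5\times10^{-4}\ \mathrm{m}) \times (10^{-4}\ \mathrm{m})^{2}} \approx 0.02\ \mathrm{Pa} \approx 0.2\ \mathrm{dyn/cm^{2}}$$

for a medium of water-like viscosity perfused at 1 µL/min through a 500 µm × 100 µm channel. Venous wall shear stress in vivo is roughly an order of magnitude higher (about 1-6 dyn/cm²), so the flow rate must be scaled accordingly when physiological shear is the goal.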
Hydrogels for Tailored Organs-on-a-Chip
Hydrogels are polymeric materials of cross-linked hydrophilic macromolecules with the ability to retain large amounts of water. Importantly, several types of natural and synthetic hydrogels are available, possessing a wide range of relevant biochemical and biophysical features including cytocompatibility, biodegradability, and viscoelastic properties (Neves et al., 2020). Moreover, the easy visualization of cells embedded within the hydrogel, which is often transparent, is a plus for the real-time monitoring of cell behavior, essential for any organ-on-a-chip approach. Notably, hydrogels need to be carefully selected, and their inherent properties fine-tuned, depending on the intended use. Natural hydrogels, including Matrigel, collagen and alginate, typically display milder gelling conditions than synthetic ones such as polyacrylamide or polyethylene glycol (Hu et al., 2022). Indeed, most 3D cultures have relied on the use of natural hydrogels (Santos Rosalem et al., 2020). Basement membrane-derived Matrigel is possibly the most widely used hydrogel in 3D organoid culture experiments (Rossi et al., 2018). Matrigel is composed of a plethora of different mouse basement membrane components, including laminin-1, collagen IV, entactin, and heparan sulfate proteoglycans, as well as numerous growth factors (Blondel and Lutolf, 2019). Nevertheless, drawbacks of this hydrogel include (i) unquantifiable lot-to-lot variability, which undermines inter-laboratory reproducibility, (ii) the presence of ill-defined amounts of growth factors, making this approach unsuitable for understanding cell signaling, and (iii) high susceptibility to protease-mediated degradation (impacting matrix stability), which may preclude long-term cell culture (Blondel and Lutolf, 2019). On the other hand, alginate-based hydrogels are non-adhesive biomaterials, as cells cannot establish specific attachment points with the polymer itself (Bidarra and Barrias, 2019; Neves et al., 2020). Interestingly, this feature is rather useful in artificial microenvironment design, as these materials can act as blank slates. Importantly, natural alginate can be chemically modified to promote specific biological responses in a highly tunable fashion through the incorporation of moieties that specifically modulate cell-material interactions (Caires et al., 2015). Indeed, depending on the intended need, alginate has been shown to acquire key biological features such as cell adhesiveness, guided differentiation and matrix proteolytic degradation through chemical modification with instructive peptides. These designer hydrogels have been shown to offer unprecedented control over the ECM's viscoelastic and biochemical properties and over cell fate (Bidarra et al., 2010; Bidarra et al., 2011; Fonseca et al., 2011; Maia et al., 2014). Collectively, this next generation of bioengineered organs-on-a-chip holds the promise of bridging even further the gap between standard 2D cell culture and the more complex, expensive and ethically controversial animal experimentation in malaria research.
Since the study of P. vivax infection encompasses the interrogation of its distinct stages across multiple organs, in the next sections we explore the minimal functional units of the liver, bone marrow and spleen. Most importantly, we then review the available in vitro models that attempt to recapitulate the P. vivax life cycle in each of these organs. Finally, we address the outstanding questions and challenges for the next generation of bioengineered 3D OOC models in P. vivax research.
The Liver Niche
The liver is a major organ that performs more than 500 physiological functions, including compound detoxification, decomposition of red blood cells and production of hormones (Calitz et al., 2018). The portal hepatic lobule, which spans portions of three classical hexagonal hepatic lobules, is considered a minimal functional and structural unit of the liver. These structures are composed mostly of parenchymal cells, the hepatocytes (up to 60%), while the remainder includes non-parenchymal cells such as hepatic stem cells, connective tissue cells, hepatic stellate cells, monocytic Kupffer cells and endothelial cells (Miyajima et al., 2014). The hepatic acinus can be further divided into three main zones of decreasing oxygen tension with increasing distance from the radial hepatic arterioles (Panday et al., 2022). Most notably, liver zonation appears to confer distinct hepatocyte metabolic profiles and functions; nevertheless, its role in host-pathogen interaction remains unknown.
Biophysical Features of Native ECM
The native ECM composes up to 10% of the liver volume, provides structural support, and has a Young's elastic modulus that ranges between 300 and 600 Pa (Wu et al., 2018; Passi and Zahler, 2021). Most notably, it has been shown that healthy liver ECM is composed of more than 150 different proteins, the most abundant being fibronectins, elastin, and fibrillar collagens (Naba et al., 2014; Arteel and Naba, 2020).
Organ Infection by P. vivax
During Plasmodium vivax infection, sporozoites inoculated by female Anopheles mosquitoes reach the liver via the blood circulation and invade target hepatocytes either through Kupffer cells or via sinusoidal endothelial cells (Pradel and Frevert, 2001; Tavares et al., 2013; Venugopal et al., 2020). However, others argue that direct invasion of hepatocytes by sporozoites is also possible (Frevert et al., 1993; Ejigiri and Sinnis, 2009). Within invaded hepatocytes, these sporozoites rapidly proliferate, giving rise to schizonts that generate thousands of merozoites. This intrahepatic parasitic cycle develops over 7 days until the rupture of liver schizonts from infected/dying hepatocytes (Venugopal et al., 2020). Importantly, this rupture mediates the release of merozoites into the blood circulation, where they preferentially invade young reticulocytes to start the intraerythrocytic life cycle. In contrast to this route, some P. vivax sporozoites can undergo a non-proliferative, metabolically quiescent stage known as the hypnozoite (Gural et al., 2018b; Sylvester et al., 2021). Most importantly, these hypothetical liver-dwelling hypnozoites may be reactivated months or years later and are a major cause of clinical relapse in P. vivax infection (Markus, 2018; Taylor et al., 2019). Unfortunately, the current inability to perform long-term in vitro culture of P. vivax severely hampers the controlled interrogation of these parasitic dormancy and re-activation processes.
2D Attempts to Replicate the Liver Stage
So far, most key experimental insights in malaria research have been provided either by in vivo rodent models or by in vitro 2D monocultures (Voorberg-van der Wel et al., 2020a). Interestingly, most of these studies use hepatoma-derived cell line model systems such as HepG2, Huh7 and HC-04 (Hollingdale et al., 1985; Kaushansky et al., 2015; Ribeiro et al., 2016; Tweedell et al., 2019). While this provides a constant and reproducible source of host cells, these lines largely differ from primary hepatocytes in key biological features relevant to Plasmodium infection (Manzoni et al., 2017), including the lack of important hepatic receptors and functions, lower metabolic activity and high dependence on glucose uptake (Castell et al., 2006; Meireles et al., 2017; Tripathi et al., 2020). Additionally, the high proliferative capacity of these cell lines precludes their application to the study of human P. vivax and hypnozoite formation, which have prolonged development times (more than 7 days). Interestingly, most of the bottleneck issues that have hampered the in vitro study of the P. vivax liver stage seem to be partially overcome by the use of primary human hepatocytes (PHHs). Nevertheless, upon 2D culture, PHHs progressively de-differentiate, losing their hepatic in vivo phenotype (Du et al., 2006). Indeed, several strategies have been explored to counter PHH hepatic dedifferentiation through limited biomimicry of the hepatic microenvironment. Some of these attempts include (i) media supplementation with molecules that inhibit the TGF-β, Wnt, Notch and BMP hepatocyte dedifferentiation signaling pathways (Xiang et al., 2019; Lucifora et al., 2020), (ii) coating of the 2D substrate with hepatic-like ECM components such as collagens (Itsara et al., 2018; Roth et al., 2018), (iii) co-culture with non-parenchymal cells such as fibroblasts (March et al., 2013; Gural et al., 2018b), or (iv) the use of multiple bioengineering approaches to implement a proper 3D tissue architecture (Chua et al., 2019; Mellin and Boddey, 2020).
Available 3D Models in Malaria (Focus on P. vivax)
Pioneering work by Dembele et al. showed that it was possible to extend the lifespan/viability of sporozoite-infected simian hepatocytes in culture via co-culture with the human hepatoma cell line HepaRG (to compensate for infected hepatocyte cell loss) in a sandwich culture system (Dembele et al., 2014). Most notably, a collagen I coating allowed hepatocytes to anchor to the bottom of the culture dish, while Matrigel was placed on top of the cultured cells to provide ECM dimensionality and cell polarization. Interestingly, the authors demonstrated that this approach led to a 40-50-fold higher in vitro infectivity of P. falciparum, and that infection of hepatocytes by P. cynomolgi sporozoites (the simian surrogate of P. vivax) developed into both large dividing forms and small non-dividing forms of the parasite during 40 days of culture (Dembele et al., 2014). Collectively, the authors reported completion of the full liver cycle of the parasite, with development of schizonts and hypnozoite formation up to 15 days, with subsequent functional re-activation in culture. More recently, Voorberg-van der Wel et al. also demonstrated that P. cynomolgi-infected primary rhesus hepatocytes cultured on collagen-coated substrates could be routinely maintained for 3 weeks (Voorberg-van der Wel et al., 2020b). This was also achieved by Roth et al. with PHHs infected with P. vivax using the same commercially available system (Roth et al., 2018). Interestingly, it was shown that, under such conditions, it was possible to explore malaria hypnozoite reactivation in vitro using a dual-fluorescent P. cynomolgi reporter line (Voorberg-van der Wel et al., 2020b).
Tissue dimensionality is a key factor in imposing the right cell polarity and functionality. In this sense, Arez et al. employed a stirred-tank culture system to generate spheroids of human hepatic cell lines with a stable hepatic phenotype for up to four weeks. Interestingly, P. berghei invasion and development were recapitulated in these hepatic spheroids, yielding functional blood-infective merozoites (Arez et al., 2019). Most importantly, Chua et al. demonstrated that the formation of spheroids of simian hepatocytes and PHHs in the soft macroporous 3D Cellusponge platform, which resembles the native liver ECM, supported the complete liver-stage life cycle of both P. cynomolgi and P. vivax parasites in vitro for up to 30 days (Chua et al., 2019).
The importance of heterotypic cell-cell interaction was illustrated by showing that a combination of PHHs with supportive fibroblasts in a 2D multiwell micropatterned co-culture (MPCC) format was able to stabilize in vivo-like hepatocyte-specific functions and metabolism for up to 4-6 weeks (Gural et al., 2018b). Most interestingly, it was demonstrated that this MPCC platform was also able to support the full liver cycle of P. vivax, including the formation and reactivation of hypnozoites in vitro (March et al., 2015; Gural et al., 2018b). In a similar multiwell-based approach, it was also highlighted that substrate stiffness can play a role in maintaining the PHH phenotype (Maher et al., 2020). Indeed, these authors obtained similar results for P. vivax infection by simply using soft PDMS molds to emboss hepatocyte-confining microfeatures into standard polystyrene culture microplates. Although further improvements in tissue engineering complexity (i.e., integrated dimensionality, heterotypic co-culture, improved infection rate, among others) are required to resemble the in vivo situation, this type of system provided the first steps towards the future in vitro study of liver hypnozoite dormancy/activation in human P. vivax.
The Bone Marrow Niche
The bone marrow is located in the trabecular cavities of the long bones, pelvis, sternum and other bones, and is composed of multiple cell types surrounded by a heterogeneous ECM within an intricate microvascular network (Crane et al., 2017). Most importantly, the BM microenvironment is responsible for the de novo generation of 5 × 10^11 blood components per day, including platelets, immune cells and red blood cells (Nombela-Arrieta and Manz, 2017). This is achieved through distinct BM niches in a highly regulated process known as hematopoiesis (Nombela-Arrieta and Manz, 2017). In this process, several cell types, including mesenchymal stem/stromal cells (MSCs), osteoblasts, endothelial cells and others, articulate in exquisite niches (local microenvironments that maintain and regulate stem cell fate) to simultaneously support hematopoietic stem cell asymmetric division, progenitor cell proliferation and lineage commitment into the required blood components (Méndez-Ferrer et al., 2020; Crane et al., 2017).
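For scale, that daily output corresponds to roughly

$$\frac{5\times10^{11}\ \text{cells/day}}{86{,}400\ \text{s/day}} \approx 6\times10^{6}\ \text{cells per second.}$$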
Biophysical Features of Native ECM
The BM can be functionally divided into three distinct regions: the endosteal, central and perivascular niches (Nombela-Arrieta and Manz, 2017). Structurally, the endosteal niche is located in the vicinity of the BM cortical bone and is composed of osteoblasts, osteoclasts and MSCs surrounded by a stiff (35-40 kPa) ECM of collagen types I and IV, osteopontin and fibronectin (Nilsson et al., 2005; Engler et al., 2006). The central niche is mostly inhabited by adipocytes, macrophages and fibroblasts embedded in a softer (0.3 kPa) ECM containing laminin, fibronectin, heparin and hyaluronic acid (Nilsson et al., 1998; Omatsu et al., 2010; Shin et al., 2014). This is followed by the perivascular niche, with a reported ECM stiffness of 2-10 kPa, composed of collagen IV, fibronectin and laminin, where endothelial cells, stromal cells and MSCs are in close contact with the arterial and sinusoidal blood vessels (Siler et al., 2000; Nelson and Roy, 2016). As expected, the relative proximity of the niches to the vascular network creates particular oxygen gradients, with oxygen tension decreasing from the perivascular to the central and endosteal niches. This feature, combined with niche-specific cytokine gradients, is instrumental for the regulation of HSC function and quiescence (Spencer et al., 2014).
Representing some of these examples, Chou et al. reported a vascularized human BM-on-a-chip (BM chip) that supports the differentiation and maturation of multiple blood cell lineages, including reticulocytes, over 4 weeks (Chou et al., 2020). Interestingly, the authors demonstrated that this 3D fibrin/collagen hydrogel-based microfluidic culture system vastly improved CD34+ cell maintenance and function over standard 2D and 3D bulk approaches, modeling more accurately distinct aspects of hematopoiesis and BM pathophysiology. More recently, Glaser et al. demonstrated that it is possible to study niche-specific functions in BM pathophysiology using a 3D fibrin-based microfluidic co-culture system. For this, the authors simultaneously used distinct chambers in the same chip that recapitulated either the endosteal or the perivascular niche (Glaser et al., 2022). Most interestingly, this system had fully perfusable vascular networks that allowed not only the maintenance of CD34+ HSCs but also their proliferation and differentiation along the myeloid and erythroid lineages, with the release and egress of neutrophils (CD66b+) through the microvascular network.
The described BM-on-a-chip systems have a wide range of applicability. Yet, none of these systems has been applied to malaria research so far. Instead, the few reported studies in P. vivax research remain limited to addressing fundamental BM-related questions in 2D culture systems (Panichakul et al., 2007; Fernandez-Becerra et al., 2013; Martin-Jaular et al., 2013; Noulin et al., 2014).
The Spleen Niche
The spleen is the largest secondary lymphoid organ and is primarily responsible for blood immune surveillance and erythrocyte recycling (Bowdler, 2002; Mebius and Kraal, 2005). The spleen's architecture is composed of two minimal functional units, the white pulp and red pulp regions, which are connected by a marginal zone. The white pulp encompasses nearly 25% of the splenic tissue and is composed of lymphoid tissue (Kashimura, 2020). Here, two regions can be distinguished: (i) the periarteriolar lymphoid sheaths, in which mostly T cells line the central arteriole, surrounded by (ii) the lymph follicles, where B cells divide and mature (Bowdler, 2002). These regions are followed by a marginal zone composed of antigen-presenting cells (dendritic cells and macrophages) in close proximity to the red pulp. The red pulp represents up to 75% of the splenic tissue and is formed by reticular cells and immune cells (granulocytes and monocytes/macrophages) (Bowdler, 2002). Importantly, the circulatory network in this region contains open spaces with a complex reticular mesh known as the splenic cords. Before exiting this compartment, cells and RBCs pass through 1-2 µm open interendothelial slits (IES) before re-entering the venous sinus circulation (Bowdler, 2002). Importantly, at this stage the stiffer dysfunctional or infected RBCs are unable to squeeze through and are cleared from circulation by resident macrophages (del Portillo et al., 2012).
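To give a feel for the mechanical constraint those slit dimensions impose, the sketch below numerically estimates the narrowest cylindrical channel a red blood cell could traverse at constant membrane area and volume. The area and volume are typical literature values, the capped-cylinder geometry is a simplifying assumption of ours (real IES are elongated slits, whose flattened cross-section is precisely what makes passage at constant area possible), so this is only an order-of-magnitude check rather than a model of splenic filtration.

import math

A = 135.0  # RBC membrane surface area, square micrometers (typical literature value)
V = 90.0   # RBC volume, cubic micrometers (typical literature value)

def capped_cylinder_volume(r):
    """Volume of a cylinder of radius r with hemispherical caps whose total
    surface area equals A; returns None if no such shape exists."""
    # Area constraint: A = 2*pi*r*L + 4*pi*r^2  =>  L = (A - 4*pi*r^2) / (2*pi*r)
    L = (A - 4.0 * math.pi * r * r) / (2.0 * math.pi * r)
    if L < 0:
        return None
    return math.pi * r * r * L + (4.0 / 3.0) * math.pi * r ** 3

# At fixed area, the achievable volume grows with r, so bisect for the
# smallest radius whose capped cylinder can still hold the cell volume.
lo, hi = 0.1, 3.0  # bracketing radii, micrometers
for _ in range(60):
    mid = 0.5 * (lo + hi)
    v = capped_cylinder_volume(mid)
    if v is None or v < V:
        lo = mid  # channel too narrow to accommodate the cell volume
    else:
        hi = mid
print(f"minimum channel diameter ~ {2.0 * hi:.1f} micrometers")  # prints ~2.8

The result (about 2.8 µm) is wider than the 1-2 µm slits, underlining how extreme the deformation at the IES is and why even modest stiffening of an infected RBC leads to retention.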
Biophysical Features of Native ECM
Apart from being identifiable by the immune populations residing in these niches, each of these splenic compartments can also be distinguished by the type of reticular network organization and matrisome (Lokmic et al., 2008). Interestingly, although several studies have reported the macro-stiffness of a healthy spleen to be in the order of 15-20 kPa, the ECM stiffness of each splenic niche remains largely unknown (Giuffrè et al., 2019; Veiga et al., 2017).
Organ Function During P. vivax Infection
Not surprisingly, during the intra-erythrocytic life cycle of P. vivax, the spleen constitutes the main organ involved in both immune recognition and elimination of parasitized reticulocytes (del Portillo et al., 2012). Interestingly, recent data strongly suggest that, besides facing destruction, P. vivax parasites may also actively accumulate in this organ as part of their asexual blood-stage life cycle (Kho et al., 2021a). This would constitute an important cryptic niche/reservoir of the parasite. Indeed, several reports pinpoint the ability of P. vivax to modulate VIR proteins for enhanced parasitic cytoadhesion to human spleen fibroblasts (Fernandez-Becerra et al., 2020). Nevertheless, questions remain as to whether P. vivax cytoadhesive mechanisms can be used to home to these splenic cryptic niches, what the nature of these niches is, and how they enable parasite survival in such a harsh lymphoid environment.
Available/Reported Models in Malaria
Unfortunately, our current understanding of the interaction of P. vivax-infected RBCs with the spleen derives mostly from post-mortem spleen sections (Machado Siqueira et al., 2012; Kho et al., 2021a; Kho et al., 2021b; Imbert et al., 2009). One of the few attempts to dissect human parasitic infection in the human spleen was provided by Buffet's group. Indeed, these authors developed a technically challenging ex vivo perfused model of the human spleen that was able to maintain clearing functions for P. falciparum-infected RBCs during two-hour experiments (Safeukui et al., 2008). Given the structural complexity of the splenic white pulp, no bioengineered in vitro 3D models have been reported so far. Importantly, this precludes a comprehensive understanding of the human parasite-immune cell interactions occurring within this niche. Additionally, scarce literature exists regarding the in vitro biomimicry, using microfluidics-based approaches, of the clearance of Plasmodium-infected erythrocytes at the splenic red pulp interendothelial slits (Rigat-Brugarolas et al., 2014; Picot et al., 2015; Guo et al., 2012; Elizalde-Torrent et al., 2021) (see Figure 3).
Key Challenges of OOC in Malaria Research
Understanding the interactions of malaria parasites at different life stages with the vascular endothelium under flow conditions is essential for unveiling pathophysiological mechanisms in malaria (Bernabeu et al., 2021). Initial examples of microfluidic devices containing endothelial cells cultured under flow conditions used commercial channel arrays. These approaches consisted of channel surface functionalization with collagen to allow adherence and endothelialization of the device structures with HUVECs (Introini et al., 2018). Recently, a vascular structure formed by endothelial tubes was also developed by culturing HUVECs in a microfluidic device that emulates arteriole, capillary and venule geometries. Interestingly, the authors could monitor the spatial location, travel dynamics and interactions of infected red blood cells at different stages (Arakawa et al., 2020). These elegant studies demonstrated the proof of principle that OOC offer the potential of a transformative technology for studying malaria and other diseases. In fact, OOC are revolutionizing studies of physiopathology, drug development, and POC diagnostics in several different human diseases (Ingber, 2022).
In the case of Plasmodium vivax, OOC offer an unprecedented opportunity to study cryptic niches of infection, which perpetuate transmission and challenge malaria elimination. Noticeably, the liver, bone marrow and spleen are the organs where these hidden parasites reside, making studies of these organs from natural infections extremely challenging or not feasible. The previous development of a microscale human liver platform demonstrated the feasibility of mimicking this organ in 2D to study human malaria liver infections (March et al., 2013). Later, the development of functional units of a liver-on-a-chip emulating the endothelial barrier of a liver sinusoid, physically separated from primary human hepatocytes, showed the potential of using OOC of this organ in studies of physiology and drug testing (Gural et al., 2018a).
In addition to latent liver infections, P. vivax evolved cryptic erythrocytic infections in the spleen, where the largest parasite biomass accumulates, and in the bone marrow, where sexual stages develop before reaching circulation (Fernandez-Becerra et al., 2022). The development of minimal functional units of these organs-on-a-chip thus offers further unprecedented opportunities to study these niches. Minimal functional units of a spleen-on-a-chip have been reported (Rigat-Brugarolas et al., 2014; Elizalde-Torrent et al., 2021; Picot et al., 2015); yet, in addition to rheological studies of infected blood, they now need to incorporate ECM matrices and cells. In contrast, elegant OOC of the bone marrow have shown sustained expansion of CD34+ cells, their differentiation, and the egress of cells from chips emulating vascular and bone marrow channels (Chou et al., 2020; Glaser et al., 2022). It is therefore legitimate to speculate that human bone marrow and spleen OOC models will soon be applied to advance our knowledge of these cryptic erythrocytic infections and to screen for novel drugs, as parasites in those niches seem to be sheltered from antimalarial drugs (Lee et al., 2018).
OOC will also offer the opportunity to study the role of extracellular vesicles (EVs), nanovesicles of endocytic origin, in the formation/activation of such niches at a spatial scale and velocity mimicking functional units of these organs in 3D. Of note, P. vivax has a tropism for reticulocytes, young red cells residing in the bone marrow and the spleen, which release EVs during their maturation to erythrocytes (Harding et al., 1983; Pan and Johnstone, 1983). Noticeably, circulating EVs from natural vivax infections have recently been shown to contain parasite proteins and to signal to spleen fibroblasts to increase surface expression of ICAM-1, thus facilitating cytoadherence of P. vivax-infected reticulocytes directly obtained from patients (Toda et al., 2020). EVs should thus hold precious insights into the formation of such niches.

FIGURE 3 | Organs-on-a-chip for cryptic infections in malaria research. Microfluidic system connecting the minimal functional units of a liver, bone marrow and spleen-on-a-chip on a PDMS support with a controlled perfusion rate. Different molecularly designed hydrogels can be used for tissue dimensionality, i.e., Matrigel, alginate, fibrin/collagen. The small circles show the presence of malaria parasites and extracellular vesicles; the big circles show 3D cultures with illustrative examples of the cell types present in each specific organ. (Created with BioRender.com by Nuria Sima).
CONCLUDING REMARKS
Wrongly considered benign, vivax malaria has been a neglected disease. To further complicate matters, in spite of more than a century of research, this species still lacks a continuous in vitro culture system for blood stages, which has severely hampered research on it. It is now clear (i) that P. vivax can cause severe disease (Baird, 2007), (ii) that chronic infections are associated with a higher risk of death than those caused by P. falciparum (Chen et al., 2016), and (iii) that P. vivax is a resilient species towards malaria elimination (Fernandez-Becerra et al., 2022). Much of this resilience relies on the fact that the largest parasite biomass resides in cryptic niches of the spleen, bone marrow and liver. Many gaps in our knowledge of these niches, as well as technical challenges to studying them, remain to be investigated and solved (Box). Yet, the humanized mouse models and organs-on-a-chip technologies reviewed here offer a technological breakthrough for studying these niches, which ultimately might give new clues for developing a continuous in vitro culture system for P. vivax.

BOX | Advances and challenges of humanized mouse models and organs-on-a-chip for malaria research.
Humanized Mice

- Advance: Generation of immunocompromised mice to engraft human cells and tissues. Technical limitations: incomplete mouse immunodeficiency hinders engraftment of human cells; access to human fetal tissues; minimal group size.
- Advance: Models for liver stages (Alb-uPA, FRG(N) huHep, TK-NOG). Technical limitations: for Alb-uPA, collateral effects due to the continuous expression of the uPA transgene (liver parenchyma damage, poor breeding efficiency, renal disease, narrow time window for transplantation); for FRG(N) huHep, liver carcinomas.
- Advance: Models for blood stages (NSG, NOG, NRG; NSGW41). Technical limitations: incompatibility between human and mouse factors leads to inefficient differentiation of the HSC; for NSG, NOG and NRG, dependence on administration of human cytokines and low capacity for development of human erythroid cells; for NSGW41, human erythroid lineage engraftment in the bone marrow, yet few or no red blood cells are observed in circulation.

Organs on a Chip

- Advance: Microfluidics. Technical limitations: bubbles, fluidic connections, manipulation, tubing and pumping systems must be robust and easy to use; interconnection between different organ models to emulate tissue-tissue interaction.
- Advance: PDMS platform. Technical limitation: absorption of hydrophobic molecules could be an issue for free vesicle circulation.
- Advance: Hydrogels (Matrigel*, alginate, fibrin/collagen). Technical limitations: must ensure the homogeneous diffusion of oxygen, medium, blood, and extracellular vesicles; must emulate the mechanical properties of the ECM of organs; must allow internal vascularization and biochemical gradient formation; lot-to-lot variability; *unsuitable for understanding cell signaling.
- Advance: BM-on-a-chip and splenon-on-a-chip. Technical limitations: dependence on primary cells from human donors; donor-to-donor variability; must emulate the mechanical properties of circulating blood.
Review helps learn better: Temporal Supervised Knowledge Distillation
Reviewing plays an important role in learning. The acquisition of knowledge at a certain time point can be strongly inspired by previous experience, so the process of knowledge growth should exhibit a strong relationship along the temporal dimension. In our research, we find that during network training the evolution of feature maps follows a temporal sequence property, and proper temporal supervision may further improve network training performance. Inspired by this observation, we propose Temporal Supervised Knowledge Distillation (TSKD). Specifically, we extract the spatiotemporal features from the different training phases of the student with a convolutional Long Short-Term Memory network (Conv-LSTM). We then train the student network against a dynamic target, rather than static teacher network features. This process refines the old knowledge in the student network and utilizes it to assist current learning. Extensive experiments verify the effectiveness and advantages of our method over existing knowledge distillation methods, across various network architectures and different tasks (image classification and object detection).
The concept of knowledge distillation (KD) was first proposed in (Hinton, Vinyals, and Dean 2015), where the student model achieves better performance by learning the output probability distributions of the teacher model (Fig. 2(a)). Existing distillation works can be divided into two categories: logits-based (Hinton, Vinyals, and Dean 2015; Zhao et al. 2022) and feature-based (Romero et al. 2014; Zagoruyko and Komodakis 2016a; Tian, Krishnan, and Isola 2019; Heo et al. 2019; Chen et al. 2021; Ji, Heo, and Park 2021). Since FitNets (Romero et al. 2014), research has mainly focused on distilling intermediate features, which contain plentiful spatial knowledge (Fig. 2(b)). However, although both kinds of methods have shown excellent performance, they tend to neglect the fact that student networks follow a different learning process on the same data due to their structural discrepancy with teachers. Using teacher outputs (either logits or features) as fixed learning goals may therefore not be the best choice for student convergence. Little research considers supervising the temporal learning process of students. In this paper, we propose to distill along the temporal dimension. Our motivation comes from the human learning process, where reviewing not only deepens our memory of old knowledge but, more importantly, inspires us to learn new knowledge. We believe that the learning process of neural networks also possesses a strong temporal relationship.

[Figure 1: The ARIMA analysis of the fully connected network. We record the output of a neuron in FC1 in the first 30 epochs, and use ARIMA to predict its output in the next 10 epochs. The results show that the prediction matches well with the real outputs.]
To verify this hypothesis, we conduct a time series prediction analysis on a fully connected network using the Autoregressive Integrated Moving Average model (ARIMA) (Box and Pierce 1970). Specifically, we train the network to fit a quadratic function and use ARIMA to model the intermediate outputs across epochs. As shown in Fig. 1, the fitted ARIMA model provides an approximate prediction of the real training process, which indicates the temporal sequence property of network learning. Based on this discovery, we raise two questions: (1) Can networks utilize old knowledge to assist current learning, as humans do? (2) Can we apply positive supervision to the temporal learning process of the network? To answer these two questions, we propose a novel knowledge distillation framework named Temporal Supervised Knowledge Distillation (TSKD). Different from existing feature-based distillation methods that distill knowledge from spatial features, we attempt to extract more knowledge along the temporal dimension (Fig. 2(c)). Firstly, we recast the student network training as a memorize-review mode, which imitates how humans learn and review knowledge. Moreover, we design a dynamic learning target built from teacher features to train the temporal feature extractor, which enables the teacher to guide the learning process of the student. A powerful student network can be obtained through the temporal supervision of the teacher network.

[Figure 2: Illustration of classical KD, feature-based KD, and our TSKD. Instead of distilling between corresponding teacher-student outputs, we aim to make the teacher supervise the temporal learning process of the student.]
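To make the ARIMA probe concrete, the sketch below records one hidden activation of a small fully connected network over the first 30 epochs, fits an ARIMA model to that series, and forecasts the next 10 epochs. This is a minimal illustration, not the authors' code: the network size, ARIMA order (2, 1, 1), learning rate, and data are our assumptions.

```python
# Minimal sketch of the ARIMA temporal-sequence probe (assumed settings).
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

torch.manual_seed(0)
x = torch.linspace(-1, 1, 128).unsqueeze(1)
y = x ** 2                                   # quadratic target function

net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
probe = x[:1]                                # fixed input whose FC1 output we track

history = []
for epoch in range(40):
    loss = nn.functional.mse_loss(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        history.append(net[0](probe)[0, 0].item())  # one neuron of the first layer

train_series, true_future = history[:30], history[30:]
forecast = ARIMA(np.array(train_series), order=(2, 1, 1)).fit().forecast(steps=10)
print(np.c_[true_future, forecast])          # forecast should roughly track reality
```

If the network's learning dynamics carry temporal structure, the forecast column should loosely follow the real activations, mirroring the behaviour reported in Fig. 1.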
Overall, our contributions are summarized as follows:
• We find that the knowledge in network training grows regularly over time; there exists exploitable information in the temporal dimension.
• We establish a new training paradigm by planning network training as a memorize-review mode. This makes it possible for the student network to review old knowledge and utilize it to assist current learning.
• We propose a novel knowledge distillation framework that supervises the student network along the temporal dimension. Specifically, we use a Conv-LSTM network to extract temporal features of the student's learning and train it with a dynamic target.
• We achieve competitive performance against representative feature-based distillation works on various network architectures and different computer vision tasks.
Related Work
The concept of knowledge distillation was first proposed by Hinton et al. in (Hinton, Vinyals, and Dean 2015). As an efficient manner of training smaller student networks under the guidance of bigger teacher networks, it has been applied to various downstream tasks (Li, Jin, and Yan 2017; Li et al. 2021).
Previous research mainly focused on matching the output distributions of the two models (Hinton, Vinyals, and Dean 2015; Mirzadeh et al. 2020; Cho and Hariharan 2019; Furlanello et al. 2018). As intermediate features contain more valuable information, FitNets (Romero et al. 2014) was proposed to transfer knowledge from teacher network features to student network features using the L2 distance as a constraint. Following FitNets, most research attention has been drawn to utilizing the knowledge within intermediate features, and feature-based methods have achieved state-of-the-art distillation performance. Representative works can be categorized into two branches: designing new transformations and loss functions (Zagoruyko and Komodakis 2016a; Tian, Krishnan, and Isola 2019; Heo et al. 2019), and optimizing the matching relationship between teacher and student feature candidates (Chen et al. 2021; Ji, Heo, and Park 2021). Most works distill between the corresponding teacher-student outputs directly. Though ReviewKD (Chen et al. 2021) proposed to perform "review" in the distillation, its main idea is to utilize multi-level information of the teacher to guide one level of the student, which realizes multi-scale spatial knowledge transfer. This paper instead focuses on supervising the student along the temporal dimension.
Method

Notations and definitions
Let S_i denote the student network at the i-th training epoch and T denote the teacher network. Given the same input data X, we denote the intermediate features of teacher layer t_l and student layer s_l as F_{t_l} and F^i_{s_l}, respectively. Previous works have mainly focused on transferring spatial knowledge from F_{t_l} to F^i_{s_l}, usually by reducing the distance between the two in a transformation space. The added loss term for S_i can be written as

$$L_{spatial} = D\left(Map(F_{t_l}),\, Map(F^{i}_{s_l})\right),$$

where Map(·) is the transformation that maps the feature map to a more representative space, and D(·) is the distance measurement function. Similarly, multi-layer distillation is written as

$$L_{spatial} = \sum_{l \in C} D\left(Map(F_{t_l}),\, Map(F^{i}_{s_l})\right),$$

where C stores the layers of features used to transfer knowledge. The loss function of feature-based KD is then

$$L = L_{task} + \lambda\, L_{spatial}.$$

[Figure 3: During training at a review node, the same data are input into the k memory networks and the current student network. After transforming their output feature maps into attention maps, knowledge sequences are obtained by connecting the increments between two adjacent states. Finally, L_temporal is calculated between the Conv-LSTM's prediction and the absolute increment. BP and FP denote back propagation and forward propagation, respectively.]
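As a concrete reference point, here is a minimal PyTorch sketch of this standard feature-based loss. The attention-style Map and the MSE distance are one common instantiation (matching the AT transform adopted later in this paper), not the only choice; the layer pairing and weight lam are assumptions.

```python
import torch
import torch.nn.functional as F

def attention_map(feat: torch.Tensor) -> torch.Tensor:
    """Map(.): flatten a (B, C, H, W) feature tensor to a normalized (B, H*W)
    attention vector by summing squares over the channel dimension."""
    a = feat.pow(2).sum(dim=1).flatten(1)
    return F.normalize(a, dim=1)

def spatial_kd_loss(teacher_feats, student_feats):
    """L_spatial over the layer set C (here: every paired layer), with an
    L2 distance standing in for D(.)."""
    return sum(
        F.mse_loss(attention_map(ft.detach()), attention_map(fs))
        for ft, fs in zip(teacher_feats, student_feats)
    )

# Total loss at one training step:
# loss = task_loss + lam * spatial_kd_loss(teacher_feats, student_feats)
```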
The core idea is to use teacher features as auxiliary optimization goals. Although L_spatial brings improvements to students, these methods neglect the progressive learning process. In our method, we exploit the temporal information in student learning and use the teacher model to guide the whole learning process. Specifically, we view the training of the student network as a temporal process and plan it as a memorize-review mode (shown in Fig. 4). Here we give some definitions to better explain our method; the re-planned training is shown more clearly in Algorithm 1, and a schematic sketch of the schedule follows these definitions.
Action 1 (Memorize): As training progresses, the network gradually converges. We want the network to memorize its current state at certain points for future review. This action is implemented by saving the current model.
Action 2 (Train): This action is the same as general data-based training and continues throughout the entire training process.
Action 3 (Review): Review the knowledge learned at the k previous memory nodes and utilize it to assist the current training. The implementation details are given in the following section.
Memory nodes: Perform Actions 1 and 2. The set of memory nodes is denoted by M.
General nodes: Only perform Action 2. The set of general nodes is denoted by G.
Review nodes: Perform Actions 2 and 3. The set of review nodes is denoted by R.
Memory interval: The number of general nodes between memory nodes, denoted by δ.
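The small sketch below assigns a node type to each epoch under assumed values of δ and k. The exact placement of review nodes relative to memory nodes is our illustrative convention, since the paper defers that detail to Algorithm 1.

```python
from enum import Enum

class Node(Enum):
    MEMORY = "memorize + train"
    GENERAL = "train only"
    REVIEW = "train + review"

def node_type(epoch: int, delta: int = 5, k: int = 3) -> Node:
    """Every (delta+1)-th epoch is a memory node; once k memory snapshots
    exist, the epoch after a memory node is treated as a review node
    (an assumed convention, not the paper's exact rule)."""
    period = delta + 1
    if epoch % period == 0:
        return Node.MEMORY
    if epoch % period == 1 and epoch // period >= k:
        return Node.REVIEW
    return Node.GENERAL

schedule = [(e, node_type(e).name) for e in range(24)]
print(schedule)
```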
Review Mechanism
Given input data X, the student model produces outputs at different layers, and these outputs contain plentiful knowledge that evolves as training progresses. For example, the network gradually focuses on the class-discriminative regions, which is called "attention" in previous works (Zagoruyko and Komodakis 2016a; Guo et al. 2023). Taking layer s_l as an example, the knowledge increment between two temporally adjacent models S_{i-1} and S_i can be represented as

$$\Delta^{i-1,i}_{s_l} = Map(F^{i}_{s_l}) - Map(F^{i-1}_{s_l}).$$

Intuitively, Δ^{i-1,i}_{s_l} represents the cognitive difference between S_i and S_{i-1} for the same input X.
When training reaches a review node S_t, t ∈ R, we calculate the increments among the k previous memory nodes and S_t, and connect them to obtain a length-k knowledge sequence. Writing the k previous memory nodes as m_1 < m_2 < ... < m_k,

$$seq = \left[\Delta^{m_1, m_2}_{s_l},\, \Delta^{m_2, m_3}_{s_l},\, \ldots,\, \Delta^{m_k, t}_{s_l}\right],$$

where δ denotes the memory interval; the value of δ indicates the number of general nodes between two memory nodes. As the knowledge sequence briefly summarizes the learning process over this period, we use a simple Conv-LSTM network to extract its temporal features and obtain a prediction:

$$\Delta^{pred}_{s_l} = \mathrm{ConvLSTM}(seq).$$

Specifically, the sequence can be seen as "what has been learnt" (old knowledge) in previous training, and the Conv-LSTM predicts "what to learn next" based on it. The remaining problem is how to train the Conv-LSTM network. Due to the complexity of neural networks and data distributions, it is hard to find an optimal solution for the whole learning process of the student. However, we already have a well-trained teacher network whose outputs can serve as a strong outline. Thus, we design the following distillation mechanism: we calculate the increment between S_t and T and use it as the Conv-LSTM's target:

$$\Delta^{abs}_{s_l, t_l} = Map(F_{t_l}) - Map(F^{t}_{s_l}).$$

We call Δ^{abs}_{s_l, t_l} the absolute increment because it implies "what needs to be learnt". More importantly, Δ^{abs}_{s_l, t_l} is a dynamic learning goal that changes as the student outputs gradually approximate the teacher outputs during training, which is better suited to training in a memorize-review cycle.
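A minimal sketch of one review step follows, assuming attention maps of shape (B, 1, H, W), a single-layer ConvLSTM, and MSE standing in for the distance D. The module sizes and the 1x1 projection head are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell (after Shi et al. 2015): the LSTM gates are
    computed with a convolution so hidden states keep their spatial layout."""
    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def temporal_loss(memory_maps, current_map, teacher_map, cell, head):
    """memory_maps: attention maps (B, 1, H, W) saved at the k memory nodes,
    oldest first; current_map / teacher_map: maps of the current student and
    the teacher. Builds the knowledge sequence of increments, rolls it through
    the ConvLSTM, and compares the predicted increment with the absolute
    increment Map(F_t) - Map(F_s^t)."""
    states = [m.detach() for m in memory_maps] + [current_map]
    increments = [b - a for a, b in zip(states[:-1], states[1:])]
    B, _, H, W = current_map.shape
    h = current_map.new_zeros(B, cell.hid_ch, H, W)
    c = current_map.new_zeros(B, cell.hid_ch, H, W)
    for step in increments:
        h, c = cell(step, h, c)
    pred = head(h)                                  # predicted next increment
    target = teacher_map.detach() - current_map     # dynamic absolute increment
    return F.mse_loss(pred, target)

# Example wiring (shapes are assumptions):
# cell = ConvLSTMCell(in_ch=1, hid_ch=16)
# head = nn.Conv2d(16, 1, kernel_size=1)   # projects hidden state to a map
```

Because current_map appears in both the sequence and the target, minimizing this loss sends gradients to the student as well as to the Conv-LSTM, consistent with the chain-rule coupling described below.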
In conclusion, for the current review node, the temporal loss can be calculated as

$$L_{temporal} = D\left(\Delta^{pred}_{s_l},\, \Delta^{abs}_{s_l, t_l}\right).$$

Similarly, this loss can also be written in a multi-layer style:

$$L_{temporal} = \sum_{l \in C} D\left(\Delta^{pred}_{s_l},\, \Delta^{abs}_{s_l, t_l}\right),$$

where C stores the layers of features on which to perform review. The update of the Conv-LSTM also brings gradients to the student through the chain rule, since the knowledge sequence depends on the current student features:

$$\frac{\partial L_{temporal}}{\partial \theta_{S_t}} = \frac{\partial L_{temporal}}{\partial F^{t}_{s_l}} \cdot \frac{\partial F^{t}_{s_l}}{\partial \theta_{S_t}},$$

where θ_{S_t} and θ_{lstm} denote the parameters of the student and the Conv-LSTM, respectively. The overall loss function of the student network at a review node is

$$L = L_{task} + \lambda\, L_{temporal}.$$

To provide a better understanding of our method, we give a detailed description in Fig. 3 and Algorithm 1, whose main loop can be summarized as follows:

Algorithm 1 (training with memorize-review):
  while training, for each batch B:
    if current node ∈ M: freeze θ_lstm; perform general training on B and update θ_S with L_task; memorize the current state.
    else if current node ∈ G: freeze θ_lstm; perform general training on B and update θ_S with L_task.
    else if current node ∈ R: unfreeze θ_lstm; forward-propagate B through θ_T and θ_S to obtain F_{t_l} and F_{s_l} across layers; construct the knowledge sequence seq and forward it through θ_lstm to obtain Δ^{pred}_{s_l}; calculate L_temporal; update θ_lstm with L_temporal, and update θ_S with L_task and L_temporal.

Map function. We choose Attention Transfer (AT(·)) as the map function in our method, which utilizes the insight of (Zagoruyko and Komodakis 2016a). It is a transformation that maps a 3D feature tensor F ∈ R^{C×H×W} to a 2D attention map F_sum ∈ R^{H×W}. In our method, F is flattened by summing the squares along the channel dimension, which can be denoted as

$$F_{sum} = \sum_{c=1}^{C} F_{c}^{2}.$$

Although various transformation functions have been proposed (Tian, Krishnan, and Isola 2019; Heo et al. 2019) to map feature maps to a more knowledge-transferable space, we choose AT(·) because it is simple and intuitive. More importantly, it does not disrupt the temporal pattern contained in the original feature maps, and the distribution of values in F_sum clearly reflects the spatial attention of the network.

Temporal feature extractor. In the proposed method, KD is reformulated as a time series prediction problem. Since ARIMA (Box and Pierce 1970) cannot handle high-dimensional data and a transformer (Vaswani et al. 2017) may incur higher computational cost, we choose Conv-LSTM (Shi et al. 2015) as the temporal feature extractor. Conv-LSTM is a variant of the general LSTM that replaces the element-wise operations in its computation with convolutional operations, so it can handle spatiotemporal data. Detailed information on the Conv-LSTM designed for TSKD is provided in the supplement due to the page limit.
Experiments
Hyperparameters. The loss weight λ, the number of memory nodes k, and the memory interval δ are important hyperparameters in the review stage. For image classification, we set λ = 1, k = 3, and δ = 5. For object detection, we set λ = 0.4, k = 3, and δ = 1. The influence of different settings for these hyperparameters is explored further in the ablation studies.
Datasets. (1) CIFAR-100 contains 50,000 training images and 10,000 validation images across 100 classes.
(2) ImageNet (Deng et al. 2009) is considered the most challenging dataset for classification, offering 1.2 million images for training and 50,000 images for validation across 1,000 classes.
(3) MS-COCO (Lin et al. 2014) is an 80-category general object detection dataset. The train2017 and val2017 splits contain 118k and 5k images, respectively.
Implementation Details. Our implementation for CIFAR-100 and ImageNet strictly follows (Chen et al. 2021; Zhao et al. 2022). Specifically, for CIFAR-100, we train all models for 240 epochs with batch size 128 using SGD, with an initial learning rate of 0.1 decayed by 0.1 every 30 epochs after the first 150 epochs. For ImageNet, we adopt the standard training process: 100 epochs with batch size 256, decaying the learning rate every 30 epochs (initial learning rate 0.1). For MS-COCO object detection, we take the most popular open-source framework, Detectron2 (https://github.com/facebookresearch/detectron2), as our strong baseline. We train the student models using the standard training policy, following tradition (Wang et al. 2019).
For fairness, the results of previous methods are either taken from the original papers or obtained by running author-released code with our training settings.
Main Results
CIFAR-100 classification. Table 1 presents the results on CIFAR-100, where previous works are categorized into different groups based on their main idea. In contrast, our method utilizes the spatiotemporal features extracted from the training process. Our method achieves 3∼5% improvements across different teacher-student pairs, which strongly demonstrates the effectiveness of TSKD. ImageNet classification. We also conduct additional experiments on ImageNet to further validate our approach. Specifically, we experiment with two distillation settings: from ResNet50 to MobileNet and from ResNet34 to ResNet18. The experimental results are reported in Table 2 and Table 3, where our method achieves competitive results. In the setting from ResNet34 to ResNet18, the gap between the student and the teacher had already been reduced to a very small value of 1.61 by the previous best method. Nevertheless, we are able to further reduce this gap to 1.41, a 12% relative performance improvement. MS-COCO object detection. In addition to classification, we also apply our method to the object detection task. For this task, we distill the output features of the teacher's and student's backbones, following a procedure similar to that of the classification task. We use the best pre-trained models provided by Detectron2 as teachers.
However, we find that the number of epochs used to train a classical detection network is relatively small compared with classification networks (which usually train for hundreds of epochs). This makes it difficult to deploy the memorize-review pipeline, and applying TSKD alone can hardly achieve outstanding performance. Thus, we introduce ReviewKD (Chen et al. 2021) as our strong baseline to obtain satisfactory results. It can be observed that our TSKD brings a further boost to the AP metrics, even though the performance of ReviewKD is already relatively high.

[Table 3: Top-1 and top-5 accuracy (%) on the ImageNet validation set, with ResNet-34 as the teacher and ResNet-18 as the student.]
Ablation Studies
Feature maps as knowledge sequence. In our distillation method, we extract spatiotemporal features from the increment sequence rather than the feature-map sequence. We choose the increment because it filters out irrelevant information in the feature maps: the network pays more attention to the new content in progressive learning. The experiments show that using feature maps themselves as sequences also brings improvements, but the increment groups perform better (Table 5). In fact, first-order differencing is a common way to process raw data in classical time series analysis, as it enhances the temporal property of the sequence, and the experimental results are consistent with this theory.
Effects of memory interval. The memory interval δ determines how many epochs of general training are performed between memory nodes. When δ is relatively large, the learning period recorded in the knowledge increment sequence becomes longer, which may make the review more difficult. Different settings of δ are explored in Table 6. Effects of memory nodes. To investigate how many memory nodes are appropriate in one review, we compare different settings of k in Table 7. Obviously, the knowledge sequence becomes longer as the number of memory nodes increases; however, given the same training time, the frequency of review decreases accordingly. On the other hand, too few memory nodes can make the temporal property of the sequence insufficiently clear. The R56-R20 experiment shows the highest accuracy when k = 6.
Extensions
Visualizations. We visualize the deep features of the student network WRN-16-2 (distilled from WRN-40-2 on CIFAR-100). The t-SNE results (Fig. 5) show that representations learned with TSKD are more separable than those from general KD, which indicates that the student trained by TSKD benefits from more discriminative features. Efficiency. We compare the training cost of the proposed TSKD with state-of-the-art feature-based distillation methods. Specifically, we record the average training time per batch over a whole memorize-review cycle. Different from existing methods, which perform distillation throughout the entire training process, our TSKD only needs to distill at review nodes; therefore, the computational cost of training student networks is reduced. As reported in Fig. 5, our method achieves a better trade-off between training time and accuracy. Transferability. We evaluate the generalizability of the distilled student network. A primary goal of representation learning is to acquire general knowledge that is useful not only on the current dataset but also on datasets and tasks from other domains. Therefore, we test whether the knowledge distilled by TSKD transfers well. In this experiment, we use WRN-16-2 as a frozen representation extractor, either trained from scratch on CIFAR-100 or distilled from WRN-40-2 with various KD methods, and then perform linear probing on TinyImageNet. As reported in Table 8, TSKD outperforms other methods by a clear margin, demonstrating its strong generalizability.
Conclusion
This paper addresses the problem of knowledge distillation from a novel perspective. We find that there exists a temporal pattern in the evolution of network knowledge. Motivated by this observation, we propose Temporal Supervised Knowledge Distillation (TSKD) to answer two questions: (1) how to utilize old knowledge to assist current learning, and (2) how to supervise the temporal learning process of the network. The student trained by TSKD achieves significant improvements on the CIFAR-100, ImageNet, and MS-COCO datasets for image classification and object detection tasks. Besides, TSKD also shows superiority in training efficiency and knowledge transferability. We hope our work will aid future research on knowledge distillation and interpretable deep learning.
Single Vs double implants in ipsilateral fracture shaft with neck of femur fracture: A comparative study
Combined ipsilateral femoral neck and shaft fractures are uncommon and challenging injuries to manage. The treatment of these fractures is demanding and associated with a high rate of complications, and choosing the right implant is necessary to obtain optimal results with minimal complications. Our aim was to evaluate the functional, radiological, and anatomical outcomes of these cases treated by osteosynthesis with either a single implant or an individual implant for each fracture. A total of 20 patients with ipsilateral femoral neck and shaft fractures were included in our study. Patients were divided into a single-implant group (Group I; 10 patients) and a double-implant group (Group II; 10 patients). All patients were followed up prospectively for two years. Fracture union was confirmed radiologically, and functional evaluation was done as per the Harris Hip Score. 70% of both groups achieved successful fracture union, with the remaining 30% having nonunion, malunion, or necrosis of the femoral head, but with no statistically significant difference between the two groups. Upon comparing the single-implant and double-implant methods, nearly similar clinical and radiological results were obtained. However, it is difficult to draw a definite conclusion as the number of cases was relatively small; a study with a larger population can probably give a definite conclusion.
Introduction
Although combined ipsilateral femoral neck and shaft fractures are a relatively uncommon injury pattern, it is critical to recognize the presence of an associated ipsilateral femoral neck fracture occurring in conjunction with the more obvious femoral shaft fracture. Associated ipsilateral femoral neck fractures have been reported to occur in 1% to 9% of femoral shaft fractures [1]. These are challenging injuries to manage and often require modification of the routine approach to shaft fracture treatment. Failure to recognize an associated ipsilateral femoral neck fracture may result in fracture displacement, delayed treatment, and a poorer outcome [2]. The injury mechanism is commonly an axially directed force against the distal femur with the hip and knee flexed, such as a motor vehicle accident in which the knee strikes the dashboard. It has been postulated that the femoral shaft absorbs the majority of the injury energy [3], as demonstrated by shaft comminution, decreasing the amount of force transmitted across the neck. Most surgeons agree that treatment of the femoral neck should take priority, because this is critical to the patient's long-term outcome. Although numerous options exist for the subsequent management of a femoral neck nonunion, the complications of osteonecrosis of the femoral head and nonunion of the femoral neck are more difficult to manage. Controversy exists about whether this combined injury pattern should be treated with a single implant or with separate implants. Low-level evidence from case series suggests that separate femoral neck and shaft implants may result in fewer reoperations [4]. Treatment options for ipsilateral femoral neck and shaft fractures include: a reconstruction nail; an antegrade nail with separate screws adjacent to the nail; femoral neck screws combined with a retrograde femoral nail; a sliding hip screw with a retrograde femoral nail; femoral neck screws with plate fixation of the shaft; and a sliding hip screw with a cephalomedullary reconstruction nail. Each method has its own advantages and disadvantages. Three major issues in the management of these fractures are the optimal timing of surgery, which fracture to address first, and the optimal implant to use [5].
The rate of avascular necrosis of the femoral head in ipsilateral femoral neck and shaft fractures is lower than that seen with isolated femoral neck fractures; the reported incidence in various series has ranged from 1.2% to 5%, with the highest rate reported in patients treated with reconstruction nailing. Nonunion of both the femoral neck and the femoral shaft can occur. A short delay of 5-6 days in stabilizing femoral neck and shaft fractures does not seem to affect the ultimate functional outcome [6].
Materials and Methods
This prospective study was conducted at our institution over a period of two years (2018-2020), with an average follow-up period of one year (10-24 months). Informed consent from patients and departmental permission were obtained according to hospital guidelines. The study population was 20 patients, randomly allocated to two groups: group I (single implant) and group II (double implant). Every patient signed informed written consent for the operation. Preoperatively, all patients were evaluated carefully, including a detailed history and clinical and radiological examination. Radiological assessment was done with an AP X-ray of the pelvis with both hips, and AP and lateral views of the thigh including the hip and knee. All surgical procedures were performed under spinal anesthesia. In the first group, a reconstruction nail was introduced for fixation of both the neck and the shaft of the femur after placing the patient on a fracture table and preparing the appropriate-sized nail under C-arm image intensification [7]. In the second group, for non-displaced femoral neck fractures (6 patients), fixation of the femoral neck fracture was done first, followed by fixation of the shaft fracture; for displaced femoral neck fractures (4 patients), fixation of the femoral shaft fracture was done first, followed by fixation of the femoral neck fracture. Femoral neck fixation was performed according to the degree of displacement and the anatomical location of the femoral neck fracture [15-17]. Postoperatively, all patients were followed up clinically and radiologically at regular intervals: monthly for 3 months, then every 3 months. Functional outcome was assessed using the Harris Hip Score [8].
Results
In our study, 20 patients with fractures of the shaft and ipsilateral neck of the femur were evaluated. The mean age was 32.2 ± 7.92 years in group I and 35.5 ± 8.58 years in group II. The majority of patients were male (17:3), and most injuries resulted from road traffic accidents (RTA). We performed proximal femoral nailing in 10 patients, DHS with retrograde nailing in 4 patients, and cannulated cancellous (CC) screws with retrograde nailing in 6 patients. In group I, the average operation time was 80 min and the mean follow-up period was 18 months. All femoral neck and shaft fractures except one neck fracture united, with an average union time of 4.3 ± 0.95 months. There were good to excellent results in 7 (70%) cases and poor results in 1 (10%) case. Avascular necrosis of the femoral head developed in one case, which required revision surgery. One case developed a superficial infection, which was treated with dressings and appropriate antibiotics. Two cases developed coxa vara malunion, but the patients remained asymptomatic. In group II, the average operation time was 110 min and the mean follow-up period was 18 months. All femoral neck and shaft fractures except two united, with an average union time of 4.9 ± 0.99 months. There were good to excellent results in 8 (80%) cases and poor results in 1 (10%) case. Three cases developed infection: two were superficial and treated with dressings and antibiotics, and one required debridement.
Discussion
Ipsilateral femoral neck and shaft fractures are challenging, and many methods have been recommended for their management [3-6]. Although a biomechanical study and some clinical investigations have shown no significant differences between the various methods of fixation [7], debate continues about the best method of internal fixation for these fractures. Femoral neck fractures are missed at initial diagnosis in up to 30% of cases; hence, a thorough radiological evaluation of the pelvis with both hips should be done in all femoral shaft fracture cases. The majority of patients in the present series were young males with high-energy trauma, as also reported in the literature. Emergency fixation of the fractured neck of the femur in this combined injury pattern, unlike in isolated femoral neck fractures, may be unnecessary [2]. Though there is debate regarding which fracture should be managed first, there appears to be a general consensus regarding the seriousness of the complications involving femoral neck fractures. Hung et al. [8] reported that the order of fixation of the fractures may not be very important. We stabilized femoral neck fractures first in patients operated with double implants. This protocol is satisfactory in patients with undisplaced neck fractures, as further displacement of the neck fracture is prevented. There is still no consensus on the optimal treatment method for these complex fractures. In a meta-analysis of reports published in the literature, locked intramedullary nails or reconstruction nails yielded results superior to double implants [9]. A cephalomedullary nail is advantageous in that it allows closed antegrade nailing through a minimal incision, with reduced blood loss and a decreased chance of infection. The dual-implant approach was associated with more frequent infections and nonunion, while nail fixations were complicated by rotational malalignment and shortening [10]. However, in the present series the difference between the two treatment methods with respect to union, complications, and functional outcome was not significant. The average time to femoral neck and shaft union in the present series was consistent with that reported in other series [9]. Watson [11] noted that the use of an IM nail for this fracture pattern was "demanding" and that technical errors with this implant lead to fracture complications. Fixation with plates for the shaft and screws or a DHS for the hip is easy from a technical perspective. Cephalomedullary nailing is technically more demanding and challenging in completely displaced neck fractures. However, in most cases the neck fracture is minimally displaced, making antegrade nailing easier. Fixation of both fractures with two implants is relatively easy from a technical point of view. In our view, both treatment modalities give satisfactory results; in displaced neck fractures, it may be better to use a separate implant for each fracture.
Conclusion
Although combined ipsilateral femoral neck and shaft fractures are uncommon, it is essential to carefully evaluate the femoral neck in all patients sustaining a high-energy femoral shaft fracture. The goal of any treatment plan should be anatomic reduction of the neck fracture and stable fixation of both fractures, so that the patient can be mobilized early. Both treatment methods used in the present study achieved satisfactory functional outcomes in these complex fractures, while each has its own merits and demerits. Although both methods gave satisfactory results in the present study, it is difficult to draw a definite conclusion given the small sample size and short-term follow-up. A large multicentric study is required to determine which approach yields better functional outcomes.
Conflict of interest: None declared
Ethical approval: The study was approved by the institutional ethics committee.
Predictions of Cleavability of Calpain Proteolysis by Quantitative Structure-Activity Relationship Analysis Using Newly Determined Cleavage Sites and Catalytic Efficiencies of an Oligopeptide Array*
Calpains are intracellular Ca2+-regulated cysteine proteases that are essential for various cellular functions. Mammalian conventional calpains (calpain-1 and calpain-2) modulate the structure and function of their substrates by limited proteolysis. Thus, it is critically important to determine the site(s) in proteins at which calpains cleave. However, the calpains' substrate specificity remains unclear, because the amino acid (aa) sequences around their cleavage sites are very diverse. To clarify calpains' substrate specificities, 84 20-mer oligopeptides, corresponding to P10-P10′ of reported cleavage site sequences, were proteolyzed by calpains, and the catalytic efficiencies (kcat/Km) were globally determined by LC/MS. This analysis revealed 483 cleavage site sequences, including 360 novel ones. The kcat/Km values for 119 sites ranged from 12.5 to 1,710 M−1s−1. Although most sites were cleaved by both calpain-1 and calpain-2 with a similar kcat/Km, sequence comparisons revealed distinct aa preferences at P9-P7/P2/P5′. The aa compositions of the novel sites were not statistically different from those of previously reported sites as a whole, suggesting that calpains have a strict implicit rule for sequence specificity, and that the limited proteolysis of intact substrates is due to the substrates' higher-order structures. Cleavage position frequencies indicated that longer sequences N-terminal to the cleavage site (P-sites) were preferred for proteolysis over C-terminal ones (P′-sites). Quantitative structure-activity relationship (QSAR) analyses using partial least-squares regression and >1,300 aa descriptors achieved kcat/Km prediction with r = 0.834, and binary-QSAR modeling attained an 87.5% positive prediction value for 132 reported calpain cleavage sites independent of our model construction. These results outperformed previous calpain cleavage predictors, and revealed the importance of the P2, P3′, and P4′ sites, and P1-P2 cooperativity. Furthermore, using our binary-QSAR model, novel cleavage sites in myoglobin were identified, verifying our predictor. This study increases our understanding of calpain substrate specificities, and opens calpains to "next-generation," i.e., activity-related quantitative and cooperativity-dependent analyses.
Calpain-1 (C1) and calpain-2 (C2) are called the "conventional" calpains (in this paper, "calpains" refers to the conventional calpains unless otherwise indicated). C1 and C2 each form a heterodimer composed of a larger (~80 kDa) catalytic subunit (CAPN1 or CAPN2) and a common smaller (~28 kDa) regulatory subunit (CAPNS1). Because CAPN1 and CAPN2 share more than 60% aa sequence identity, C1 and C2 show highly similar, if not identical, substrate specificities (1, 4-6). They generally function by limited proteolysis, cleaving a few peptide bonds in their substrate protein, which changes the protein's function and/or structure to modulate cellular functions; thus, calpains are called "modulator proteases." To understand the calpains' physiological functions, it is essential to clarify their substrate specificity/selectivity, i.e., which proteins calpains proteolytically process and at which position(s).
There have been many attempts to define calpains' substrate specificities. The initial studies, focusing on whether specific proteins are proteolyzed or not (6-9), were followed by more detailed studies using substrate cleavage-site amino acid (aa) sequence alignment and a position-specific scoring matrix (PSSM) method (10-12). Next, peptide libraries were used (13, 14). For example, Cuerrier and colleagues used a peptide sequencing method to quantitatively determine calpains' preference for each aa residue (aar) at each position relative to the cleavage site (13), and developed a sensitive oligopeptidyl fluorescence substrate, H-E(EDANS)PLFAERK(DABCYL)-OH. More recently, machine-learning methods have been applied to the construction of calpain cleavage predictors (15-20).
However, PSSM-based and machine-learning methods have so far yielded rather limited accuracy in predicting calpain cleavage sites. This is because, unlike with caspases and granzymes (19), there appears to be no explicit rule for calpain specificity, and the number of known aa sequences for calpain cleavage sites was rather small (< 200 before this study). Furthermore, the cleavage efficiency of most reported calpain cleavage sites is unknown, and cleavage patterns change depending on the reaction conditions.
Notably, the most important question in identifying cleavage specificity is not whether a protein is cleaved. Technically, all peptide bonds can be cleaved by calpains (or any protease) with some efficiency, i.e., kcat/Km > 0, which depends on the cleavage conditions. In other words, the apparent "cleavability" of a bond is defined by the threshold kcat/Km determined by both the proteolytic conditions and the detection sensitivity. Therefore, the ultimate cleavage predictor should predict a kcat/Km value for each peptide bond within a given protein sequence under given cleavage conditions.
To address the above points, here we sought to identify calpain cleavage-site sequences through literature searches and by performing in vitro digestion of a concentrated, synthesized oligopeptide library. Using the identified cleavage-site sequences, we performed quantitative structure-activity relationship (QSAR) analyses, which revealed the important P- and P′-site positions (the positions N- and C-terminal to the cleavage site, respectively) on which to focus. Although the reaction conditions used in this study differed slightly from those used in typical calpain kinetics studies, several verification analyses confirmed that our results successfully elucidated the calpains' substrate specificity.
EXPERIMENTAL PROCEDURES
Peptides and Calpains-From 116 reports, 147 calpain substrates and their 420 cleavage-site sequences (after excluding two overlapping sequences from a total of 422) were collected (supplemental Table S1). The substrate proteins were numbered SB0001 to SB0150 (substrates reported multiple times under different conditions were assigned different SB numbers; see supplemental Table S1), among which SB0001-SB0090 were already reported in our previous paper (15). Next, a database, CaMP DB (Calpain for modulatory proteolysis database (21), http://www.calpain.org/), was constructed from the collected information, including all the cleavage sites, secondary structures, and references.
From the above collected site sequences, 86 were selected according to their position in the substrate protein (to have 10 or more P- and P′-site aars) and aa composition (to be not too hydrophobic), and the 20 aars surrounding each reported calpain cleavage site (10 on each side) were selected for oligopeptide sequence preparation (with several exceptions; see supplemental Table S2). Eight of these 86 sequences (ID031, 34, 36, 37, 55, 72, 73, and 84) were randomly selected, scrambled, and used as control peptides (ID087-94) (supplemental Table S2).
In preliminary experiments, most of the peptides were detected as either or both of the following: (1) uncleaved peptides (i.e., with both the N- and C-termini capped with Ac and DKP, respectively [both-capped, BC]) that were synthesized correctly and/or in truncated form; (2) fragments cleaved at previously reported (Rp) sites and/or at sites not previously reported (i.e., novel, Nv). The time course of the signals indicated that the optimal reaction time for most of the peptides was between 10 and 20 min (data not shown); thus, the reaction time was set to 15 min for subsequent experiments. To maximize the number of cleaved peptides, the peptide concentration was increased to 1.7 mM (20 μM each) in the reaction mixture. After testing several combinations of peptides and calpains, we decided to use 0.3-1.7 mM (3.3-20 μM each) peptides and 2.5 μM calpains in the following kinetics study. The ratio of calpain to each peptide was high compared with typical calpain proteolysis experiments. The most likely reason for the high calpain requirement is that calpain activity was inhibited by impurities derived from the peptide synthesis process and by the high ionic strength of the reaction mixture, which was due to the excess buffer needed to neutralize the acetic acid present in the peptide solvents. Although these assay conditions may not have been optimal for peptides with the highest and lowest kcat/Km values, they appeared to be appropriate for most of the peptides (see supplemental Fig. S1).
Among the clearly detected proteolytic fragments obtained by cleavage at Rp sites, oligopeptides corresponding to 78 C-terminal and 26 N-terminal fragments were newly synthesized with C-terminal DKP or N-terminal Ac modification, respectively, as described above (supplemental Table S3, ID0XX-Rp-C or -N series). Peptides corresponding to 39 C-terminal and 15 N-terminal fragments obtained by cleavage at Nv sites were also synthesized (supplemental Table S3, ID0XX-Nv series). These 158 peptides (named "P158mix") were used to quantify the generated calpain-cleaved peptides in the following kinetics experiments.
Peptide Proteolysis and MS Analysis-P87mix (for final concentrations, see Table I) in 100 mM HEPES (pH 8.5) and 1 mM TCEP was denatured at 60°C for 1 h, and digested with 2.5 μM C1 or C2 in the presence of 1 mM or 5 mM CaCl2, respectively, at 30°C for 15 min in a 20-μl volume (see Fig. 1 for an overview of the experiments). As a standard for quantification of the cleaved peptides, P158mix (each peptide at 5 μM) was incubated under the same conditions without calpains. After the reaction, TCEP, SDS, triethylammonium bicarbonate, and three control peptides for iTRAQ standardization (C001: NH2-EFILRVFSEKRNL-COOH, Mr 1,649.93; C002: NH2-DFCIRVFSEKKAD-COOH, Mr 1,556.77; C003: NH2-DFVLRFFSEKSAG-COOH, Mr 1,501.76) were added to final concentrations of 4.36 mM, 0.0952%, 167 mM, and 0.5 μM each, respectively, and the mixture was denatured at 60°C for 1 h.
Next, methyl methanethiosulfonate was added to a concentration of 8.33 mM; the reaction mixture was then incubated at room temperature for 10 min, and labeled with the iTRAQ 8-plex labeling kit (Sciex), according to the manufacturer's instructions (Table I). The resulting reaction mixture was subjected to 2D-LC-MALDI MS as described above. The same sample was also analyzed by 2D-LC/MS using the DiNa 2D nLC system and a Sciex QSTAR Elite with NanoSpray ESI. MS and MS/MS spectra were acquired with Analyst QS Ver. 2.0 software (Sciex), using the standard parameters recommended by the manufacturer. Peptides were identified using ProteinPilot Ver. 4.5 with the following Paragon parameters: Sample Type: iTRAQ 8plex (Peptide Labeled); Cys Alkylation: MMTS; Digestion: None; Instrument: QSTAR Elite ESI or 4800; Special Factors: "N-Ac and C-DKP" or "N-Ac and C-DKP, cleavable" (see below); Species: None. "N-Ac and C-DKP" and "N-Ac and C-DKP, cleavable" were added by describing them in the ParameterTranslation.xml and ProteinPilot.DataDictionary.xml files of the ProteinPilot software (see supplemental Experimental Procedures for the description). The database was constructed as described below. A global false discovery rate (FDR) below 5% (normal condition) or 1% (stringent condition) was used to define significant data. Identified peptides were exported as PeptideSummary.txt for further data processing in Microsoft Excel Ver. 2010. Peptide structures and their proteolytic sites were assigned according to whether Ac and/or DKP was present (see supplemental Experimental Procedures).
First, the C-terminal 20 aars were selected from the proteome database entries that had 20 or more aars, resulting in 90,858 entries (1,817,160 aars). Among these entries, those similar to Core DB entries when reversed, i.e., entries whose reverse sequence contained a four-aa block included among the Core DB sequences, were eliminated, to construct "Hs50K DB" (50,330 entries, 1,006,600 aars). Next, forward sequences containing a four-aa block included in the Core DB were also eliminated, reducing the number of entries to 30,317. From the remaining entries, 4,000 were randomly selected, resulting in "Hs4K DB" (4,000 entries, 800,000 aars). "Core DB + Hs50K DB with FDR < 1%" and "Core DB + Hs4K DB with FDR < 5%" were used as the "stringent" and the "normal" condition, respectively. In this study, the reported results were obtained under the normal condition, because both conditions gave essentially the same results (see supplemental Fig. S5C). Kinetics-A kcat/Km value for each cleavage was calculated using Lineweaver-Burk and Eadie-Hofstee plots. A comparison of the results revealed that the former gave much better estimations than the latter (data not shown), so the Lineweaver-Burk method was used. For cleaved fragments, v0 was calculated as In/I121 × 5 × 10−6 M / 900 s, where I121 was the iTRAQ signal intensity (standardized against those of the control peptides) of iTRAQ-121, which corresponded to 5 μM standard fragment peptides. In general, calculations using the full-length values showed considerably larger variance than those obtained using the fragments. This may have been due to the somewhat high variance among the iTRAQ signals and their narrow dynamic range, as well as to unknown reasons. As verified in supplemental Fig. S1, kcat/Km values could be calculated with moderate errors, and the amounts of full-length peptides remaining after the reaction were smoothly distributed, supporting the appropriateness of the reaction time (15 min) used in this study. For the rationale for calculating kcat/Km, see supplemental Experimental Procedures.

[Figure 1 (overview of the experiments): Calpain cleavage-site sequences were collected from the literature (supplemental Table S1) and summarized in the CaMP (Cleavage site sequences from Calpain for Modulatory Proteolysis) database (DB) web site (A). Next, 86 sequences corresponding to P10-P10′ of some of the above cleavage sites and 8 control scrambled sequences were selected for oligopeptide synthesis (P94mix), with the N- and C-termini capped by acetyl and DKP modifications, respectively (B). Shorter reference peptides corresponding to segments created by calpain cleavage were also synthesized (P158mix). Next, varying amounts of P87mix (7 peptides were excluded from P94mix because of insolubility and other reasons) were incubated with or without C1 or C2 at 30°C for 15 min (C). After digestion, peptide solutions were labeled with iTRAQ reagents (D), and peptides that were cleaved or uncleaved (i.e., with both termini capped) were identified and quantified by liquid chromatography combined with MS (E). Finally, the v0 (initial velocity of the cleavage reaction) values were calculated from the iTRAQ signals, and 1/v0 was plotted against 1/[S] (where [S] is the substrate concentration) to determine the kcat/Km value for each cleavage (F). The identified peptide sequence was compared with the originally synthesized peptide sequence to determine the calpain proteolytic site (G) associated with the determined kcat/Km.]
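For illustration, a minimal numpy sketch of the Lineweaver-Burk estimation described above follows. The substrate concentrations and initial velocities are invented numbers, not data from this study; only the algebra is fixed: 1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax, so the slope of the double-reciprocal plot is Km/Vmax and kcat/Km = Vmax/(Km·[E]) = 1/(slope·[E]).

```python
# Sketch of kcat/Km estimation from a Lineweaver-Burk plot (assumed data).
import numpy as np

E = 2.5e-6                                        # calpain concentration, M (from the text)
S = np.array([3.3e-6, 6.7e-6, 10e-6, 20e-6])      # substrate concentrations, M (assumed)
v0 = np.array([1.1e-9, 2.1e-9, 3.0e-9, 5.5e-9])   # initial velocities, M/s (assumed)

slope, intercept = np.polyfit(1 / S, 1 / v0, 1)   # linear fit of the double-reciprocal plot
kcat_over_Km = 1 / (slope * E)                    # M^-1 s^-1
print(f"kcat/Km = {kcat_over_Km:.1f} M^-1 s^-1")
```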
Determination of Cleavage Sites by N-terminal Sequencing and MS/MS Analysis-Human heart troponin T2 (Merck 648484-100UGA, ca. 30 pmol) and horse myoglobin (Sigma-Aldrich, M0630, ca. 60 pmol) were digested with C1 (Merck Millipore #208712, 0.9 pmol) in 50 μl of 100 mM Tris-HCl (pH 7.5), 1 mM DTT, and 5 mM CaCl2 at 30°C for 20 min. The digested samples were directly separated by SDS-PAGE; the proteolyzed fragments were then blotted onto a PVDF membrane and subjected to peptide sequencing analysis (AproScience Inc., Tokushima, Japan). For sequence analysis by MS, the same digestion reactions were performed, terminated by adding a 3-fold volume of 7% TCA followed by incubation on ice for 30 min, and spun (20,000 × g, 2°C, 10 min), and the supernatant was collected. An aliquot of the soluble fraction was desalted and concentrated to a few μl using ZipTip C-18, and analyzed on a Sciex 5600+ with the Eksigent nanoLC system. The samples were analyzed in triplicate, the data were merged, and the peptide sequences were identified using ProteinPilot (Ver. 4.5) and the Swiss-Prot DB (2015_08; 549,008 sequences; 195,692,017 aars) using the default parameters.
Determination of Cleavability of Synthetic Peptides by nLC-Peptides [tp1: Ac-QHLCGSHLVEALYLVCGERG (corresponding to ID014: INS); tp2: LEGNLYGSLFSVPSSKLLGN (ID040: GRIN2A); and tp3: GGGGYSASLHSEPPVYANLS (ID048: JUN)] for nLC analysis were synthesized and purified by Toray Research Center Inc. (Tokyo, Japan) with >98% purity (determined by the manufacturer from the ratio of peak areas in HPLC), and were dissolved in distilled water. Each peptide (initial concentration: 6.7-20 μM) was incubated with 1 pmol of either C1 (Merck Millipore #208712) or C2 in 50 μl of 50 mM HEPES (pH 7.5), 1 mM TCEP, and 1 or 5 mM CaCl2 at 30°C for 20 min. The digested sample was directly separated by DiNa nanoLC and monitored with a UV spectroscope MU701 (GL Sciences, Tokyo, Japan). Each peak sample was collected, and the contained peptide was determined with the Sciex 4800 MALDI MS system as described above. The peak areas were quantified using SmartChrom data analysis software Ver. 2.28J (KYA).
Statistics and QSAR Calculations-Statistical tests were performed using Excel 2010 (Microsoft), SAS Studio Release 3.1 of the SAS University Edition (SAS Institute Inc., Cary, NC), and the Molecular Operating Environment (MOE, Ver. 2013.08, Chemical Computing Group Inc., Montreal, Quebec, and Ryoka Systems Inc., Tokyo, Japan). Analyses of 3D structures and model construction using the partial least squares (PLS) and binary-QSAR methods were performed with MOE.
A binary-QSAR model was constructed with Auto-QSAR (binary) in the MOE software using default parameters and 812 aa descriptors at specific positions. The aa descriptors used were 3 secondary-structure descriptors for each position (a total of 3 × 20 = 60) and those that showed the largest r² values between the measured kcat/Km values and the corresponding aa descriptor values (see supplemental Tables S11-S13). In the binary-QSAR analysis, all of the cleaved and uncleaved sequences without measured kcat/Km values were assigned values of 1 and 0 M−1s−1, respectively, and a cut-off value of 0.5 M−1s−1 was used so that all of the cleaved and uncleaved sequences were set as positive and negative samples, respectively. First, P10-P10′ aars, which contained many missing aars close to both ends, were used for the construction. This resulted in a classification that placed unusual emphasis on whether an aar was missing or not, which was considered artifactual. Thus, only cleavage sequences with no missing aars in the varying ranges (P10-P10′, P9-P9′, P8-P8′, ...) were used and tested. The trajectory of backward variable selection was analyzed manually, and the most balanced model was selected as having a leave-one-out (LOO) cross-validated accuracy (XA) of more than 0.7 and the lowest number of descriptors. The best model was found using the range P6-P6′ with eight descriptors (see Table III). A PLS-QSAR model was constructed with Auto-QSAR (PLS) in the MOE software using default parameters and the same 812 aa descriptors at specific positions as above. After the first analysis, the calculated outliers were excluded by MOE, and the analysis was performed again. The trajectory of backward variable selection was analyzed manually, and the most balanced model, with eight descriptors, was selected as having an r² value cross-validated with LOO (Xr²) of more than 0.6 and the lowest number of descriptors (see Table V).
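Since MOE is proprietary, a rough open-source analogue of this PLS-QSAR workflow can be sketched with scikit-learn. The descriptor matrix below is a random placeholder (so the printed correlation will be near zero); in real use, X would hold the aa descriptor values computed for each P6-P6′ sequence and y the measured log kcat/Km values, with backward variable selection layered on top. This is an illustrative stand-in for MOE's Auto-QSAR, not a reproduction of it.

```python
# Sketch of a PLS-QSAR model with leave-one-out cross-validation (placeholder data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_sites, n_desc = 119, 812                    # sites with measured kcat/Km; aa descriptors
X = rng.normal(size=(n_sites, n_desc))        # placeholder descriptor matrix
y = rng.uniform(np.log10(12.5), np.log10(1710), n_sites)  # placeholder log kcat/Km

pls = PLSRegression(n_components=2)           # component count is an assumption
y_pred = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
r = np.corrcoef(y, y_pred)[0, 1]
print(f"LOO cross-validated r = {r:.3f}")     # analogous to the paper's Xr^2 criterion
```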
For the standard aa compositions, the following values taken from Swiss-Prot DB release 2012_9 were used: Ala, 8
RESULTS

Literature Search and Peptide Library Digestion Followed by MS Detection Identified 420 and 483 Calpain Cleavage Sites, Respectively-One of the major reasons for the previously incomplete accuracy of calpain cleavage predictors (15–20) is the small number of positive (i.e. cleavage site sequence) samples. To increase the number of samples, we first searched the literature extensively for calpain cleavage site sequences, and picked up 420 sites from 147 substrates (supplemental Table S1).
To ensure that the reported (Rp) cleavage sites would be cleaved in the oligopeptide context, a mixture of oligopeptides (P87mix library), each of which corresponded to one of the above cleavage sites, was proteolyzed by either C1 or C2. The digests were then analyzed by LC/MS for the global identification of cleavage site sequences. In this analysis, most of the Rp sites (i.e. mostly the middle of each peptide) as well as many novel (Nv) sites were identified. Therefore, for the kinetics study (see below), peptides corresponding to some of the identified cleavage fragments (104 Rp and 54 Nv sites) were synthesized (P158mix library, supplemental Table S3).
Finally, 418 cleavage sites (106 Rp and 312 Nv) were identified for C1, 360 (107 Rp and 253 Nv) for C2, and a total of 483 (123 Rp and 360 Nv) for both combined (Table II, supplemental Tables S7 and S8). In total, we found that 98 of the 131 Rp sites existing in the P87mix were proteolyzed by calpains even in the oligopeptide context (74 of the 131 Rp sites were in the middle of the peptide [i.e. after position 10], and 70 of these were proteolyzed; supplemental Table S4), indicating that the calpain substrate specificity was consistent and validating our experimental system.
All Cleavage Site Sequences Identified Using Oligopeptides Showed Similar Trends to Those Reported-To examine whether the Nv site sequences were distinct from those of Rp sites, the P10–P10′ sequences for 420 sites from the literature ("Lit" sites) were compared with those of the 360 Nv sites identified above (Figs. 2A–2C). When the aa frequencies of all of the aars at all positions (P10–P10′) were compared for Lit and Nv, they showed significant correlation (p = 2.1 × 10⁻³⁸), with a Pearson's correlation coefficient (r) of 0.59 (Fig. 2C). Although the r at each position varied from less than 0.2 to more than 0.8, they all showed significant correlation (p < 0.05, supplemental Fig. S2A(1)). In addition, 123 Rp sites and 360 Nv sites also showed significant correlation by the same analysis (supplemental Fig. S2A(2) and S2B).
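This positional frequency comparison amounts to building two position-by-amino-acid frequency matrices and correlating them. A minimal sketch follows; `lit_sites` and `nv_sites` are hypothetical variable names for two lists of aligned P10–P10′ sequences, and residues missing at peptide ends are simply skipped.

```python
# Sketch of the positional amino-acid frequency comparison between two sets of
# aligned cleavage-site sequences (20-mers over the amino-acid alphabet).
import numpy as np
from scipy.stats import pearsonr

AA = "ACDEFGHIKLMNPQRSTVWY"

def positional_freqs(sites, length=20):
    """Rows = positions P10..P10', columns = 20 amino acids, values = frequency."""
    freq = np.zeros((length, len(AA)))
    for seq in sites:
        for pos, aa in enumerate(seq):
            if aa in AA:  # skip missing residues near the peptide ends
                freq[pos, AA.index(aa)] += 1
    return freq / np.maximum(freq.sum(axis=1, keepdims=True), 1)

def compare(lit_sites, nv_sites):
    f1, f2 = positional_freqs(lit_sites), positional_freqs(nv_sites)
    r_all, p_all = pearsonr(f1.ravel(), f2.ravel())            # all positions pooled
    per_pos = [pearsonr(f1[i], f2[i]) for i in range(len(f1))]  # r at each position
    return r_all, p_all, per_pos
```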
Therefore, we concluded that the calpains' preference for the Nv sites was not significantly different from that for Rp sites as a whole, although small differences in several specific aars were observed (data not shown). The slight differences were probably because the aa composition at each position of the P87mix peptides was somewhat different from the standard, since most of these peptides were selected to have a calpain cleavage site in the middle. The aa preference of all of the cleavage sites (Lit + Rp + Nv) is shown in Fig. 2D.
To test whether Nv sites were cleavable in the context of a whole protein, purified cardiac troponin T (TNNT2, corresponding to ID007) was digested by calpain. MS and peptide sequencing analyses revealed that two of the three identified Nv sites [C-terminal to Phe80 and Leu84 (corresponding to mouse Phe73 and Leu77, respectively)] were detected (supplemental Fig. S4). This experiment showed that at least some of the Nv sites are cleaved by calpains in full-length proteins, and they have simply not been reported yet.
These results strongly suggested that the calpains did not randomly proteolyze the oligopeptide mixture, but that all of the detected proteolytic sites strictly complied with an as-yet-unknown rule for calpain substrate specificity. Therefore, the limited proteolytic activity of calpains observed in vivo is likely to depend on secondary and/or higher-order structures.
There were 123 and 65 sites that were specifically cleaved by C1 and C2, respectively, and were uncleaved by the other (supplemental Fig. S3C). Comparison of the aa preferences of these C1- and C2-specific sequences showed that both had significantly lower correlation (r = 0.49, p < 0.001) than that for all sequences (Figs. 3A versus 3B), and that the above distinctive features at P9–P7, P2, and P5′ were emphasized in these sequences (Figs. 3C–3E, and supplemental Table S6). Although there appeared to be a much greater difference between the C1- and C2-specific sequences than among the total sequences, more samples are required to clarify this issue.
The kcat/Km Values for 119 Calpain Cleavage Sites Ranged From 10 to 2,000 M⁻¹ s⁻¹-To shed further light on the calpain substrate specificity, the efficiency, i.e. the kcat/Km, for each cleavage site was determined. First, the decay of both-capped ("BC"; i.e. "uncleaved") peptides was analyzed (because of the presence of truncated synthetic peptides, the number of BC peptides was much larger than 87; see supplemental Table S9). Although it was possible to calculate kcat/Km, the data were so variable that many signals could not be used for the calculation. There are several possible reasons for this variability, including the large variance in iTRAQ™ 8-plex signals, the rapid degradation of efficiently cleaved peptides (making them inappropriate for quantification), and probably other unknown reasons. The calculated kcat/Km values ranged from 1 to 600 M⁻¹ s⁻¹ (supplemental Table S9). These values correspond to the apparent kcat/Km of the total cleavages taking place in one peptide.
To obtain data for each cleavage site with more confidence, the cleaved peptides generated in the P158mix were quantified. In this case, the deviations in the data were mostly small, and 71 and 48 kcat/Km values were calculated for Rp and Nv cleavage sites, respectively, with modest standard deviations (Fig. 4A and supplemental Table S8). The kcat/Km values for different sequences ranged widely, from 10 to 2,000 M⁻¹ s⁻¹. To examine whether the kcat/Km values of Rp and Nv sites were distinct, those in the same peptides were compared (supplemental Table S10). The average kcat/Km values were 259.8 M⁻¹ s⁻¹ and 189.4 M⁻¹ s⁻¹ for the Rp and Nv sites, respectively, which were not significantly different (p = 0.33), supporting the above conclusion that the Nv sites are not essentially different from Rp sites.
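Under substrate-limiting conditions ([S] << Km), each peptide in the mixture decays with pseudo-first-order kinetics, so a single remaining-fraction measurement fixes kcat/Km. The sketch below illustrates the arithmetic only; the enzyme concentration, incubation time, and remaining fraction are invented numbers, not the paper's measurements.

```python
# When [S] << Km, the remaining fraction of an uncleaved peptide follows
# f(t) = exp(-(kcat/Km) * [E] * t), so kcat/Km = -ln(f) / ([E] * t).
import math

def kcat_over_km(remaining_fraction, enzyme_molar, t_seconds):
    """kcat/Km in M^-1 s^-1 from the fraction of uncleaved peptide left at time t."""
    return -math.log(remaining_fraction) / (enzyme_molar * t_seconds)

# Illustrative numbers: 1 uM calpain, 20 min incubation, 70% peptide remaining.
print(kcat_over_km(0.70, 1.0e-6, 20 * 60))  # ~297 M^-1 s^-1, within the observed range
```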
Most of these sites were cut by both C1 and C2 with a similar kcat/Km value (r = 0.92; Fig. 4B), indicating that C1 and C2 share highly similar cleavage site efficiencies as well as highly similar sequence dependences. A few peptides, however, showed apparently different kcat/Km values for C1 and C2 (Fig. 4A). However, when we examined three peptides independently for their cleavability (tp1–tp3, see Experimental Procedures), no clear difference between C1 and C2 was observed (data not shown). It is possible that the relatively large deviations obtained using the iTRAQ™-MS method were responsible for the apparent differences between C1 and C2. Thus, although C1 and C2 have distinct aa preferences, we have not yet observed a clear difference in their cleavage efficiency. Further studies are required to clarify the distinct substrate specificities of C1 and C2.

[Fig. 3 legend (partial): aa-preference comparison (supplemental Table S6) between C1 and C2 (red dots; some labeled with position, aa, and P); for the r at each position, see supplemental Fig. S3D. C, D, the P10–P10′ cleavage site sequences specific for C1 (C, 123 sequences) or C2 (D, 65 sequences) were aligned, and the occurrence of each aar at each position is shown as in Fig. 2. Aars that did not occur at some positions and are not shown in (C) and (D) are listed in (E); red bold underlining indicates that the aa's absence represented a significant difference (p < 0.05; yellow: p < 0.01, binomial probability).]
Calpains Significantly Prefer Longer P-site Sequences (N-terminal Side of the Cleavage Site) Than P′-site Sequences (C-terminal)-To investigate whether the P- and P′-sites have distinct features, the positions of calpain cleavage sites in the oligopeptides were analyzed statistically. If the peptides were randomly cleaved by calpains without specificity, all of the positions should show an ~5% frequency (Fig. 5, gray line). However, the peptides were designed to contain a calpain cleavage site mostly in the middle (between positions 10 and 11), and, as expected, this site showed a significantly higher cleavage frequency (Fig. 5, black line between 10 and 11).
Unexpectedly, the site after position 11 showed a significantly higher cleavage frequency than expected (Fig. 5, dashed line between 11 and 12), and those after positions 12–14 had the same tendency as position 11, although the difference was not significant. On the other hand, sites N-terminal to position 8 and C-terminal to position 15 tended to be cleaved less frequently than expected. In summary, the sites between positions 10 and 14 are preferred by calpains, and those within the N-terminal 7 aars and the C-terminal 5 aars are cut poorly by calpains. These asymmetric features of cleavability suggest that calpains require a longer P-site sequence than P′-site sequence. In addition, there was no difference in these trends between C1 and C2 in this analysis.
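The position-wise significance test referenced in the Fig. 5 legend is the classical one-proportion Z-test. A minimal sketch follows; the counts in the example are placeholders, not the paper's data, and the null proportion is the ~5% (1/19 internal bonds of a 20-mer) expected under uniform random cleavage.

```python
# One-proportion Z-test: observed cleavage count at one inter-residue position
# vs. the proportion expected under random cleavage.
import math

def ztest_proportion(k, n, p0):
    """Two-sided Z-test for k successes out of n trials against null proportion p0."""
    p_hat = k / n
    z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
    p_value = math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))
    return z, p_value

# e.g. 60 of 483 detected cleavages falling after position 11, null p0 = 1/19:
print(ztest_proportion(60, 483, 1 / 19))
```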
Binary-QSAR Model Constructed with Cleavage Site Sequences Showed a Better Prediction Performance Than Previous Models-To predict calpain cleavage sites, we used a binary-QSAR model (see Discussion for the advantages of this model) with the information gathered in the experiments above.
For aa descriptors, we used the AAindex (26), predicted secondary structures, and molecular descriptors in the MOE package (see supplemental Tables S11 and S12). Several ranges of sequences were tried, and P6–P6′ was used, because longer and shorter ranges did not perform well, probably because there were too many missing values and the sequences were too short, respectively. Of all the possible P87mix site sequences (1,703), 806 (314 cleaved and 492 uncleaved) sequences did not contain any missing values between P6 and P6′, and these were used as training data to construct a predictor. The best-balanced binary-QSAR model achieved was constructed with eight descriptors, associated with P6, P2, and P1 (Table III). This predictor performed with a leave-one-out (LOO) accuracy of 74.9% (Table IV, versus P87 P6–P6′).
To test the real prediction performance of the binary-QSAR model, 331 cleavage site sequences from the literature ("Lit" data set) that were not used in its construction were analyzed with our model. The 331 reversed sequences were used as negative control samples. The model had 63.1% total accuracy (Fig. 6A). It should be noted that our model achieved a positive prediction value (the ratio of true positives to those predicted as positive) of 84.0% when the classification threshold was set to 0.95 (Fig. 6A, thin line at threshold = 0.95 crossing the PPV line). This means that sites predicted by our binary-QSAR model with a threshold of 0.95 are very likely to be cleaved by calpains, at the cost of sensitivity.

[Fig. 5 legend: Occurrence rates of the number of cleavage sites detected at each position were plotted along with those expected by random cleavage. Cleavages before and after position 11 showed significantly increased occurrences (P was calculated by the Z-test for a proportion).]
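The PPV-versus-sensitivity trade-off behind the threshold choice can be made explicit with a simple sweep. The sketch below assumes two arrays, `prob` (model probabilities) and `y` (1 = truly cleaved), for a labeled test set; the variable names are illustrative.

```python
# Threshold sweep over predicted probabilities to tabulate PPV and sensitivity.
import numpy as np

def ppv_sensitivity(prob, y, thresholds=np.linspace(0.05, 0.95, 19)):
    rows = []
    for t in thresholds:
        pred = prob >= t
        tp = np.sum(pred & (y == 1))
        fp = np.sum(pred & (y == 0))
        fn = np.sum(~pred & (y == 1))
        ppv = tp / (tp + fp) if tp + fp else float("nan")
        sens = tp / (tp + fn) if tp + fn else float("nan")
        rows.append((t, ppv, sens))
    return rows  # raising the threshold trades sensitivity for a higher PPV
```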
Next, using 132 cleavage site sequences that were not used for training any of the previous calpain predictors, the predictors' performance was compared. The results showed that our model outperformed all other reported prediction methods (Tables IV (versus Lit) and S14; note that reversed sequences were not necessarily true negative samples, and might be cleavable, implying that the accuracy of our model would be better than the value shown).
Finally, to identify calpain cleavage sites in a novel substrate protein, the sequence of horse myoglobin (MYO) was subjected to our prediction analysis. Among 12 sites predicted (Fig. 7A, red horizontal bars), three sites (arrows) were in loop/unstructured regions according to the 3D structure of MYO. Identification of the fragments generated by the calpain digestion of MYO showed that two of these sites were cleaved by calpains in actuality (Fig. 7A, red arrows, 7B-7D).
The First PLS QSAR Model for Calpain Cleavage Site Efficiency-Finally, to predict quantitatively the cleavage efficiency of calpains for any peptide bond, the QSAR analysis of the 119 site sequences with kcat/Km values was performed using the partial least squares regression (PLS) method. Using the LOO method, the most balanced PLS model had eight descriptors, associated with P10, P2, P1, P3′, and P4′ (Table V). This model showed a LOO r of 0.78 (total r = 0.83, after excluding three outliers) (Fig. 6B).
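The LOO-validated correlation reported above has a straightforward generic form. In the sketch below, scikit-learn's PLSRegression stands in for MOE's Auto-QSAR (PLS); `X` (sequences × descriptor values) and `y` (measured kcat/Km) are assumed inputs, and the sketch is illustrative rather than a reproduction of the authors' pipeline.

```python
# Leave-one-out cross-validated Pearson r for a PLS regression model.
import numpy as np
from scipy.stats import pearsonr
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def loo_pls_r(X, y, n_components=2):
    preds = np.empty_like(y, dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components).fit(X[train], y[train])
        preds[test] = model.predict(X[test]).ravel()
    return pearsonr(preds, y)[0]  # the cross-validated r
```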
Because the PLS model was constructed using the data from only 119 sequences from the P87mix data set, all the rest of the P87mix data (364 "cleaved" and 1,220 "uncleaved" data points without kcat/Km) were evaluated by the model. As shown in Table VI (versus P87 unused), the average predicted kcat/Km of the "cleaved" data set was significantly greater than that of the "uncleaved" set (180.8 M⁻¹ s⁻¹ versus 114.4 M⁻¹ s⁻¹, p = 0.00049). These results indicated that our PLS model appropriately describes at least a portion of the calpain cleavage efficiencies. In other words, these findings indicate that the selections of aa descriptors and their weights by the MOE program are appropriate and reflect calpains' substrate specificity.
DISCUSSION
First Report of the Comprehensive Measurement of kcat/Km Values-In this study, using an oligopeptide library and the iTRAQ™ proteomic method, 483 calpain cleavage sites were identified in addition to the 420 sites previously reported in the literature. Among the identified sites, 360 are novel, and the kcat/Km was determined for 119. These findings enabled us to analyze calpain substrate specificity not only precisely but also quantitatively. This is the first report to address calpain substrate specificity from the viewpoint of proteome-wide quantitative structure-activity relationships.
To date, the kcat/Km values for fewer than 10 calpain substrates have been reported (6,38), ranging from 41.7 to 141 M⁻¹ s⁻¹. These values are consistent with those obtained in this study. Because the proteolytic conditions used in this study were somewhat unusual because of the use of concentrated calpains and unpurified peptides, the kcat/Km values determined here may be underestimated compared with those obtained under more typical conditions. However, the smooth distribution of the kcat/Km values that we obtained (see Fig. 4A) indicates that at least the relative kcat/Km values among the 119 determined values hold true.
Calpains also show amidase-like activity, but surprisingly, the kcat/Km for hydrolysis of the NH2 group at the C terminus of substance P (RPKPQQFFGLM-NH2) is 10⁶ M⁻¹ s⁻¹ (39). This activity is mainly achieved by an ~10⁴-fold increase in the kcat without a significant change in the Km (39), by an unknown mechanism. Although this amidase-like calpain activity may be involved in as-yet-unknown physiological functions, there has been no further report on it. We did not detect any C-terminal DKP hydrolyzing activity in this study (data not shown; see supplemental Experimental Procedures).
Confirmation that the Substrate Sequence Selectivity of Calpains is Rather Weak-Consistent with all previous PSSM-type studies of calpain substrate sequences, both C1 and C2 showed weak sequence selectivity in this study (see supplemental Fig. S3). In terms of the 3D structure (40–42), the substrate recognition by calpains is mainly determined by relatively weak interactions between an atom in the peptide bonds of a substrate and an atom of the calpains' subsite residues. For example, Gly198 of CAPN2 (supplemental Fig. S6A, corresponding to Gly208 of CAPN1 (supplemental Fig. S6C)) interacts with the O (−2.0 kcal/mol) and NH (−1.7 kcal/mol) of the P1-P2 and P2-P3 peptide bonds, respectively, whereas Gly261 of CAPN2 (S6A, corresponding to Gly271 of CAPN1 (S6C)) interacts with the NH (−4.7 kcal/mol) of P1-P2.
[Table V: Descriptors used in the partial least squares regression (PLS) model. For the values of aars for each descriptor, see supplemental Tables S11 and S12.]

In other words, most of the side-chains of the substrate residues are exposed to the solvent without forming a strong interaction with calpain atoms. These features, which are common to both C1 and C2, are in sharp contrast to caspases, which strongly interact with the P1 and P4 Asp side chains (supplemental Fig. S6D). These weak interactions contribute to the calpains' recognition of highly divergent substrate sequences. Exceptions are the P2 and P3′ positions, where the side-chains of Leu and Pro, respectively, are deeply encompassed by the active site cleft of the calpains (supplemental Fig. S7). This point will be discussed further below.
Existence of Many Nv Sites Suggests that Substrate Protein Cleavages By Calpains are Regulated By Both Primary and Higher-order Structures-The literature contains reports of 420 unique calpain cleavage sites in 147 substrate proteins. Most of these sites are cleaved in the context of a whole protein or part of a protein that is expected to have a proper 3D structure. On the other hand, the 483 sites identified in this study were in 20-mer peptides, which are unlikely to contain potential cleavable sites made inaccessible by steric hindrance. Thus, the 360 Nv sites identified in this study are considered calpain-cleavable, not artifactual, sites that are simply not exposed in the context of a whole protein structure. The lack of significant differences in the aa preferences and kcat/Km values between the Rp and Nv sites supports this idea (see Fig. 2 and supplemental Table S10).
Therefore, most substrates have many sites that are potentially cleavable by calpains that escape cleavage when the substrate protein retains its higher-order structures. We thus conclude that the calpains' substrate specificity is defined by both primary and higher-order structures. The limited proteolysis by calpains that is often observed under physiological conditions probably reflects the fact that only extremely small amounts of calpains are activated in vivo.
Sequences Proximal to the Cleavage Sites Were Highly Similar for C1 and C2, and Both Preferred Longer Sequences in the P- than the P′-region-As in almost all previous reports, the aa sequence preferences around the cleavage sites for C1 and C2 were almost identical in this study, which is supported by the calpains' 3D-structural features, as described above. Surprisingly, however, detailed analysis revealed that the preferences at specific positions (P9–P7, P2, and P5′) were significantly different between C1 and C2 (Figs. 3C and 3D, and supplemental Table S5). Among them, the calpain aars most proximate to P8–P7 and P5′ differ between C1 and C2, i.e. Asp256, Ile257, and Leu260 of C1 are within 5 Å of Ser169–Thr170 (corresponding to P8–P7) of calpastatin, whereas the corresponding residues of C2 (Ser246, Ala247, and Ser250, respectively) are not (supplemental Fig. S8A); Glu172 of C2 and Met329 of C1 are close to Glu185 (P5′) of calpastatin, whereas the corresponding Gln182 of C1 and Gln319 of C2, respectively, are not (supplemental Fig. S8B). How these differences lead to distinct aa preferences is unknown at present. Moreover, there appears to be no significant difference in the P9- and P2-proximate aars between C1 and C2. To clarify the different substrate specificities of C1 and C2, further studies with larger sample numbers are required.
The cleavage positions showed asymmetric frequencies (see Fig. 5), suggesting that calpains require a longer segment of P-site than PЈ-site residues. The P10-P5 sites are mainly recognized by the calpain CBSW domain (19,40,41), which may play a crucial role in substrate recognition (see supplemental Fig. S7A; the right side surface corresponds to CAPN2's CBSW domain). These results are in concert with calpains' amidase-like activity, for which only the P-site region plays a role (39).
Binary-QSAR Analyses of Calpain Substrate Cleavages Suggest That Discrete Positions (P6, P2, P1) Determine "Cleavability"-Many attempts have been made to predict calpain cleavage sites, including studies using PSSM, support vector machines (SVM), multiple kernel learning (MKL), a form of hierarchical clustering, and other methods (12–20), each of which has advantages and disadvantages. Here, we used the binary-QSAR model, which uses Bayes' theorem. It is a robust method that is low in computational cost and high in performance. In addition, it is easy to interpret the relative importance of various factors using a binary-QSAR model (43,44).
Our binary-QSAR model showed that the aa properties of only sites P6, P2, and P1 could reasonably predict the macro "cleavability" of a substrate by calpains (Table III, Fig. 8). That is, these sites are primarily involved in the cleavage efficiency of substrates by calpains with a certain hierarchy. Consistent with previous studies, P2 was the most important, and in the binary-QSAR model, P2 was associated with six descriptors, which are all related to hydrophobicity (NADH010102, BIOV880101 and 102, vsurf_W2 and _W3, and GUOD860101) (Table III). In brief, the model predicts that sequences with Leu at P2 will always be cleaved, regardless of P1 or P6; those with Ile, Val, Phe, Thr, Gln, Asn, Asp, Ser, Tyr, or Met at P2 are dependent on P1 and P6; and those with Glu, Lys, Trp, Cys, Gly, His, Ala, Arg, or Pro at P2 are predicted to be uncleaved regardless of P1 or P6 (Figs. 8A-8C).
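The hierarchy just described can be written down directly. The snippet below is only an illustrative encoding of the stated P2-centered rules, not the binary-QSAR model itself (which scores continuous descriptor values rather than residue identities).

```python
# Illustrative encoding of the P2-centered cleavability hierarchy described
# in the text; residue sets are taken verbatim from the paragraph above.
P2_ALWAYS = set("L")            # cleaved regardless of P1 or P6
P2_CONTEXT = set("IVFTQNDSYM")  # cleavability depends on P1 and P6
P2_NEVER = set("EKWCGHARP")     # predicted uncleaved regardless of P1 or P6

def p2_rule(p2, p1_p6_favorable):
    """Crude cleavability call from P2, deferring to P1/P6 where the text says so."""
    if p2 in P2_ALWAYS:
        return True
    if p2 in P2_CONTEXT:
        return p1_p6_favorable  # decided by the P1 (ASA+) and P6 descriptors
    return False
```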
P6 and P1, which are associated with one descriptor each, contribute only moderately to the cleavability compared with P2. At P1, a water-accessible surface area (probe radius of 1.4 Å) with a partial positive charge (ASA+) yields the maximum cleavage probability at 138 Å² (Asn, Gln, Lys, Phe, and Tyr are close to this value; Fig. 8D). Larger and smaller ASA+ values decrease the probability (by about 0.26 at maximum), suggesting that the condition at the S1 subsite of calpains is not very flexible; thus, Ile, Pro, or Leu at P1 markedly decreases cleavability.
A lower probability of a random coil secondary structure at P6 slightly increased the cleavability (by less than 0.2, Figs. 8A-8C, 8E). The 3-D structures of C2/calpastatin co-crystals revealed that calpains' S6 subsite is on the surface of the CBSW domain, and S3-S10 are almost aligned (19,40,41) (supplemental Fig. S7A). Therefore, our results support the idea that the secondary structure in the middle of this region may decrease a substrate's affinity for the CBSW domain by reducing flexibility, resulting in lower cleavability.
PLS QSAR Analyses Suggest That P3′-P4′ Most Affect Cleavage Efficiency, Followed By P2, P1, and P10-To our surprise, the P3′ and P4′ positions had the greatest effect on the kcat/Km values, which changed by ca. 1,000 M⁻¹ s⁻¹ depending on the aars at P3′-P4′ (Fig. 9A).
The kcat/Km values predicted by our PLS QSAR model showed the best correlation with the partial specific volume and mass density of the aar at P3′ (Fig. 9C). This finding is consistent with the 3D-structural observations that the side-chain of P3′ has no specific interaction with calpain atoms, and is buried in a calpain surface cleft surrounded by a relatively hydrophobic environment (supplemental Fig. S7B).
P2 and P1 are also important (each kcat/Km change > 300 M⁻¹ s⁻¹), and Leu, Ile, and Val at P2, which gave a high cleavage probability in the binary-QSAR model, were also associated with high efficiency (Fig. 9B). On the other hand, Asn and Asp at P2, which moderately increased cleavability, showed rather low efficiency. The predicted kcat/Km values were dependent on the sum of the van der Waals surface areas of the aars at P2 and P1 where the atomic partial charge is less than −0.3 (Table V, PEOE_VSA-6). The preference at the P2 site was also related to the 3D structure; the P2 residue side-chain penetrates the cleft beside the calpain active site, making weak hydrophobic interactions with calpain atoms (supplemental Fig. S7A, green surfaces).
Notably, Pro at P1, which markedly lowered the cleavability, caused the greatest increase in efficiency among the 20 aars. This result suggests that most substrates with a Pro at P1 are not easily cleaved, whereas they are rather efficiently cleaved if the aars at other positions are favorable for cleavage. The accessible surface area, which is related to hydrophilicity, of the aar at P10 also contributes to the calpain cleavage efficiency, by 290 M⁻¹ s⁻¹.
Cuerrier and his colleagues developed a highly sensitive fluorescent oligopeptide substrate, H-E(EDANS)PLFAERK(DABCYL)-OH (13), which is cleaved after Phe (F) (4). Our PLS model predicted that PLFAER for P3–P3′ would have a kcat/Km of 763 M⁻¹ s⁻¹, which is almost the maximum value (822 M⁻¹ s⁻¹) for all possible P3–P3′ peptides, consistent with the sensitivity of the PLFAER substrate and supporting the effectiveness of our PLS QSAR approach. Indeed, Leu-Phe at P2–P1 and Arg at P3′ was one of the best combinations for these positions (see Figs. 9A and 9B). PSSM-based methods count cleavages equally, regardless of the sequences' cleavage efficiencies, whereas the peptide sequencing-based method used by Cuerrier et al. (13) as well as our PLS method take the cleavability of each peptide into account. Thus, further PLS studies with more kcat/Km data should eventually reveal the ultimate substrate specificities of calpains.

[Fig. 9 legend: Using the PLS-QSAR model (Fig. 6B and Table V), the change in kcat/Km value (Δkcat/Km) was calculated as a function of the aars at P3′ and P4′ (A), or P2 and P1 (B). C, Δkcat/Km plotted as a function of the BULH740102 value (see below) for each aar at P3′. A kcat/Km value for each aar was calculated by entering each of the 20 aars for P3′ into the PLS-QSAR model equation, assuming that all other positions are fixed; i.e. for each aar aa_i (i = 1–20; aa_1 = Ala (A), aa_2 = Cys (C), …, aa_20 = Trp (W)), P3′(aa_i) = 62.6·E_ang(aa_i) + average[−8.60·Q_VSA_PNEG(aa_i, aa_j), j = 1–20]. The difference between the maximum (Ile) and minimum (His) values at the P3′ position was calculated to be 537 M⁻¹ s⁻¹. Next, the most correlated aa descriptor was determined: first, r and ρ (Spearman's rank correlation) were calculated between the kcat/Km estimated above and each of the 1,315 aa descriptors; then, the descriptors were ranked independently for r and ρ, and the sums of the two ranks were again ranked; the best descriptor was BULH740102 (r = 0.896, ρ = 0.869). ρ was used in addition to r because ρ is robust against abnormal distributions with outliers, which are features of some aa descriptors, whereas r is greatly affected by the outliers. For the values of the aars of each descriptor, see supplemental Tables S11 and S12.]
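The combined-rank descriptor screen from the Fig. 9 legend is simple to reproduce generically. In the sketch below, `kcat_km_by_aa` holds the 20 per-residue estimates and `descriptors` maps each descriptor name to its 20 per-residue values; both names are illustrative, and absolute correlations are used as a hedged reading of "most correlated".

```python
# Rank descriptors by Pearson r and Spearman rho against per-residue kcat/Km
# estimates, then rank the sum of the two ranks.
import numpy as np
from scipy.stats import pearsonr, spearmanr

def best_descriptors(kcat_km_by_aa, descriptors, top=5):
    names = list(descriptors)
    r_vals = np.array([abs(pearsonr(descriptors[n], kcat_km_by_aa)[0]) for n in names])
    rho_vals = np.array([abs(spearmanr(descriptors[n], kcat_km_by_aa)[0]) for n in names])
    # rank 0 = most correlated; argsort of argsort converts scores to ranks
    r_rank = np.argsort(np.argsort(-r_vals))
    rho_rank = np.argsort(np.argsort(-rho_vals))
    order = np.argsort(r_rank + rho_rank)
    return [(names[i], r_vals[i], rho_vals[i]) for i in order[:top]]
```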
Taken together, our PLS QSAR analyses showed that substrates having (Leu or Ile)-(Val, Pro, or Ala) at P3′-P4′ and at P2-P1 are cleaved with high efficiency by calpains, and those with Glu or Asp at P3′, P2, and P1 are cleaved with the least efficiency. This information may be useful for mutation studies seeking to render calpain substrates uncleavable and/or to insert de novo calpain cleavage sites. Therefore, this study opens new avenues into the study of calpain substrates. Further elucidation of the context-dependent and quantitative structure-activity relationships of calpains and their substrates will improve our understanding of calpain substrate specificity.
A Facility-Based Cross-Sectional Study on the Implementation of the IMNCI Program in Public Health Centers of Soro District, Hadiya Zone, Southern Ethiopia
Background Integrated Management of Neonatal and Childhood Illnesses (IMNCI) is one of the child health programs; it provides an integrated approach and focuses on the well-being of the whole child. Globally, nearly nine million children die every year from preventable and treatable conditions. The IMNCI program is provided by health facilities to aid children under five years of age suffering from illness. This study aimed to assess the implementation of the IMNCI program in public health centers of Soro District, Hadiya Zone, Southern Ethiopia. Methods The implementation of the IMNCI program was studied using a facility-based cross-sectional study design integrating both qualitative and quantitative data collected from 9 public health centers in Soro district, Hadiya Zone, Southern Ethiopia. A total of 390 caregivers (92% response rate) were included in the study, allocated in proportion to the under-five outpatient coverage of each public health center. Data were collected through face-to-face interviewer-administered questionnaires, a document review checklist, an observation checklist, and an in-depth interview guide. Results Based on the agreed criteria, resource availability was 80.11% and judged as fair. Less than 50% of health centers (HCs) had cotrimoxazole and gentamycin. The compliance of health workers was 85.5% and judged as good. Fewer than 85% of prescribed drugs were given correctly for the classified disease. Counseling on medication and follow-up dates was given to less than 80% of caretakers. The overall satisfaction of clients with IMNCI was 79.5% according to the judging criteria. Caretakers who took less than 30 minutes to reach the health center on foot (AOR=7.7, 95% CI [3.787–15.593]), caretakers who waited less than 30 minutes to see the health care provider (AOR=2, 95% CI [1.00–3.77]), caretakers who found the prescribed drugs in the HC pharmacy (AOR=3.7, 95% CI [1.91–7.34]), and caretakers with a family size of fewer than four (AOR=2, 95% CI [1.109–4.061]) were more satisfied with IMNCI services, whereas having the child's weight measured was negatively associated with satisfaction (AOR=0.24, 95% CI [0.13–0.45]). Conclusion This study found that the overall implementation of the Integrated Management of Neonatal and Childhood Illnesses program was good. All health centers had trained health workers, and ORS, paracetamol, vitamin A, the chart booklet, and the IMNCI guidelines were available; however, cotrimoxazole, gentamycin, ampicillin, and mebendazole were less abundant in health centers. Further, a large-scale study should be conducted in other districts in the future to ensure proper implementation of the IMNCI program in Ethiopia.
Introduction
IMNCI is a program provided by health facilities to aid children under five years of age suffering from illness. It emphasizes the well-being of children who suffer from illness and promotes prevention mechanisms to caregivers. Both preventive and curative features are included in the program, which helps to diminish death, infection, and disability among children. 1 Globally, nearly nine million children die every year from preventable and treatable conditions. 2 In sub-Saharan countries, notably Tanzania, under-five mortality declined from 133 per 1000 live births in 2005 to 81 in 2010 in Mainland Tanzania. 3,4 Compliance is the major problem in the implementation of the program, especially for six key sections of the protocol: adjacent seating of the child/caretaker, obtaining the history, checking immunization status, measuring temperature, checking weight, and counseling caretakers. 5,6 Low health care worker compliance, inadequate referral and counseling, and imperfect training are major constraints for the service. 4,7 Mothers' limited understanding of how to identify their children's illness, which is related to poor counseling, has a major impact on under-five mortality. Regarding caregiver satisfaction, the major factors relate to the shortage of drugs available in the facility. 8 In Ethiopia, under-five mortality is among the highest, with more than 321,000 deaths every year; of these, more than 70% are caused by diarrhea, pneumonia, measles, malnutrition, and malaria. 9 Most of the deaths occur due to pneumonia and diarrhea, so preventing these diseases would decrease under-five mortality by up to 90% by the year 2020. 10 According to the Ethiopian Demographic and Health Survey (2016) report, the proportions of under-five children with symptoms of diarrhea, fever, and acute respiratory infection who were treated at health facilities in the Southern Nations, Nationalities, and Peoples' Region (SNNPR) were 43.2%, 36.7%, and 46.5%, respectively, 9,11 which indicates that few clients were served.
The IMNCI program was active in 684 health centers in SNNPR in 2016. 12 However, there is still a gap in delivering the program to all cases of under-five children. According to the 2016 annual health report of the Hadiya zone, only 45% of under-five cases in Soro district were seen at health centers, and the IMNCI implementation constraints were a shortage of essential drugs in HCs and a shortage of IMNCI-trained health professionals. These data come from the annual report of the Zonal Health Department (ZHD), based on supportive supervision and inventory assessment.
No similar research had been conducted in the study area; therefore, the findings of this study will provide basic information for the regional health office, Hadiya zone, and Soro district to make informed decisions on the availability, compliance, and satisfaction dimensions. The findings will also help health center managers to fill gaps and improve the program to meet the needs of clients and stakeholders. The study also provides baseline information on the program for researchers.
Study Area
The study was conducted in the Soro district of the Hadiya zone, southern Ethiopia. Soro district is one of the 10 districts in the Hadiya zone, located 32 kilometers from the zonal town, Hosanna, and 235 kilometers from Addis Ababa, the capital city of Ethiopia. The total population of the study area is 188,858, of whom 94,363 are men and 94,495 are women. 13 The study was conducted from March 5 to April 3, 2017.
Study Design
A facility-based cross-sectional study design was employed, applying both qualitative and quantitative methods. Concurrent triangulation was used for complementarity: the data were collected concurrently, but the quantitative data were weighed more heavily in the analysis than the qualitative data. 14,15 The three dimensions used for this study were availability, compliance, and satisfaction.
Populations and Sampling
The target population of this study included all public health centers in Soro district and all caretakers/mothers of under-five children found in Soro district. The source populations were all health centers in Soro district; all caretakers accompanying under-five children attending the selected health centers; all under-five children who received the IMNCI program; all documents related to the IMNCI program in the health centers; all health workers in the selected HCs; all HC managers in Soro district; and all case team leaders in Soro district HCs.
For the quantitative analysis, the study populations were caretakers/mothers of under-five children who came for IMNCI services in the selected HCs during the study period (for the exit interview), under-five cases selected from the IMNCI register books in the selected health centers, and health workers implementing the IMNCI program in the selected HCs. The study populations for the qualitative method were the heads of the selected HCs, the case team coordinators in the selected HCs, and the district health office maternal and child health focal person.
Sample Size Determination
All nine health centers found in the Soro district were selected for this study. Two key informants per health center and one district health office focal person were selected for in-depth interviews. A total of 90 observations were conducted at the time of data collection: two health workers specifically involved in the IMNCI service were observed per health center, so 18 IMNCI service providers were observed by the principal investigator while providing the service, each for five consecutive clients starting from the first client. A resource inventory accompanied by interviews was used to assess the availability of resources in the health centers. The sample size for documents was similar to that of the observed cases: 90 cases of under-five children were reviewed from the IMNCI register books to support the observation results. The sample size for mothers/caretakers was determined using the single population proportion formula, assuming a population proportion of 50%, a 95% confidence level, and a 0.05 margin of error. A proportion of 50% was assumed because no previous study on IMNCI implementation evaluation had been done in the study area.
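The single population proportion formula used here is n = Z²p(1−p)/d². A minimal sketch of the calculation follows, with the stated inputs (p = 0.50, d = 0.05, Z = 1.96 for a 95% confidence level); the resulting figure is the base sample before any nonresponse adjustment.

```python
# Single population proportion sample-size calculation: n = Z^2 * p * (1-p) / d^2.
import math

def sample_size(p=0.50, d=0.05, z=1.96):
    return math.ceil(z**2 * p * (1 - p) / d**2)

print(sample_size())  # 385 before nonresponse adjustment; 390 caretakers responded here
```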
Sampling Procedure/Technique
The Hadiya zone has 11 districts, from which Soro district was selected purposively (a non-probability sampling method) because it has a larger population than the other districts. Soro district has 9 public HCs, and all 9 were selected for this study. The sample of exit interviews from each health center was drawn in proportion to the under-five outpatient coverage of each health center. Mothers/caregivers were included in the exit interviews after completing the service. The first case was selected conveniently, and subsequent cases were included consecutively until the sample size was reached.
The two HWs providing the IMNCI service at each HC were selected conveniently at the time of the study for observation. Where two or more health workers were found in the IMNCI service room, two of them were selected randomly by the lottery method and then observed one after the other consecutively. The selected HWs were each observed for 5 cases to assess their compliance with the IMNCI guideline. The document review was used to support the observation study: six observed under-five children were selected during observation and reviewed from the IMNCI register book per health center. Two key informants per health center and the district health office MCH coordinator were selected purposively, because their positions gave them the most information about the program.
Inclusion and Exclusion Criteria
Inclusion criteria: All mothers/caretakers who came to the health centers for IMNCI services during the study period and all health care providers specifically working in IMNCI were included.
Exclusion criteria: Service providers with less than 3 months of service, cases whose documents did not have full information, caretakers unable to respond due to health problems, and caretakers below 15 years old were excluded from the study.
Data Collection
Based on the study objectives and research questions, the document review and observation checklists were adapted from the national IMNCI guidelines and the UNICEF survey checklist. 16 The structured questionnaire for the exit interview was adapted from program evaluation study books. 15 The tools were written in English, translated to Amharic, and then retranslated to English to check consistency.
Data Quality Assurance
A pretest was done on 21 participants (5% of the total sample size) to ensure the reliability of the data. Additionally, Cronbach's alpha was calculated for the exit interview questionnaire (0.873). The document review and observation checklists were checked manually, and some variables and terminologies were adjusted.
The collected data were reviewed and checked for completeness before data entry. Data were coded and entered into Epidata version 3.1, then exported to SPSS version 20 for processing. The overall satisfaction level was calculated with the demarcation threshold formula: respondents scoring 45 and above were categorized as satisfied, and those scoring below 45 as dissatisfied. 17 Five-point Likert items (1 to 5) ranged from the lowest to the highest level of satisfaction, and the threshold was computed as:
Demarcation threshold = (Highest total score − Lowest total score)/2 + Lowest total score

Binary logistic analysis was used to check the association of each single variable with the outcome variable. The Hosmer and Lemeshow test was used to check the goodness-of-fit of the model. Variables associated with the dependent variable at P < 0.25 were selected for multivariate logistic regression analysis, and statistical significance was assessed with the adjusted odds ratio (AOR). The strength of association was measured with the AOR at a 95% CI, and variables were considered significant at a p-value below 0.05. The qualitative data were analyzed manually using thematic analysis along the evaluation dimensions, and the results were presented in narrative form.
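The quantitative two-step modelling described above (bivariate screen at P < 0.25, then one multivariable model whose exponentiated coefficients are the AORs) can be sketched as follows. This is a hedged illustration, not the authors' SPSS workflow; `df` is a hypothetical pandas DataFrame with a binary `satisfied` outcome and candidate predictors.

```python
# Sketch of multivariable logistic regression with adjusted odds ratios (AORs).
import numpy as np
import pandas as pd
import statsmodels.api as sm

def adjusted_odds_ratios(df, outcome, predictors):
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df[outcome], X).fit(disp=0)
    ci = fit.conf_int()  # columns 0 and 1 hold the lower/upper bounds
    table = pd.DataFrame({
        "AOR": np.exp(fit.params),
        "CI 2.5%": np.exp(ci[0]),
        "CI 97.5%": np.exp(ci[1]),
        "p": fit.pvalues,
    })
    return table.drop(index="const")

# Step 1 (not shown): fit each predictor alone and keep those with p < 0.25.
# Step 2: pass the kept predictors to adjusted_odds_ratios() for the AOR table.
```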
The transcribed data reports were checked by the peer debriefing method and member check.
Ethical Considerations
Ethical clearance was granted by the Jimma University Institutional Review Board with approval number IHRPGH/668/2017. A permission letter was given by the Soro district health office, and informed consent was obtained from participants. Participants' confidentiality was maintained, participation was voluntary, and participants had the right to withdraw at any point in the study. The Jimma University Institutional Review Board approved participants under the age of 18 years to provide informed consent on their own behalf, and the informed consent included publication of anonymised responses.
Results
The response rate of caretakers was 390 (92.4%), and 90 (100%) direct observations were conducted on 18 HWs at nine health centers, supported by a document review of the observed cases. Fifty-four observed cases were analyzed in this study.
Compliance with Guideline
Of the 18 (100%) observed health workers, 53.7% were female, and all 18 (100%) had been trained on the IMNCI guidelines. About 47 (87%) and 38 (70.4%) of clients had their weight and temperature measured, respectively. Only three cases of infants less than 2 months old were observed; all of them were correctly assessed for breathing count in 1 minute, umbilical cord redness and draining pus, skin pustules, movement of the child, jaundice, diarrhea, and feeding problems; however, one infant was not measured for axillary temperature or checked for immunization status. Among the three observed cases, very severe disease and local bacterial infection were correctly classified, as were two jaundice cases, but one local bacterial infection and one local infection were misclassified; note that one case could be classified for two or more diseases at a time (Figure 1).
Counseling of Caretakers
Thirty-nine caretakers (72.2%) were counseled on how to administer medication, and 39 (72.2%) received counseling on when to give it. About 36 (66.7%) of the cases were given the first dose of medication at the health center. Follow-up dates were given for about 42 (77.8%) of the sick children. HWs were observed using chart booklets while giving the service in 53 (98.1%) of cases (Figure 3). In an interview, a 35-year-old female case team coordinator said: ". . . assessment of the sick child was sometimes missed with the unavailability of some medical equipment. It is so difficult to follow the steps in the guideline without the availability of medical equipment."
Socio-Demographic Characteristics of Participants
About 186 (45%) of the caretakers were between 26 and 30 years old. More than half, 270 (65.4%), were living in rural areas (Table 1).
Service-Related Factors
About 271 (69.5%) caretakers responded that the waiting time to get the service was less than 30 minutes. Temperature was taken for 249 (63.8%) of the sick children, and 205 (52.6%) obtained the prescribed drugs from the health center pharmacy (Table 2).
Satisfaction Level of Caretakers
The overall satisfaction with IMNCI services was 80.9%, calculated by the demarcation threshold. More than two-thirds (78.5%) of caretakers were satisfied with the waiting time for the IMNCI service, and 82.6% of caregivers were satisfied with the counseling received on identifying the danger signs of sick under-five children that warrant returning to the HC immediately. About 84.4% of caregivers were satisfied enough with the overall service to decide to return to the health center next time. Caretakers were satisfied with the consultation time (91.5%), the availability of drugs in the HC pharmacy (62.3%), and the availability of medical equipment (73.6%).
Factors Associated with Caretaker's Satisfaction
Educational status, weight measurement, waiting time, availability of the prescribed drugs, consultation time, family size (children), counseling received on giving extra fluid and continuing feeding, and time to reach the HC on foot (walking) were the variables selected for multivariate analysis of client satisfaction with IMNCI services.
Multivariate Analysis of Variables Associated with IMNCI Service Satisfaction
Caretakers whose sick children's weight was measured were 58% less satisfied than those whose children were not weighed (AOR=0.42, 95% CI [0.19, 0.94]). Caretakers who waited less than 30 minutes preceding consultation with health care providers were 2 times more satisfied than those who waited more than 30 minutes (AOR=2, 95% CI [1.01, 3.77]). Caretakers who got the prescribed drugs from the health center pharmacy were nearly 4 times more satisfied than those who did not (AOR=3.7, 95% CI [1.91, 7.34]). Caretakers who took less than 30 minutes to reach the health center were 7.7 times more satisfied than those who took more than 30 minutes (AOR=7.7, 95% CI [3.79, 15.59]). Caretakers with a family size of three or fewer were more satisfied than those with a family size larger than three (AOR=2, 95% CI [1.10, 4.06]) (Table 3).
Judgment Matrix for the Overall Implementation of the IMNCI Program

The IMNCI service was measured by looking at three dimensions (availability, compliance, and satisfaction). Out of 100%, weights of 30 (availability), 35 (compliance), and 35 (satisfaction) were assigned, and the results found were 24.0%, 30.4%, and 27.8% for the three dimensions, respectively. The percentage found in each dimension was converted to its respective weight, and the weights were summed.
Discussion
The overall satisfaction level of caregivers with the IMNCI service in Soro district, Hadiya zone was 80.9%; in comparison, studies in Ethiopia reported overall satisfaction levels ranging from 52% to 57% in 2006. 18 The predictors of caregiver satisfaction found to have statistically significant associations were waiting time, availability of prescribed medications, time taken to reach the health center from home on foot, family size, and weight measurement of the sick child. Caretakers who took less than 30 minutes to reach the health center on foot were nearly 8 times more satisfied than those who took more than 30 minutes. A similar finding was observed in the Jimma zone, where a study on health service utilization indicated that clients living a shorter distance from the health center had a 2.9 times higher chance of getting health services. 19 Caretakers who waited less than 30 minutes to get the IMNCI service in HCs were 2 times more satisfied than those who waited more than 30 minutes. This finding is consistent with a study in Wolayta Teaching Hospital on the satisfaction of caretakers/mothers with outpatient services, including under-five clinics, which indicated that caretakers waiting 30 minutes or less in the waiting area preceding consultation were more satisfied than those who waited 60 minutes. 20 Caretakers become more frustrated the longer they wait for the consultation with health workers to learn their child's health status. The studies used for comparison with this finding were not conducted specifically on under-five children but assessed satisfaction across all outpatient services, including under-five services. Regarding the availability of prescribed drugs for a sick child, caretakers who got all prescribed drugs from the health center pharmacy were 3.7 times more satisfied than those who did not. This finding is similar to that of a 2015 study of associated factors in the outpatient department of Wolayita Sodo University Teaching Hospital, southern Ethiopia, in which nearly two-thirds (64.3%) of respondents got all prescribed drugs from the hospital pharmacy and were more satisfied. 20 Caretakers whose sick child's weight was measured were 76% less likely to be satisfied than those whose child was not weighed. To the researchers' best knowledge, no studies were found on this point, and further study is needed to explain the finding. Caregivers with a family size of three or fewer were 2 times more satisfied than those with a family size of four or above. This might be related to the economic status of the caregiver; to the best knowledge of the researchers, there was no similar finding, so further studies will be necessary to explain it.
Conclusion
Based on the judgment criteria, the overall implementation of the IMNCI program was judged as good. All health centers had trained health workers, and ORS, paracetamol, vitamin A, the chart booklet, and the IMNCI guideline were available; however, cotrimoxazole, gentamycin, ampicillin, and mebendazole were less abundant in health centers. Medical equipment such as thermometers, weight scales, and stethoscopes was not available in all health centers.
The compliance of health workers with the IMNCI guideline was judged as good. Health workers complied less with counseling caregivers on feeding, prescribing drugs, and giving follow-up dates; over- and under-classification, treatment, and follow-up of pneumonia, diarrhea, anemia, malaria, and malnutrition were also observed. According to the judgment criteria, the satisfaction of caregivers with the IMNCI service was fair. Caretaker satisfaction was affected by long waiting times for consultation with health workers at the health center, the availability of prescribed drugs in the health center pharmacy, long walking times to get the IMNCI service, family size, and weight measurement of the child. Further, a large-scale study should be conducted in other districts in the future to ensure proper implementation of the IMNCI program in Ethiopia.
Limitations of This Study
The Hawthorne effect could cause health workers to improve their performance solely as a result of being observed; this bias was minimized by increasing the number of observation sessions, and the extra observations were not included in the analysis. Caretaker satisfaction was mainly related to their perception of the service, which might not reflect the standards of the national guidelines or policies. Social desirability bias may have arisen from the consecutive sampling of caregivers.
Data Sharing Statement
The dataset used for this study cannot be shared; in the future, interested parties may request approval to access the data by writing to the Jimma University Institutional Review Board.
YASS: enhancing the sensitivity of DNA similarity search
YASS is a DNA local alignment tool based on an efficient and sensitive filtering algorithm. It applies transition-constrained seeds to specify the most probable conserved motifs between homologous sequences, combined with a flexible hit criterion used to identify groups of seeds that are likely to exhibit significant alignments. A web interface (http://yass.loria.fr/interface.php) is available to upload input sequences in fasta format, query the program and visualize the results obtained in several forms (dot-plot, tabular output and others). A standalone version is available for download from the web page.
INTRODUCTION
Modern bioinformatics relies heavily on alignment programs and motif discovery tools, and numerous comparative genomics projects need ever more precise and faster tools for comparing two or several genomic sequences with different resolutions.
Except for small sequences, the exact local alignment algorithm of Smith and Waterman (1) is not frequently used, and most alignments are obtained using heuristic alignment tools such as FASTA (2), FLASH (3), BLAST (4,5), BLASTZ (6) and PatternHunter (7,8). All these methods introduce a trade-off between two competing parameters: selectivity (or specificity), directly affecting the speed of the algorithm, and sensitivity, affecting its precision (i.e. the number of relevant alignments missed). Achieving a good trade-off between sensitivity and selectivity is the key issue in local alignment tools. The recently introduced spaced seeds technique (7,8) allows an increase in sensitivity without loss in selectivity. This innovation triggered various studies (9–15) related to the usage, design and generalizations of spaced seeds.
In this note, we present YASS (Yet Another Similarity Searcher)-a new software for computing local alignments of two DNA sequences-and its web server (http://yass.loria.fr/interface.php). Compared with other tools, YASS is based on two innovations. The first is a new spaced seed model called transition-constrained seeds that takes advantage of statistical properties of real genomic sequences. The second feature is a new statistically founded hit criterion that controls the formation of groups of closely located seeds that are likely to belong to the same alignment. An implementation of these improvements, reported here, provides a fast and sensitive tool for local alignment of large genomic sequences.
Web interface
The main user input (Figure 1A) consists of one or two sequences in fasta format either chosen from a predefined database or uploaded to the web server.
Once sequences have been selected, the user can run the program right away with all other parameters set by default. Alternatively, the user can set other parameters such as the scoring matrix or gap penalties (preselected matrices are proposed), and specify the DNA strand to be processed (direct, complementary or both). The user can also choose to display complete alignments rather than only alignment positions.
More advanced parameters are available for expert users. For example, the right choice of the seed pattern can increase the search sensitivity considerably, provided that some knowledge of target alignments is available (10–14). The web interface provides a preselection of seeds including three transition-constrained seeds, one providing a good performance compromise between coding and non-coding sequences, and the other two tuned respectively for non-coding and coding regions. The accompanying Hedera program (http://www.loria.fr/projects/YASS/hedera.html) is also provided for advanced users in order to design new seed patterns according to different probabilistic models of alignments (15).
Finally, the user can specify some statistical parameters of target alignments, such as the assumed substitution rate or indel rate. These parameters control the hit criterion, i.e. the rules for grouping together closely located seeds to detect similarities. Once the results are obtained, it is possible to generate a clickable dot-plot (Figure 2) where each alignment is linked to a URL with its text representation (Figure 1C). A tabular output (Figure 1B) is also available: alignments are sorted according to their E-value and linked to their text representation. Finally, the YASS output can also be downloaded in text format for further analysis.
Technical issues. The YASS server available at http://yass.loria.fr/interface.php currently runs Apache 2.0.47 (PHP and Perl-CGI modules) on a Linux Mandrake 9.2. Dot-plots are obtained with the GD graphical library interfaced to PHP. The YASS program has been developed in C and is distributed under the Gnu General Public License.
Owing to limitations of computational resources, some restrictions have been made on the web interface. For example, uploaded files are currently limited to 3 Mb, scoring systems can be chosen only among preselected ones, and for each parameter a fixed range of possible values has been set.
Standalone version
The standalone version is recommended for frequent users or those who need specific parameters to be set outside the preselected values. It provides access to two other output formats, including a BLAST-like tabular format that can be used by existing postprocessing parsers. Note that YASS does not need one of the sequences to be preprocessed (the formatdb command of BLAST); rather, it treats both sequences on the fly.
METHODS
Here we briefly outline the underlying principles of the YASS algorithm, including some novel features. For a more detailed presentation the reader is referred to (16) (http://www.biomedcentral.com/1471-2105/5/149/).
Seed model
Seeds are specified using a seed pattern built over a three-letter alphabet #, @ and -, where # stands for a nucleotide match, - for a don't care symbol, and @ for a match or a transition (mutation A↔G or C↔T). The weight of a pattern is defined as the number of # plus half the number of @. The weight is the main characteristic of seed selectivity.
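To make these seed semantics concrete, the following short Python sketch (illustrative only; YASS itself is written in C, and the helper names here are our own) computes the weight of a pattern and tests whether a seed hits at given offsets of two sequences:

# Sketch of transition-constrained seed semantics (not YASS source code).
# '#' = exact nucleotide match, '@' = match or transition (A<->G, C<->T),
# '-' = don't care.
TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def seed_weight(pattern: str) -> float:
    # Weight = number of '#' plus half the number of '@'.
    return pattern.count("#") + pattern.count("@") / 2

def seed_hits(pattern: str, s1: str, s2: str, i: int, j: int) -> bool:
    # True if the seed matches s1 at offset i against s2 at offset j.
    for k, symbol in enumerate(pattern):
        a, b = s1[i + k], s2[j + k]
        if symbol == "#" and a != b:
            return False
        if symbol == "@" and a != b and (a, b) not in TRANSITIONS:
            return False
        # '-' accepts any pair of nucleotides
    return True

print(seed_weight("#@#--##--#-##@#"))  # -> 9.0
print(seed_hits("#@#--##--#-##@#",
                "ACGTACGTACGTACG", "ATGTACGTACGTACG", 0, 0))  # -> True (one transition)

For instance, the default YASS seed discussed below has weight 9: eight # symbols plus two @ symbols counted at one half each.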
The advantage of transition-constrained seeds stems from the biological observation that transition mutations are relatively more frequent than transversions, in both coding and non-coding regions. Typically, biologically relevant alignments contain about the same number of transitions and transversions, whereas transitions are half as frequent in independently and identically distributed random sequences.
Transition-constrained seeds increase the possible number of transitions in a hit relative to spaced seeds without the transition constraint, and this is done without loss of sensitivity or efficiency.
The sensitivity of a given seed has been estimated using the algorithm of (15), which is a generalization of the one proposed in (11). Two main alignment models have been considered: a Bernoulli model (13) assumed to simulate alignments of non-coding DNA and a hidden Markov model (10) assumed to simulate alignments of coding DNA. By default, YASS currently uses the seed #@#--##--#-##@# of weight 9, which provides a good compromise in detecting similarities in both coding and non-coding sequences. The standalone version of YASS allows users to specify their own seeds. Several preselected seeds are provided by the YASS web interface.
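The exact sensitivity computation of (15) relies on dynamic programming over these alignment models; as a rough, hedged illustration of what is being measured, one can estimate sensitivity under a simple Bernoulli alignment model by Monte Carlo simulation. The probabilities and the comparison seed below are assumptions for illustration, not values from the paper:

import random

# Monte Carlo estimate of seed sensitivity: the fraction of random
# alignments (of match 'm' / transition 't' / transversion 'v' columns)
# on which the seed hits at least once. Probabilities are illustrative.
P = {"m": 0.70, "t": 0.15, "v": 0.15}
OK = {"#": {"m"}, "@": {"m", "t"}, "-": {"m", "t", "v"}}

def hits_somewhere(pattern, cols):
    # True if the seed matches at some offset of the alignment columns.
    n, k = len(cols), len(pattern)
    return any(all(cols[i + x] in OK[s] for x, s in enumerate(pattern))
               for i in range(n - k + 1))

def sensitivity(pattern, length=64, trials=20000):
    hit = 0
    for _ in range(trials):
        cols = random.choices("mtv", weights=[P["m"], P["t"], P["v"]], k=length)
        hit += hits_somewhere(pattern, cols)
    return hit / trials

print(sensitivity("#@#--##--#-##@#"))  # transition-constrained seed, weight 9
print(sensitivity("##--#-#--##-###"))  # an arbitrary spaced seed of the same weight

Under such transition-rich models, a transition-constrained seed typically scores higher than a same-weight seed built only of # and -, which is the effect that the exact computations in (16) quantify.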
Hit criterion
YASS is based on a multi-seed hit criterion that defines a hit as a group of closely located and possibly overlapping seeds. Two seeds belong to the same group if they occur within a bounded distance of each other and are located on close dot-plot diagonals. Distance threshold parameters are computed according to probabilistic sequence models taking into account substitution and indel rates, similarly to the models used in (17). Note that seeds of a group are allowed to overlap. An additional group size parameter sets a lower bound on the total number of individual matches and transitions of the group. Using the group size results in a flexible criterion that combines a guaranteed selectivity with a good sensitivity on both short and long similarities. More details on the hit criterion can be found in (16).
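As a rough sketch of this grouping idea in Python (the numeric thresholds below are invented placeholders; in YASS they are derived from the probabilistic models just mentioned):

# Group seed hits (i, j) that are close in position and lie on nearby
# dot-plot diagonals; thresholds are illustrative stand-ins for the
# statistically computed bounds used by YASS.
MAX_DIST = 16        # assumed bound on inter-seed distance
MAX_DIAG_SHIFT = 3   # assumed bound on diagonal difference (indels)
GROUP_SIZE = 12      # assumed lower bound on matches per kept group
SEED_WEIGHT = 9

def group_hits(hits):
    groups = []
    for i, j in sorted(hits):
        for g in groups:
            li, lj = g[-1]
            if i - li <= MAX_DIST and abs((i - j) - (li - lj)) <= MAX_DIAG_SHIFT:
                g.append((i, j))
                break
        else:
            groups.append([(i, j)])
    # crude size filter: overlapping seeds are double-counted here,
    # unlike in YASS, which counts individual matches and transitions
    return [g for g in groups if len(g) * SEED_WEIGHT >= GROUP_SIZE]

print(group_hits([(10, 12), (18, 20), (25, 28), (100, 40)]))
# -> [[(10, 12), (18, 20), (25, 28)]]  (the isolated hit at (100, 40) is dropped)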
Comparative tests
To validate the better performance of transition-constrained seeds compared with ordinary spaced seeds, several comparative experiments have been presented in (16). Transition-constrained seeds have been shown to be more sensitive with respect to some Bernoulli and hidden Markov models of alignments of coding and non-coding DNA [Tables 1 and 2 in (16)]. Moreover, transition-constrained seeds have been shown to be more sensitive in detecting alignments of real genomic sequences [Table 3 in (16)].

Figure 2. Three YASS dot-plots are shown, each obtained from pairs of closely related bacterial sequences. Green segments represent alignments of forward reads and red segments correspond to alignments between the reverse complement of one sequence and the forward read of the other.
YASS has been compared with bl2seq (NCBI BLAST 2.2.6) according to several criteria: running time, number of significant alignments found (with E-value < 10⁻⁶) and number of significant alignments found exclusively by one program, together with their total length [Table 4 in (16)]. The results show that YASS detects more significant alignments than bl2seq, in less time for large DNA sequences.
CONCLUSIONS
In this paper, we have described YASS, a new DNA local alignment tool. The proposed web interface features several output formats suitable both for an at-a-glance analysis and for a deeper analysis of alignments. An upcoming release of YASS will include multi-seed indexing strategies and an optimized processor-cache algorithm.
IBEXES ON BLACK STONES: NEW PETROGLYPHS IN SURKHANDARYA (South Uzbekistan)
Rock art is not unique in the regions of Central Asia; in south Uzbekistan, however, this phenomenon has not been closely studied yet, for the simple reason that no sites with rock art had been discovered. 1) The discovery of the petroglyphs occurred during the Czech-Uzbekistani-French archaeological expedition in the autumn of 2015, in the Sherabad District in Surkhandarya Province, south Uzbekistan. Several clusters with petroglyphs were discovered during two seasons of research (2015, 2016), and all of them were carefully documented and have been analysed.
INTRODUCTION
Rock art is not unique in the regions of Central Asia; in south Uzbekistan, however, this phenomenon has not been closely studied yet, for the simple reason that no sites with rock art had been discovered. 1) The discovery of the petroglyphs occurred during the Czech-Uzbekistani-French archaeological expedition in the autumn of 2015, in the Sherabad District in Surkhandarya Province, south Uzbekistan. Several clusters with petroglyphs were discovered during two seasons of research (2015, 2016), and all of them were carefully documented and have been analysed.
GEOGRAPHICAL CONTEXT AND ENVIRONMENTAL CONDITIONS
The research area is situated in the western part of the Sherabad District of the Surkhandarya Province, south Uzbekistan, in the vicinity of the Zarabag oasis. The border between Uzbekistan and Turkmenistan runs near the research area, 15 km to the west of the Zarabag village, along the ridge of the Kugitang Mountains.
It is to be found in the foothills of the Kugitang Mountains, which belong to the Gissar range, one of the ranges of the Pamir-Alays. 2) The slopes of the Kugitang Mountains (with the highest peak being Airibaba at 3138 m.s.l.) are long and gentle in the west of the range and short and steep in the east, which is typical for the Gissar Ranges. 3) The research area has the form of a foothill steppe, at an approximate elevation of between 700 and 1500 m.s.l., with a continental arid climate, where high summer and low winter temperatures are usual. The precipitation (approximately 200 mm) is concentrated mainly in the winter and spring seasons; there is none or very little during the rest of the year. 4) The landscape consists of vast rocky outcrops and elongated low ridges along the dry riverbeds, comprising stony soils with sparse vegetation. Loose lying stones occur on the slopes of the ridges (Fig. 1). Some of them have a surface covered by a black patina, which strongly contrasts with the surrounding landscape, and on which the majority of the studied petroglyphs occur. A geological analysis of these stones has shown that the petroglyphs are not associated with any specific type of stone; the black patina is deposited on various types of rocks. The three samples that were analysed were found to be igneous rocks, specifically granodiorite, lightly metamorphosed basalt, and peridotite and skarn deposits.
The steppe belt in the Kugitang Piedmonts is irrigated by several water sources. The three main seasonal rivers, the Shalkan Darya, the Machayly Darya, and the Kyzylalmi Darya, run from the mountain range mainly to the south-east, and the intensity of their flow depends on the precipitation in the mountains. Besides the seasonal rivers, there are springs in the still inhabited oasis, and karezes 5) had also been used in the past in several places in the foothills of the Kugitang Mountains. 6)
5) Karezes are underground tunnels that collect the underground water and bring it to the places where it is needed. Chelebi (1983: 234).
TWO SEASONS OF RESEARCH
A local herder, Rustam Sukhrobov, from the village of Zarabag, showed the first of the previously unknown petroglyphs to us in the autumn of 2015. Based on this find, we focused on the detection of further petroglyphs. The first season of prospecting and documentation took six days in the field at the turn of September and October 2015. 7) The research continued the following season, in the summer of 2016, and it also took six days in the field at the turn of August and September. The activities were supported by the student project named 'Petroglyphs in Surkhandarya Province (South Uzbekistan)' by the Faculty of Arts at Charles University, led by Anna Augustinová, and supervised by Ladislav Stančo. The project will also continue in the season of 2017.
METHODS OF DOCUMENTATION
The documentation of the petroglyphs was conducted in the same manner in both seasons of the research and was carried out in several steps. We paid attention to every stone with any sign of human intervention-not only to the clearly recognizable motifs. This detailed documentation was necessary for the projection of the placement of the petroglyphs in the landscape.
Each of the stones was localized by GPS (Garmin eTrex and Topcon GMS-2), and its normalized description was made with an emphasis on its specific characteristics. Each of the stones was also photographically documented, 8) and selected stones were documented in multiple photographs. This manner of documentation has enabled the creation of a 3D model of the stone or the creation of an orthophotograph 9) (Fig. 2). Afterwards, the spatial data was processed in QGIS software, the data for the particular stones was processed in MS Access, and the clearly recognizable petroglyphs chosen were redrawn in Adobe Illustrator. 10) During the evaluation of the motifs on the stones, we used the software DStretch 11) (Fig. 3). Thanks to the high contrast of the engravings/paintings on the retouched photograph, this has created an entirely new perspective on the documented petroglyphs.

7) Augustinová-Stančo (2016: 122-138).

8) The photographic documentation of the petroglyphs was made by A. Augustinová, L. Stančo and J. Tlustá (in the season 2015) and by A. Augustinová, J. Kysela, K. Paclíková and L. Stančo (in the season 2016).

9) The orthophotographs have been created by K. Paclíková.

10) The redrawing of the petroglyphs is in progress. The aim is to create a complete catalogue of the documented petroglyphs supplemented by the redrawings at the end of the project 'Petroglyphs in Surkhandarya Province (South Uzbekistan)'.
The only available topographic map for the research area had been created by the Soviet Military in 1983 (1:100 000) and was not sufficient for our aims. We used the satellite imagery of Google Earth as the underlying map for our work.
THE PETROGLYPHS IN THE KUGITANG PIEDMONTS
During both seasons (autumn 2015 and summer 2016), we detected 144 stones, and up to now we can recognize six clusters (Za_01-06) in which the stones with petroglyphs are concentrated (Fig. 5). We expect the number of petroglyphs to increase during the following seasons. The petroglyphs occur on loose lying stones with black patina, which strongly contrasts with the surrounding landscape. Each of the documented stones has its own number (P001-P144). 12) The stones are situated at an average altitude of 1072 m.s.l.; in fact, however, each of the clusters lies at quite a different altitude. 13) The average size of a surface with a motif is 52 × 38 cm, and the size of the motifs/compositions of motifs corresponds to the fact that the petroglyphs occur on loose lying stones, not on rock walls.
The first cluster (Za_01) is situated on the slopes of a low range above the dry riverbed and contained 31 stones with 64 petroglyphs. The range runs from the south-east end of the Zarabag village in the direction of the Burgut Kurgan site. The average elevation where the stones of this cluster are situated is 922 m.s.l. (P012, the highest lying stone, is at 920 m.s.l.; P008, the lowest lying stone, is at 866 m.s.l.). Apart from one (P046), all of them occur on the left bank slopes of the dry riverbed; only the petroglyph P046 is situated on the upper third of the opposite slope.

12) In the case that there are more motifs on different, clearly separated parts of the stone, the single sites are characterised by a letter (e.g. P061a, P061b).

13) The lowest altitude of stones occurs in the cluster Za_02 (approximately 864 m.s.l.) and the highest stones with petroglyphs are situated in the cluster Za_05 (approximately 1183 m.s.l.).
The second group (Za_02) is concentrated on the range that runs from the Burgut Kurgan site to the east. The slopes are noticeably steeper than in the cluster Za_01, and it is often very difficult to move on them; the same conditions must be taken into consideration for the moment of the creation of the petroglyphs. The cluster contains 11 stones, with 32 petroglyphs lying on the south slopes. The average elevation of the stones is 856 m.s.l. (P063, the highest lying stone, is at 876 m.s.l.; P066, the lowest lying stone, is at 854 m.s.l.).
As of now, the most extensive cluster (Za_03) occurs on the slopes of the range that runs from the north-west end of the Zarabag village to the village of Kampyrtepa and then to the north of this village. Up to the present, 68 stones with 181 petroglyphs have been documented, at an average altitude of 1129 m.s.l. (P104, the highest lying stone, is at 1225 m.s.l.; P061, the lowest lying stone, is at 1068 m.s.l.). The stones are situated on the main range (running in the north-west to south-east direction) as well as on the smaller ranges (running from the main range in the south-west direction). Most of the stones with petroglyphs occur on the slopes facing the south or, alternatively, the south-west.
The fourth cluster (Za_04) is situated in the vicinity of the first two clusters (Za_01 and Za_02). The slopes are oriented, as in the other clusters, to the south or, somewhat less often, to the south-west, and they are considerably gentler than in the other clusters. They slope down to the wide riverbed of the seasonal river Shalkan, flowing from the Zarabag oasis to the village of Kayrit. Twelve stones with 24 petroglyphs have been documented in this cluster, at an average altitude of 886 m.s.l. (P051, the highest lying stone, is at 903 m.s.l.; P059, the lowest lying stone, is at 860 m.s.l.).
The fifth concentration is the cluster Za_05. There is a very high probability that many more stones with petroglyphs will be found, since the slopes of this group have not yet been surveyed completely. For now, twelve stones with 24 petroglyphs have been documented, and this is the highest lying cluster, at an average altitude of 1183 m.s.l. (P130, the highest lying stone, is at 1229 m.s.l.; P127, the lowest lying stone, is at 1100 m.s.l.).
The last concentration (Za_06) is situated to the north of the village of Karabag, and ten stones with 27 motifs have been detected there.
MOTIFS
Among the 333 motifs, there are 166 clear objects (animals, human figures, chariots etc.), 52 motifs have the form of unspecified animals, and 91 motifs represent geometric patterns or not clearly recognizable objects.
Among the recognizable animals, the most plentifully represented are various species of ibexes and goats (115 depictions). 14) The second most frequently depicted animals are camels (15 depictions), although, in comparison with the ibexes and goats, this number is almost negligible. There are also human figures depicted on the petroglyphs (9 depictions), sometimes alone and sometimes in interaction with animals or objects. The next group of motifs comprises chariots (4 depictions), sometimes shown with draught animals, and depictions of loose wheels (17 depictions). Other recognizable motifs represent cattle or snakes.
Among the geometric patterns, the motif of a quadrangle with a point in the middle is often depicted. Sometimes this motif represents the lower part of an ibex or goat (the chest and legs connected by a line that represents the ground, with the point in the middle of the quadrangle). Several other motifs represent geometric patterns reminiscent of the number eight or of eyeglasses.
DATING OF THE PETROGLYPHS AND THE LANDSCAPE CONTEXT
The dating of the petroglyphs is a complicated issue. If we focused on the absolute dating of the petroglyphs, we could take into consideration dating through XRF (X-ray fluorescence) analysis. This method enables the time of creation of a petroglyph to be determined by measuring the level of accumulated manganese on the surface of the stone/petroglyph. Dating of petroglyphs using this method has been carried out in the USA, in the Coso Range 15) and on the Colorado Plateau. 16) The deposition of manganese on the surface depends on many factors (orientation of the stone to the cardinal points; type of stone; amount of dust; local climatic conditions etc.) and, because this method has not yet been used in Central Asia, no calibration curve is available.
As can be seen on one of the petroglyphs in the surroundings of Zarabag (P111, Za_03; Fig. 4), it is not possible to state that the rate of patina deposited on the surface clearly speaks for the age of the petroglyph. The whole creation of the engraving, judging by the style, technique and composition of the motifs, can be dated to a single moment. But the contrast between the petroglyph and the surrounding surface is very different on the different sides of the stone. While on the more vertical side the contrast between the petroglyphs and the surrounding surface is very high (even at the top of that side the patina has not yet been deposited), on the more horizontal side, which is more exposed to the sun, rain and other external factors, the difference between the engraved motifs and the surrounding surface is almost invisible.
XRF analysis thus represents an interesting possibility for treating the petroglyphs, but the use of this method in the study area is conditioned on the prior creation of a calibration curve. On top of that, the example of stone P111 shows how strongly external conditions influence the rate of patina on the stone and petroglyph surface.
Because there was no possibility of dating the petroglyphs in the piedmonts of the Kugitang Mountains using natural science methods, we analysed the stylistic and iconographic aspects of the motifs in comparison with similar depictions in other regions. Most of the depicted motifs evince similar characteristics (technique, style, theme). Based on such analogies, we can date the majority of the petroglyphs in the study area to the Late Bronze Age and Early Iron Age.
Besides the rock art, these two periods are represented at numerous sites in the study region: at excavated sites such as Tilla Bulak, 17) Kayrit, 18) and Burgut Kurgan, 19) and at sites detected during the prospecting in the piedmont. 20)

15) Lytle et al. (2008).

The following examples of analogies, which could help us to date the petroglyphs in the study area, represent only a sample of the wide collection of analogous rock art in the area of Eurasia.
The first cases of similar motifs can be found in the extended complex of petroglyphs in the Chu-Ili Mountains, in the Kulzhabasy Range in the Zhambyl Region, south Kazakhstan. 22) Among the motifs dated to the Late Bronze Age, frequently depicted goats and ibexes can be seen, as well as camels, chariots and wheels. 23) Here too, there is evidence of human activities during the Bronze Age. 24) Another analogy to the depicted goats/ibexes and wheels can be seen in the complex of petroglyphs at the site of Jorbat in North Khorasan Province, northern Iran. A stylistic and iconographic similarity between the motifs there and the motifs in the Kugitang Piedmonts is obvious. The petroglyph complex lies in the vicinity of the site called Rafteh, which is dated to the Bronze Age and Early Iron Age, 25) and it represents a landscape context for these two periods similar to that in the piedmonts of the Kugitang.
Yet another analogy comes from a more distant region, the Ukok Plateau in the Altai Mountains; the stylistic similarity is obvious and, in the context of nomadic societies covering long distances, it is not incomprehensible. The goats and ibexes depicted there, similar to those in the piedmonts of the Kugitang, have been dated to the south Siberian Late Bronze Age, represented in the area by the Afanasievo culture. 26)

20) Stančo (2016: 73-85).
CONCLUSION
The petroglyphs are an inseparable part of the cultural landscape, which was created and used by the inhabitants in the past. They represent an important source for the study of settlement patterns, and they have the same significance as the settlements or burial sites.
This brief report has focused on introducing the newly discovered rock art in Surkhandarya Province, research on which is ongoing. For now, it is obvious that the majority of the petroglyphs in the microregion of the Zarabag oasis in the piedmonts of the Kugitang Mountains can, on the basis of stylistic and iconographic analogies, be dated to the Late Bronze and Early Iron Age, the very periods that are represented in the study region by numerous archaeological sites. All of this archaeological evidence allows us slowly to reconstruct the form of the cultural landscape in these periods.
FROM STATE PLANNING TO PUBLIC CONTRACTING: A NECESSARY OPTION FOR SUSTAINABLE DEVELOPMENT
This article analyzes the state dynamics that runs from planning, through the public budget, to its materialization in public procurement. Its focus is to find, in each of these institutes linked to the economic and financial order, the point of intersection with sustainable development, defended in the present work as one of the fundamental principles of the Republic, through the intertwining of arts. 1º, 3º, 170 and 225 of the Constitution. From this, it sheds some light on what is called sustainable public procurement, calling attention to state purchasing power, which, because of the large volume of resources it moves, can be an important tool to induce and influence the market towards more sustainable behavior. Thus, recognizing the socio-environmental crisis suffered by the planet, it is intended to articulate forces to develop a culture oriented towards reconciling the dimensions of sustainability, especially its economic, social and environmental aspects. Adopting a qualitative research methodology of bibliographic review, it was concluded that, from planning to contracting, the State necessarily needs to opt for sustainable development, on pain of aggravating the socio-environmental crisis or even reducing efforts in sustainable public procurement to mere isolated initiatives in the field of public administration.
INTRODUCTION
The world is currently undergoing severe economic, social and environmental disturbances which, combined, have led us to question the model of development being pursued for the future.

In this sense, the environmental crisis is no longer a hermetic issue linked only to environmental problems. The issue is cross-cutting, connecting itself to other aspects and elements of society.

One can no longer say that a nation is developed when its ecosystem languishes and individual quality of life is threatened. A country cannot be considered rich if an abyss of social inequality stands among its citizens.
In Brazil, our Political Charter establishes, in its 3rd article, that the goals of our Republic are the construction of a free, just and solidary society, which guarantees, in addition, national development, the eradication of poverty and the reduction of social and regional inequality.

Linked to these purposes, the Constitutional Congress, in article 225, established that, as a pathway to development, everyone has the right to an ecologically balanced environment, crucial to a healthy quality of life, it being a duty of the public authorities and the collectivity to defend and preserve it for present and future generations.

Besides, in view of the section on the economic and financial order, and considering the aforementioned State duties and ethical objectives, nothing is more evident than demanding from the State a necessarily planned performance, exactly as stated in article 174.
Given these points, the purpose of the present article is to verify whether the economic and financial order of the 1988 Constitution, from the activity of state planning up to its accomplishment in public contracting, is necessarily bound to a road leading to sustainable development.

For this purpose, it should be noted that the choice of the planning and public contracting institutes was not random. They concern two extremes of state dynamics: the first deals with generic and abstract actions, geared towards establishing aims and broad objectives for the State; the second reveals itself as one of the most concrete acts of the Public Administration, having as its ultimate goal the effective and positive accomplishment of the planning set before.

To this end, this article first expounds on the evolution of the concept of economic development, moving through the study of the dimensions of sustainable development to an analysis of state planning and public contracting, with reference to the evaluative and axiological directives of the 1988 Constitution.

Concerning the methodology, qualitative research was adopted, through a technique of bibliographic review, with the clear intent of linking sustainable development, state planning and public contracting.

Finally, the present article is divided into five topics, of which the first and the last are the introduction and final considerations, respectively. The second concerns the principle of sustainable development in the 1988 financial and economic order, the third examines state planning and its connection to development, and the fourth focuses on public contracting from a perspective of sustainability.
SUSTAINABLE DEVELOPMENT IN THE 1988 FINANCIAL AND ECONOMIC ORDER
This topic is justified by the need to analyze the grounds and origins of development, notably its break away from an ideal of exclusively economic development towards what is currently known as sustainable development.

From there, an evaluation of the very postulate of sustainability from the standpoint of the Federal Constitution is intended, especially in the title relating to the Economic and Financial Order, thereby laying the ground for the study of state planning and public contracting.
From growth to development
According to Miranda (2018), up until the middle of the past century, development was synonymous with economic growth, the main way of attaining it being the accomplishment of progress. This led governments to put faith in policies and actions geared towards the intensive use of resources to increase production and wealth, by creating industries and fostering consumption.

In this sense, countless transformations marked the mentioned period, especially a surge in awareness that began to question this growth model, based until then on the irrational and unsustainable increase in production and consumption. Environmental disasters and catastrophes revealed mankind's inability to manage such exploitation of natural and social resources, drawing attention to the plight suffered by the planet.

From there, Varella (2003) explains that the confrontation with the environmental crisis sparked a standoff between Northern and Southern countries. The former, noting the loss of their populations' quality of life, became proponents of a global pact in defense of the environment. The latter, still suffering from severe social problems, demanded the right to development as a justification for pursuing economic growth.

On this matter, Granziera (2014, p. 36) reveals that the clash of interests was so intense that Southern countries formally disavowed any action that might impede their growth, "even if it meant having to cope with environmental problems." On the other hand, movements such as the Club of Rome (1968), the Stockholm Conference (1972), the Brundtland Report (1987), ECO-92, the Millennium Declaration (2000), RIO+10 and the United Nations Summit (2015) were of significant importance in questioning and taking a stand against the then-current model, creating a split between the concepts of growth and development.

The international community, in its turn, began demanding a paradigm shift, so that development would no longer be measured by and linked only to strictly economic criteria, claiming a multidimensional amalgamation with social and environmental aspects.

Following such a trend, the right to development began to lose ground in the realm of International Economics while gaining, at the same time, relevance in International Environmental Law. The growing decay of the planet thus became a matter of concern to all, no longer representing an economic conflict between North and South (VARELLA, 2003, p. 31).
Corroborating this analysis, Milaré (2014) clarifies that the myth of mere growth is being increasingly (albeit slowly) reconsidered by society, which is searching for alternative measures able to reconcile full development, environmental preservation and the improvement of quality of life.

Following this rationale, Nusdeo (2002) advocates the need to impose a conceptual distinction between growth and development. The latter would be deeper than the former, revealing itself as a pathway or passage, composed of many steps, starting from a state of underdevelopment to attain a "developed" status. In this sense, development is linked to structural modifications of not only an economic, but also an environmental and social nature.

In addition, for the author, growth concerns solely the availability of goods and services, without effectively implying structural and qualitative change in a nation. It would represent, therefore, "an upsurge, a cycle, and not a stable process" (NUSDEO, 2002, p. 19). Development, effectively, is more complex, accounting for ideas of wellbeing, improvement of living conditions, freedom, acquisition of human capacities, wealth distribution, poverty reduction, plurality, environmental protection, HDI, among others (FEITOSA, 2013). Feitosa (2013), with this outline, sums up this conceptual shift from growth to development given an inflection of economics into law. According to the author, this trajectory begins with the illuminist ideal of progress, passes through the understanding of growth as a durable, accumulative and speculative strategy, and arrives at the present concept of development, "considered as a plural inclusion and capacity recuperation process, which guarantees and (is) guaranteed by rights" (FEITOSA, 2013, p. 177).

Therefore, from this context and this conceptual shift, the definition of economic development takes form, attaching itself to new aspects, notably social and environmental ones, creating the idea of sustainable development, which is the subject of the next topic.
Sustainable development: environmental, social and economic dimensions
After its origins in the narrow definition of economic growth, sustainable development underwent a conceptual evolution, relating to other aspects of life and society, and gaining prominence with the Brundtland Report (Our Common Future), elaborated by the United Nations World Commission on Environment and Development.

According to the mentioned document, sustainable development is that "which allows the present generation to satisfy its needs, without compromising other generations' satisfaction of their own" (BRUNDTLAND COMMISSION, 1991, p. 46). Such a definition, although criticized by many authors (such as Arthur Lyon Dahl, Reinaldo Dias, Amartya Sen, Maria Helena Martins Brasileiro, and others), had an important role in publicizing the debate on the matter, drawing attention, in its first moments, to the search for balance between economic development and environmental preservation (MIRANDA, 2018). Further on, the discussion concerning sustainable development attained greater breadth and cross-sectionality, evoking Human Rights as well. According to Granziera (2014), the Rio/92 Declaration, for instance, set a close relation between poverty and environmental decay, in such a way that one can only speak of a healthy environment if social rights are considered as well.

It is exactly on this expansion and broadening that Feitosa (2013) explains that it was because of the collapse of the market's deregulating structures, the worsening of the environmental crisis and the greater awareness of the dangers of inequality that the concept of sustainable development detached itself from a merely environmental standpoint to embrace human and social aspects.

With this, sustainable development came to be understood in a way that aims at reconciling the social, environmental and economic dimensions, forming the so-called triple bottom line (MIRANDA, 2018). Under such a perspective, Mendonça (2018), emphasizing the environmental dimension, explains that in this balancing task, the promotion of sustainable development is related to the search for sustainable patterns of consumption and production. Its purpose should head towards an increase in the efficiency of energy usage, with the goal of reducing the pressure on the environment, as well as pollution and the depletion of natural resources, given that the current model places the very survival of mankind at risk. Freitas (2019, p. 74), following this trend, warns that "there can be no worthy longevity in a degraded environment" and that "there can't even be human life with no careful protection of environmental sustainability", as "either the quality of the environment must be protected or there will simply be no future for our species".
From the standpoint of the economic dimension, sustainability poses a double challenge, as stated by Cruz and Ferrer (2015): on one hand, the intention of increasing wealth generation in an environmentally sustainable way and, on the other, finding mechanisms for its fair distribution. Some measures address these purposes, such as the need to invest in "green" technology sectors and in renewable energy sources, the prohibition of abusive and unsustainable practices, and a new model for the generation of wealth, away from classic consumerist patterns. Freitas (2019, p. 75) explains, in this way, that the economic dimension of sustainability cannot be detached from rationality and the measurement of ecosystemic consequences. That is, transactions must deal directly with externalities in order to repress dysfunctionalities, reducing waste and promoting robust investments in educational improvement and increases in individual income, "in a cost-benefit equation that leans towards positive externalities".

From the standpoint of the social dimension, however, sustainable development is linked more intensely to the principle of human dignity, in such a way that, according to Freitas (2019, p. 65), "an excluding, insensible and inequitable development model cannot be tolerated".

Here, the task at hand is aligning the exercise of fundamental rights with the formulation of public policies that can effectively reduce poverty and inequality, without lapsing into a mere growth process dissociated from environmental protection.

Such dimensions, as observed by the author, are entangled: they are not a mere union of sparse characteristics, but articulated and systematized aspects, in a mutual and complex relationship between individuals, as well as between them and the planet.

Therefore, it is from this study of the dimensions of sustainability that the importance of discussing whether Brazil, through its Political Charter, addresses environmental and social problems becomes clear. That given, it is essential to analyze whether the Economic and Financial Order established by the 1988 Constitution is aligned with sustainable development, a relation that will be addressed in the following subtopic.
An economic and financial order geared toward sustainable development
Having unveiled the origins and dimensions of sustainable development, it is now appropriate to investigate whether the Constitution, especially in its 7th title, concerning the economic and financial order, has embraced sustainability as an orienting vector of state activity, which will lay the ground for the analysis of the following topics.

Indeed, article 1, clause III of the Constitution already gives a first sign of favoring sustainability, by founding the Democratic Rule of Law on the dignity of the human person. In the same act, article 3 states the fundamental objectives of the Republic, highlighting, among them, the making of a free, just and solidary society, able to uphold national development, eradicate poverty and reduce social and regional inequalities.

Following this line of thought, one may say that the referred article is a synthesis of all the dimensions of sustainable development. After all, it would not be possible to plan for a just society if economic activity were detached from the reduction of poverty and from environmental problems.

Moreover, a nation cannot be classified as developed if social and regional inequalities linger among its citizens. There will be no quality of life if the ecosystem languishes. There is no possibility of ensuring national development should human dignity and a solidary society not be ensured as well.

Therefore, it is possible to affirm that the Constitution, through a simple interpretation of its fundamental principles, as expressed in its first and fourth articles, is committed to sustainable development. Such an inference, it should be said, is quite relevant, since it is usual for the interpreter, while analyzing the constitutional section on the economic and financial order, to make a narrow and non-systematized reading, forgetting those foundations and objectives.
In article 170, dealing with the general principles of economic activity, the Constitution did not forget its founding goals, stating at its head that the purpose of economic activity is to ensure a dignified existence to all, as demanded by social justice.

Besides, clauses VI and VII highlight the status the Constitution grants to sustainable development, explicitly indicating that the economic order should observe environmental protection and the reduction of social and regional inequalities.

This circumstance, consequently, reveals the concern of the founding constitutional congress with correcting the mistakes of an economic model that ignores the biological, physical and chemical limitations of the ecosystem (PAIVA JÚNIOR, 2018). With this desideratum, article 225, although lying outside the section on the economic order, attaches itself, in a systematic and teleological way, to the head and clause VI of article 170, establishing that everyone has the right to an ecologically balanced environment that ensures a healthy quality of life, imposing on the public authorities and the collectivity the duty to defend and preserve it for present and future generations.

Paiva Júnior (2018, p. 115), in this sense, states that "if the economic order is geared towards the accomplishment of a dignified existence and social justice, its practice cannot result in the reduction of the population's quality of life", it being necessary, therefore, to strive for balance among the dimensions of sustainable development. Freitas (2019), in a study on the constitutional value of sustainability, explains that the systematic intertwining of articles 3, 170, VI, and 225 of the Charter is exactly what determines the orientation of the Brazilian State, along with its economic and financial order, towards continuous and lasting development able to reduce inequality; in other words, sustainable development.
Corroborating this understanding, Silva (2004, p. 63) highlights that the constitutional section on the Economic Order shows that national wealth and production goods should be compatible "with the attainment of quality of life throughout the entire population, considering the perspective of working in equitable conditions with other social strata." The author, furthermore, concludes that, in order to make this possible, state action should be guided by the fundamental principle of national development.

Additionally, he states that "the entire constitutional order, concerning Section VII of the Economic Order, was conceived and constitutionally structured in order to make national development possible", seeking to accomplish the fundamental tenets of the Republic (SILVA, 2004, p. 63). Here, however, a small addendum to Silva's thinking is made, given that, as explained before by Freitas, such national development can only be the sustainable one. Adri (2007, p. 91), while analyzing the matter, advocates that the entire constitutional corpus "shows that economic order and the State can only have their raison d'être in serving the human being, its citizens, and not the other way around." In this way, development and, consequently, the constitutional economic order, observed from a strictly economic perspective that ignores the tenets of dignity, quality of life and environmental balance, would have no legitimacy to accomplish the founding goals of the Republic, especially considering articles 1, III, and 3.

Vieira (2010), on this matter, explains that the telos of the economic order is to ensure a dignified existence to all, as dictated by social justice, observing environmental protection (head and clause VI of article 170); it therefore has as an objective an ecologically balanced environment, essential to a healthy quality of life.

Finally, the referred author concludes that, through a contextualized analysis, the constitutional economic system shows, "in an irreversible and uncontestable way, [that] the sustainable development model is the preferred one by systematic interpretation of the norms stated in articles 170 and 225" (VIEIRA, 2010, p. 11). Therefore, considering the foregoing, it is possible to affirm that sustainable development is more than compatible with the economic and financial order; it is the orientation vector to be pursued by the Brazilian State. As such, there is no room for development that is not in line with the multidimensionality of sustainability, reconciling the complex environmental, social and economic aspects.
STATE PLANNING
Considering that the prior investigation had as its goal demonstrating the constitutional congress's necessary option for sustainable development, the present topic intends to analyze one of the institutes of the economic and financial order, namely state planning.

Therefore, some preliminary general observations will be made concerning its historical evolution, definition and nature within the Brazilian legal system. Afterwards, development will be related to planning, with particular attention to connecting the matter with public contracting.
Historical evolution, concept and juridical nature
It is curious to observe that Brazilian state planning is a relatively young matter in the constitutional corpus. Cardoso Júnior (2011) notes that from 1889 (the First Republic) to 1930 there was no state planning, with the then-effective Constitutions entirely omitting the subject.

Afterwards, in the period between 1933 and 1955, known as the Vargas Era, Brazil lived through a period of Non-Systemic Planning, marked by the SALTE Plan, which aimed to respond to a context of industrialization by creating the first state companies (such as Petrobras, BNDE and Vale do Rio Doce).

From 1956 to 1964, the author points out, the country pursued Discretionary Planning, which had a more present and lasting character, albeit freely adopted by the incumbent. This period was marked by developmentalism and the 31 Goals Plan of Juscelino Kubitschek. From 1964 to 1979, during the military regime, planning became technocratic and bureaucratic, as the government assumed everything could be solved by the know-how conceived by its authoritarian bureaucrats. In this period the Superior School of War (Escola Superior de Guerra) was created, along with the PAEG (Government Economic Action Program, Programa de Ação Econômica do Governo) and the second PND (National Development Plan, Plano Nacional do Desenvolvimento).

From 1980 to 1989, the referred author describes the moment as a series of attempts to implement stabilization plans: during the redemocratization period there were attempts to implement the Cruzado (1986), Bresser (1987), Verão (1988) and Maílson (1989) plans, which were not as successful as promised. From 1990, a year marked by democratic consolidation and managerial reforms in the state, the scholar notes that, since 1994, with the Real Plan, state planning was effectively marked by its stabilization plans, especially with the obligation of the public authorities to elaborate pluriannual plans every four years.

Having thus evolved throughout Brazilian history, the idea of planning was addressed by the 1988 Constitution in the section on the economic and financial order, especially in article 174.
Art. 174. As a normative agent and regulator of economic activity, the State will exercise, as written in law, the functions of surveillance, stimulation and planning, the latter being determinant for the public sector and advisory for the private sector. § 1º The law will establish the directives and grounds of the planning of balanced national development, which will incorporate and make compatible the national and regional development plans.
Hence, planning became a subject of greater study by legal doctrine. Meirelles (2014, p. 844), for instance, defines planning as the "study and establishing of directives and goals that will orient government action, through a general government plan (…)". Grau (2006), in the same vein, states that planning is a rational action characterized by the signaling of future social and economic behaviors, with goal-making and the setting of coordinated means of action.
By such a definition, the scholar (2006) advocates that the nature of planning is not intervention, but a technique or rational action method that aims to qualify state intervention. Such a line of thought, however, is not unanimous; the opposing argument holds that, with article 174 itself, the constitutional congress intended to characterize planning as one of the forms of intervention. According to Adri, the legal corpus does not confer on the mentioned institute the generic idea of a mere technical, administrative and financial act. His position follows:

It does not suit dynamism and efficacy that the juridical order confers on planning the idea that it is a technical act void of ideological content, being granted, solely, the nature of administrative and public-resource financial action, with no interference or association with social control and demands, which would result in its neutrality (…) Planning presupposes political action with its own purpose and dynamizing status, which assimilates the diversity of choices facing certain objectives as identified by standards chosen by society. (ADRI, 2007, p. 123)

However, for the purposes of the present article, the definition of Eros Roberto Grau will be adopted, since, for us, planning is a means for the State to achieve sustainable development. It does not seem possible to intervene in the economy with the purpose of planning for its own sake, as an end in itself. The activity of planning does not derive from Law, but from other sciences, such as Economics, Political Science, Statistics, Accounting and many others, which seek to establish techniques and methods so that the public authorities can, prior to intervention, rationally outline practicable goals.
Besides, from such a conception, planning becomes the directive vector of the entire Administration, spreading across the state structure and avoiding its detachment from reality and from the administrative routine, so as to encourage the public manager to adopt a more professional and managerial performance, closer to the State's goals and objectives.

Furthermore, as noted by Marrara (2011), by adopting this definition, the theme of planning is no longer tied to an ideological discussion between economic liberals and socialists, given that its conception is primarily technical, inasmuch as the State cannot benefit from disorganization, randomness and inefficiency in the achievement of its public policies.

Thus, planning can be understood as a technique or method that allows the State, in rational, systematized and cross-cutting action, to project goals and objectives geared towards formulating public policies that necessarily pursue sustainable development, a theme that will be further described in the next topic.
State planning: a necessary option for the sustainable development of the Brazilian state
As described, state planning was provided for in the economic and financial order, especially in article 174 of the Constitution. The provision, however, represents only the opening of the debate, since the act of planning is found throughout the entire constitutional text and, as is intended to be demonstrated here, the entire Charter is oriented towards sustainable development.

In any case, the starting point will necessarily be article 174. At its head, it is written that "As a normative agent and regulator of economic activity, the State will exercise, as written in law, the functions of surveillance, stimulation and planning, the latter being determinant for the public sector and advisory for the private sector." According to Silva (2004), in reading the referred provision, the Constitution has listed two roles of state intervention, normative agent and regulator of economic activity, along with the three functions to carry them out: surveillance, stimulation and planning. Here, four relevant observations should be made.
The first is that the State, by exercising its planning function, has the objective of qualifying and rationalizing its interventive roles, either as a normative agent or as a regulator of economic activity, which indicates the appropriateness of Eros Roberto Grau's concept. Secondly, for the author (SILVA, 2004, p. 105), although the Constitution declared that the surveillance, stimulation and planning functions will be exercised as written in law, "by foreseeing the determining character of planning to the public sector, the constitutional congress has vested the norm of article 174 with a self-executing character".

Thirdly, when the text states that the act of planning is determinant for the public sector and advisory for the private sector, it means that the State may suggest goals and means, to which it will be bound and which it must therefore strive to enforce; it may not, however, impose them in any way on private initiative (TAVARES, 2011). Besides, the public authorities may create mechanisms and incentives so that private agents may collaborate with state planning, but always voluntarily, given that the principle of free initiative is also a constitutional tenet.
In a fourth commentary on the head of article 174, Tavares (2011) explains that some scholars, such as Miguel Reale and Oscar Dias Corrêa, distinguish planning from planification. While the former is advisory to the private sector, planification is compulsory and enforceable on the entire collectivity, as experienced in the former USSR.

Moving on to § 1 of article 174, the true intention of the constitutional congress is revealed, namely: planning must be geared towards sustainable development. According to the provision, "the law will establish directives and grounds for planning balanced national development, which will incorporate and make compatible the regional and national development plans". Grau (2006), commenting on the text, argues that § 1 has the purpose of defining and qualifying the planning addressed at the head of article 174. Thus, the State will not exercise a function of planning anything whatsoever, or of planning indiscriminately, but of planning "balanced national development", which, as explained by Freitas, can only be sustainable development:

Sustainability is, in the Brazilian legal system, among its values, one of constitutional magnitude. Furthermore, it is a "supreme value", when interpreting the Charter as an instrument of long-term social and biological balance. It is easy to justify: from the introductory statement of the Constitution, development stands out as one of the "supreme values". Which development, may we ask? It cannot be that of the imperious and nature-degrading anthropocentric view, nor that of the insensibility typical of parasitic relations. It is sustainable development, or preferably, sustainability, that appears as a supreme value (FREITAS, 2019, p. 121)

Supporting this, Adri (2007, p. 142) also holds that "not any political provision will answer this demand from the constitutional text, but planning geared towards (sustainable) development", which presupposes an articulation of interests aimed at reconciling economic (wealth production), social (wealth distribution) and environmental (importance of a healthy quality of life) factors.
Further in this context, Silva (2004, p. 112), comparing the French and Italian experiences, explains that the Brazilian development model "seeks, precisely, the diminishing of local, regional and national differences", since the Federative Republic of Brazil has the ontological and inexcusable duty of ensuring development.

In this view, the public authorities have the moral and legal duty of executing their plans and adopting every measure necessary for their execution; but from the decision to plan, through the (content) elaboration phase, up to implementation, planning must be oriented towards sustainable development.

That is, the fundamental principles, anchored in human dignity, in the making of a just, free and solidary society, in the eradication of poverty and in the goal of reducing regional and social inequalities, are the gravitational center of planning.

From there, every other sphere relating to planning will necessarily be bound to the duty of sustainability, considering the evident intertwining of articles 3, 170 and 225 of the Constitution.
For instance, when discussing urban planning, article 182 establishes the duty of the state to order urban policy to "the full development of the city's social functions and ensuring its inhabitants wellbeing". That said, there's a clear link with the aforementioned provisions, in a way that observing the city's social function and ensuring wellbeing is a matter inexcusably linked to sustainable development, in an indicative task of conciliating the social, economic and environmental instances.
In the same way, when considering economic planning one cannot forget article 170, which is founded on the valorization of human labor and on free initiative, having as a goal ensuring a dignified existence for all, as demanded by social justice, observing, furthermore, environmental protection.
Similarly, in addressing the planning of the national financial system, article 192 of the Charter states that it will be "structured so as to promote balanced development in the country and serve the collective interests".
Considering, furthermore, educational planning, article 205 states that education "will be promoted and fostered with societal collaboration, seeking full personal development, preparation for the exercise of citizenship and qualification for work", associating itself clearly with articles 3, 170 and 225 of the Constitution, given that there will be no quality education if no heed is paid, respectively, to the diminishing of inequality, respect for social norms and a healthy quality of life (with a balanced environment).
The same happens with scientific and technological planning and innovation, given that articles 218 and 219 establish the state's duty to relate them to the "country's cultural and socioeconomic development, along with the population's wellbeing and national technological autonomy", which, combined, represent the dimensions of sustainable development.
State budgetary and financial planning will be directed towards sustainability as well; the theme, however, due to its strong connection to public contracting, will be addressed in the next topic.
In any case, what is evident is the clear and explicit constitutional will to direct state planning, along with all its correlated matters, towards sustainable development, thus guided by the ethical environmental duty, the purpose of ensuring everyone a dignified existence, the eradication of poverty and the diminishing of inequality.
Budgetary and financial planning: paving the way to sustainable public contracting
Given that the economic and financial order, as well as its institute of state planning, follows the way of sustainable development, this orientation must, coherently, be observed in budgetary planning.
Hence, the analysis of financial resource allocation, intended to realize the public policies planned by the state, proves to be of the utmost importance.
As observed by Leite (2017), initially the budget was considered to be a mere accounting piece, detached from the idea of planning, with no goal or objective setting. Hence there was no concern with the true needs of the collectivity. As it evolved, discussion began on the budget-program model, adopted in Brazil, by which resource allocation is linked to the objectives, goals and projects of the state plan. Pereira (2008), on the other hand, argues that the environmental crisis and social problems experienced by the planet demand another evolution, pursuing the so-called "sustainable budget". With this, he initially suggested greater investment in personnel improvement (obviously geared towards sustainability) so that there is greater quality in the control and evaluation of public expenses. For him, any budgetary allocation should be considered in a holistic and integrated way, rather than as isolated expenses performed without considering the whole (especially their externalities).
Pereira's viewpoint is an interesting one, as the constitutional order would allow such an evolution in budgetary planning, requiring only a paradigm shift; not that it is an easy or simple task, especially in the political field, engaging the incumbents responsible for drafting the budget in a mindset centered around sustainability.
In Brazil, the budget is expressed, in general terms, in three budgetary laws: the Pluriannual Plan (PPA), the Budgetary Directives Law (Lei de Diretrizes Orçamentárias - LDO) and the Annual Budgetary Law (Lei Orçamentária Anual - LOA).
Indeed, the PPA represents long-term strategic planning, elaborated every 4 years (non-coinciding with the presidential mandate), establishing, in generic terms, the goals and objectives of the Administration for capital expenses (relating to investments). The LDO, on the other hand, is short-term strategic planning, representing a link between the PPA and the LOA, establishing, in a summarized way, goals and priorities for the Administration, on top of the application policy of the financial stimulation agencies (e.g. BNDES, Caixa and Banco do Brasil). The LOA, in turn, is the budgetary piece that forecasts revenues and sets the expenses of the following accounting period, as guided by the PPA and LDO.
Moreover, they are all linked to article 174, § 1 of the Constitution. Therefore, budgetary planning is tasked with making the national and regional development plans compatible, as, pursuant to article 165, § 4, these plans will be elaborated in accord with the PPA.
In addition, § 7 of article 165 meets all that is advocated in this study, since the LOA will have among its functions that of "diminishing interregional inequalities", thus linking itself to article 3 of the Constitution.
In other words, the attainment of the fundamental principles of the Republic does not escape the public budget. Plans and programs, along with their expression in budgetary laws, are not an end in themselves. They are means of attaining human dignity, of building a just society, of ensuring a healthy quality of life (with a balanced ecosystem), bent on diminishing inequality.
From there, one may say that an ineffective allocation of resources in programs and actions seeking to accomplish the environmental ethical duty of ensuring an ecologically balanced environment, or having the purpose of diminishing social inequalities, would amount to standing against the very fundamental principles enshrined in the Constitution.
The budget has, therefore, as a means of making public policies possible, the duty to seek sustainable development and, as observed by Pereira, the analysis of public expenses demands a holistic approach, one that integrates the social, environmental and economic dimensions.
All things considered, budgetary planning is the strongest link to public contracting. As stated in article 165, § 10 of the Charter, "the Public Administration has the duty of executing budgetary programs, adopting the necessary means and measures, with the goal of ensuring the effective provision of goods and services to society." Additionally, one of the ways by which the public authorities may deliver goods and services to society is through public tenders and contracts. Here, one may say, the broad goals and objectives of the State (planning) are joined with the concrete actions meant for the collectivity in the effective acquisition of goods and services.
It is in this moment that the two extremes of this complex, interdependent state dynamic are linked, with planning on one end and public contracting on the other, the latter being the most tangible way of concretizing the policies and plans established by the Brazilian state.
Given this, if the starting point is a sustainable approach to the economic and financial order, the following step is that state planning and, consequently, its budgetary plan, be oriented towards attaining sustainable development. Therefore, concrete action, materialized in public purchases, should also gravitate around such a provision, which has currently been referred to as sustainable public contracting.
SUSTAINABLE PUBLIC CONTRACTING
From the understanding of state planning as a means by which the public authorities elaborate their broadest and most general goals and objectives, the public contract is, in its turn, one of the tools used by the State to concretize and achieve its plan.
Therefore, as we have already covered in the course of planning, the present topic is justified in order to analyze the other end of this dynamic, also linked to sustainable development, hence currently known as sustainable public contracting (or Sustainable Public Tender, or Green Public Tender).
A new administrative law and sustainable public contracting, a renewed form
Initially, it is necessary to understand that in thinking about sustainable public contracting one speaks of a renewed Administrative Law, in which traditional institutes are henceforth intensely associated with the value of sustainability.
In the words of Freitas (2019), even strict legality, the bulwark of the most basic Administrative Law coursebook, is now associated with sustainability and reappears as a duty of critical observance of norms, setting aside the "all or nothing" discourse and bringing its interpretation closer to the values of the 1988 Political Charter. Henceforth, the public interest itself is not necessarily revealed as an objective of tending to the administrative machinery, but as serving the interests of present and future generations.
According to the aforementioned scholar, this new Administrative Law has sustainable development as a booster for state dynamics, crossed by new values, with a holistic approach and participatory stakeholding, in lieu of constricting bureaucracy, power centralization, the cult of authority and patrimonialism.
Confirming this new tendency, Moreira (2017, p. 18) explains that the conventional standards which regulate the public sector are normally elaborated and executed under static conditions, without considering the relevance of these new changes in the way of governance or the performance of public services. With that, "the new Public Law intends to harmonize state conduct with the changes and vicissitudes of current society", coming closer to "constitutional demands, fundamental principles and rights". This is justified not only by all the international outcry warning about the risks of the current societal model, founded on unsustainable patterns of consumption and production, but also by the values enshrined in the Constitution, especially by the already mentioned intertwining of human dignity and articles 3, 170 and 225.
That considered, the State may not excuse itself from fulfilling its environmental and ethical duty, it being responsible for the preservation and protection of an ecologically balanced environment, which is in turn essential for the healthy quality of life of present and future generations. This task cannot be performed without seeking an economic order which ensures a dignified existence for all, geared towards the construction of a free, just and solidary society.
Consequently, Administrative Law cannot be divorced from this constitutional essence, notably its fundamental principles. And, this being so, all its institutes demand a new perception, including Public Tenders. Freitas (2011), in this line of thought, defines the so-named Sustainable Public Tender as an administrative procedure that, with equality and the effective search for sustainable development, seeks the selection of the most advantageous proposal for the Administration, while reckoning, with utmost objectivity, the costs and the social, economic and environmental benefits.
It is in this context that the infra-constitutional lawmakers have promoted a change in article 3 of Law 8.666/93, including the promotion of national sustainable development as one of the goals of public contracting.
Art. 3 Public tenders are intended for ensuring the observance of the constitutional principle of equality, the selection of the most advantageous proposal to the Administration and the promotion of sustainable national development, and they will be processed and judged in strict accordance with the basic principles of legality, impersonality, morality, equality, publicity, administrative integrity, entailment to the calling instrument, objective judgement and those related (as written in Law 12.349/2010). Cherishing the aforementioned legal change, Furtado (2017, p. 32) explains that the reformed provision has altered the very object sought by the State, which "started to contain elements which do not strictly relate to the utility that the good or service will provide to the Administration, but to the effects by which their purchase will favor Brazilian society".
It was for no other reason that Law 12.305 of 2010, in instituting the National Policy on Solid Waste, established in article 7, clause XI, that one of its objectives is that governmental contracting prioritize the purchase of recyclable products and of goods and services considered compatible with patterns of social consumption and environmental sustainability. Art. 7 The objectives of the National Policy on Solid Waste are: XI - prioritization, in governmental purchases and contracting, of: a) recycled and recyclable products; b) goods, services and works considered to be compatible with patterns of social consumption and environmental sustainability.
Following this trend, in the federal sphere, the President of the Republic issued Decree 7.746 of 2012, to establish criteria and practices for the promotion of sustainable development in Public Administration contracting.
The theme was further elaborated in infra-statutory regulations, with the Ministry of Planning (currently the Ministry of Economy) having published Normative Instruction nº 01 of 2010 (on sustainability criteria), nº 10 of 2012 (Plan of Sustainable Logistics Management) and nº 05 of 2017; although the latter did not address the theme directly, it was extremely important in demanding sustainability criteria in service contracting under the indirect execution regime.
Hence, public purchases started to be constituted under the directive of sustainability and, thus, to seek goods and services with a lesser impact on natural resources, giving preference to locally sourced technologies and raw materials, as well as to those which show greater efficiency in the use of water and energy, those which have a greater service life and lower cost of maintenance, those which reduce pressure on resources, and so on.
Therefore, it is under this new Administrative Law that Public Tenders are no longer thought of as a means of satisfying the strict necessities of the administrative machinery; now, under the new name of Sustainable Public Contracting, it is their duty to promote sustainable development.
Bearing this in mind, the indication is already made that, with governmental contracting on one end and state planning on the other, both should follow the same path, that is, a necessary option for sustainability.
The planning of public contracting: a necessary option for sustainable development
We have seen, so far, that the Public Authorities have the duty of preserving and defending an ecologically balanced environment. Equally, the fundamental objective of the Republic, and, therefore, of the Brazilian state, is to build a free, just and solidary society, which ensures national development, eradicates poverty and reduces social inequality.
In order to attain that, the State can make use of one of the instruments prescribed in the economic and financial order, state planning, which article 174, § 1, qualifies as necessarily geared towards balanced national development, which, as we have insisted and evinced, can only be the sustainable one.
We have seen that while planning finds itself in one of the extremes of this developmental dynamic, public contracting is in the other, with the function of instrumentalizing and fulfilling state planning.
The Budget, in turn, would be the middle point of this track, demanding intensive political articulation able to correlate the use of financial resources to public purchases with a sustainable approach.
But, after all, what would be the strategy or element that can best integrate all this process and dynamics, joining them in favor of sustainable development?
The answer, for us, is the good usage of the State's purchasing capability. It is estimated that the Union alone turns over resources in the magnitude of 10% of GDP (SOUZA, 2015). In consulting the federal government purchases webpage (paineldeprecos.planejamento.gov.br), it is declared that, from January to August 2019, more than R$ 116 billion in public tenders were homologated for the acquisition of goods and service provision for the Federal Public Administration.
Given this, one may note that the State holds a great volume of financial resources, which must meet the demands of the administration and the collectivity. The question, however, is whether this power should tend exclusively to the interests of the administrative machinery, or whether the Administration has an environmental and ethical duty of preserving the planet for present and future generations.
Here, the present work strives to advocate that sustainable public contracting does not leave the public manager the option of merely meeting the desires of the administration, nor of obliging the whims of the manager. There is a necessary choice for sustainable development, lest the planetary crisis worsen.
The public authorities, as consumers of goods and services, have, with their purchasing power, the possibility of inducing the market towards more sustainable behaviors. Biderman (2008, p. 23), moreover, indicates that, in the hands of public authorities concerned with the ecosystem, public contracting represents a powerful instrument for environmental protection, as, given the high sums it turns over, its purchasing power must be used to promote the production of sustainable goods and services. One may even "expect considerable improvement and change in short- and medium-term market structures".
However, the author warns that it is no use for the State to hold great sums of money if this power is not duly and well used. She observes, concerning this point, that an authority, in general, does not generate innovation on its own, but when several public authorities combine their purchasing power in favor of sustainability criteria, it is possible to think about a change in the consumption and production model (BIDERMAN, 2008). Carvalho (2009), in this sense, observes that purchasing power makes the Public Administration a great user and consumer of natural resources, capable of making new forms of production possible, nudging practices in the consumer market, creating demands headed towards sustainability, fostering innovation, generating a multiplying effect and reducing negative socio-environmental impact.
Franco (2013), as well, states that the new article 3 of Law 8.666/93, which demands the promotion of sustainable national development, went on to establish that the State's purchasing power must be inexcusably linked to socio-environmental matters. Mendonça (2018), while advocating that this power must be geared towards the inclusion of sustainability criteria, reports, with examples, how public tenders can conciliate the dimensions of sustainable development. In this sense, the scholar proposes, concerning economic sustainability criteria, scale gains with the adoption of a shared purchase system (gathering several administrative units), process rationality, greater participation and accountability, innovation incentives, and preference for, and fomentation of, micro and small companies.
With respect to environmental sustainability criteria, a field which is currently further developed, it is recommended that the public authorities demand through their tenders products of lower environmental impact. For instance, the author mentions pencils, envelopes and other items derived from paper made of legally sourced wood, preferably recycled. He also advises the purchase of cleaning products with biodegradable tensioactive agents, electrical equipment with the best-efficiency ENCE label (National Energy Saving Label - Etiqueta Nacional de Conservação de Energia), LED lamps, "flex"-type vehicles, electronic versions of newspapers and magazines, and batteries that observe the maximum limits of lead, cadmium and mercury and whose supplier mandatorily commits to performing the reverse logistics.
Concerning social sustainability criteria, Mendonça (2018) also mentions more efficient administrative surveillance of the labor of workers younger than 14 years old and of forced or compulsory labor, both forbidden by the legislation. Regarding control, he suggests further auditing to confirm that the company observes social and labor rights, prohibiting racial, ethnic and religious discrimination practices, instructing outsourced employees on (moral and sexual) harassment practices, and demanding a safety plan with the supply of personal protective equipment (EPI), a drinking water supply, a mess hall and protective measures for workers against incidents.
In other words, should the State insert in its calls and contracts a significant share of the aforementioned criteria (with no pretension of a complete list), it is possible to say that the public authorities would have, in fact, fulfilled their role in the making of a new consumption model. From this, it is possible to understand that the sustainability-oriented state purchasing power must integrate and steer the development dynamics from planning to public contracting.
That is confirmed by Normative Instruction 05 of 2017, published by the Ministry of Planning (currently the Ministry of Economy), which is, doubtlessly, the most important federal public tender norm. It deals with the regulations concerning service provision under the indirect contracting regime, known as means-activity outsourcing, broadly used by the direct and indirect Federal Public Administration.
By the mentioned instruction, it is clearly noticeable that public contracting must be linked to state planning, according to clause III of article 1: Art. 1 Service contracting of the performance of executive tasks under the indirect execution regime, by organs or entities of the direct, foundational and autarchic Federal Public Administration will observe, when applicable: I -the phases of Contract Planning, Supplier Selection and Contract Management; II -The sustainability criteria and practices; and III -alignment with the organ or entity's Strategic Planning, when available.
In addition, the administrative unit must, when performing the initial procedures of contract planning, justify the need for the tender, relating it to the State's strategic planning.
Here is article 21, clause I, 'a': Art. 21. The initial procedures of Contract Planning consist of the following activities: I - document elaboration for formalizing the demand by the service-requesting sector, compliant with the model of Attachment II, which observes: a) the justification for the need of contracting, explaining the option for outsourcing the services and considering strategic planning, if that is the case. Lima (2017), in commenting on the provision, explains that said Instruction went on to demand a more professionalized, efficient and improved public management, especially in developing the process of planning.
The public manager, thus, is now understood as the one who observes and analyzes state planning and seeks to make it concrete through several contracts, which, intertwined, must ensure the effective provision of goods and services to society (article 165, § 10 of the Constitution). Therefore, if the purchasing power is not well utilized by the public authorities, this provision of goods and services to society will only reproduce the current consumption model, unsustainable and coherent with the idea of mere growth.
Besides, as already stated, the state's purchasing power must be the strategic element that integrates this entire dynamic, from planning to contracting. This means that if the planning is not imbued with the purpose of ensuring sustainable development, it is possible and probable that public contracting will not be either.
That is, should the purchasing power not be purposed for attaining sustainability, the managers themselves won't make efforts to fulfill said purpose. With this, sustainable public contracting will be no more than mere, faint initiatives in the field of public administration.
That is why, from planning to public contracting, there is a necessary option for sustainable development, lest we not only breach fundamental principles but also worsen the socio-environmental crisis the planet experiences.
FINAL CONSIDERATIONS
In recognizing that the planet suffers under severe social, economic and environmental problems, the international community has debated ways of facing them, especially with the effort of leaving behind the old practices of mere economic growth and moving towards a new model, based on sustainable development.
In Brazil, a good part of the doctrine has noted the need for the country to adopt a new role which can, effectively, fight the social and environmental crisis, which, one may say, are not limited to geographical markings, but permeate all nations in the world.
Given that, a new interpretative approach is cast on the 1988 Constitution, articulating a logical and systematical intertwining of articles 1, 3, 170 and 225, in a way that sustainable development is made into a fundamental principle to be attained by the Brazilian republic.
That is, the ethical duty of the state to preserve and protect an ecologically balanced environment is necessarily linked to an economic order directed towards ensuring a dignified existence, along with the fundamental objectives of our Charter: the making of a free, just and solidary society, which ensures national development, eradicates poverty and reduces social and regional inequality. All this, obviously, linked to the principle of human dignity.
From there, the present study evinced that one of the institutes of the economic and financial order, namely state planning, has an important role of reshaping state action. Therefore, efforts must be made in promoting it as a technique of rationalizing public policy, making it more efficient and effective.
Besides, by interpreting the beginning and § 1 of article 174 of the Constitution, state planning must necessarily be geared towards balanced national development, which was shown to be the sustainable one. In this way, all matters connected to it must also take this provision into consideration.
Budgetary planning, for instance, analyzed in the present study for its strong link to public contracting, also shows its link to sustainability, as the budgetary laws (PPA, LDO and LOA) are oriented towards reducing social and regional inequality.
In addition, considering that planning is the broadest and most abstract point, declaring the budget's goals and broad objectives, the public contract, on the other hand, shows itself to be the most concrete and specific act, whose objective is precisely to make the state plan concrete and effective.
With this intention, if the whole legal system is linked to sustainable development, the same should hold for Administrative Law. Newly renewed, it goes on to reform its most traditional institutes, including public tenders, re-signifying them as Sustainable Public Contracting.
Given this outlook, it was shown that the State, given the large sums of public resources it turns over, holds the so-called purchasing power, capable of influencing and nudging the market to adopt a new consumption model, one that is sustainable and compliant with the Charter's values.
In order to make it so, it was noted that the purchasing power must integrate the state development dynamic, promoting interaction from planning to public contracting.
That is, if sustainable development is not the crux of planning, possibly it will not be so for public contracting either, since the managers, those who have the role of making the state plan concrete (by means of several contracts), will not be engaged in this purpose.
Therefore, if the purchasing power, even if obviously oriented towards attaining sustainable development, does not correspond to the strategic planning of this complex and interdependent process, one may affirm that sustainable public contracting will be no more than a faint initiative of a few agents in the scope of the Public Administration.
Thus, the conclusion is drawn that, from state planning to public contracting, there is a necessary option for sustainable development.
Ontology-Based Linked Data to Support Decision-Making within Universities
In recent years, educational institutions have worked hard to automate their work using trending technologies that have proven successful in supporting decision-making processes. Most of the decisions in educational institutions rely on rating the academic research profiles of their staff. An enormous amount of scholarly data is produced continuously by online libraries that contain data about publications, citations, and research activities. This kind of data can improve the accuracy of academic decisions if linked with the local data of universities. The linked data technique in this study is applied to generate a link between university semantic data and a scientific knowledge graph, to enrich the local data and improve academic decisions. As a proof of concept, a case study was conducted to allocate the best academic staff to teach a course regarding their profile, including research records. Further, the resulting data are available to be reused in the future for different purposes in the academic domain. Finally, we compared the results of this link with previous work, as evidence of the accuracy of leveraging this technology to improve decisions within universities.
Introduction
In the last few years, higher education institutions (HEIs), such as universities, have increasingly been using more modern technologies to automate different activities and improve the quality of their data. One of these technologies is representing academic data semantically in RDF format. Research, employment, and decision-making are examples of the challenging activities that higher education (HE) entails. Due to the nature of and frequent increase in academic data, semantic representation succeeds in solving several challenges in the educational domain. Although semantics have proven effective in many aspects, some shortcomings have been identified, such as dealing with missing information and the continuous updating of data.
On the other hand, HEIs, such as universities, are increasingly using linked data (LD) to make public information (academic programs, research outputs, facilities, etc.) available as linked data on the Web. This trend opens up opportunities to use these data to automate the accomplishment of main processes within several institutions. Digital libraries are among the institutions that use LD to publish scientific data and make them freely available for reuse by others.
This research examines the outcome of a linked data creation cycle in the context of academic scientific research. It relies on Saudi university quality accreditation regulations. The study investigates the added value of leveraging the semantic technology of linked data in decision-making to produce accurate results for different tasks. The conducted scenario is applied to the local data of the Faculty of Computing and Information Technology. The main contributions of this work are to:
1. Identify a use case and reveal the main objectives of LD.
2. Present a methodology to generate the link between the university ontology and external academic staff scholarly data.
3. Conduct a survey to investigate the elements that most affect the course-teacher assignment process.
4. Demonstrate SPARQL queries for testing the resulting dataset to illustrate the success of using LD technology, by presenting SPARQL queries according to the elements that most affect the decision.
5. Compare the results with previous work that uses semantic technology only to solve the same problem.
This work is organized as follows: Section 2 identifies this study's motivation and background. Then, Section 3 discusses the related works that used the LD technique in education. After that, the applied methodology to generate LD is illustrated in Section 4, followed by the results and discussion presented in Section 5. Finally, the conclusion is presented in Section 6.
Challenges in Higher Education
Since it produces workers with a variety of specialties for all institutions, HE is considered the foundation for constructing the future globally. Therefore, supporting it with everything that ensures effective performance is essential. Research, employment, and decision-making are all significant components of HE, in addition to teaching. Most HEIs, especially universities, strive to improve their traditional processes of managing these components and to solve the challenges related to them.
The challenge of allocating the best possible academic teacher to teach a new course is addressed in our previous studies [1,2]. It is one of the most common challenges that universities face continually in their decision-making processes. It depends on matching course contents with academic resource qualifications. A study [2] proposed an educational ontology to replace the traditional processes that heads of departments follow to decide the best matching. It summarizes the long steps of reviewing the contents of the course to be taught and the profiles of faculty members. In addition, the proposed solution solved the time-consumption problem caused by manually performing this job on a huge amount of data, and produced more accurate results.
The decision-making process for course distribution is made more challenging by the rise in the number of Ph.D. holders working in HEIs and the diversity of their research interests. Therefore, more values are required to improve this process.
The author in [3] has highlighted that the proper use of the information that is accessible across institutional repositories and the definition of what information may be shared are the first steps in resolving challenges in HE.
Educational Ontology
Traditionally, data are presented in semi-structured formats such as tabular representations, spreadsheets, and Web databases. These data types, unlike relational databases, are simple structures that are not in schema form. Humans can easily understand this informal representation of data while machines cannot, since they are not framed in a specific schema [4]. Data are made available in digital form through ontologies. Thus, they are prepared to be shared and utilized to create knowledge-based systems for both humans and machines [5].
The nature of educational data might provide beneficial possibilities for the educational institutions if represented semantically to enhance their performance, making the usage of Semantic Web (SW) technologies in education crucial [6]. Due to the ability of educational ontology to solve major problems such as knowledge modeling and information overload, it could be essential to employ it to solve many challenges in the education domain. SW has been extensively engaged in many studies within the education field over the past 10 years. These studies have played a key role in resolving some of the most challenging problems in various fields, including information integration and sharing, Web service annotation and discovery, and knowledge representation and reasoning [7].
Linked Data
Several governmental organizations have produced a large amount of data in the previous decade, leading to active research in various data engineering disciplines such as data representation, storage, and access. One of these research areas is linked data (LD).
The benefits of using this technique are:
1. Uniformity: Linked data are published in the form of the Resource Description Framework (RDF). This representation is expressed as triples that consist of a subject, a predicate, and an object. All the triple components are defined as Uniform Resource Identifiers (URIs), which make each one a unique identifier (see the illustrative query after this list).
2. Dereferenceability: Each URI can be used to retrieve and locate information on the Web.
3. Coherence: URIs as triples can be used to establish a link between two datasets via the URI that represents a subject in a source dataset and the URI that represents an object in another (target) dataset.
4. Integrability: Since all linked data sources share the RDF data model, data from different sources can be integrated with relative ease.
5. Timeliness: Publishing and updating LD is straightforward, since it does not require loading and transforming.
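To make the triple structure concrete, the following is a minimal illustrative SPARQL query; the member URI and its namespace are hypothetical placeholders, not identifiers taken from any of the datasets discussed here.

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>

# Each pattern below is one RDF triple: subject, predicate, object.
# The subject URI identifies a (hypothetical) staff member; owl:sameAs
# links that local URI to the URI of the same person in another dataset,
# which is how coherence between datasets is achieved.
SELECT ?name ?externalUri
WHERE {
  <http://example.org/kau/member/123> foaf:name ?name ;
                                      owl:sameAs ?externalUri .
}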
Tim Berners-Lee proposed a five-star scheme [8] for encouraging individuals to publish in a linked open data (LOD) environment in 2010:
1. Open data that are available on the Web with an open license;
2. As above, plus as structured data that can be read by machines;
3. As above, plus in non-proprietary formats (e.g., CSV instead of Excel);
4. All of the above, plus use W3C open standards to identify things (RDF and SPARQL), so that people may point to your content;
5. All of the above, plus link your data to the data of others to add context.
With the expansion of SW technologies, many research centers, institutions, and enterprises are publishing their data on the Web as LOD. Due to the spread of this technique's usage around the world, there was a need to create a global data cloud, and this was the main idea behind inventing the LOD. The LOD cloud began with 12 datasets in 2007. As of May 2020, this network contained 16,283 links among 1301 published open datasets from different domains, such as government, companies, media, life science, publications, social media, scholarly data, etc. On the other hand, this gives third parties a chance to take advantage of these open data to expand their information. Large institutions such as universities and HEIs compete to use these innovations to improve their information systems.
Although many LD researchers face challenges in using this technique, benefits cannot be ignored, such as transparency, reusability, knowledge discovery, and interoperability [9] for different application areas.
LOD is the result of releasing LD under open licenses [10], which increases data reuse [11]. Integrated data often aids in the formation of comprehensive knowledge, which in turn supports decision-making. In addition, LD can answer complex queries that single datasets cannot answer, by using combined data from different sources.
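As a sketch of how such a cross-dataset query can look, the following SPARQL 1.1 federated query combines a local graph with a remote endpoint via the SERVICE keyword; the endpoint URL and the property choices are placeholders for illustration, not values from this study.

PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>

# Join local staff data with a remote scholarly endpoint on the author
# name, answering a question that neither dataset answers on its own.
SELECT ?member ?paper
WHERE {
  ?member foaf:name ?name .                 # pattern over the local dataset
  SERVICE <https://example.org/sparql> {    # placeholder remote endpoint
    ?paper dcterms:creator ?author .
    ?author foaf:name ?name .               # the shared variable joins the two
  }
}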
One of the particular strengths of the LD approach is that it accepts heterogeneity and provides interoperability based on links between different datasets [12].
The goal of LD is to provide machine-readable connectivity between various data sources on the Internet. As a result, LD has been regarded as one of the most successful components in resolving many issues that educational institutions confront throughout decision-making processes. Collecting content from resources, looking for missing academic information, and so on can be made easier and more precise, ensuring quality in HEIs.
Within universities and research centers, judgments related to academic teachers are frequently influenced, first and foremost, by their publication data. Evaluating the output of academic teachers governs a variety of decisions within universities. That includes the positions the academic teacher can take, the courses they can teach, the projects they can be involved in, the training courses they require, and many more. In this research, these goals are readily accomplished by finding the proper scholarly open data and linking them to the local data of universities.
This work uses the local data of the Faculty of Computing and Information Technology (FCIT) within King Abdulaziz University (KAU), which is represented semantically to propose applying the LD technique to automatically enrich the local data and support the decision-making process for assigning courses to the most proper academic references.
Related Work
LD is one of the most powerful frameworks in the data management field; hence, this active research subject has a significant presence in different domains. Many researchers have reported in their publications different approaches to automatically enrich and populate their ontology models.
Recently, LD and open data techniques seem very promising in HE and propose notable research in this area. Since 2009, LD has been established by educational domains to be used in many aspects to overcome many challenges [13].
One of the early tasks proposed in this domain that serves both students and academic teachers is leveraging LD to develop open universities. Many educational institutions are offering free open access to their educational resources to make online learning more widespread. On the other hand, they can find accurate information available as open educational data to enrich their data. Open universities in [14][15][16] are produced using source data of universities and external repositories of educational datasets. In [15], the researchers have applied some scenarios for the proposed architecture. Firstly, students need to check the related materials that support their decisions about their university and facilities' offered choices. Generating links between these choices and the opened educational materials and providing them in one dataset offers significant benefits for students. A student may become interested in certain topics or courses and will need to specialize in the supporting materials to supplement their knowledge with high-quality resources. On the other hand, the student could find some difficulties in studying some courses, which would make them change their decision. Secondly, the faculty member can have the chance to develop or renew the curriculum of the course they teach, after comparing it with the syllabus that is provided by the other linked universities.
The open university in [17] has described information about published materials, teacher research work, titles, courses, and audio-visual educational resources using semantic technology. By establishing a SPARQL endpoint, these data can be reused and made available to others. Since some universities have transitioned from traditional to digital learning by providing open educational resources (OERs), the LD vision exemplified by the software interface enables a new generation of OERs and open course ware (OCW) that can be semantically described and connected with other data and discoverable sources. These resources contain tools and materials that can be freely accessed, reused, modified, adapted, and shared in order to promote education. Linked open course ware data (LOCWD) is a vocabulary created by the researchers utilizing W3C's RDF technology. It uses the Internet to connect OERs, open licenses, OCW repositories, and other academic materials. The fundamental goal of these vocabularies is to link the stated OCW domain to LOD cloud datasets.
The study in [18] proposes a task-interaction framework for mobile learning to aid educational decision-making. The framework is built on the links between the various sorts of interactions that occur in a mobile learning activity and the pedagogically relevant tasks for the activity. A case study has been created to show how the task-interaction framework might be applied to learning scenarios using mobile devices. The researchers have used MeLOD, a mobile environment for learning with LOD, to apply the scenarios.
The researcher in [10] has examined the capability of LD and the sufficiency of the existing data sources to promote student retention, progression, and completion. The researcher in this work used LD technology to develop an academic predictive model that targets first-year students at universities. They have applied two experiments. The first one predicts the students' likelihood of being at risk. The second experiment uses easily accessible data from internal institutional data sources/repositories and external open data sources to forecast the academic performance/marks of the students. The sufficiency of LD and external open data sources has been examined using questionnaires (surveys).
Under the fast growth of scholarly data, a significant number of studies have used LD to enrich the quality of the available researchers' data. In [19], a subset of scientific publications called CONICET Digital is published as LOD. The producers of this work have used the strength of SW and LD technologies to improve the recovery and reuse of data in the domain of scientific publications. Moreover, they considered the SW standards and reference RDF schemas such as Dublin Core, FOAF, and VoID. They convert and publish their data using the same guidelines for publishing government-linked data. On the Web of data, the data is linked with the external repositories DBLP, WIKIDATA, and DBpedia. The resulting platform particularly retrieves information from the scientific domain by combining data from different sources. Moreover, it allows users to view the resulting information related to the available data and run queries using the SPARQL language.
Ontura-Net [20] is a research project that employs LD approaches to describe the scientific activity of Ecuadorian university scholars. Within the realm of university scientific research activities, this study demonstrates the outcome of the LD production cycle, taking as its legal frame of reference the Ecuadorian university quality accreditation regulations. The main objective of this project is to assist universities in improving certain aspects, such as incorporating scattered teacher-researcher production into the network, which is crucial when establishing scientific and academic research information metrics from individuals or groups at the institutional level. It also aids in the identification and formation of scientific collaboration networks as well as the detection of priority potential domains in which legislators can assist in the formulation of science and technology policies.
Another Ecuadorian study [21] generated links between multiple bibliographic sources to find similar research areas and prospective collaboration networks through a combination of ontologies, vocabularies, and LD that enrich a base data model. The researchers linked diverse Ecuadorian HEIs with external scholarly data from bibliographic sources, such as Microsoft Academics, Google Scholar, DBLP, and Scopus, which make available their data via APIs. The resulting links are utilized to create a prototype that provides a centralized repository with bibliographic sources and allows academics throughout Ecuador to locate similar knowledge areas using data mining techniques.
The proposed work in [22] has solved the most common problems related to publications such as incomplete information, lack of semantic information, and author ambiguity, when two or more authors could share the same name or two or more names belong to one author. The external sources I-Scover and DBPedia datasets are utilized, considering the names in English and Japanese to deduplicate records and reduce data redundancy in publication data, extract more information about authors of articles, and tackle the problem of author ambiguity. The authors first normalize entity names before searching DBPedia for all available candidates. Then, they use semantic data from I-Scover and DBPedia to create semantic profiles for both entities and applicants. Finally, they use a combination of lexical and semantic profile similarities to find the equivalent DBPedia entity.
The researchers in [23] have developed a search engine called WISER. This system uses the benefits of the semantic approach and LD to find academic experts in the academic domain. It retrieves academic authors whose expertise, described through the publications they have produced, is relevant to a user query.
ScholarLens is an approach that is described in [24]. It aims to extract competencies from research publications using SW and LD techniques to generate user profiles automatically.
In [25], the study investigates the use of ontologies and LD to support the representation of researcher profiles in the academic environment. It describes an ontology model that is automatically populated. Bibliographic records are extracted from the DBLP repository to enrich the proposed ontology.
Based on the review of the related works, we can establish that the use of ontology and the LD technique has proven itself in the academic domain for different tasks. In addition, open-linked scholarly data were a solution for many problems related to publications, such as detecting similarities between authors' publications for scientific collaborations. On the other hand, we can state that no research from the related works finds the similarity between the academic staff publications and the topics of the taught courses and employs it to support the academic decision process, especially not to improve the decisions of course-teacher assignment.
Methodology
A massive amount of educational data is produced by different educational institutions every year [10]. These materials would be hard to discover or integrate into traditional information systems. That means everything we need is available, but it is hard to find. As a result, venturing into applying semantic and LD technologies in education would be crucial, since the nature of such educational data can generate opportunities for educational institutions to improve their performance and support the decision-making process.
Course-teacher assignment is one of the most common considerations that universities face regularly. It incorporates evaluating the academic teacher and determining their capacity to perform an assigned task, which traditionally passes through complicated processes, similar to many other educational decisions. Performing this task manually on this amount of data is inefficient, ambiguous, and time-consuming. Furthermore, some academic profiles have missing data or materials that are overburdened.
There is also a necessity to match the academic teachers' various qualifications to the course specifications. This step requires collecting more information from external sources.
As a proof of concept, this research uses King Abdulaziz University (KAU) data, with the Faculty of Computing and Information Technology (FCIT) serving as the case study. KAU's staff committees and course descriptions are presented semantically in our previous work in [1]. This will be updated regarding the elements that most affect the course-teacher assignment decision. After that, external repositories will be searched to select the most appropriate dataset that enriches the needed information by generating linked data.
Choosing the proper methodology for generating LD relies on different factors such as the case study or the scenario of the problem to be solved by this technique, the nature of the data, and the characteristics of the domain.
In the literature, few researchers have briefly described the methods and tools they use to operate in generating, linking, publishing, and using LD. One of the first studies [26], titled "A Cookbook for Publishing Linked Government Data on the Web", was published in 2011 and discussed the applied methodology. Most of the studies have followed the main steps mentioned in this book and can be summarized in the three following steps:
1. Initialization: This step includes specifying requirements and business objectives and then analyzing the datasets used in LD generation. Moreover, it involves selecting vocabularies and developing other specifications for metadata description.
2. Innovation: The process of combining datasets into a knowledge graph style. This includes data access, transformation, and enrichment.
   a. For pilot applications, the developer needs to select the generic component and customize the needed tools, i.e., specifying the LD components required in the domain of interest.
   b. Development of specific tools: implementing security measures to deal with the risk of communication.
3. Validation: The last phase, which is a continuous process. It comprises the reuse of open-source tools, improving components based on feedback, and testing data.
The LD in this study is created by employing the selected method and expanding it (as shown in Figure 1), covering the data that describe the scholarly activities of the selected academic teachers in the case study.
As the first step, a survey is conducted to find the elements that most affect the course-teacher assignment decision. The results in Section 5.1 are employed to improve the research on three axes:
1. Select the local dataset and present it semantically;
2. Choose the most proper external dataset (scholarly data);
3. Query the resulting dataset to find the best course-teacher assignment.
The survey consists of four sections. The first section collects general information and measures the experience of each department head. It is followed by two sections that examine how elements of both the courses and the academic reference influence the course-teacher assignment decision. The last section measures how students' feedback affects the decision. The main elements tested in the survey are described in Table 1. As mentioned previously, this research uses the dataset of KAU, particularly the courses' details and members' profiles of the three departments of the FCIT. Since this targeted academic data was not available in RDF format, it was presented semantically in our previous work [1], based on the accreditation categorization of HE in Saudi Arabia.
The use of SW technology to support the decision-making process within universities was proposed in our previous research [2]. The ontology is called KAUONT, and it is created using Protégé. We have queried the data using SPARQL queries under the rule of not publicly disclosing information about members and academic data within educational institutions. Therefore, the results were published as quantitative data instead of qualitative. Using LD to improve the results and automatically enrich the local data was one of the future works that have been mentioned in our previous research, and it is the main task of this paper.
Therefore, KAUONT is used to characterize the local data. In addition, it is improved, in light of the survey results, by adding some classes and properties. Results from both studies will be compared in Section 6 to prove the success of using the LD technique.
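As a rough illustration of the kind of query this enables, the sketch below ranks staff members by the overlap between a course's topics and their research interests. The kau namespace, the class and property names (kau:Member, kau:coversTopic, kau:hasResearchInterest) and the course individual are all hypothetical stand-ins, since KAUONT's actual vocabulary is not reproduced here.

PREFIX kau: <http://example.org/kauont#>

# Rank members by how many topics of a given course overlap
# with their recorded research interests.
SELECT ?member (COUNT(DISTINCT ?topic) AS ?matchingTopics)
WHERE {
  kau:CPCS361 kau:coversTopic ?topic .     # hypothetical course individual
  ?member a kau:Member ;
          kau:hasResearchInterest ?topic .
}
GROUP BY ?member
ORDER BY DESC(?matchingTopics)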
Select the External Data Source: Scholarly Data
Under the fast and continuous growth of the scientific literature, which brings difficulties with the high volume of published papers that need annotation and management, a number of novel technological infrastructures have been developed to help researchers and research institutions easily browse, analyze, and forecast scientific research. Therefore, well-known bibliographic repositories are available online to extract scientific publication data from, such as DBLP and DBpedia. Using semantic repositories that use ontologies as semantic schemata increases the possibility of automated reasoning about the data and facilitates implementation, since the most essential relationships between concepts are incorporated into the ontology.
On the other hand, another innovation is found, known as scientific knowledge graphs, which concentrates on the bibliographic domain and consists of metadata that describe research publications such as authors, venues, affiliations, research areas, and citations. This type of data representation contains a large number of entities and relations that are usually structured as RDF triples. These structured representations can support different tasks such as question answering, summarization, and decision systems. Some examples of scientific knowledge graphs are Open Academic Graph, Scholarlydata.org, Microsoft Academic Graph (MAG), Scopus, Semantic Scholar, Aminer, Core, OpenCitations, and Dimensions.
To choose the most proper external dataset, several scholarly repositories and scientific knowledge graphs are reviewed:
1. Databases and logic programming (DBLP): DBLP is a bibliography that specializes in the computer science area. It contains the metadata of publications, authors, journals, and conference proceedings series.

After a thorough examination of the selected scholarly data sources, MAKG was chosen as the source for the data extraction, due to the huge size of the researchers' data and the detailed structure of the dataset available on the MAKG website. In particular, the dataset offers the needed information about authors, publications, and citations, and it is easy to query using the available SPARQL endpoint to select and count the authors from KAU. To test MAKG, two queries were run, as follows:
1. Count the authors from KAU by finding the number of the authors in each affiliation, as shown in Figure 2.
2. Check the availability of all the needed information in the MAKG endpoint. To select all the data of the authors from KAU, a SPARQL query is run on the MAKG endpoint, as described in Figure 3.
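The exact query of Figure 2 is not reproduced above. A minimal sketch of such an affiliation count against the MAKG endpoint could look as follows; the magc: and org: terms are assumptions about the MAKG vocabulary, and the affiliation label filter is illustrative:

# Hypothetical sketch of the Figure 2 query (author count per affiliation).
# The magc:/org: terms and the label filter are assumptions, not the paper's actual query.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX org:  <http://www.w3.org/ns/org#>
PREFIX magc: <http://ma-graph.org/class/>

SELECT ?affiliation (COUNT(DISTINCT ?author) AS ?authorCount)
WHERE {
  ?author a magc:Author ;
          org:memberOf ?affiliation .
  ?affiliation foaf:name ?affName .
  FILTER (CONTAINS(LCASE(STR(?affName)), "king abdulaziz university"))
}
GROUP BY ?affiliation

Grouping by the affiliation resource rather than its label avoids double counting when an affiliation carries several name variants.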
Specify LD Dataset
This is the most sensitive step for link generation. It includes the following:
• Specify the access method to the datasets. Since the link is generated between an RDF file and an online knowledge graph, the KAU RDF local dataset (KAUONT) is loaded, and the SPARQL endpoint of MAKG is pointed to using the Silk Workbench editor.
• Identify classes with instances that can be the subject of linking. The link is performed by connecting the two datasets by the academic staff name and the affiliation name.
Select the Linking Tool
The link can be generated manually in the case of small datasets but, because this study is applied to a larger dataset, performing the manual link is not feasible. Silk [27] is the chosen tool in this research. It is an open-source tool whose discovery engine offers very significant features, as follows:
• High performance and scalable data management;
• Network load reduction by caching and reusing SPARQL result sets.

Identify Restrictions
This step limits the link to the target set (MAKG) of the external data and reduces the linkage time in Silk. Since the case study is applied to KAU academic staff, the restriction aimed to limit the link to the members of KAU only, as shown in Figure 4.
Write Linkage Rules
To generate the link, the following rules were applied to the datasets:
• Specify how resources would be compared. The two main entities to be compared are kau:memberName from KAUONT and foaf:name from the MAKG dataset. Both will be transformed into lowercase to prevent any mismatch caused by the use of lower and upper cases.
• The output will be compared using the "Levenshtein distance" metric in the Silk Workbench to guarantee the exact match of similarity (as shown in Figure 5).
The results, as shown in Figures 6 and 7, include 150 links.
Validation
Validation is the process that follows link generation and guarantees the effective use of the resulting linked data. It consists of the following:
1. Publication: The resulting linked dataset (KAULD) is published to provide machine access to it, using tools such as GraphDB.
2. Evaluation: KAULD is evaluated to retrieve the new data. This task is performed using a federated SPARQL query that includes the 'SERVICE' statement, because of the need to send a query to a remote site (the MAKG endpoint). The prefixes in Figure 8 are used every time KAULD is queried.

Figure 8. Prefixes used when querying the resulting data.
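A sketch of the federated pattern that the Figure 8 prefixes support is shown below; the kau: namespace IRI, the endpoint URL, and the dcterms:creator direction are assumptions, while kau:memberName is the property named earlier in this paper:

# Hypothetical federated-query skeleton: local KAULD triples joined with
# remote MAKG data via SERVICE. The kau: IRI, the endpoint URL, and the
# dcterms:creator direction are assumptions.
PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>
PREFIX kau:     <http://example.org/kauont#>

SELECT ?member ?paper
WHERE {
  ?member kau:memberName ?name .            # local KAULD data
  SERVICE <https://makg.org/sparql> {       # remote MAKG endpoint
    ?author foaf:name ?name .
    ?paper  dcterms:creator ?author .       # publications of the matched author
  }
}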
Survey
The survey collected the responses of 41 heads of departments at KAU. Most of the participants have been in their current position for at least 2 to 4 years, which indicates that most of them have a reasonable understanding of the system. When asked if they have faced problems with the current course-distribution method, 73.2% find that the process is complicated, since it is processed manually, while only 10% feel that the course assignment procedure is smooth. This response supports the main motivations of this research.

Regarding the course elements shown in Figure 9, the survey proved that course topics strongly affect the decision of assigning a teacher to teach a specific course, since more than 45% of the participants support it. Course type is also indicated as another important element, as about 30 participants strongly support it. On the other hand, the majority of the participants find that the other course elements are not significant, and they do not rely on them when producing their decisions.

According to the testing of the academic profile elements (summarized in Figure 10), the survey depicts the impact of a teacher having taught the course before: the majority of participants believe it is a significant factor in deciding the course-teacher distribution, and no participant voted against it. In addition, it proves that the research area of the academic teacher is a seriously considerable element that controls the decision, as more than 60% of the votes strongly support it. The academic rank was considered by more than half of the participants; this element can be used to set the teachers' priority when more than one teacher is allocated to teach a specific course. Furthermore, the survey indicated the importance of the certificates the academic teachers hold in the course-teacher assignment. As for a teacher being the course coordinator, the result shows how essential it is: 20% strongly believe and 56% support that course coordination plays a significant role in course distribution, while only 12% disagree. Regarding these results, the effective teacher elements were teaching the course before, the research area of the academic staff, and coordinating the course.

On the other hand, Figure 11 proves that students' achievement and feedback are usually not considered when assigning teachers to courses.

Figure 11. Comparing students' feedback elements.
Resulting Linked Data
To judge the success of using the linked data technique in improving educational decisions, KAULD is tested using federated queries to select all the academic teachers who can teach courses from the same department. The selection relies on the factors from the survey results mentioned in Section 5.1 and the elements extracted from MAKG, as shown in Figure 12.
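Figure 12's query itself is not shown here; a hedged sketch of the kind of federated matching it describes, combining the effective survey factors with MAKG-derived data, might read as follows (all kau: properties except kau:memberName are illustrative placeholders):

# Hypothetical sketch of the Figure 12 matching query. kau:taughtBefore,
# kau:researchArea, kau:coordinatorOf and kau:courseTopic are placeholders;
# only kau:memberName appears in the paper.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX kau:  <http://example.org/kauont#>

SELECT DISTINCT ?course ?teacher
WHERE {
  ?course  kau:courseTopic ?topic .
  ?teacher kau:memberName  ?name .
  { ?teacher kau:taughtBefore ?course . }    # survey factor: taught the course before
  UNION
  { ?teacher kau:researchArea ?topic . }     # survey factor: research area matches topic
  UNION
  { ?teacher kau:coordinatorOf ?course . }   # survey factor: course coordination
  SERVICE <https://makg.org/sparql> {        # enrichment from MAKG (assumed endpoint)
    ?author foaf:name ?name .                # confirm a scholarly record exists
  }
}

Counting DISTINCT ?course (or ?teacher) per department over this pattern would yield the totals reported later for Figures 15 and 17.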
Table 2 summarizes the quantitative data from each department's courses and academic staff. The evaluation was limited to Ph.D. holders, because they are typically involved in research and publish in journals and conferences. Furthermore, they teach the courses, while the majority of non-Ph.D. holders are deemed teaching assistants. As a result, it is rare to find scholarly records of non-Ph.D. holders. Simultaneously, all academic instructor profiles in the faculty, from all degrees, were translated into RDF format for future processing.

Our previous work [2] solved the problem of the course-teacher assignment by developing an educational ontology that models the semantics of the courses and academic profiles in universities. The results depend on two factors only: who taught the course before and the match between the course topic and the research interest of the academic teacher. Figures 13 and 14 summarize the results of course-teacher assignment using SW techniques only from the previous study [2]. It is shown in Figure 13 that a significant number of courses are not included: only around 7% of courses in the Computer Science department are assigned to qualified teachers, while 29% of courses in the Information System department and 62% of courses in the Information Technology department are included in the course-teacher assignment results. Figure 14 shows that 40% of Ph.D. holders are assigned to teach courses in the Computer Science department, while 43% of Ph.D. holders in the Information System department and 58% in the Information Technology department are assigned to teach courses from the same department.

Figure 13. Assigned courses from the previous study [2].
Figure 14. Allocated teachers from the previous study [2].
The query in Figure 15 counts the number of courses assigned to the most proper teachers in the possible matching between the courses and academic teachers in this study. The query retrieves the courses from the KAULD dataset regarding the most effective elements mentioned in Section 5.1. As illustrated in Figure 16, more than 81% of Computer Science courses are assigned to teachers, with less than 20% remaining unassigned. Teachers are also assigned to around 83% of the courses in the Information System department. On the other hand, teachers in the Information Technology department are assigned to 68% of the courses. This result demonstrates that most of the courses matched the best academic references regarding their elements.

Compared to the previous analysis in Figure 13, a larger number of courses are assigned to more qualified teachers after enhancing the academic teachers' profiles using the LD technique, as more factors that affect the decision are considered in the query.
Figure 16. Comparing the number of courses assigned to teachers using LD to the total number of courses.

The query in Figure 17 counts the number of academic teachers from each department who are qualified to teach courses related to their qualifications in this study. The query matches the teachers with the related courses from the KAULD dataset, regarding the most effective elements mentioned in Section 5.1. Figure 18 shows that approximately 88% of the academic teachers in the Computer Science department were assigned to teach courses, compared to roughly 70% in the Information Systems department. On the other hand, 68% of the academic staff in the Information Technology department are assigned to teach courses from the same department. Compared to the results of the previous study in Figure 14, most of the academic teachers in each department are assigned to teach courses that match their qualifications.

Figure 18. Comparing the number of allocated teachers using LD to the total number of teachers.
To summarize the results, leveraging LD with SW techniques has succeeded in giving sufficiently accurate decisions. This proves that LD adds value to SW when employed to solve decision-making challenges within HE. Although the percentages shown in the evaluation process cover most of the teachers and courses, there is still a need to address some shortages, such as allocating the rest of the teachers, setting priorities for choosing the most proper reference to teach a course when more than one teacher is located, and assigning teachers to new courses. In the future, techniques such as machine learning and data mining can be applied to the resulting dataset to solve these issues.
Conclusions
Currently, many HEIs are modernizing their decision support processes, and this step has led to trending research subjects. Therefore, several researchers have examined different techniques to solve the challenges caused by this modernization. LD is one of the most successful technologies proposed by a significant amount of the related literature for solving many challenges in HE. The academic teacher is the crux of most decisions in HEIs. Since ranking academic teachers relies on their academic and research experience, there is a need to find a solution to enrich the academic teachers' profiles, especially their research records. This work enhances the decision-making process within universities by generating a link between a university ontology, which represents courses and academic profiles semantically, and an open scholarly dataset. Engaging LD technology enriches university data with needed or missing researchers' data related to their research activities, such as publications and citations.
The study aims to improve the previous results of mapping the most qualified academic teacher to a new course to teach. A survey was conducted to find the most effective elements that control this process. The experiment is applied using the Silk tool to generate the link between the semantic data of the Faculty of Computing and Information Technology, with its three departments at KAU, and the scientific knowledge graph MAKG. KAULD is the resulting dataset, and it was published using GraphDB.
A statistical analysis of the results was performed and compared to the results from the previous work. The comparison showed that LD succeeded in improving the decision-making process and, unlike using SW alone, the results of leveraging LD with SW included the majority of the courses and teachers. Most of the courses in each department are assigned to more qualified teachers from the same department, while teachers are allocated to teach the courses most related to their qualifications.
Although most teachers have been matched with most courses, several shortages have appeared, especially when providing new courses in a department or when more than one teacher is assigned to teach one course simultaneously. As a suggestion to solve these shortages, more artificial intelligence technologies, such as machine learning and data mining, can be applied in our future work on the resulting dataset to predict more course-teacher assignments and set teachers' priorities. In addition, the system can be extended to support more decisions within universities or to solve more educational challenges. Other universities can reuse it, especially those that apply the same rules as KAU.
|
2022-09-09T15:23:24.867Z
|
2022-09-02T00:00:00.000
|
{
"year": 2022,
"sha1": "8c973ac9cb3e84fc04a6934f5ff072815b5e5e5f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/10/17/3148/pdf?version=1662098582",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f838407f28d21bbdd575150434e7a606ccda0a7e",
"s2fieldsofstudy": [
"Computer Science",
"Education"
],
"extfieldsofstudy": []
}
|
26502289
|
pes2o/s2orc
|
v3-fos-license
|
Localization and Perturbations of Roots to Systems of Polynomial Equations
Michael Gil'
Department of Mathematics, Ben Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel
Correspondence should be addressed to Michael Gil', gilmi@bezeqint.net
Received 20 March 2012; Accepted 28 May 2012
Academic Editor: Andrei Volodin
Copyright © 2012 Michael Gil'. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We establish estimates for the sums of absolute values of solutions of a zero-dimensional polynomial system. By these estimates, inequalities for the counting function of the roots are derived. In addition, bounds for the roots of perturbed systems are suggested.
Introduction and Statements of the Main Results
Let us consider the system

f(x, y) = g(x, y) = 0, (1.1)

where f and g are polynomials in x and y; the coefficients a_{jk}, b_{jk} are complex numbers. The classical Bézout and Bernstein theorems give us bounds for the total number of solutions of a polynomial system (cf. [1, 2]). But for many applications, it is very important to know the number of solutions in a given domain. In the present paper we establish estimates for sums of absolute values of the roots of (1.1). By these estimates, bounds for the number of solutions in a given disk are suggested. In addition, we discuss perturbations of system (1.1); besides, bounds for the roots of a perturbed system are suggested.
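The displayed definitions of f and g are not reproduced above; a plausible reconstruction, inferred from the monomial form that survives for the perturbed system in Section 3, is the following (the arrangement of the exponents is an assumption):

% Assumed form of the two polynomials in system (1.1); only the
% b_{jk} x^{m_2-j} y^{n_2-k} monomial shape is attested in Section 3.
f(x, y) = \sum_{j=0}^{m_1} \sum_{k=0}^{n_1} a_{jk}\, x^{m_1 - j} y^{n_1 - k},
\qquad
g(x, y) = \sum_{j=0}^{m_2} \sum_{k=0}^{n_2} b_{jk}\, x^{m_2 - j} y^{n_2 - k}.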
We use the approach based on the resultant formulations, which has a long history; the literature devoted to this approach is very rich (cf. [1, 3, 4]). We combine it with recent estimates for the eigenvalues of matrices and the zeros of polynomials. The problem of solving polynomial systems and systems of transcendental equations continues to attract the attention of many specialists despite its long history. It is still one of the burning problems of algebra, because of the absence of its complete solution (cf. the very interesting recent investigations [2, 5-8] and references therein). Of course, we could not survey the whole subject here.
A pair of complex numbers (x, y) is a solution of (1.1) if f(x, y) = g(x, y) = 0. Besides, x will be called an X-root coordinate (corresponding to y) and y a Y-root coordinate (corresponding to x). All the considered roots are counted with their multiplicities. Put
(1.4)
With m = m_1 + m_2, introduce the m × m Sylvester matrix S(y) with a_k = a_k(y) and b_k = b_k(y). Put R(y) = det S(y) and consider the expansion: ... Thanks to the Hadamard inequality, we have: ... If, in addition, condition (1.10) holds, then (1.12) holds, where y_k are the Y-root coordinates of (1.1), taken with the multiplicities and ordered in the increasing way. This theorem and the next one are proved in the next section. Note that another bound for Σ_{k=1}^{j} |y_k| is derived in [9, Theorem 11.9.1]; besides, in the mentioned theorem a_j(·) and b_j(·) have a sense different from the one accepted in this paper.
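For reference, the standard Sylvester construction consistent with this notation treats f and g as polynomials in x of degrees m_1 and m_2 with coefficients a_k(y) and b_k(y); the following block is offered as an assumed reconstruction, not the paper's own display:

% Standard m x m Sylvester matrix (m = m_1 + m_2): m_2 shifted rows of
% a-coefficients followed by m_1 shifted rows of b-coefficients.
S(y) =
\begin{pmatrix}
a_0(y) & a_1(y) & \cdots & a_{m_1}(y) &        &            \\
       & \ddots &        &            & \ddots &            \\
       &        & a_0(y) & a_1(y)     & \cdots & a_{m_1}(y) \\
b_0(y) & b_1(y) & \cdots & b_{m_2}(y) &        &            \\
       & \ddots &        &            & \ddots &            \\
       &        & b_0(y) & b_1(y)     & \cdots & b_{m_2}(y)
\end{pmatrix}

With this construction, R(y) = det S(y) is the resultant of f and g with respect to x, and R(y_0) = 0 exactly when f(·, y_0) and g(·, y_0) have a common root or their leading coefficients both vanish.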
From (1.12) it follows that
To estimate the X-root coordinates, assume that

inf_{|y| ≤ θ_R + 1} |a_0(y)| > 0 (1.14)

and put (1.15).

Theorem 1.2. Let condition (1.14) hold. Then the X-root coordinates x_k(y_0) of (1.1) corresponding to a Y-root coordinate y_0 (if they exist), taken with the multiplicities and ordered in the decreasing way, satisfy the estimates: ...

In Theorem 1.2 one can replace f by g. Furthermore, since the y_k are ordered in the decreasing way, by Theorem 1.1 we get j |y_j| < j + θ_R and |y_j| < r_j := 1 + θ_R / j (j = 1, ..., n).
(1.17) Similarly, by Theorem 1.2, we get the inequality: ... Denote by ν_X(y_0, r) the number of X-root coordinates of (1.1) in Ω_r corresponding to a Y-root coordinate y_0.
Corollary 1.4. Under condition (1.14), for any Y-root coordinate y_0, the inequality: ... holds. In this corollary also one can replace f by g.
Proofs of Theorems 1.1 and 1.2
First, we need the following result.
Lemma 2.1. Let P(z) := z^n + c_1 z^{n-1} + ··· + c_n be a polynomial with complex coefficients. Then its roots z_k(P), ordered in the decreasing way, satisfy the inequalities: ...

Proof. As is proved in [9, Theorem 4.3.1] (see also [10]), ... But thanks to the Parseval equality, we have ... Hence the required result follows.
Proof of Theorem 1.1. The bound (1.11) follows from the previous lemma with P(y) = R(y)/R_0. To derive bound (1.12), note that ... where y is a zero of R(y). By the previous lemma, ... Thus ... This proves the required result.
Proof of Theorem 1.2. Due to Theorem 1.1, for any fixed Y-root coordinate y_0 we have the inequality: ... We seek the zeros of the polynomial: ... Besides, due to (1.14) and (2.7), a_0(y_0) ≠ 0. Put (2.9). Clearly, |x_j(y_0)| < θ_Q / j (j = 1, 2, ..., n).
Perturbations of Roots
Together with (1.1), let us consider the coupled system

f̃(x, y) = g̃(x, y) = 0, (3.1)

where f̃ and g̃ are defined as f and g but with coefficients ã_{jk} and b̃_{jk}, so that g̃(x, y) = Σ_{j,k} b̃_{jk} x^{m_2 − j} y^{n_2 − k}. (3.2)

Here ã_{jk}, b̃_{jk} are complex coefficients. Put (3.3)
Let S̃(y) be the Sylvester matrix defined as above with ã_j(y) instead of a_j(y) and b̃_j(y) instead of b_j(y), and put R̃(y) = det S̃(y). It is assumed that

deg R̃(y) = deg R(y) = n. (3.4)
Consider the expansion: ... where μ_R is the unique positive root of the equation: ... To prove Theorem 3.1, for a finite integer n, consider the polynomials: ... with complex coefficients. Put (3.11)
Lemma 3.2. For any root z(P̃) of P̃(y), there is a root z(P) of P(y) such that |z(P̃) − z(P)| ≤ r(q_0), where r(q_0) is the unique positive root of the equation: ... This result is due to Theorem 4.9.1 from the book [9] and inequality (9.2) on page 103 of that book.
By the Parseval equality we have (3.14). The assertion of Theorem 3.1 now follows from the previous lemma with P(y) = R(y)/R_0 and P̃(y) = R̃(y)/R̃_0.
... taken with the multiplicities and ordered in the decreasing way, |y_k| ≥ |y_{k+1}|, satisfy the estimates: Σ_{k=1}^{j} ...

Thus (1.1) has in the disc {z ∈ C : |z| ≤ r_j} no more than n − j Y-root coordinates.

Corollary 1.3. If we denote by ν_Y(r) the number of Y-root coordinates of (1.1) in Ω_r := {z ∈ C : |z| ≤ r} for a positive number r, then we get: under condition (1.10), the inequality ν_Y(r) ≤ n − j + 1 is valid for any ...

(2.10) Due to the above-mentioned [9, Theorem 4.3.1] we have Σ_{k=1}^{j} ...

(1/2π) ∫_0^{2π} |P̃(e^{it}) − P(e^{it})|² dt ≤ max_{|z|=1} |P̃(z) − P(z)|².

Theorem 3.1. Under condition (3.4), for any Y-root coordinate ỹ_0 of (3.1), there is a Y-root coordinate y_0 of (1.1) such that ...
|
2017-07-29T06:11:32.710Z
|
2012-07-03T00:00:00.000
|
{
"year": 2012,
"sha1": "f2e57e1a2c72befbd02192db85e0e55facff0c39",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/ijmms/2012/653914.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f2e57e1a2c72befbd02192db85e0e55facff0c39",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
55328054
|
pes2o/s2orc
|
v3-fos-license
|
The role of chemical admixtures in the formation of the structure of cement stone
The influence of sulfates and carbonates of potassium and sodium on the character of the formation of the microstructure of cement stone was studied. The role of cations in the structure formation of cement stone is shown. The efficiency of chemical additives (hardening accelerators) was estimated from the ratio of the volumes of gel and capillary micropores. The ratio of gel and capillary pores allows one to determine the efficiency coefficient of the action of chemical additives. It is shown that, in terms of microstructure modification, potassium carbonate is the most effective hardening additive for Portland cement, and potassium sulfate for slag Portland cement.
Introduction
In predicting the durability of concrete, the pore structure takes the lead. Its characteristics determine the process of destruction of concrete in a structure. In the theory of calculating the durability of concrete structures, along with other factors, the leading role of the differential pore structure has been established. The determination of these characteristics is carried out experimentally or on the basis of semi-empirical relationships. Estimating only the total porosity, these dependences do not provide answers to questions related to the shape of the pores or the quantitative and probabilistic distribution of pores by size.
It is known that the most numerous pores, and those most responsible for the properties of concrete, are the capillaries. They are permeable to water and are the reason for its penetration inside the concrete of a structure; they contribute to the accumulation and development of cracks. The main properties of concrete depend on the characteristics of the capillary pore structure, the formation of which begins at the early stage of cement hardening.
The process of pore formation in cement stone is much more complicated than in monomineral binders, in connection with the presence of two developing and interacting structures: hydrosulfoaluminate and hydrosilicate. The main structure-forming role is played by hydrosilicates and calcium hydroxide, which constitute the bulk of the new formations of cement stone. Despite the huge morphological diversity of the other hydrate neoplasms in the hardened cement stone, their effect on particular physical and mechanical properties is deterministic and can be predicted in advance. An ensemble of micropores with a corresponding size distribution determines the main properties of cement stone: strength, permeability, frost resistance, etc. Changes in the pore size in the range from 2 to 100 nm can dramatically change the properties of the cement stone and thus allow the character of the structure formation to be controlled at the hardening stage. One of the levers controlling the structure of cement stone is the use of chemical and mineral admixtures.
Materials and methods of research
The effect of chemical admixtures on the character of pore formation was studied for two types of cement: Portland cement PC CEM I 42.5R and slag Portland cement CEM II B-S 32.5R, differing in mineral composition, slag content and, as a result, the rate of hydration and the character of structure formation (Table 1). To estimate the parameters of the microstructure of the cement stone (dimensions, volume and pore size distribution), the thermoporometry method was used [1]. The experimental basis of the method is differential scanning microcalorimetry. To determine the amount of free and adsorbed water in the cement stone, it is frozen in a calorimeter and the phase transitions of water are recorded; their presence and intensity are determined by the pore size and the free water content in the pores [2]. The relationship between the pore radius r and the crystallization temperature of the pore water T is expressed by an equation of the type (1), where A and B are constants depending on the experimental procedure and T₀ = 273 K. In this case, the thickness of the layer of adsorbed water is taken into account, which is 2.5 monolayers (8 Å) according to [3]. The pore volume is determined from the equation (2) of the pore size distribution curve, where a, b are the coordinates of the maximum of the distribution curve and m is the width of the distribution curve at 10% of its maximum amplitude. The volume of pore space calculated according to formula (2) corresponds to the volume of the pore liquid, including free and adsorbed water.
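The displayed relations (1) and (2) are not reproduced above. A common thermoporometry form for equation (1), consistent with the text and offered as an assumed reconstruction, is:

% Assumed Gibbs-Thomson-type relation between pore radius r and the
% freezing temperature T of the pore water; A and B are the apparatus-
% dependent constants named in the text, T_0 = 273 K, and T < T_0 on
% freezing, so the second term is positive for B < 0.
r = A + \frac{B}{T - T_0}

Equation (2), which fits the distribution curve through its maximum (a, b) and its width m at 10% of the maximum amplitude, is not reconstructed here.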
The results of research
The hardening of all types of cement for 1 day is accompanied by the formation of a pore structure with two areas of micropore size distribution, characteristic of gel and capillary micropores (Fig. 1). For Portland cement, they are located in the range 2.2...22 nm with pore distribution maxima at 2.36, 2.45 and 14.6 nm. For slag Portland cement, the range of micropore distribution narrows to 3.1...6.6 nm with maxima at 3.4, 3.7 and 6.3 nm (Fig. 1a). The volume of micropores increases by 1.3 times in comparison with Portland cement (Fig. 1b) and is 0.46 cm³/g. This is due to the slower binding of mixing water during hydration of the SHPC and the formation of hydrogehlenite and hydrogarnets, as well as calcium hydrosilicates of lower basicity than with hardening Portland cement. The consequence of this is a reduction in the size of capillary micropores and the formation of low-density LD C-S-H with a porosity in the range 3.1...4.2 nm (Fig. 1a). The gel microporosity for the SHPC is only 7.6% of the total porosity; the analogous index for the PC is 47%. Consequently, the amount of calcium hydrosilicates formed in the cement stone composition of the PC is 40% greater than that of the SHPC for the same hardening time.
The effect of hardening accelerators is based on a change in the solubility of the initial binder and of the final products of its hydration due to a change in the ionic strength of the solution [4].
Admixtures that contain ions in common with the binder contribute to the formation of nuclei of crystalline hydrate neoplasms, and at low concentrations they reduce the solubility of the binder and the hydrate neoplasms. As the concentration of the additives increases to a certain limit, their effect increases somewhat. At higher concentrations, these additives can react with calcium hydroxide to form double salts. As a result, the solubility of Ca(OH)₂ and the silicate minerals of cement clinker is increased. By varying the solubility of the binder and the hydrate neoplasms, it is thus possible to regulate the kinetics of cement hydration.
Admixtures that do not contain ions in common with the binder accelerate hardening at low concentrations [5], while at high concentrations the opposite effect is possible.
The most effective hardening accelerators are alkali metal salts based on Na⁺ and K⁺ cations with various anions: CO₃²⁻, Cl⁻, SO₄²⁻, S₂O₃²⁻, CNS⁻, etc. Effect of sulphates. Sodium and potassium sulphates are classified as admixtures of the first class [4]. The mechanism of their action is due to the reaction with calcium hydroxide, released during the hydration of cement, with the formation of gypsum dihydrate, as sketched below. The resulting fine-grained gypsum reacts with the cement and promotes the nucleation and growth of neoplasms. Sulfates show themselves most effectively as accelerators on Portland cement and pozzolanic Portland cement. However, in the opinion of Yu.M. Bazhenov [6], sodium sulfate negatively affects the long-term strength of the cement stone.
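The displayed reaction is presumably the classical exchange reaction between the alkali sulfate and portlandite; as a sketch:

% Assumed form of the omitted reaction, shown for the sodium salt
% (the potassium analog is identical with K in place of Na):
Na_2SO_4 + Ca(OH)_2 + 2H_2O \rightarrow CaSO_4 \cdot 2H_2O + 2NaOH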
In addition, Na₂SO₄ hydrolysis retains sodium Na⁺ cations in the pore fluid, while the anionic part SO₄²⁻ is bound by alumina-containing phases. The release of Na⁺ cations leads to a change in the ionic strength of the solution, i.e., an increase in the alkalinity of the medium, causing an increase in the solubility of the silicate constituents and accelerating the hardening [7]. The SO₄²⁻ anions contribute to the formation of sodium-containing hydrosulfoaluminates and calcium hydrosulfoaluminoferrites [8].
The effects of sodium and potassium sulfate on the formation of microporosity during Portland cement hydration are shown in Fig. 2.
When Portland cement is solidified with 2% Na₂SO₄, two pore distribution regions are formed: 2.5...3.3 nm and 5...16.5 nm, with maxima at 2.7 nm and 10.5 nm, respectively. The total pore volume increases by 1.2 times as compared with the sample without the admixture and is 0.41 cm³/g. This character of pore formation of the cement stone is associated with a change in the balance between the pores already on the first day of hardening. Sodium sulfate, accelerating the hydration of cement, promotes the rapid formation of capillary micropores in the range 5.5...14.6 nm with a total volume of 0.38 cm³/g. When Portland cement hydrates in the presence of 2% K₂SO₄, one region of the pore distribution is formed in the interval 6...14.2 nm with a maximum at 10.5 nm, which corresponds to the region of the capillary pores. The total volume of micropores also increases by 1.2 times and corresponds to 0.41 cm³/g. Potassium sulphate activates the formation of monosulfoaluminate and ettringite, which leads to stresses and deformations that decrease the strength of the cement stone, and to an increase in the volume of the solid phase and the size of micropores.
In the process of hardening slag Portland cement without admixtures, two regions of pore size distribution are formed in the range of 2.9...6.8 nm (Fig. 3). The admixture of Na₂SO₄ contributes to the formation of three regions of pore distribution in the range 1.79...17.8 nm, with maxima at 1.9, 2.4 and 8.5 nm. The first two areas refer to the porosity of C-S-H, and the third to the capillary pores. This pattern of pore size distribution confirms the formation of high-density calcium hydrosilicates HD C-S-H (1.75...2.8 nm) in the presence of sodium sulfate. Our data coincide with the results obtained by Ali Noaman Khalid Hussein [9]: two types of calcium hydrosilicates of the composition C-S-H (I) and C-S-H (II) are formed in the hardening slag Portland cement in 1 day. At 7 days, the hydrosilicates have the composition C-S-H (I) and C₂SH (B); the hydrosilicate composition on the 14th day is mainly C-S-H (II), which also indicates the presence of two types of gel microporosity related to different types of calcium hydrosilicates. When hydrating the slag Portland cement with K₂SO₄, two microporous regions are formed in the ranges of 2...2.5 nm and 3.6...6.6 nm, with distribution maxima at 2.3 nm and 5.2 nm. The microstructure of the slag Portland cement stone is represented mainly by the C-S-H phase with the ratio C/S = 1/1.5. The cations Na⁺ and K⁺ can be embedded in such a structure, which increases the strength and resistance to chemical corrosion.
In the opinion of V.B. Ratinov, in the presence of sodium and potassium sulfate additives, new hydrate compounds are formed: sodium-containing hydrosulfoaluminates and calcium hydrosulfoaluminoferrites [4]. The reaction products are formed directly in the porous space of the cement stone, filling the pores, which leads to a decrease in their size and volume. O.M. Rosenthal showed that in the presence of Na⁺ and K⁺ ions alone, the solubility of aluminates increases and the process of their interaction with sulfates develops in the bulk of the liquid phase of concrete, with pore filling by crystalline products [10].
Differences in the effect of the sulfates can be explained by the fact that, for equal valence of the compounds, the intensity of their effect on the ionic strength of the solution is higher for Na₂SO₄. This leads to an increase in the solubility of the silicate phases of cement, acceleration of its hydration, and an earlier formation of gel microporosity.
Effect of carbonates. In the opinion of V.B. Ratinov and F.M. Ivanov, carbonates of potassium and sodium react with calcium hydroxide to form KOH, which accumulates in the aqueous phase of the solution and contributes to a change in the composition of the pore fluid [11]. This leads to a change in the pH of the medium and accelerates the hydration of the cement.
Hydration of Portland cement with an admixture of Na₂CO₃ generates two regions of pore distribution in the intervals 2.8...3.22 nm and 4.3...14.5 nm, with distribution maxima at 3.02 and 7.5 nm, respectively (Fig. 4). When solidified with K₂CO₃, one region of pore distribution is formed in the range 4...12.4 nm with a maximum at 7.3 nm. The total pore volume increases 1.5 times with hydration with Na₂CO₃ and 1.3 times with K₂CO₃. The accelerating effect of carbonates on the hydration of Portland cement is due to a decrease in the size of capillary pores with a simultaneous growth in their quantity. The introduction of Na₂CO₃ contributes to the formation of only a small amount of gel micropores, the total volume of which is 55 times less than that of a sample without an admixture.
Solidification of slag Portland cement in the presence of K₂CO₃ and Na₂CO₃ is accompanied by the formation of a discrete distribution of micropores in the intervals 3.4...14.5 nm and 2.24...12.4 nm, respectively (Fig. 5). In slag Portland cement stone with the addition of potassium carbonate, there are practically no gel micropores. The introduction of sodium carbonate leads to the growth of gel micropores, whose volume is 10% greater than that of the control sample. At the same time, high-density calcium hydrosilicates HD C-S-H (2.2...3 nm) are formed, which has a positive effect on improving the properties of cement stone and concrete. From these data it follows that the addition of Na₂CO₃ contributes to the early formation of gel porosity. Its formation, in the opinion of V.B. Ratinov and T.I. Rosenberg [4], can be associated with the formation of gel-like substances such as sodium hydrosilicates, which provide a denser structure, which positively affects the physical properties of concrete.
Discussion of results
Based on the obtained data on the effect of hardening accelerators in the early stages of hydration of cements and the formation of the microstructure of the cement stone, the following lyotropic series were obtained:
- for Portland cement: K₂CO₃ > K₂SO₄ > Na₂SO₄ > Na₂CO₃;
- for slag Portland cement: K₂SO₄ > Na₂SO₄ > Na₂CO₃ > K₂CO₃.
The ratio of gel and capillary pores allowed the efficiency coefficient K_ef to be determined as a ratio of dynamic microstructure indices [12], where V_gel is the volume of gel pores, V_cap is the volume of capillary pores and V_tot is the total volume of pores; index 1 corresponds to cement with an admixture, index 0 to cement without an admixture.
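The displayed formula is omitted above. One plausible normalization, consistent with the described behavior (K_ef = 0 when the admixture leaves the gel/capillary balance unchanged, K_ef < 0 when it worsens it), is sketched below; since the text also names V_tot, this is an assumption rather than the authors' exact definition:

% Assumed form of the efficiency coefficient: the relative change in the
% gel-to-capillary pore ratio produced by the admixture (index 1)
% relative to the plain cement (index 0).
K_{ef} = \frac{(V_{gel}/V_{cap})_1 - (V_{gel}/V_{cap})_0}{(V_{gel}/V_{cap})_0}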
The results of the efficiency evaluation are shown in Fig. 6. As can be seen from Fig. 6, for Portland cement, potassium carbonate is the most effective accelerator, and potassium sulfate for slag Portland cement. It should be borne in mind that coefficient values less than zero indicate a negative effect of the additive, while at K_ef = 0 the additive has practically no effect on the structure formation of the cement stone.
Conclusions
Studies of the influence of sulfates and carbonates of potassium and sodium showed that the character of the formation of the microstructure of the cement stone depends to a large extent on the type of cation. Evaluation of the relationship between capillary and gel pores showed that the potassium cation promotes an increase in gel porosity with a decrease in capillary porosity. It has been shown that, from the point of view of microstructure modification, potassium carbonate is the most effective hardening additive for Portland cement, and potassium sulfate for slag Portland cement.
Fig. 1. Influence of cement type on microporosity of cement stone: (a) pore size distribution; (b) the total pore volume.
|
2018-12-11T18:44:16.260Z
|
2017-01-01T00:00:00.000
|
{
"year": 2017,
"sha1": "9f87e7d20db7c522b72133e213a8bfd2456928d5",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2017/30/matecconf_trs2017_01018.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9f87e7d20db7c522b72133e213a8bfd2456928d5",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
244590978
|
pes2o/s2orc
|
v3-fos-license
|
Vision-based kinematic analysis of the Delta robot for object catching
SUMMARY This paper proposes vision-based kinematic analysis and kinematic parameter identification of the proposed architecture, designed to perform object catching in a real-time scenario. To perform the inverse kinematics, a precise estimate of the link lengths and other parameters must be available. Kinematic identification of the Delta robot, based upon the ten-parameter implicit model Model10, is implemented using the iterative least-squares method, and the loop-closure implicit equations have been modelled. In this paper, a vision-based kinematic analysis of Delta robots for catching is discussed. A predefined library of ArUco markers is used to obtain a unique solution of the kinematics of the moving platform with respect to the fixed base. The re-projection error obtained during calibration of the vision sensor module is 0.10 pixels. The proposed architecture is interfaced with the hardware using a PID controller. Quadrature encoders with a resolution of 0.15 degrees are embedded in the experimental setup to make the system closed-loop (acting as the feedback unit).
Introduction
Vision-based kinematic parameter estimation is generally regarded as very accurate due to the non-cumulative nature of joint errors. Programming very high precision using a traditional method like "teach-in" is very expensive, so offline programming is in demand and proves to be very accurate, as it entails only minor pose errors. The role of this paper is to increase the accuracy of the parallel robot, an architecture consisting of three Delta robots in a symmetric arrangement, using vision-based calibration. The central part of this paper describes the calibration of parallel robots based on the Delta architecture. The Delta robots have three degrees of freedom (DOF): translation only, along the x, y and z axes.
Sheng-Weng [1] proposes kinematic parameter identification for an active binocular head. The configuration of the binocular head comprises four revolute joints and two prismatic joints. The kinematic parameters of the binocular head are unknown due to the presence of off-the-shelf components, and the method estimates them without any initial estimates. Existing closed-form solutions based on pose measurements do not provide the required accuracy; as a result, a new technique was designed that does not need orientation measurements. Only position measurements are necessary to obtain highly accurate estimates in closed-form systems. This method applies to kinematic parameter identification problems in which the links are rigid and the joints are either prismatic or revolute.
Zubair et al. [2] have explained computer vision technology to analyse the Stewart platform forward kinematics using a vision sensor. For a unique solution of the kinematics of the platform, a predefined library of ArUco markers has been used for pose estimation. The analytical solution of the forward kinematics problem of a Stewart platform is nonlinear and has multiple solutions; by using computer vision, complexity decreases and speed increases. The advantage of using ArUco markers is that a single marker carries enough information for pose estimation [3]. Multiple such ArUco markers are used to increase pose accuracy further, and the pose of the entire board of multiple markers provides a reliable pose estimate. The camera used is the Logitech C270 with a sensor resolution of 1280 × 960.
Garrido et al. [3] present a fiducial marker for camera pose estimation, with applications like tracking, robot localization, augmented reality, etc. The derivation of the maximum inter-marker distance attainable by the binary markers was also performed, together with automated detection of the markers and a solution to the occlusion problem in augmented reality. The propagation of noise into the estimation of the camera extrinsic parameters is analysed; the jitter level observed for the black and white markers is less than that of the green and blue ones. An analysis of occlusion is also presented.
Garrido-Jurado [4] presented square-based fiducial markers that are very efficient for camera pose estimation. To maximize the error-correction capabilities, an inner binary codification with a significant inter-marker distance is implemented. A Mixed Integer Linear Programming approach generates the square fiducial marker dictionaries so as to maximize their inter-marker distance. The primary method finds the optimal solution only for small dictionaries and few bits, as the computing time is otherwise too long; the secondary method is a formulation that extracts sub-optimal dictionaries under time constraints.
Yuzhe [5] presented a kinematic parameter identification and calibration algorithm for parallel manipulators, together with a comparison against two other conventional techniques. The mathematical properties of the identification technique are investigated by analysing the identification matrix using singular value decomposition. The identification of the simulation parameters is implemented based on six-degree-of-freedom (DOF) and five-DOF measurements, respectively.
In Reymong [6], kinematic calibration of the Delta robot is discussed, and the kinematic calibration of two models is introduced. The first model took care of all mechanical parts except the spherical joints and was termed "model 54." The second model considers only the deviations that affect the end-effector position, not its orientation, and is termed "model 24." An experimental setup is presented to estimate the end-effector pose with respect to the fixed base frame; the kinematic parameter estimation is performed after the estimation of the pose of the end effector.
Aamir [7] discussed serial robot kinematic identification, an active domain of research for improving robot accuracy. The Denavit-Hartenberg (D-H) parameters represent the architecture of the robot and are provided by its manufacturer; over time these parameters drift from the values given by the manufacturer, so they need to be identified. An analytical technique is discussed for the identification of a serial industrial robot, achieved by moving one joint while keeping all the other joints fixed. From this, the point values on the robot's end effector were estimated using the singular value decomposition method.
Hoai [8] presented robot kinematic identification errors during the calibration process, which requires accurate pose (orientation and position) measurements of the end effector in Cartesian space. A method is proposed for the pose measurement needed for end-effector calibration; it works by feature extraction on a set of target points placed on the robot end effector. The measurement validation is done by simulation on a serial robot (PUMA) using the proposed method, and the experimental and calibration results are validated using the Hyundai HA-06 robot, proving the correctness and reliability of the proposed technique. The technique can also be deployed on robots with only revolute joints, or with the last joint revolute.
Shi Baek [9] proposes the dynamics modelling, implementation and hardware interfacing of a Delta parallel manipulator. The architecture is complex, and the inverse dynamics is derived using the Lagrangian equations of the first type. Commercially available Delta robots can attain speeds of 10 m/s, and fast, accurate dynamics computation is essential for high-speed applications like intelligent conveyor systems and the manufacturing industry, where the torque for controlling the Delta manipulator must be computed in real time. The inverse dynamics is validated against ADAMS, and less than 0.04 ms is needed to compute the dynamics and inverse kinematics modules.
A generic error model based on the product of exponentials (POE) formula, used for serial robot calibration, is presented in Ruibo [10], and the identifiable parameters of the model are analysed. The analysis shows that errors in the joint twists are identifiable, whereas within the same error model the joint zero-position errors and the initial transformation error cannot be identified separately; the joint zero-position errors become identifiable when the joint twist coordinates are linearly independent. For an n-DOF robot, the maximum number of identifiable parameters is (6n + 6); if n_r denotes the number of revolute joints and n_t the number of prismatic joints, the maximum is (6n_r + 3n_t + 6). The POE expression of the error model can serve as a minimal, complete and continuous kinematic model for serial-robot calibration.
Dynamic parameter identification of an industrial robot, the KUKA KR5, is discussed in Vishaal [11]. The KUKA KR5 has six revolute joints in a serial architecture; a simplified planar model is considered, comprising the joints whose axes are orthogonal to the gravity vector. The Euler-Lagrange technique is used to formulate the dynamic model, the equations of motion are linearized and expressed in terms of base parameters, and the base parameters are estimated by linear regression applied to points of a given planar trajectory. The KUKA KRC2 controller's robot sensor interface is used to acquire the torque at each joint and the end-effector pose of the serial robot. The identified dynamic parameters are validated against numerical values of the mass moments computed from a curve-fitting approach.
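The base-parameter estimation step described above reduces to ordinary linear least squares once the regressor matrix is available. A minimal sketch, assuming the per-sample regressor matrices Y(q, dq, ddq) have already been computed (the regressor construction itself is robot-specific and not shown):

```python
import numpy as np

def identify_base_parameters(regressor_blocks, torque_blocks):
    """Stack per-sample regressors Y(q, dq, ddq) and measured joint torques,
    then solve tau = Y @ theta for the base parameters theta in a
    least-squares sense (the linear-regression step described above)."""
    Y = np.vstack(regressor_blocks)        # (n_samples * n_joints, n_params)
    tau = np.concatenate(torque_blocks)    # (n_samples * n_joints,)
    theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    return theta
```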
Aamir [12] presented the estimation of the dynamic parameters of a serial robot and a comparison with its CAD model. The identification equations are derived from the Newton-Euler technique, with geometric parameters and joint values as input and joint torque data as output. The dynamic parameters are identified in simulation for the CAD model provided by the robot manufacturer; experimentally, the seven-DOF KUKA iiwa R800 is used. The variation between the joint torques predicted from the base parameters estimated with the CAD model and with the actual robot is presented, and the factors responsible for the variation are highlighted. Building on these works, a detailed study of the Delta robot structure, kinematics, the Delta catching system, selection and design of the catching-system components, dynamics analysis and control analysis has been carried out.
Bonev [13] has recounted the success story of the Delta parallel robot: Delta robots achieve accelerations of up to 50 g in experiments and 12 g in industrial use, making them ideal for pick-and-place applications with light objects (10 g-1 kg). Murray [14] has discussed the kinematic analysis of Clavel's Delta robot by the geometric method, explaining the geometric configuration of the manipulator in detail. The initial conceptual design in the present work has been taken from Clavel's Delta configuration.
Tsai [15] has discussed the position analysis of the University of Maryland manipulator, explaining graphical and algebraic solutions for the direct and inverse kinematics. Codourey [16] has presented dynamic modelling of the parallel robot for computed-torque control implementation; in micromanipulation, the forces and motions at the end effector are very small. Laribi [17] has explained the dimensional synthesis of the Delta robot for a specified workspace, that is, the design of the dimensional configuration of the manipulator for a given workspace. Kosinska [18] has explained the optimization of the design variables of the Delta parallel manipulator for a prescribed workspace, giving a methodology for deriving design constraints from the closed-loop configuration of the Delta-4 parallel manipulator. Tsai [19] has explained a static analysis methodology for parallel manipulators based on the principle of virtual work, relating the torque transmitted at the actuated joints to the grasping force at the end effector through the Jacobian formulation.
Robots for catching
Through the years, researchers and engineers have developed various catching-based robots; the major catching-based robotic systems were developed in the mid-1990s and 2000s. Philip W. Smith et al. [20] proposed vision-based robust robotic manipulation algorithms for non-rigid objects, based on an image-based representation of the non-rigid structure and its relative elasticity, with no a priori physical model. The relative elasticity offers advantages such as simplicity, comprehensiveness and generality, and the method overcomes many limitations of existing non-rigid object manipulation.
Satoshi Yonemoto et al. [21] proposed mapping real-time human action to a corresponding virtual environment; in this approach, scene constraints mediate between the user motion and the virtual object, which makes representing the body posture from the action information non-trivial. Yejun Wei et al. [22] described non-smooth domains and demonstrated and verified a motion-planning algorithm in which catching is performed on a non-smooth object using four fingers. Park et al. [23] described a method that converts visual data into force-guidance data for an applicable tele-manipulative system. Mkhitaryan et al. [24] presented a vision-based haptic multisensory system that interacts with fragile objects. Chen et al. [25] revealed the significance of active lighting control in robotic manipulation, discussing various strategies of intelligent lighting control for industrial automation and robot vision tasks.
However, manipulation of an object by a direct-drive parallel Delta robotic system (each Delta end effector acting as one finger, i.e. a three-fingered parallel robot) has not been applied before. In Section A, the proposed methodology is discussed; Section B covers some geometry background; Section C describes the initial conceptual design; and Section D discusses the mechanical structure design. In Section 2, the kinematic analysis and the experimental setup are discussed. Section 3 covers controller design, including the PID, genetic algorithm and current limiting. Section 4 examines catching the object, and results and validation are discussed in Section 5.
Experimental setup
As shown in Fig. 1, three parallel robots are arranged in symmetric order in the kinematic identification experimental setup. This paper is limited to the kinematics and parameter estimation of the system required to catch an object (a cube) moving dynamically in 3D, and the proposed approach is validated against standard estimation techniques such as the iterative least-squares method. The experimental setup comprises three Delta robots, each end effector behaving like one finger, so that three fingers perform the catching and manipulation in real time. The pose of the moving platform is estimated using a pre-calibrated vision sensor (Basler scout). A Delta end effector is used as each finger in this architecture for high payload and for dedicated high-speed applications like catching (shown in Algorithm 1). In this architecture, the moving frame has no rotation with respect to the fixed frame, that is, 3 DOF of pure translation along the X, Y and Z axes, as shown in Fig. 2.
The use of parallelograms is the basic design of the Delta robot; this design enforces an identity rotation matrix (R = I_{3×3}) between the static and moving platforms. The three parallelograms constrain the orientation of the moving platform, which retains 3 DOF of translation, as shown in Fig. 2. The input links of the three parallelograms are mounted on rotating levers via revolute joints, and the link dimensions have been chosen with reference to Clavel's configuration. The three base arms (upper arms) are connected to three parallelograms (lower arms), whose ends are connected to a small triangular moving platform. Actuation of the input links moves the triangular platform in three dimensions, that is, in the X_p, Y_p and Z_p directions.
The Delta manipulator employs revolute and spherical joints to give the platform a translational output motion; the conceptual design and configuration follow [26]. First, the pose of the object, that is, three translations and three rotations (6 DOF), is estimated with respect to the fixed base coordinate system using the camera as a vision sensor. After that, the inverse kinematics of all three Delta robots is performed, estimating the angles [θ_1 to θ_9], which are fed to the controller of the manipulating system. Feedback is obtained from the encoders and a PID control scheme is implemented, with the PID values tuned at every cycle using an optimization technique. Validation of the kinematic analysis and identification results against the experimental setup is also performed.
The schematic of the Delta manipulator used in this paper is shown in Fig. 4. The angles are defined as θ_1i, θ_2i, θ_3i: θ_1i is the angle of the first link with respect to the horizontal, θ_2i is the angle between l_2i and the extension of l_1i, and θ_3i is the angle between the second link and its projection on the x-z plane.
In the Delta robot architecture, each leg comprises two spherical joints s_i1, s_i2 and a revolute joint b_i. The angle between the axes X, u_1 and v_1 is 120°, and the directions of the axes Z, w_1 and w_2 coincide.
Kinematic analysis
The three-DOF Delta robot is capable of 3D translational motion of its moving platform. In the proposed Delta architecture there are three identical Revolute-Universal-Universal (R-U-U) chains as legs. In the figure shown below, points B_i, i = 1, 2, 3 represent the hips, points P_i, i = 1, 2, 3 the ankles, and points A_i, i = 1, 2, 3 the knees.
Forward position kinematics [simulation]
The forward position kinematics problem is: given the three actuated joint angles θ = {θ_1, θ_2, θ_3}^T, compute the resulting Cartesian position of the moving-platform control point P. The forward position kinematics solution for parallel robots is generally non-trivial; however, since the 3-DOF Delta robot has translation-only motion, a straightforward analytical solution exists once the correct solution set is selected, as shown in Fig. 6. Given θ = {θ_1, θ_2, θ_3}^T, the forward position kinematics of the Delta robot is determined by the point of intersection of three spheres, each represented by a centre point c and a scalar radius r.
In this section, the analytical solution is computed as the intersection point of the three spheres; otherwise one would need to solve a coupled transcendental system. For this, a three-sphere-intersection algorithm is used. If all three sphere centres have the same height (Z), the solution lies in a zone of algorithmic singularity; to avoid this, the coordinates are rotated so that the centre heights are not all equal. In general, two solutions are encountered when the three spheres intersect; if the spheres meet tangentially, there is one solution; and if the centre distances are too great for the given sphere radii, there is no real solution, the imaginary result indicating inconsistent data. The sphere-intersection algorithm computes the solution set, and the solution below the base triangle is chosen automatically. This approach to the forward position kinematics of the Delta robot yields the same results as solving the kinematic equations directly.
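The three-sphere intersection can be implemented compactly. A minimal sketch of the trilateration step, assuming the centres have already been rotated so they are not collinear (the degenerate case noted above); the caller selects the solution below the base triangle:

```python
import numpy as np

def three_sphere_intersection(c1, c2, c3, r1, r2, r3):
    """Trilateration: return the two candidate intersection points of three
    spheres (centres c1..c3, radii r1..r3).  Assumes non-collinear centres;
    the caller picks the solution below the base triangle."""
    ex = (c2 - c1) / np.linalg.norm(c2 - c1)   # x-axis of a local frame
    i = np.dot(ex, c3 - c1)
    ey = c3 - c1 - i * ex
    ey /= np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(c2 - c1)
    j = np.dot(ey, c3 - c1)
    x = (r1**2 - r2**2 + d**2) / (2.0 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2.0 * j) - (i / j) * x
    z2 = r1**2 - x**2 - y**2
    if z2 < 0.0:
        raise ValueError("no real intersection: inconsistent data")
    z = np.sqrt(z2)                            # z == 0 is the tangent case
    p = c1 + x * ex + y * ey
    return p + z * ez, p - z * ez
```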
Forward position kinematics using vision sensor [experimental]
In computer vision terminology, an object's pose is the relative orientation and position of the object coordinate system with respect to the camera coordinate system. To estimate the pose, at least three points on the 3D object in the world frame (U, V, W) are required, as the task is to infer the coordinates of the 3D points in the camera frame from the 2D image coordinates (x, y).
The ArUco library, which provides boards consisting of multiple markers, is used. The points of interest on the board are the four corners of each marker, which are known to lie in a plane with known separation; these are compared with the image points back-projected from the camera. A Direct Linear Transformation provides the initial estimate of [R, t], which is then refined using the Levenberg-Marquardt (LMA) method. The re-projection error, the sum of squared distances between the observed projections and the projected object points, is minimized by the LMA method; for this reason the method is also called a least-squares method. Using the pose of the moving platform with respect to the camera and of the stationary base with respect to the camera, the orientation and position of the moving platform can be computed with respect to the base using

H^B_P = (H^C_B)^{-1} H^C_P,

where H^B_P is the homogeneous transformation matrix of the platform with respect to the base frame, H^C_B is the homogeneous transformation matrix of the base platform with respect to the camera, and H^C_P is the homogeneous transformation matrix of the moving platform with respect to the camera.
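A minimal sketch of this transform composition, assuming rvec/tvec pairs for the base and platform boards have already been obtained from ArUco board pose estimation (the argument names are placeholders):

```python
import cv2
import numpy as np

def homogeneous(rvec, tvec):
    """Build a 4x4 homogeneous transform from an ArUco rvec/tvec pair."""
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float))
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(tvec, dtype=float).ravel()
    return H

def platform_wrt_base(rvec_base, tvec_base, rvec_plat, tvec_plat):
    """Compose the two camera-frame board poses into the pose of the
    moving platform with respect to the base: H_P^B = (H_B^C)^-1 H_P^C."""
    H_base_cam = homogeneous(rvec_base, tvec_base)   # base w.r.t. camera
    H_plat_cam = homogeneous(rvec_plat, tvec_plat)   # platform w.r.t. camera
    return np.linalg.inv(H_base_cam) @ H_plat_cam
```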
These matrices are estimated using boards of 16 markers each, evenly spaced in a 4 × 4 grid, placed on the base and on the moving platform. The two boards are separate and their markers are of different sizes: the markers on the platform have a side length of 26 mm and those on the base a side length of 40 mm. Different sizes are chosen so that the marker edges can be detected accurately on both the base and the platform without moving the camera. The centres of the boards are placed to coincide with the origins of the base and platform coordinate systems.
T^B_P and R^B_P are extracted from the homogeneous transformation matrix. ArUco markers are preferred over similar marker dictionaries such as ARTag or AR Toolkit because of the way markers are added to the dictionary: after dictionary creation, the minimum Hamming distance between a marker in the dictionary and itself (under rotation) or any other marker is τ, and the error-correction capability of an ArUco dictionary is related to τ. Compared to AR Toolkit (no bit correction) and ARTag (up to 2 bits only), ArUco can correct up to ⌊(τ − 1)/2⌋ bits: if the distance between an erroneous marker and a dictionary marker is at most ⌊(τ − 1)/2⌋, the nearest marker is considered the correct marker.
The output of the ArUco marker detection is the position vector of the centre of the marker and the Rodrigues vector of the marker frame; the Rodrigues vector is converted into a 3 × 3 rotation matrix. The following experimental iterations were performed. When the input is a single image instead of a video stream, the rotation matrix is unreliable, so a single ArUco marker is insufficient to estimate the pose of the Delta platform accurately. Next, three ArUco markers arranged at the corners of an equilateral triangle of side 100 mm were used to extract the plane information solely from the position vectors of the three markers; the results are more reliable than single-marker detection, but on repositioning the markers in the same plane, the plane equation derived from the 3-marker pose estimation showed variation reaching 5°. The solution adopted for improving pose estimation is to increase the number of markers in the form of a board and to extract the board's pose as a whole instead of that of individual markers; on repositioning this arrangement in the same plane, the orientation vector of the board showed only a slight deviation, limited to a maximum error of around 1°, as shown in Fig. 7.
Kinematic analysis of the Delta platform is then attempted using marker-board detection for pose estimation, with the two 16-marker boards described above mounted on the base and the moving platform. Compensations for the depth of the ArUco marker board and frame are applied while calculating the position vector of the platform's origin with respect to the base frame, as shown in Fig. 8.
Keeping the camera fixed, data are collected with the help of the designed controller, and the resultant position and orientation of the platform with respect to the base are estimated. Using the output rotation matrices and translation vectors, the homogeneous transformation matrix of the platform with respect to the base is computed. This was done for 32 different configurations of the Delta platform's end effector by varying the reachable position with the designed controller.
Kinematic parameter identification
In the robotics domain, positioning accuracy is needed for a wide range of applications [27-30]. Accuracy is affected by geometric factors, such as geometric parameter errors, and non-geometric factors, such as link flexibility, encoder resolution, thermal effects and gear backlash. Calibration improves the positioning error by estimating the mechanical characteristics and geometric dimensions. According to Knasinski [31], 95% of the total error is due to geometric factors, so calibration targets the geometric parameters and treats the non-geometric contributions as randomly distributed error.
The kinematic calibration comprises four distinct steps. In the first step, a mathematical formulation yields a model as a function of the joint variables (q), the geometric parameters (η) and the external measurements (x). In the second step, experimental data covering all configuration combinations of the end effector are collected. In the third step, the geometric parameters are identified and the results validated. In the last step, the geometric parameter errors are compensated.
In this paper, kinematic parameter identification of the Delta robot based on an implicit model is discussed. Ten geometric parameters are estimated: the lower legs of the Delta robot (l1, l2, l3), the upper legs (Lx1, Ly1, Lx2, Ly2, Lx3, Ly3) and (R − r), the difference between the radii of the upper and lower triangular platforms, since only this difference changes while catching the object. The loop-closure equations are modelled and the Jacobian is computed, and an iterative least-squares method is applied with the following stopping criterion: if the rank of the Jacobian matrix is not 10, the system is rank-deficient and the iterations are stopped; otherwise the estimated parameter vector is updated in every iteration. The calibration model can thus be represented by the nonlinear equations shown below.
Here x represents the externally measured variables, such as the end-effector pose; q is the joint variable vector of order (n × 1); η is the geometric parameter vector of order (Npar × 1); φ is the calibration Jacobian matrix of order (p × Npar), whose elements are calculated as functions of the generalized Jacobian matrix; and y is the prediction error vector of order (p × 1). To estimate η, Eq. (2) is used provided sufficient configurations are present; combining the equations then yields linear and nonlinear systems of order (p × n), where n indexes the configurations.
where q_t = [q_1^T … q_e^T]^T and x_t = [x_1^T … x_e^T]^T, W represents the observation matrix of order (r × Npar), and ε represents the modelling error vector, which also includes the un-modelled non-geometric parameters.
The number of configurations e is chosen such that the number of equations, r = p × e, is greater than Npar; in general, efficient results are attained by taking r ≥ 5Npar. Initially, a fixed camera is calibrated; its re-projection error is 0.10 pixels, and uniform ambient lighting is provided to the camera. Careful calibration is required for precise estimation of the kinematic parameters, because small errors here compound into large errors in the final stage of the algorithm.
Once the intrinsic matrix is known (a 3 × 3 matrix comprising the focal lengths and the principal-point coordinates in the x and y directions), the pose of the fixed platform with respect to the camera is estimated offline. A data set of 90 samples is recorded, each containing the pose of the moving platform with respect to the fixed platform and the corresponding joint angles. The objective function is defined as the difference between the measured and calculated end-effector locations, to be minimized.
The data recorded from the real-time scenario are fed into the simulation module to estimate the kinematic parameters of the parallel-architecture robot. A matrix of order (np × 6) is maintained, where np is the number of data points and each row comprises the position (x, y, z) in mm and the corresponding angles (θ_1, θ_2, θ_3) in degrees.
The pose of the moving platform with respect to the base is obtained as

^M T_B = (^B T_C)^{-1} ^M T_C,

where ^M T_B is the transformation of the moving platform with respect to the fixed base platform, ^B T_C is the transformation of the fixed base frame with respect to the fixed camera frame, and ^M T_C is the transformation of the moving platform with respect to the fixed camera frame.
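The iterative least-squares identification described above can be sketched as a Gauss-Newton loop with the rank check as a stopping criterion. Here residual_fn and jacobian_fn are stand-ins for the loop-closure prediction error and the calibration Jacobian, which depend on the specific Delta model:

```python
import numpy as np

def identify_parameters(eta0, residual_fn, jacobian_fn, max_iter=50, tol=1e-8):
    """Gauss-Newton identification of the 10 geometric parameters.
    residual_fn(eta): stacked prediction errors y = measured - model(eta).
    jacobian_fn(eta): calibration Jacobian Phi = d(model)/d(eta)."""
    eta = np.array(eta0, dtype=float)
    for _ in range(max_iter):
        y = residual_fn(eta)
        Phi = jacobian_fn(eta)
        if np.linalg.matrix_rank(Phi) < eta.size:
            break                          # rank-deficient: stop iterating
        delta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        eta += delta                       # update the estimated vector
        if np.linalg.norm(delta) < tol:
            break
    return eta
```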
Controller design
A PID controller consists of proportional, integral and derivative modules. The objective is to tune the PID gains so as to minimize the error between the set value and the actual value, where the error is the difference between the desired output (r) and the actual output (y); u is the PID control law, and K_c, T_I and T_d are the proportional gain, integral time and derivative time, respectively. Commonly employed error criteria for optimized PID tuning are the integral square error (ISE), integral absolute error (IAE) and integrated time absolute error (ITAE). PID-based controllers are widely used across industry, since no advanced control approach such as internal model control, model predictive control or sliding mode control matches the straightforward functionality, simplicity and ease of user interface of a PID-based controller [32]. Tuning performed at a single operating point, however, does not give an appropriate response over the whole range [33]; soft-computing-based tuning of PID controllers has therefore been widely used in industrial practice during the last few decades [34-38].
This paper implements a real-time PID-based controller to control the parallel-architecture robot. The encoders have a resolution of 0.15° and provide quadrature outputs for sensing the positive and negative directions of rotation. A Hall-effect current sensor is implemented for current limiting.
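A minimal discrete-time sketch of the PID law in the K_c, T_I, T_d form used above; encoder decoding, per-cycle gain optimization and current limiting are omitted:

```python
class PID:
    """Discrete PID law u = Kc * (e + (1/Ti) * integral(e) + Td * de/dt),
    evaluated at a fixed sample time dt."""
    def __init__(self, kc, ti, td, dt):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kc * (error + self.integral / self.ti
                          + self.td * derivative)
```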
Kinematic analysis results and their validation with the experimental setup
The inverse kinematics of the input trajectory (a set of 3D points) determines the actuator angles. The encoder readings, taken from the actuator shafts, are termed the measured angles. The validation of the measured against desired angles for the Delta robot Leg 1 is shown in Fig. 9, and for Legs 2 and 3 in Figs. 10 and 11, respectively. Fig. 12 depicts the desired and measured end-effector (platform) positions while executing a motion consisting of a set of 3D points: the 3D points are given to the inverse kinematics function to calculate the joint angles, and based on these the actuators generate the required PWM to move the Delta robot's end effector.
The forward kinematics of the input theta values yields the position termed the desired position, while the result of vision-based pose estimation is termed the measured position. The validation of the measured against desired positions is shown in Fig. 13; the RMS error is of the order of 0.9 mm.
Real-time control of the Delta manipulator
Case I: In this case, a planar circular path was given as input, and the Delta robot's movement while following the path twice was recorded by the encoders. The planar circular path shown in Fig. 15(a) is input to the Delta's inverse kinematics function, the corresponding joint angles are calculated, and the motors are driven to those positions; the location of the end effector is tracked by the vision system. The desired and measured end-effector positions for the circular trajectory are shown in Fig. 15(b). In Fig. 15(a), the desired position is based on the circular trajectory provided to the inverse kinematics function, while the measured position is estimated from the recorded encoder values. Initially, the Delta robot was at its home position; it was lifted to a height of 5 mm and then the end effector was moved to the circumference of the circular trajectory, which is also the starting point of the path. As shown in Fig. 15(b), the sinusoidal curves confirm that the end effector follows the given circular trajectory.
The measured motion showed sub-centimetre error. The desired angles in Legs 1, 2 and 3 are shown in Fig. 15(c), the measured angles in Fig. 15(d), and the error in Fig. 15(e).
Case II: In this case, a helical trajectory was given as input and the Delta robot's movement along the path was recorded. The helical path shown in Fig. 16(a) is provided to the inverse kinematics function and the corresponding theta angles are estimated. The desired and measured end-effector positions for the helical trajectory are shown in Fig. 16(b), and the error in executing the helical profile is analysed. The desired angles in Legs 1, 2 and 3 are shown in Fig. 16(c), the measured angles in Fig. 16(d), and the error in Fig. 16(e).
Catching the cube (free-fall)
The virtual catching setup is used with a cube in vertical free fall, as shown in Fig. 17(a). The object's pose with respect to the fixed base frame is estimated and, based on this, inverse kinematics is used to move the fingers into a catching position. The stereo-camera identification of the cube and the assignment of a body-fixed coordinate system are shown in Fig. 17(b), and the overall trajectory of the object as tracked in 3D is shown in Fig. 17(c).
To make the experimental setup robust for catching objects, the uncertainty in the camera calibration routine must be estimated. The camera matrix transforms the 3D world coordinates of a point to 2D image coordinates through Eq. (1); differences in image coordinates between two points whose spatial relationship is known, or between the image coordinates of the same spatial point in multiple images, are then used to determine the object's spatial information. Errors in camera calibration arise from errors in creating the calibration grid and from the sensor-chip geometry.
Here, u and v represent the image coordinates, λ is the scaling factor and P represents the world coordinates. The matrix M, which transforms from 3D homogeneous coordinates to 2D homogeneous coordinates, is known as the calibration matrix. It cannot be calculated from a single point instance and is instead estimated using the least-squares method, which determines the 11 unknowns (M11 … M33) in M from known 3D points P and measured image coordinates p, with M34 set to 1.
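A minimal sketch of this least-squares estimation of the calibration matrix, assuming at least six known 3D-2D correspondences; with M34 fixed to 1, each correspondence contributes two linear equations in the 11 unknowns:

```python
import numpy as np

def estimate_calibration_matrix(world_pts, image_pts):
    """Direct linear transform: estimate the 3x4 calibration matrix M
    (with M[2,3] fixed to 1) from known 3D points and measured 2D image
    coordinates, via linear least squares on the rearranged projection
    equations u*(m31*X + m32*Y + m33*Z + 1) = m11*X + ... + m14, etc."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        b.append(v)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(m, 1.0).reshape(3, 4)   # M34 = 1 by construction
```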
The average re-projection error along the X-axis is 0.105 mm with a standard deviation of 0.115, and along the Y-axis it is 0.115 mm with a standard deviation of 0.071, as shown in Fig. 18.
Conclusion
This paper discussed the kinematic analysis of the Delta robot and the identification of its kinematic parameters using a vision sensor. A fixed monocular vision system is used to estimate the pose of the manipulated object and to perform the kinematic analysis in real time with high accuracy, the kinematic parameters of the parallel mechanism being estimated accurately enough to achieve catching. The design of the controller and the real-time optimization of its parameters are discussed, and catching of the object is demonstrated in several real-time and virtual-environment cases. The proposed work can be applied in the automation industry to enhance tracking and manipulation capability in a real-time environment.
Analytical properties of the gluon propagator from truncated Dyson-Schwinger equation in complex Euclidean space
We suggest a framework based on the rainbow approximation with effective parameters adjusted to lattice data. The analytic structure of the gluon and ghost propagators of QCD in Landau gauge is analyzed by means of numerical solutions of the coupled system of truncated Dyson-Schwinger equations. We find that the gluon and ghost dressing functions are singular in complex Euclidean space, with the singularities appearing as isolated pairwise-conjugated poles. These poles hamper the numerical solution of the Bethe-Salpeter equation for glueballs as bound states of two interacting dressed gluons. Nevertheless, we argue that, knowing the positions of the poles and their residues, a reliable algorithm for numerically solving the Bethe-Salpeter equation can be established.
I. INTRODUCTION
Due to the non-Abelian and confinement properties of Quantum Chromodynamics (QCD), gluons, which are self-interacting, can form colorless pure gluonic bound states, also referred to as glueballs. The occurrence of glueballs is one of the early predictions of the strong interactions described by QCD [1]. However, despite many years of experimental efforts, none of these gluonic states have been established unambiguously, cf. Ref. [2]. Possible reasons for this include the mixing between glueballs and "conventional" mesons, the lack of solid information on the glueball production mechanism, and the lack of knowledge about glueball decay properties. The study of glueballs is therefore one of the most interesting and challenging problems in strong-interaction physics, and several approaches are being pursued. One can mention phenomenological models mimicking certain nonperturbative QCD aspects, such as the flux tube model [3,4], constituent models [1,5-9], holographic approaches [10-12], and approaches based on QCD Sum Rules [13-17].
"Experimental" studies are performed within Lattice QCD (LQCD) approaches [18-21] (for a more detailed review see Ref. [22] and references therein). It should be noted that these theoretical approaches provide values of glueball masses which can differ from each other by as much as 1 GeV and even more, and no single approach has consistently reproduced lattice gauge calculations, cf. Refs. [18-21]. One can assert only that the consensus of the past two decades from lattice gauge theory and theoretical predictions is that the lightest glueball is a scalar (J^PC = 0^++) state in the 1.5-1.8 GeV mass range, accompanied by a tensor (J^PC = 2^++) state above 2 GeV.
Another interesting problem is the glueball-meson mixing in the lowest-lying scalar mesons.
The question of whether the lowest-lying scalar mesons are of a pure quarkonium nature, or whether glueball states mix in [23], still remains open. To solve these problems one needs to develop models within which it becomes possible to investigate, on a common footing, the glueball masses, glueball wave functions, decay modes and constants, etc. Such approaches can be based on the combined Dyson-Schwinger (DS) and Bethe-Salpeter (BS) formalisms, cf. Refs. [24,25]. It is worth mentioning that such models, with direct calculations of the corresponding diagrams, encounter difficulties in solving the DS equation, related to divergences of the loop integrals and to theoretical constraints on the gluon-ghost and gluon-gluon vertices, such as the Slavnov-Taylor identities. These circumstances result in rather cumbersome expressions for the DS equation, hindering straightforward numerical calculations.
In the present paper we suggest an approach, similar to the rainbow Dyson-Schwinger-Bethe-Salpeter model for quark propagators [26], to solve the DS equation for the gluon and ghost propagators with effective rainbow kernels. The formidable success of the rainbow approximation for quarks in describing mesons as quark-antiquark bound states within the framework of the BS equation with momentum-dependent quark mass functions determined directly by the DS equation, such as meson masses [26-30], electromagnetic properties of pseudoscalar mesons [31-34] and other observables [35-39], persuades us that a rainbow-like approximation may be successfully applied to gluons, ghosts and glueballs as well. The key property of such a framework is the self-consistent treatment of the quark and gluon propagators in both the DS and BS equations by employing in both cases the same approximate interaction kernel.
Recall that the rainbow model for quarks consists in replacing the product of the coupling g, the dressed gluon propagator D^ab_µν(k²) and the dressed quark-gluon vertex Γ_ν by an effective running coupling and the free vertex Γ⁰_ν [26,37], where a, b are color indices and Z(k²) is the effective rainbow running coupling. The explicit form of Z(k²) is motivated by the fact that, in the Landau gauge, it is proportional to the nonperturbative running coupling α_s(k²), which in turn is determined by the gluon Z(k²) and ghost G(k²) dressing functions [40-48] as

α_s(k²) = α_s(µ²) G²(k², µ²) Z(k², µ²),

where µ² is a renormalization scale parameter with G²(µ², µ²) Z(µ², µ²) = 1 at k² = µ². In what follows, the parameter µ² is suppressed in our notation and the simple notation G(k²) and Z(k²) is used for the dressing functions.
In principle, if one were able to solve the DS equation exactly, the approach would not depend on any additional parameters. However, due to known technical problems, one restricts oneself to calculations of the first few terms of the perturbative series, usually within the one-loop approximation, thus arriving at the truncated Dyson-Schwinger (tDS) and truncated Bethe-Salpeter (tBS) equations, known as the rainbow-ladder approximation. The merit of such an approach is that, once the effective parameters are fixed, the whole spectrum of tBS bound states is supposed to be described without additional approximations.
In the present paper we investigate the prerequisites for the interaction kernel of the combined Dyson-Schwinger and Bethe-Salpeter formalism to be used in subsequent calculations of the glueball mass spectrum. Note that within such an approach it becomes possible to investigate theoretically not only the mass spectrum of glueballs, but also their various decay processes, which are directly connected with fundamental QCD problems. We are working in Landau gauge and, consequently, we need to take into account the contribution of the Faddeev-Popov ghosts. Thus, one needs a generalization of the usual BS scheme that allows for mixing of bound states of different fields. In general, the complete system of BS equations also includes the contribution of quark-antiquark bound states, i.e. it also involves glueball-meson mixing in the BS calculations. The question of how large these mixing effects can be is not yet clearly settled. However, there are indications, based on lattice calculations of the pure-glue pseudoscalar glueball [49], that at least in the pseudoscalar channel the glueball-meson mixing can be neglected, see also the discussion in Ref. [24]. In what follows we are interested in bound states of the pure gauge theory, that is, neglecting quarks. The corresponding system of coupled tBS equations is presented diagrammatically in Fig. 1, and the explicit form of the corresponding equations can be found, e.g., in Ref. [25]. In full QCD, the screening effect from the creation of quark-antiquark pairs from the vacuum slightly decreases the value of the gluon dressing around its maximum; in our approach this effect is implicitly taken into account by adjusting the phenomenological parameters of the model to the full, unquenched lattice calculations [52,53]. In the Landau gauge the gluon propagator D^ab_µν(k) and ghost propagator D^ab_G(k) are expressed via the dressing functions Z(k) and G(k) as

D^ab_µν(k) = δ^ab t_µν(k) Z(k²)/k²,    D^ab_G(k) = −δ^ab G(k²)/k²,

where t_µν(k) is the transverse projection operator, t_µν(k) = g_µν − k_µ k_ν / k². The corresponding dressing functions then obey the tDS equations (cf. Fig. 2),
where p = q − k, and Z_3 and Z̃_3 are the gluon and ghost renormalization constants, respectively.
To solve this system of equations one needs information on the three-gluon vertex Γ_βσν, the gluon-ghost vertex Γ_ν, the coupling g and the propagators D^ab_µν and D^ab_G. The simplest approach consists in replacing the full dressed three-gluon and ghost-gluon vertices by their bare values, known as the Mandelstam approximation [54-56] and the y-max approximation [57].
In the Mandelstam approximation the gluon-ghost coupling is neglected in order to simplify the angular integration; the resulting solution exhibits a rather singular gluon propagator at the origin. In Ref. [57] the coupling of the gluon to the ghost was not neglected, but additional simplifications for Z(k²) and G(k²) were introduced, again to facilitate the angular integrations and the analytical and numerical analysis of the equations. From these calculations it was concluded that it is not the gluon, but rather the ghost propagator that is highly singular in the deep infrared limit. A more rigorous analysis of the tDS equation has been presented in a series of publications (see, e.g., Refs. [42,44,58,59] and references therein), where much attention has been focused on a detailed investigation of the gluon-gluon and ghost-gluon vertices and on the implementation of the Slavnov-Taylor identities for these vertices.
With some additional approximations the infrared behavior of the gluon and ghost propagators has been obtained analytically and compared with the available lattice calculations. In Ref. [60] a thorough analysis of the relevance of the Slavnov-Taylor identities, renormalization procedures and divergences in the tDS equation is presented in some detail, together with a comparison of the numerical results for the gluon and ghost dressing functions and the running coupling α_s with lattice data. Similar calculations, also compared with lattice data, are presented in Ref. [48] (for a more detailed review see Ref. [61] and references therein). It should be noted that the above-quoted approaches result in rather cumbersome expressions for the system of tDS equations which, consequently, cause difficulties in finding numerical solutions; moreover, a direct generalization to complex Euclidean space becomes problematic due to numerical problems at large |k²| of the complex momentum.
C. Rainbow approximation for ghosts and gluons
In the present paper we suggest an approximation for the interaction kernels in Eqs. (5) and (6), similar to the rainbow model [26,37,38], Eq. (1), which allows for analytical angular integration in the gluon and ghost loops and facilitates the subsequent numerical calculations for complex momenta. The results of the lattice calculations of the running coupling α_s(k²), Eq. (2), serve as a guideline in choosing the explicit form of these kernels. The gist of our approximations is given by Eqs. (7)-(11), where A ∼ 1/3 is a phenomenological parameter accounting for the difference between the two effective kernels. The rainbow approximation for the propagators in Minkowski space is obtained by inserting Eqs. (7)-(11) into Eqs. (5) and (6); since the dominant contributions come from the IR region, the perturbative ultraviolet (UV) parts of F^eff_{1,2} are neglected. Such an approximation corresponds to the AWW kernel [38] rather than to the full Maris-Tandy model [26]. As in the case of the quark rainbow approximation [26,36,37,43,62], the explicit form of F^eff_{1,2}(p²) is inspired by the fact that the right-hand sides of Eqs. (7)-(9) are proportional to the running coupling (2). The available lattice QCD results [52] show that, in the deep IR region, α_s increases with k² and reaches its maximum value at k ∼ 0.8-0.9 GeV/c; it then decreases as k² increases further and acquires the perturbative behaviour in the UV region. In Ref. [52] an interpolation formula consisting of three terms (monopole, dipole and quadrupole, multiplied by k²) was proposed to fit the data. We prefer, however, an interpolation formula which allows the angular integrations in our subsequent calculations to be performed analytically and assures good convergence of the loop integrals. For this we used a Gaussian interpolation formula and refitted the lattice data [52] in the IR region with several Gaussian terms, achieving good agreement with the data (see Appendix). This stimulated us to use the same interpolation formulae for F^eff_{1,2}(p²): we found that one Gaussian term for F^eff_2(p²) and two terms for F^eff_1(p²) are quite sufficient to obtain a reliable solution of Eqs. (5)-(6). With such a choice of the effective interaction, the angular integration can be carried out analytically, leaving one with a system of one-dimensional integral equations in Euclidean space whose kernels involve modified Bessel functions I_n.
D. Numerical solution along the real axis
We solve the resulting system of one-dimensional integral equations (15) and (16) numerically by an iteration procedure. For this we discretize the loop integrals using a Gaussian quadrature formula, so that the system of integral equations reduces to a system of algebraic equations. The independent parameters are ω_i and D_i, i = 1…3, see Eqs. (13), (14). We find that the iteration procedure converges rather fast and practically does not depend on the choice of the trial start functions. The phenomenological parameters ω_i and D_i have been adjusted so as to reproduce as closely as possible the lattice QCD results [52,53].
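A schematic sketch of such an iteration for a single generic one-dimensional integral equation, discretized with Gauss-Legendre quadrature; the actual system couples Z and G and includes the renormalization constants, so 'kernel' here is only a stand-in for the effective interaction:

```python
import numpy as np

def solve_dressing(kernel, n_quad=64, q_max=1e3, tol=1e-10, max_iter=1000):
    """Fixed-point iteration for a model equation
    f(k) = 1 + int_0^qmax K(k, q) f(q) dq,
    discretized with Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    q = 0.5 * q_max * (x + 1.0)          # map nodes from [-1, 1] to [0, qmax]
    wq = 0.5 * q_max * w
    K = kernel(q[:, None], q[None, :])   # kernel matrix on the quadrature grid
    f = np.ones(n_quad)                  # trial start function
    for _ in range(max_iter):
        f_new = 1.0 + K @ (wq * f)
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return q, f

# Illustrative Gaussian-type kernel (D and omega are placeholder parameters,
# not the paper's exact F_eff):
# kernel = lambda k, q: D * np.exp(-(k - q)**2 / omega**2)
```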
A few remarks are in order here. First, the deep-infrared behaviour of the ghost and gluon propagators requires separate consideration. It has been established that the gluon dressing Z(k²) vanishes at the origin, while the ghost dressing G(k²) is highly singular, see e.g. Refs. [40,44,48,58]; in the deep IR region, k ≤ ε, the gluon and ghost dressing functions are predicted to behave as power laws in k². It should be noted that, for quark-antiquark bound states, the use of the complex rainbow solution in the tBS equation provides an amazingly good description of many properties of light mesons (masses, widths, decay rates etc., cf. [28, 29, 31-33, 35, 63, 64]). However, for heavier mesons the quark propagators possess pole-like singularities [65,66]. A second option is to deform the loop integration path itself away from the real positive k² axis [65,73]. This can be done by deforming the integration contour and solving the integral equation along this new contour: for complex momenta k, one has to solve the integral equation along a deformed contour in the complex plane. In practice, one rotates the contour in the complex plane, multiplying both the internal and external variables by a phase factor e^{iφ}, so that k = |k|e^{iφ} and q = |q|e^{iφ}, and solves the tDS equation along the rays φ = const. This method works quite well in the first quadrant, φ ≤ π/2, but fails at φ > π/2, see e.g. Refs. [65,66]. This is because along the rays φ = const all values of |k|, from |k| = 0 to |k| → ∞, contribute to the tDS equation, even if one needs the solution only in the restricted area of the parabola Im k² < 0. Consequently, numerical instabilities are inevitable at φ > π/2.
The third method, which we use in this work, consists in solving the integral equations in a straightforward way from the tDS equation along real q, on a complex grid of the external momentum k inside and in the neighbourhood of the parabola (18). As in the previous case, numerical instabilities can be caused by oscillations of the exponent e^{−(k−q)²/ω²} and of the Bessel functions. The positions of the isolated poles k²_{0i} and their residues can be obtained from Cauchy-type contour integrals,

N_{G(Z)} = (1/2πi) ∮_γ d ln G(Z)^{-1}(z),    res G(Z)(k²_{0i}) = (1/2πi) ∮_{γ_i} G(Z)(z) dz,

where N_{G(Z)} is the number of poles in the domain enclosed by the contour γ (an effective algorithm for numerical evaluation of Cauchy-like integrals can be found, e.g., in Ref. [74]).
In such a way we find the poles of G(k²) and Z(k²) together with their residues relevant for further calculations. In subsequent numerical calculations of integrals involving functions with pole-like singularities, one can use the following theorem: if a complex function f(z) possesses isolated poles z_{0i} with residues r_i, then it can be represented in the form

f(z) = f̃(z) + Σ_i r_i/(z − z_{0i}),

where f̃(z) is analytic within the considered domain and, consequently, can be computed as f̃(z) = f(z) − Σ_i r_i/(z − z_{0i}). Note that a good numerical test of the performed calculations is the following procedure.
Enclose a few poles by a larger contour and ensure that the Cauchy integral of G(k²) or Z(k²) is different from zero and that the Rouché integral of the inverse, G(k²)^{-1} or Z(k²)^{-1}, is an integer equal to the number of enclosed poles. Note that the Cauchy integral of G(k²) or Z(k²) in this case must coincide with the sum of the individual residues of the isolated poles.
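A minimal numerical sketch of this consistency test on a circular contour; the residue sum follows from the Cauchy theorem, and the pole count from the winding of the argument of the inverse function (assuming the function has no zeros inside the contour):

```python
import numpy as np

def residue_sum(f, center, radius, n=4096):
    """Cauchy integral (1/2*pi*i) * closed-contour integral of f(z) dz on a
    circle; equals the sum of residues of the enclosed poles."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    dz = 1j * (z - center) * (2.0 * np.pi / n)  # dz = i*r*e^{i theta} dtheta
    return np.sum(f(z) * dz) / (2j * np.pi)

def pole_count(f, center, radius, n=4096):
    """Rouche/argument-principle count: the winding number of 1/f around the
    contour equals the number of poles of f enclosed (if f has no zeros)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    z = center + radius * np.exp(1j * theta)
    phase = np.unwrap(np.angle(1.0 / f(z)))
    return int(np.rint((phase[-1] - phase[0]) / (2.0 * np.pi)))

# Self-test: one pole inside the unit circle (at 0.2), one outside (at 3.0).
f = lambda z: 1.0 / (z - 0.2) + 2.0 / (z - 3.0)
print(residue_sum(f, 0.0, 1.0))   # ~ 1.0, the residue of the enclosed pole
print(pole_count(f, 0.0, 1.0))    # 1
```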
B. Pole structure of the dressing functions
Results of our calculations are presented in Table I and Fig. 4. The model interaction kernels in the rainbow approximation are inspired essentially by the behaviour of the running coupling (2) in the IR region, which is now available from the lattice QCD data [52]. To facilitate the calculations, the explicit expressions for the kernels are taken as sums of Gaussian terms; accordingly, it is preferable to have a parametrization of the running coupling in the same form. Usually, in the original publications of lattice QCD results, the data are fitted as a sum of several multipole terms, cf. [52,53].
For our purpose we have to refit the data with another, Gaussian-like formula. Below we present a fit of the running coupling (2) as a sum of several Gaussian terms, with fitting parameters found from a Levenberg-Marquardt minimization procedure. Such a parametrization serves as a guideline in choosing the form of the effective kernels (13)-(14).
The minimization procedure converged to the set of parameters listed in Table II, which provides a fit to the lattice QCD data, presented in Fig. 5.
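A minimal sketch of such a Gaussian-sum fit; the exact functional form and number of terms used here may differ from the paper's, and k2_lattice, alpha_lattice are placeholder arrays for the lattice data points:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(k2, *params):
    """Sum of Gaussian terms a_i * exp(-k2 / w_i**2); params alternate
    amplitudes and widths (a_1, w_1, a_2, w_2, ...)."""
    a = np.asarray(params[0::2])
    w = np.asarray(params[1::2])
    return np.exp(-np.outer(k2, 1.0 / w**2)) @ a

def fit_running_coupling(k2_lattice, alpha_lattice, n_terms=3):
    """Least-squares fit of alpha_s(k^2) lattice points with n_terms
    Gaussian terms (Levenberg-Marquardt, the default for curve_fit)."""
    p0 = []
    for i in range(n_terms):
        p0 += [1.0, 0.3 * (i + 1)]   # staggered widths avoid a degenerate start
    popt, _ = curve_fit(gaussian_sum, k2_lattice, alpha_lattice, p0=p0)
    return popt
```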
Deterministic early endosomal maturations emerge from a stochastic trigger-and-convert mechanism
Endosomal maturation is critical for robust and timely cargo transport to specific cellular compartments. The most prominent model of early endosomal maturation involves a phosphoinositide-driven gain or loss of specific proteins on individual endosomes, emphasising an autonomous and stochastic description. However, limitations in fast, volumetric imaging long hindered direct whole cell-level measurements of absolute numbers of maturation events. Here, we use lattice light-sheet imaging and bespoke automated analysis to track individual very early (APPL1-positive) and early (EEA1-positive) endosomes over the entire population, demonstrating that direct inter-endosomal contact drives maturation between these populations. Using fluorescence lifetime, we show that this endosomal interaction is underpinned by asymmetric binding of EEA1 to very early and early endosomes through its N- and C-termini, respectively. In combination with agent-based simulation which supports a ‘trigger-and-convert’ model, our findings indicate that APPL1- to EEA1-positive maturation is driven not by autonomous events but by heterotypic EEA1-mediated interactions, providing a mechanism for temporal and population-level control of maturation.
Introduction
In cellular signal transduction, information is often encoded as a transient pulse or as a temporal pattern of signals. The binding of growth factors to their receptors results in activation of secondary messengers followed by critical deactivation of receptors through the interaction with phosphatases and lysosomal degradation (1,2). This combination of events in the signal transduction pathway typically encodes the temporal pattern. The endosomal pathway, where both spatial trafficking and biochemical maturation of endosomes occur in parallel, is a central process that modulates the interaction of receptors with enzymes embedded in other organelles, such as the endoplasmic reticulum, or degradation via the lysosomal pathway (3). Following formation at the plasma membrane via endocytosis, endosomes carrying cargoes undergo maturation processes (4) facilitated by the concerted effects of motility, inter-endosomal fusions, fissions, and endosomal conversions. These latter switch-like processes involve protein conversions, in which one specific set of proteins is shed and another acquired (5,6). This occurs in concert with phosphoinositide conversions, in which specific phosphoinositide species act as the modules of coincidence detection (7). Thus, phosphoinositide species provide a second layer of regulation, governing which proteins will localise to a specific subset of endosomes (8,9). Epidermal growth factor receptors (EGFR) have been shown to depend on dynein for receptor sorting and localisation to mature endosomes (10,11). In addition, localisation of EGFR to EEA1 compartments was delayed when dynein was inhibited. On the other hand, expansion of APPL1 compartments enhanced EGFR signalling, consistent with the role of endosomal maturation in modulating temporal activity of receptors in endosomes. An open question that arises then is, how does motility or subcellular localisation influence endosomal maturation? Furthermore, in the context of trafficking of cargo such as EGFR, that respond to pulsatile patterns of ligands, how do populations of endosomes mature in a timely manner that ensures accurate signal interpretation?
Our current understanding of the dynamics of endosomal maturations comes from seminal live-cell imaging studies that captured the process of individual endosomes undergoing direct conversions (5,6). These observations led to the prevailing single endosome-centric model wherein a phosphoinositide switch controls the transition from adaptor protein, phosphotyrosine interacting with PH domain and leucine zipper 1 (APPL1) to early endosomal antigen 1 (EEA1) on an individual endosome (6). APPL1 and EEA1 bind to endosomes via coincidence detection binding to Rab5, as well as the phosphoinositides PI(3,4)P2 and PI(3)P, respectively (12,13). Zoncu et al. showed that PI(3)P was required for long-lived EEA1 endosomes; they also observed reversions of EEA1-to-APPL1 conversion upon inducible depletion of PI(3)P, suggesting that APPL1 to EEA1 maturation is underpinned by a phosphoinositide switch resulting in PI(3)P production. In mammals, PI(3,4)P2 can be dephosphorylated to PI(3)P by either of two phosphoinositide 4-phosphatases, INPP4A and INPP4B (14,15), which have been suggested to have distinct intracellular localisations, with INPP4A being found on Rab5-positive endosomes (8,16). Nonetheless, these single endosome-centric maturation models do not address population-level maturation rates, which are essential for bulk regulation of receptor trafficking, and therefore signal interpretation. Secondly, the single endosome-centric models rely on stochastic binding of molecules, which is unpredictable as a mechanism. Stochasticity poses crucial challenges in maintaining causal ordering and temporal specificity, i.e., a tight probability distribution of events in time. However, despite the emphasis on stochasticity in constituent dynamics in the vesicular transport system (17-20), endosomal trafficking processes display an extraordinary degree of robustness and predictability in delivering cargo to specific intracellular destinations, and receptors transported through the endosomal system show reproducible signalling outcomes. These properties suggest that there exist mechanisms to counter the stochasticity of the constituent processes and thus to achieve tight control over maturation, trafficking, and dynamics of the intracellular transport system. A limiting factor in extending and reconciling the previously established single endosome-centric model to population-level maturation rates has been the difficulty in directly measuring these dynamic events at whole cell levels.
Here, we used lattice light-sheet microscopy (LLSM) live-cell imaging, which allows rapid imaging of whole cell volumes for extended periods of time (21), to measure the whole cell dynamics of APPL1 and EEA1. To quantify these data, we developed a bespoke endosome detection and tracking algorithm to measure large numbers of endosomal collisions, fusions, and conversions occurring within many single cells over a prolonged period of imaging. We complemented these methods with live-cell fluorescence lifetime microscopy (FLIM) to interrogate the molecular orientation of EEA1, a head-to-head homodimer bound to maturing early endosomes. We show that very early endosome (VEE) to early endosome (EE) conversion is a multistep process, underpinned by the multiple asymmetric binding sites of EEA1 and its cyclical conformation changes, which is brought about by endosomal collisions and heterotypic fusions. Through simulations, we test the effectiveness of our proposed mechanism in predicting the maturation time course, specifically, the conversion from APPL1 to EEA1 and from N- to C-terminal EEA1 attachments. These results warrant a significant upgrade to the model of endosomal maturations, with heterotypic interactions, where collisions lead to triggered conversions or fusions, forming a large fraction of events leading to endosomal maturations. Furthermore, our simulations indicate that this emergent mechanism imparts tight temporal control over the ensemble maturation of VEEs.
Results
Measuring and quantifying whole cell-level endosomal maturations. To both measure ensemble endosomal conversion dynamics, as well as follow individual endosomes at whole cell levels with fast spatiotemporal resolution, we used lattice light-sheet microscopy (LLSM) to image cells expressing APPL1-EGFP (22) and TagRFP-T EEA1 (23). LLSM-based live-cell imaging enabled near-diffraction-limited prolonged imaging of ∼30 minutes with a temporal resolution of ∼3 seconds per entire volume of the cell with minimal photobleaching (Fig. 1a; Supplementary Fig. 1a,b; and Supplementary Movie 1). Rapid LLSM imaging confirmed minimal overlap between APPL1 and EEA1 signals with the exception of rapid switch-like APPL1 to EEA1 conversions, as has been reported previously (6). Visual inspection of the data revealed three major categories of dynamic phenomenologies: inter-endosomal 'kiss-and-run' events preceding conversions, inter-endosomal collisions leading to fusion, and conversions (Fig. 1b and Supplementary Movies 1 and 2). The number of distinct events, and their highly stochastic nature, preclude interpretations based on human-biased selection of representative trajectories. We therefore developed an automated image analysis pipeline to convert raw data to full trajectories of detected endosomes, automatically annotated for the presence of events such as heterotypic collisions, conversions, and fusions. Briefly, we identified all potential endosomes using a blob detection routine (Laplacian of Gaussian operator), then filtered to the true endosomes with an unsupervised pattern recognition-based routine (Supplementary Fig. 1c). The brightest and dimmest objects (>100 total) were taken to represent true versus false endosomes, respectively, then used as inputs for template matching to construct a set of features for each class, followed by k-means clustering into signal versus background (Supplementary Fig. 2). These discrete segmented objects were then tracked using a custom tracking routine built with trackpy (24), using both localisation and intensity values (Supplementary Fig. 1d, Supplementary Movie 2). Tracked objects from opposite channels were then analysed independently to identify collision, fusion, and conversion events based on the time course of spatial separation between nearby endosomes (Supplementary Fig. 1e,f).
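As a rough illustration of how such a pipeline fits together, the sketch below chains off-the-shelf building blocks: Laplacian-of-Gaussian blob detection (scikit-image), a two-cluster k-means split into signal versus background (scikit-learn), and trajectory linking (trackpy). It is a minimal re-creation under assumed parameters and simplified 2D frames, not the authors' actual code; the feature set here uses raw intensity statistics in place of the template cross-correlations described above, and all names other than the library calls are hypothetical.

```python
# Minimal sketch of the detection/classification/tracking pipeline,
# assuming 2D frames for simplicity; parameter values are illustrative.
import numpy as np
import pandas as pd
import trackpy as tp
from skimage.feature import blob_log
from sklearn.cluster import KMeans

def detect_endosomes(frames):
    """Over-detect candidate blobs, then keep the k-means cluster
    seeded by the brightest candidates (putative true endosomes)."""
    rows = []
    for t, img in enumerate(frames):
        # Laplacian-of-Gaussian blob detection (deliberately permissive).
        blobs = blob_log(img, min_sigma=1, max_sigma=4, threshold=0.02)
        for y, x, sigma in blobs:
            r = int(np.ceil(2 * sigma))
            patch = img[max(0, int(y) - r):int(y) + r + 1,
                        max(0, int(x) - r):int(x) + r + 1]
            rows.append({"frame": t, "y": y, "x": x, "sigma": sigma,
                         "peak": patch.max(), "mass": patch.sum()})
    cand = pd.DataFrame(rows)
    # Cluster candidates into signal vs background using simple intensity
    # features (the paper uses template cross-correlation features instead).
    feats = cand[["peak", "mass", "sigma"]].to_numpy()
    feats = (feats - feats.mean(0)) / feats.std(0)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
    signal = labels == labels[cand["peak"].idxmax()]  # brightest cluster = signal
    return cand[signal]

# Link detections into trajectories; search_range and memory are guesses.
# endosomes = detect_endosomes(movie)
# trajectories = tp.link(endosomes, search_range=5, memory=3)
```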
Inter-endosomal interactions are necessary for robust conversions. To investigate whether heterotypic interactions play a regulatory role in very early endosomal maturation, we applied this analysis pipeline to six untreated and two nocodazole-treated whole cell volumes (equivalent to >1 hour of total observations), which resulted in the detection of thousands of events. A representative montage of a conversion preceded by multiple collision events is shown in Fig. 1c, with the corresponding intensity trace (with annotated events) in Fig. 1d. We applied stringent selection criteria to all automatically identified events to select only clear cases of APPL1 to EEA1 conversion (Supplementary Fig. 3, Supplementary Movie 3), confirmed by visualisation of population-average signals of APPL1 and EEA1 immediately before, during, and after each conversion event. During the process of conversion, APPL1 and EEA1 signals for the same endosome showed average colocalisation for ∼30 s (Supplementary Fig. 4a), but with considerable variability. This is demonstrated by separating all events into cohorts defined by the total duration of APPL1-EEA1 colocalisation (in bins of 10 s); population averages for each cohort are shown in Supplementary Fig. 4b. During visual inspection of these data, we noticed a clear association between the speed of individual APPL1 to EEA1 conversions and the number of preceding heterotypic collisions. To confirm this observation, we calculated the number of collisions occurring between each APPL1 endosome and any EEA1 endosomes (Supplementary Fig. 5) in the 30 s immediately prior to a detected conversion or fusion event, then segmented the distribution of events from each colocalisation cohort according to the number of preceding heterotypic collisions (Fig. 1e). Importantly, all slow detected APPL1 to EEA1 conversions and fusions had few or no potential collisions prior to conversion. The relative numbers of each type of event (collision-induced or unaided, fusion or conversion) are summarised in Fig. 1f. In line with previous models of EEA1-mediated fusion (25,26), 39% of the events displayed immediate fusion following collisions ('unaided fusions'). This could be attributed to EEA1-mediated fusion, where EEA1 molecules can bridge two endosomes at the instant of collision, as has been postulated previously (26)(27)(28)(29). A further 38% of events involved fusions that were preceded by at least one heterotypic collision ('collision-induced fusions'). While 12% of events represented unaided conversions, which have been reported earlier and result from direct binding of EEA1 from the cytoplasm, collisions leading to conversions accounted for 11% of all events. Together, these events form the endosomal maturation process. Note that heterotypic fusions result in endosomes with both APPL1 and EEA1, and represent an intermediate step in conversion (vide infra). The quantitative analysis also revealed that unaided conversions were more prominent for larger endosomes (Supplementary Fig. 4c), whereas heterotypic collisions were a feature of a much broader range of smaller endosomes that showed stochastic directed runs and transitions to periods of little movement, as has been reported for early endosomes (30). These results underline the necessity of rapid volumetric imaging and bespoke analysis routines to capture the described processes.
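To make the cohort construction concrete, a minimal sketch of the look-back count is given below: for each detected conversion or fusion event, it tallies the heterotypic collisions recorded for that APPL1 endosome within the preceding 30 s window. The data-frame layout and column names are assumptions, not the authors' schema.

```python
import pandas as pd

WINDOW_S = 30.0  # look-back window before each event, as in the text

def count_prior_collisions(events, collisions, window=WINDOW_S):
    """events: one row per conversion/fusion (appl1_id, t_event);
    collisions: one row per heterotypic collision (appl1_id, t)."""
    counts = []
    for _, ev in events.iterrows():
        mask = ((collisions["appl1_id"] == ev["appl1_id"]) &
                (collisions["t"] >= ev["t_event"] - window) &
                (collisions["t"] < ev["t_event"]))
        counts.append(int(mask.sum()))
    return events.assign(n_prior_collisions=counts)

# Events with zero prior collisions are 'unaided'; events with one or
# more are 'collision-induced', mirroring the categories in Fig. 1f.
```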
APPL1 and EEA1 are counter-clustered during conversion. Furthermore, we observed that in nocodazole-treated cells, some endosomes showed vacillating 'back-and-forth' fluctuations between the signals of APPL1 and EEA1, never fully committing to a complete conversion into an EEA1-positive endosome that did not revert (Supplementary Movie 4). Interestingly, a few endosomes displaying EEA1 fluctuations were also 'pulsatile', suggesting the existence of clustering (Supplementary Fig. 6), non-linearity, and binding-unbinding events that corresponded to more than a few molecules. Many endosomal markers and associated proteins, including dynein, have been reported to exist as clusters on the endosomal surface (31)(32)(33)(34). In addition, phosphoinositide lipids display clustering induced by binding of specific proteins (35). Therefore, we reasoned that, given the observed dynamics, APPL1 and EEA1 may display some level of clustering. To confirm the existence of clusters of EEA1, we performed single molecule localisation microscopy using EEA1 Dendra-2 (36). We found that EEA1 was not uniformly distributed over the entire surface of the endosomes, but instead formed distinct domains (Supplementary Fig. 7). To confirm this observation in live cells and to investigate the distribution of EEA1 with respect to APPL1, we performed multi-colour live super-resolution microscopy via super-resolution by radial fluctuations (SRRF) (37) of APPL1 and EEA1 (Supplementary Movie 5). Interestingly, we observed that APPL1 and EEA1 are counter-clustered (Fig. 1g, Supplementary Fig. 8a). Both APPL1 and EEA1 show dynamic localisation with time, but this counter-clustering is maintained through the process of conversion, until the APPL1 signal is lost (Fig. 1h, Supplementary Fig. 8b-d).
Two distinct populations of EEA1 endosomes bound via N- and C-termini exist. Taken together, our experimental observations suggested that heterotypic interactions contribute to the initiation of conversion processes. Therefore, we hypothesised that the inter-endosomal binding ability of the EEA1 homodimer and the presence of heterotypic collisions may work together to seed conversions. EEA1 projects out into the cytoplasm due to its ∼200 nm-long coiled-coil domain (26,29); furthermore, it can bind to endosomal membranes at both its N- and C-terminal ends (38). Whilst at the C-terminus EEA1 binds to membranes through the coincidence detection of Rab5 and PI(3)P (12,29,39), at the N-terminus EEA1 binds solely to Rab5 through a zinc-finger binding domain (29,40). We therefore rationalised that in a heterotypic collision, the incident APPL1 endosome would have little to no PI(3)P, and as such the only EEA1 binding that is probable is through N-terminal binding, thus producing an encoded precedence in EEA1 N- versus C-terminal binding.
To determine which terminus of EEA1 is bound to the already EEA1-positive endosome, and which domain binds to the incoming nascent endosome, we utilised fluorescence lifetime microscopy (FLIM). We reasoned that N-terminally tagged EGFP-EEA1 combined with an RFP FRET partner could distinguish N- from C-terminal binding using the lifetime of EGFP, since EEA1 is 200 nm in length in its straight conformation, and it binds directly to Rab5 via its N-terminus (Fig. 2a). Multi-scale molecular dynamics simulations also suggest that the coiled-coil domain can extend with a tilt of up to 50° from the normal to the endosomal membrane surface when bound using the C-terminal FYVE binding domain (41). Thus, N-terminal binding will result in a decreased fluorescence lifetime due to FRET with Rab5 labelled with RFP, whereas C-terminal binding will show the EGFP lifetime since no FRET will take place. We first investigated whether different populations of EEA1-positive endosomes, bound via N- or C-termini, exist in fixed cells. We found that EEA1 endosomes showed two strikingly distinct populations: C-terminally bound EEA1 that localised closer to the nucleus of the cell (Fig. 2c), and N-terminally bound EEA1 that was predominantly peripherally localised (Fig. 2b,c). Additionally, we were able to detect these same two populations of endosomes using the inverse FRET pair, using Rab5 EGFP lifetime in cells transfected with EEA1-TagRFP, in contrast to cells transfected with only EEA1-EGFP, which showed only a single longer lifetime distribution corresponding to native EGFP (Supplementary Fig. 9). To confirm that these two populations of lifetimes corresponded to N- and C-terminally bound EEA1, we expressed Rab5 EGFP and either CT-Mut EEA1 TagRFP or NT-Mut EEA1 TagRFP, which both showed only a single lifetime peak corresponding to entirely FRET or non-FRET lifetimes, respectively (Supplementary Fig. 9). In addition to the N- and C-terminal mutant controls, fluorescence lifetime displayed no dependence on the donor:acceptor intensity ratio, confirming that the observed lifetime decrease results from FRET interactions and not from insufficient acceptor molecules (Supplementary Fig. 10). These experiments strongly indicate that, in newly generated endosomes, the first EEA1 binding occurs via the N-terminus.
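The physical basis for this readout can be checked with a back-of-the-envelope Förster calculation: FRET efficiency falls off as the sixth power of the donor-acceptor distance, so a donor held 150-200 nm from the acceptor is effectively FRET-silent, while a donor within a few nanometres is strongly quenched. The sketch below assumes a Förster radius of roughly 5 nm, typical of GFP-RFP pairs, and an unquenched EGFP lifetime matching the long fitted component; both numbers are illustrative rather than measured here.

```python
# Expected donor lifetime vs donor-acceptor distance (Förster theory).
# R0 ~ 5 nm is a typical GFP-RFP Förster radius (assumed, not measured here).
TAU_D = 2.6   # ns, unquenched EGFP lifetime (matches the fitted long component)
R0 = 5.0      # nm

def donor_lifetime(r_nm):
    efficiency = 1.0 / (1.0 + (r_nm / R0) ** 6)
    return TAU_D * (1.0 - efficiency)

for r in (3.0, 5.0, 10.0, 150.0):
    print(f"r = {r:6.1f} nm -> tau = {donor_lifetime(r):.3f} ns")
# At r = R0 the lifetime halves; by r = 150 nm (C-terminal binding, with the
# N-terminus extended into the cytoplasm), it is indistinguishable from TAU_D.
```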
EEA1 binding via the N-terminus precedes binding via the C-terminus.
To map the temporal dynamics of EEA1 binding via the N- or C-termini, we performed live-cell FLIM of EGFP-EEA1 and Rab5-mRFP. However, live-cell FLIM using confocal microscopy with sufficient temporal resolution to capture endosomal processes intrinsically results in a reduced number of collected photons. To overcome this, we took advantage of a priori knowledge from the fixed cell experiments and fit the live-FLIM data with the two lifetime components detected in the fixed experiments. This gave a shorter lifetime component corresponding to N-terminally bound EEA1, where EGFP can FRET with Rab5-RFP, and a longer fluorescence lifetime corresponding to C-terminally bound EEA1, where the N-terminus is at least 150 nm away, extended into the cytoplasm from the Rab5 RFP. We then separated the detected photons collected at each pixel based on these two components, effectively giving an 'NT EEA1' and a 'CT EEA1' channel (Fig. 2a). Using this two-component fitting, we visualised the initial appearance of EEA1 on Rab5-positive, EEA1-negative endosomes following a collision-conversion event. We observed that only N-terminally bound EEA1 (Fig. 3a,b; Supplementary Movies 6 and 7) localised on these Rab5-positive endosomes, which then displayed an increasing signal of C-terminally bound EEA1, concomitant with fusions and trafficking towards the perinuclear region (PNR) (Fig. 3d,e; Supplementary Movie 8). This gradual acquisition of C-terminal EEA1, seen through the increase in longer lifetime components and the reduced N:C intensity ratio (Fig. 3e), suggests a concurrent phosphoinositide conversion of PI(3,4)P2 into PI(3)P, with the initial trigger via N-terminally bound EEA1, even for unaided conversions. This subsequent maturation following the appearance of N-terminal EEA1 can also be observed with analogous FLIM analysis methods, including phasor plots and average pixel lifetimes (Supplementary Fig. 11). Whilst EEA1-EEA1 fusions are commonly observed, by separating EEA1 vesicles into their constituent N- and C-terminally bound populations, we observed that fusions primarily occurred when at least one vesicle had C-terminal EEA1 present (Fig. 3c). Fusions were most likely to occur between N- and C-terminal EEA1-positive or C- and C-terminal EEA1-positive endosomes (Fig. 3c). Endosome pairs with at least one EEA1-negative endosome did not show significant fusions. Remarkably, in cases with both endosomes N-terminally positive, no significant fusions were observed. Three conclusions could be drawn from these results. First, the requirement of at least one C-terminally bound EEA1 and the non-fusion of N-terminally bound EEA1-positive endosomes suggest that cross-binding of EEA1 is a necessary step for endosomal fusions to occur. This aligns with previously published results showing that both ends of the endosomal tether must be stably bound to result in endosomal fusion (26,29). Second, the appearance of N-terminally bound EEA1 prior to C-terminally bound EEA1 in cases of unaided conversions indicates that N-terminal binding is a necessary and intermediate step before further maturation via phosphoinositide conversion into C-terminally bound EEA1. Finally, with the requirement of at least one EEA1-positive endosome being C-terminally bound, and the other being either N- or C-terminally bound, the EEA1-mediated fusion of endosomes is biased towards the more mature, later populations.
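A minimal sketch of the per-pixel separation described above is given below, assuming each pixel's TCSPC decay histogram is modelled as a non-negative mixture of two fixed exponentials (τ1 = 1.006 ns and τ2 = 2.600 ns, the values fitted in the fixed-cell data) and that photons are then apportioned between 'NT' and 'CT' channels according to the fitted per-bin contributions. This reconstructs the idea only; it is not the acquisition software's implementation, and the helper name is hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

TAU_NT, TAU_CT = 1.006, 2.600  # ns; fixed from the fixed-cell FLIM fits

def unmix_pixel(decay, t):
    """decay: photon counts per TCSPC time bin; t: bin centres (ns).
    Returns photon counts assigned to the NT and CT components."""
    basis = np.column_stack([np.exp(-t / TAU_NT), np.exp(-t / TAU_CT)])
    amps, _ = nnls(basis, decay.astype(float))  # non-negative amplitudes
    contrib = basis * amps                      # per-bin model contributions
    weight_nt = contrib[:, 0] / np.maximum(contrib.sum(axis=1), 1e-12)
    n_nt = float((decay * weight_nt).sum())
    return n_nt, float(decay.sum()) - n_nt

# Applying unmix_pixel across the image yields the 'NT EEA1' and
# 'CT EEA1' channels used to follow N- vs C-terminal binding in time.
```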
Endosomal conversions are driven by phosphoinositide conversions by INPP4A.
To further characterise the maturation into EEA1 endosomes, with N-terminal EEA1 binding preceding C-terminal EEA1 binding in the context of phosphoinositides, we combined the FLIM-based investigation of EEA1 orientation with staining for PI(3)P in fixed cells. To label PI(3)P without inducing overexpression artefacts or steric hindrance, we utilised a purified recombinant GST-2xFYVE probe that could be detected using antibodies against GST, as described previously (42,43). We observed that C-terminally bound EEA1 endosomes have significantly higher PI(3)P labelling as compared to N-terminally bound EEA1 endosomes or the peripheral Rab5-positive, EEA1-negative endosomes (Fig. 4a, Supplementary Fig. 12). This is in agreement with previously published studies of EEA1 C-terminal coincidence detection between Rab5 and PI(3)P (38,44,45), and suggests that NT-EEA1 appearance may precede PI(3)P production on endosomes. The two distinct modes of EEA1 binding via the N- and C-termini, and the fraction of unaided conversions of APPL1 to EEA1 observed using LLSM, suggested that a phosphoinositide conversion of PI(3,4)P2 to PI(3)P must occur on the incoming nascent endosomes. This hypothesis is supported by the live-FLIM data, which showed that a corresponding fraction of endosomes displayed an N- to C-terminally bound EEA1 exchange, strongly suggesting that the source of PI(3)P must be within the same endosomes that have not collided with more mature endosomes. However, it was unclear whether this PI(3)P, produced during early endosomal maturation, was generated through dephosphorylation of PI(3,4)P2 or phosphorylation of PI. To distinguish these possibilities, we targeted INPP4A, a PI 4-phosphatase that dephosphorylates PI(3,4)P2 to PI(3)P, as well as VPS34, a class III PI 3-kinase that phosphorylates PI to generate PI(3)P and is another source of PI(3)P at the early endosomal level (46,47). To test whether PI(3)P generated via VPS34 contributes to APPL1 to EEA1 conversions, we used SAR405, a drug that specifically targets VPS34 (43,48). Quantifying and comparing the number of conversions versus untreated cells revealed that SAR405 treatment caused a 3-fold reduction in the number of detected conversions. In contrast, targeting INPP4A using siRNA caused a more severe ∼10-fold reduction in the number of detected conversions, suggesting that most conversions were driven by PI(3,4)P2 to PI(3)P conversion via INPP4A (Fig. 4b). Despite the distinct effects of INPP4A siRNA and SAR405 on the early endosomal maturation rate, these treatments led to a similar 50-60% reduction in Rab5-localised PI(3)P, highlighting that PI(3)P pools produced via the PI 3-kinase and PI 4-phosphatase routes play complementary roles in early endosomal biology (Supplementary Fig. 13). Consistent with these results, upon assaying for the binding of EEA1 using FLIM to distinguish between N- versus C-terminal binding, we found a clear reduction in the number of C-terminally bound EEA1-positive endosomes in SAR405-treated cells, but never a complete abolishment, suggesting that INPP4A-mediated phosphoinositide conversion acted as a source for a fraction of PI(3)P on these early endosomes (Supplementary Fig. 14). In addition to the impact on endosomal maturation, inhibition of PI(3)P production by VPS34 or INPP4A using SAR405 or siRNA led to a significant reduction in early endosomal fusion rates, highlighting the central role EEA1 dual-endosome binding plays in this process (Fig. 4b).
It is interesting to note that whilst the SAR405-treated cells displayed almost no fusion events, consistent with a drastic loss of PI(3)P and therefore impaired EEA1 C-terminal binding, INPP4A siRNA-treated cells retained ∼20% of their fusions, suggesting that a population of transiently N-terminally bound EEA1 vesicles was still able to fuse with more mature early endosomes containing VPS34-produced PI(3)P.
N-terminal binding of EEA1 is necessary for endosomal maturation.
To validate the consistent observation of N-terminal binding of EEA1 as a prior step to any maturation process, and to investigate the stringency of the requirement for N-terminal binding of EEA1 via Rab5 in maturation, we used an N-terminal mutant of EEA1 carrying F41A and I42A at the C2H2 Zn2+ site (EEA1 Nt-mut), which is impaired in Rab5 binding (49) (Fig. 4c). When expressed in wild-type RPE1 cells, conversions were unimpaired and endosomal fusions were only mildly affected. This suggested that the Rab5-binding mutant, EEA1 Nt-mut, did not display a strong dominant-negative phenotype and that the endogenous EEA1 could still function to drive endosomal maturation (Fig. 4d). This could be because the observed clustering buffers against dysfunctional mutant EEA1; in addition, as EEA1 is a homodimer, it may still have one active binding site. Therefore, we used a HeLa EEA1 knockout (KO) cell line and transiently expressed EEA1 Nt-mut. In contrast to wild-type EEA1, EEA1 Nt-mut exhibited no heterotypic interactions resulting in maturation over 20 minutes of imaging per cell. Furthermore, no EEA1 signals were observed on APPL1 endosomal trajectories, suggesting that the collision-triggered conversion mechanism was dysfunctional owing to impaired Rab5 binding at the instance of collision. It is also to be noted that the expression of EEA1 Nt-mut resulted in larger but fewer and less motile endosomes (Supplementary Movie 8).
If only the C-terminus of EEA1, via its Rab5 and FYVE binding, were involved in the phosphoinositide-governed conversion, we would expect to detect some number of APPL1 to EEA1 conversions. In our experiments, unaided conversions were also completely abrogated, indicating that, even in direct conversions, where collisions may not play a role, the N-terminal binding is a compulsory intermediate step.
The C-terminus of EEA1 harbours a FYVE domain and a Rab5 binding domain. Unfortunately, our attempts to investigate the role of PI(3)P binding in conversions using a construct with a mutation in the C-terminal PI(3)P binding pocket (R1375A) (45) proved unfruitful. We observed that the localisation of this mutant was largely cytosolic, with quick transient binding in some cases, as has been reported elsewhere (45). This prevented any direct measurement of the influence of FYVE domain-based PI(3)P binding on the entire process of conversion. However, it emphasises the role of PI(3)P binding by the FYVE domain, along with Rab5, in localising EEA1 robustly to the endosomes, in agreement with previously suggested models of dual interactions/coincidence detection by the EEA1 C-terminus (7,38,45).
A feed-forward endosomal conversion model. To summarise, collisions between endosomes form an important step in overall endosomal conversion rates. The live FLIM data suggest that N-terminally bound EEA1, via interaction with Rab5, is a step preceding the phosphoinositide-based binding of EEA1 via its C-terminal FYVE domain (Fig. 3). Expressing the N-terminal Rab5-binding mutant in HeLa EEA1 KO cells did not rescue any maturation events, suggesting that this is a necessary step (Fig. 4). Additionally, super-resolution imaging suggests a clustered distribution of EEA1, as well as counter-clustering of APPL1 and EEA1 (Fig. 1g,h and Supplementary Figs. 5 and 6). This suggests the presence of feedback in the reaction scheme that governs progressively preferential EEA1 binding over APPL1 binding.
To construct a plausible model that agrees with our experimental observations as well as the known protein-protein and protein-membrane interactions of the components involved, we designed a computational model that captures the complex interplay between the distinct phosphoinositide molecules, Rab5, APPL1, and EEA1, and the phosphoinositide conversion (Fig. 5). Importantly, we took into consideration the N-terminal domain of EEA1, which was observed to bind first in unaided conversions as well as in conversions aided by collisions. To simulate this system, we used a grid on the surface of a sphere with two layers of nodes, consisting of a layer of Rab5 and a phosphoinositide layer, which began as PI(3,4)P2 but could be converted to PI(3)P by INPP4A if unbound (16). Binding to these layers of nodes were the agents, each with a different attachment and detachment rate depending on the nodes present: APPL1 binding Rab5 and PI(3,4)P2; N-terminal EEA1 binding Rab5; and C-terminal EEA1 binding Rab5 and PI(3)P. The interaction map of agents and nodes is shown in Fig. 5a. Using this reaction scheme, we were able to simulate the reactions and tune the parameters to recapitulate the experimentally observed conversion dynamics, as well as formulate the effects of the 'trigger and convert' mechanism (Supplementary Movie 9).
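A heavily stripped-down version of this reaction scheme can be written as a stochastic lattice simulation in a few dozen lines, as sketched below. Every rate in it is a placeholder rather than a fitted value, the neighbour-dependent clustering feedback of the full model is reduced to a crude transient boost in EEA1 binding at the moment of a 'collision', and the spherical geometry is flattened to an unstructured set of sites; the sketch is meant only to convey the agent and node logic of Fig. 5a.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500                      # lattice sites on the endosome surface
lipid = np.zeros(N, int)     # 0 = PI(3,4)P2, 1 = PI(3)P
bound = np.zeros(N, int)     # 0 = empty, 1 = APPL1, 2 = NT-EEA1, 3 = CT-EEA1

# Placeholder per-step probabilities (illustrative, not fitted values).
P = {"appl1_on": 0.20, "appl1_off": 0.02, "nt_on": 0.01,
     "nt_off": 0.05, "ct_on": 0.20, "ct_off": 0.005, "inpp4a": 0.02}

def step(eea1_pool_boost=0.0):
    for i in rng.permutation(N):
        if bound[i] == 0:
            if lipid[i] == 0 and rng.random() < P["appl1_on"]:
                bound[i] = 1                 # APPL1 needs Rab5 + PI(3,4)P2
            elif rng.random() < P["nt_on"] + eea1_pool_boost:
                bound[i] = 2                 # NT-EEA1 needs only Rab5
            elif lipid[i] == 1 and rng.random() < P["ct_on"]:
                bound[i] = 3                 # CT-EEA1 needs Rab5 + PI(3)P
            elif lipid[i] == 0 and rng.random() < P["inpp4a"]:
                lipid[i] = 1                 # exposed lipid: INPP4A converts it
        elif rng.random() < P[{1: "appl1_off", 2: "nt_off", 3: "ct_off"}[bound[i]]]:
            bound[i] = 0

# A 'collision' around step 200 transiently floods the endosome with EEA1,
# sequestering Rab5, exposing PI(3,4)P2 and locking in CT-EEA1 binding.
for t in range(1000):
    step(eea1_pool_boost=0.3 if 200 <= t < 210 else 0.0)
print("fraction CT-EEA1:", (bound == 3).mean(), "fraction PI(3)P:", lipid.mean())
```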
Fig. 5b-e shows an example trajectory, beginning with a very early endosome that is APPL1-positive and bound to PI(3,4)P2 and Rab5 via its PH-BAR domain. Spontaneous binding of INPP4A to this endosome can result in conversion of PI(3,4)P2 to PI(3)P; however, with APPL1 occupying most PI(3,4)P2, most INPP4A remains unbound and, therefore, inactive on its substrate. APPL1 can be transiently displaced by N-terminal EEA1, which binds directly to Rab5. Due to the inclusion of a positive feedback switch to mimic the experimentally observed clustering, APPL1 endosomes are relatively stable (i). However, upon the introduction of a large pool of EEA1 as the result of a collision (ii), N-terminal EEA1 can sequester Rab5, thus destabilising the APPL1-Rab5 interactions and resulting in APPL1 desorption (iii). Consequently, INPP4A can now bind to its substrate PI(3,4)P2 and convert it to PI(3)P (iv). This leads to the binding of EEA1 through its C-terminal FYVE binding domain, as well as Rab5 binding (v, vi). In this scheme, the N-terminal binding of EEA1 acts as a trigger. Moreover, since the N-terminus of EEA1 has weak binding affinity for Rab5, we reasoned that the clustered organisation of EEA1 on endosomes, and the interaction of multiple N-terminal EEA1 molecules at the instance of collision, would result in overwhelming the APPL1-Rab5 on the incoming endosome. We simulated the net decrease in conversion time of a single endosome that underwent one collision (Fig. 5f), and the net decrease in conversion time of endosomes in a cell allowed to collide randomly at increasing collision frequencies (Fig. 5g).
These agent-based simulations showed that clustering has a two-pronged effect on accelerating conversion. If a Rab5 molecule originally surrounded by bound APPL1 is occupied by EEA1, it will become unavailable for binding to APPL1. This creates a 'hole' in the APPL1 layer, which decreases the binding affinity of APPL1 in the region surrounding the hole (as compared to a region filled by APPL1, since clustering increases the binding affinity of a species in accordance with the local density of that species). This in turn increases the chance that the hole will expand. On the other hand, clustering of EEA1 attracts more EEA1 to the vicinity of the 'hole'. These two factors speed up the local back-and-forth conversion between APPL1 and EEA1 clusters, which increases the windows of opportunity for INPP4A to convert PI(3,4)P2 to PI(3)P. A heterotypic fusion between endosomes with N-terminally bound EEA1 and C-terminally bound EEA1, as observed in the live FLIM experiments (Fig. 3a), represents only a state with a higher N- to C-EEA1 ratio, and the reaction scheme will proceed to convert the transiently increased PI(3,4)P2 to PI(3)P, subsequently replacing N-terminally bound EEA1 with C-terminally bound EEA1. Through our simulations, we were able to quantify the net decrease in conversion time due to clustering (Fig. 5f). Once a sufficient number of PI(3,4)P2 molecules have converted to PI(3)P, C-terminal attachments dominate, since they have a stronger binding affinity and they require both PI(3)P and Rab5 to bind, rendering Rab5 unavailable for N-terminal attachments.
Discussion
The endosomal system is highly dynamic, requiring successive biochemical maturations of key lipids and associated proteins to achieve correct targeting of internalised cargo. Whilst the order of appearance of key species has been diligently identified for early endosomes, how the timing of maturation is maintained for each generated vesicle had not been studied. In this work we describe a novel mechanism that ensures timely maturation of vesicles at a whole cell level. Specifically, we present a new trigger-and-convert model of APPL1 to EEA1 early endosomal maturation, as summarised in Fig. 6. In this model, nascent very early endosomes (VEEs), characterised by APPL1 bound to PI(3,4)P2 and Rab5 (13), undergo active transport along microtubules and collide stochastically with mature EEA1-positive early endosomes (EEs). This collision is a 'trigger' that primes the VEE for maturation. Our experimental observations are consistent with a model whereby a cluster of EEA1 is transferred onto the incident VEE following such a collision. Furthermore, this model is in accordance with the following molecular details of EEA1. First, C-terminally bound EEA1 has a rigid quaternary structure that ensures that the coiled-coil region extends into the cytoplasm, preventing the N-terminus from folding back and binding to Rab5 on the same endosome (26). This would result in the N-terminus of EEA1 being located 160-180 nm from the endosome surface (29), in agreement with the observations of two distinct EEA1 populations made in our FLIM experiments. Second, EEA1 possesses two distinct Rab5 binding sites: one corresponding to the C2H2 Zn2+ finger at the N-terminus, and the other overlapping with the PI(3)P-binding FYVE domain at the C-terminus. The C-terminal end also contains a calmodulin (CaM) binding motif. Of EEA1's two Rab5-binding domains, the N-terminus forms the stronger interaction in isolation (38,49); however, in the presence of PI(3)P, the FYVE domains at the C-terminus of EEA1 lead to a much stronger association with endosomal membranes by coincidence detection of Rab5 and PI(3)P (7,45). While the exact steps at the instant of collision fall beyond the scope of this manuscript, it is conceivable that a collision would result in the stronger N-terminus-Rab5 interaction overriding the C-terminus interactions. Furthermore, an unexplored but plausible mechanistic detail lies in the interactions of Ca2+/CaM with Rab5 and the C-terminus of EEA1, which antagonise PI(3)P binding and may operate to release C-terminal binding when the N-terminal interactions take place as a result of collision (50,51). Whether transient Ca2+ spikes operate to mediate the transfer of molecules remains an attractive detail to investigate. After collision, the sequestration of Rab5 via N-terminal EEA1 results in desorption of APPL1 clusters. The reduced APPL1 binding to Rab5 also exposes PI(3,4)P2 to dephosphorylation by 4-phosphatases, producing PI(3)P. The most likely candidate for this reaction is INPP4A, since it localises to Rab5-positive EEs (8,16). This availability of PI(3)P now enables EEA1 to bind via its C-terminal FYVE domains, thereby resulting in the irreversible maturation to an EEA1-positive EE. This mature endosome is in turn able to trigger more conversions of APPL1 VEEs following collisions, thus ensuring continual maturation of this dynamic population of vesicles.
Consistent with other studies describing specific domains on endosomes, we observed that both VEEs and EEs showed a counter-clustered APPL1 and EEA1 distribution. The hypothesis that clustering plays a key role in ensuring a more robust process was recapitulated through our simulations, which suggested it to be essential for the timely conversion of these vesicles. An attractive hypothesis is that phosphoinositide clustering underlies the observed protein distributions, as phosphoinositide clustering has been demonstrated in other vesicular and tubular membrane entities (35). Additionally, Rab5 has also been suggested to be clustered (34). A clustered distribution of EEA1 or its binding partner Rab5 on the incident endosome would ensure a higher probability of transfer of EEA1 molecules following a collision. Furthermore, this would produce large fluctuations of EEA1 intensity on a converting endosome, as observed in our imaging movies. Previous studies have shown that stochastic fluctuations have a significant effect on trafficking and maturation processes (18). The greater the stochasticity in a system, the more the system dynamics favour non-steady-state biochemical maturation over steady-state vesicular exchange in cellular transport pathways. Biochemical maturation is characterised by a first-passage-time event, in which the first instance of complete maturation of the compartment in question marks a point of no return. But the noise due to the inherent stochasticity in the system poses challenges to robust directional flow of material, which requires tight regulation of exchange processes between organelles. It was shown by Vagne and Sens that the presence of positive feedback in the maturation process can significantly suppress stochastic fluctuations, and that Golgi cisternae likely use homotypic fusions as the mechanism to overcome this challenge (18). In a similar vein, our proposed mechanisms of clustering, collision, and heterotypic fusion each provide positive feedback to the maturation process and are essential for the robust functioning of the exchange processes through noise suppression. The specific requirement for INPP4A, which converts PI(3,4)P2 to PI(3)P on the maturing endosome, ensures a definitive distinction between APPL1- and EEA1-positive endosomes. This is achieved by the depletion of PI(3,4)P2, which ensures that APPL1 cannot rebind following desorption, and thus that conversions are unidirectional. Therefore, even though VPS34-mediated conversion of PI to PI(3)P forms the major source of PI(3)P, we hypothesise that INPP4A plays a more significant role in the process of APPL1 to EEA1 maturation, by virtue of depleting PI(3,4)P2 and subsequently enriching PI(3)P even before the newly generated endosomes have fused with endosomes bearing VPS34-derived PI(3)P. Early endosomal maturation is intimately linked with early endosomal fusion and therefore with the flow of cargo through the endosomal system. Through the delineation of EEA1 endosomes into two distinct populations, namely N-terminally and C-terminally bound, we have shown that fusion of EEA1-bound vesicles is dependent on EEA1 cross-binding between the two vesicles, as has been evidenced previously (26,29). Furthermore, we observe that this occurs only between vesicles that both contain EEA1, and that at least one endosome must be positive for PI(3)P, to enable stable C-terminal binding.
EEA1 interacts with the SNARE proteins Syntaxin 6 and Syntaxin 13 via its C-terminus (52,53), which, following the entropic collapse of EEA1, may execute the membrane fusion. A relevant protein complex to this work is the mammalian class C core vacuole/endosome tethering (CORVET) system, which functions to mediate endosomal fusion independently of EEA1 (54). Surprisingly, overexpression of the N-terminal mutant of EEA1 also resulted in a similar phenotype of smaller, more fragmented APPL1 endosomes, with the exception that we found no APPL1-EEA1 double-positive endosomes. It is unclear at what stage CORVET operates, and dissection of this question is beyond the scope of this study. However, the strong phenotype observed for the N-terminal mutant of EEA1 reinforces the role of EEA1 in self-regulating APPL1 to EEA1 conversion. What is the physiological relevance of this mechanism? The trigger-and-convert approach provides emergent regulation of the timing of early endosome maturation, leading to a tightly controlled and more timely and consistent flux of maturation, able to overcome the intrinsic stochasticity of single-molecule protein-protein and protein-membrane interactions. This is critical to robust trafficking, as early endosomes act as stable sorting centres of endocytosed material, from which cargo is redirected towards the plasma membrane or sent to late endosomes and lysosomal degradation. As a result, robust maturation of cargo-bearing vesicles is a requirement of the intracellular transport system. Furthermore, it has become increasingly apparent that many diverse transmembrane receptors are able to signal from within endosomes (55)(56)(57)(58)(59)(60) and that signal attenuation may rely on trafficking to distinct intracellular destinations or organelles (61,62). This suggests that the trafficking and maturation rate of endosomes is intrinsically coupled to the downstream signal transduction of transmembrane receptors (61,63,64), further highlighting the importance of tightly regulated intracellular transport itineraries that include transport and maturation (11,65). An interesting corollary of this revised model of endosomal maturation is that we expect distinct very early endosomal populations to show different maturation times depending on their motility, which has consequences for rapidly versus slowly trafficked cargo as well as for statically anchored endosomes (11,66,67). Our work highlights the power of rapid volumetric imaging, coupled with an unbiased analysis pipeline and complemented by simulations, to capture and describe dynamical processes and thus unravel mechanisms in unperturbed systems. Importantly, this approach precludes the need for genetic and pharmacological alterations that lead to the establishment of a new steady state or phenotype, thereby potentially obscuring the very dynamics that are to be studied. Emergent phenomena are central to biological processes across scales, and there is increasing evidence for structure-function relationships that extend far beyond molecular scales to form larger-scale patterns in space and/or time. In the endosomal system, the biochemical process of conversion is underpinned by phosphoinositide chemistry at the individual endosome level; at a population level, however, it is governed by the physical process of stochastic collisions that forms an inherent part of the transport system of endosomes.
Importantly, this suggests that the robustness of the intracellular transport network may derive not solely from so-called 'master regulators' but from the complex dynamic interactions of individually noisy components, which create emergent reproducibility of large-scale processes.
Methods
Cell lines. RPE1 and HeLa EEA1 knockout (KO) cells were incubated at 37°C in 5% CO2 in high glucose Dulbecco's modified Eagle's medium (DMEM) (Life Technologies), supplemented with 10% foetal bovine serum (FBS) and 1% penicillin and streptomycin (Life Technologies). Cells were seeded at a density of 200,000 per well in a six-well plate containing 25 mm or 5 mm glass coverslips.
Live cell imaging. Cells were imaged using a lattice light-sheet microscope (3i, Denver, CO, USA). Excitation was achieved using 488-nm and 560-nm diode lasers (MPB Communications) at 1-5% AOTF transmittance through an excitation objective (Special Optics 28.6× 0.7 NA 3.74-mm immersion lens) and detected by a Nikon CFI Apo LWD 25× 1.1 NA water immersion lens with a 2.5× tube lens. Live cells were imaged in 8 mL of 37°C-heated DMEM, and images were acquired with 2× Hamamatsu Orca Flash 4.0 V2 sCMOS cameras.
siRNA INPP4A. RPE1 cells were transfected with APPL1-EGFP, EEA1-TagRFP, and either 10 nM INPP4A siRNA (AM16810, Thermo Fisher Scientific) or Silencer Negative Control siRNA (AM4611, Thermo Fisher Scientific) using Lipofectamine 3000. ∼24 h later the cells were imaged using epifluorescence microscopy (configuration as above). The cells were imaged sequentially with 100 ms exposure and at a rate of 3 s/frame for 20 min. The whole cell number of conversions within this window was reported for each condition.
Fluorescence lifetime imaging. RPE1 cells were transfected with either EGFP-EEA1 + mRFP-Rab5, EEA1 TagRFP-T + EGFP-Rab5, EEA1-NTmut TagRFP-T + EGFP-Rab5 or EEA1-CTmut TagRFP-T + EGFP-Rab5 and either fixed with 4% paraformaldehyde or imaged live. The cells were imaged using an SP8 Falcon (Leica Microsystems) with an 86× 1.2 NA objective. Fluorescence lifetime images were acquired upon sequential excitation at 488 nm and 560 nm using a tuneable pulsed white-light laser at 10% transmission, with emission collected at 500-550 nm and 580-630 nm, respectively, using two Leica HyD detectors. The EGFP lifetimes were fitted using two-component fitting with τ1 = 1.006 ns and τ2 = 2.600 ns. The fixed images were analysed with pixel-wise lifetime fitting, and the live movies were analysed by separating the images into the two contributing fluorescence lifetime channels.
Drug addition. Cells were incubated with 100 nM nocodazole in 8 mL DMEM for 5 min before and during imaging, as indicated. Cells were similarly treated with 100 nM phorbol 12-myristate 13-acetate (PMA) (P1585, Sigma-Aldrich) 5 min before and during imaging, as indicated. To selectively inhibit VPS34, cells were treated with 100 nM SAR405 (533063, Sigma-Aldrich) for 2 h prior to imaging and throughout the experiment.
PI(3)P staining.
To visualise PI(3)P localisation in relation to EEA1, immunofluorescence staining was performed as described previously (43). Briefly, RPE1 cells were transfected with EGFP-EEA1 and mRFP-Rab5 and fixed in 2% PFA. These cells were then permeabilised using 20 µM digitonin for 5 min and labelled with 8 µg/mL recombinant GST-2xFYVE (69), which was detected using a GST primary antibody (71-7500, Invitrogen) and a goat anti-rabbit AlexaFluor 647 secondary antibody (A-21245, Thermo Fisher Scientific). These cells were then imaged using an SP8 Falcon as above, with PI(3)P detected using 647 nm excitation and emission collected at 660-700 nm, using a Leica HyD detector.
Super resolution by radial fluctuations (SRRF). RPE1 cells transfected with APPL1-EGFP and EEA1 TagRFP-T were stimulated with 100 nM PMA as detailed above. The cells were then imaged using widefield fluorescence microscopy with a Nikon Ti-2E body, 100× 1.5 NA objective (Olympus) and Prime 95B camera (Photometrics). Images were captured in 100-frame bursts with 5 ms exposure for each channel sequentially, every 2 s, for ∼1 min imaging periods. The images were then processed using the SRRF plugin for Fiji (37,70).
Segmentation and tracking analysis. The datasets analysed consisted of six LLSM movies of untreated and two movies of nocodazole-treated RPE1 cells.
Images were first deskewed, then adaptive histogram equalisation and a median filter were applied prior to blob detection using the Laplacian of Gaussian operator. The expected range of object sizes was supplied as an independent parameter for each fluorescence channel, with other parameters tuned to return a preliminary set of over-detected blobs, defined by centres of mass and approximate radii. From these data, representative regions denoting endosomes and background, respectively, were chosen from each movie in an unsupervised manner (by choosing the brightest and dimmest blobs, respectively); these regions were then used as templates to calculate cross-correlations against each candidate endosome. The results of this operation define a set of features for each object, which were used as inputs to a k-means clustering algorithm to classify objects into endosomes versus background (Supplementary Fig. 2). A custom tracking routine built on trackpy (24) was then used to link objects into complete trajectories, independently for each channel. Trackpy is a package for tracking blob-like features in video images and analysing their trajectories, which consists of a Python implementation of the widely used Crocker-Grier algorithm (71) to link features (here, both localisation and intensity information) in time. Events of interest were then calculated by trajectory analysis, as follows. Correlated trajectories were classified as potential conversions (11,65), with stringent filters applied to exclude any events not clearly representative of APPL1 to EEA1 conversions (Supplementary Fig. 3a). To identify heterotypic collisions, local trajectories of neighbouring APPL1-EEA1 pairs were used to calculate the pairwise inter-endosome distance (the separation between the surfaces of nearby APPL1 and EEA1 endosomes along the line connecting their centres of mass). Local minima in the inter-endosome distance below a threshold value (within 200 nm, or roughly two pixels of overlap in the lateral dimension) were classified as collisions. These values were subsequently filtered to ensure that conversion-like events were excluded from the set of heterotypic collisions (Supplementary Fig. 3b). Events showing APPL1 to EEA1 conversions were classified as fusions or conversions, respectively, based on whether or not the particular EEA1 track existed prior to colocalisation with APPL1 (Supplementary Fig. 3c). Events were classified as collision-induced versus unaided based on whether the APPL1 endosome collided with any EEA1 endosome in the 30 s prior to the event (Supplementary Fig. 3d).
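A minimal sketch of this distance-based collision test is given below; the 200 nm surface-to-surface threshold and the local-minimum criterion follow the text, while the trajectory container (a dict of arrays) and the omission of the conversion-exclusion filter are simplifying assumptions.

```python
import numpy as np

CONTACT_NM = 200.0  # surface-to-surface threshold used for collisions

def surface_distance(traj_a, traj_b):
    """Centre-to-centre distance minus both radii, per co-existing frame.
    traj = {'t': (n,), 'xyz': (n, 3) in nm, 'r': (n,) in nm}."""
    t_common, ia, ib = np.intersect1d(traj_a["t"], traj_b["t"],
                                      return_indices=True)
    d = np.linalg.norm(traj_a["xyz"][ia] - traj_b["xyz"][ib], axis=1)
    return t_common, d - traj_a["r"][ia] - traj_b["r"][ib]

def detect_collisions(traj_a, traj_b):
    """Local minima of the separation that dip below the threshold."""
    t, sep = surface_distance(traj_a, traj_b)
    hits = []
    for k in range(1, len(sep) - 1):
        if sep[k] < CONTACT_NM and sep[k] <= sep[k - 1] and sep[k] <= sep[k + 1]:
            hits.append(t[k])
    return hits
```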
Photo-Activated Localisation Microscopy (PALM).
Dendra-2 EEA1 was generated by replacing TagRFP-T in TagRFP-T EEA1 (Addgene plasmid #42635) at cloning sites AgeI and XhoI. Cells transfected with Dendra-2 EEA1 were fixed using 0.2% glutaraldehyde and 4% PFA in cytoskeletal buffer (10 mM MES, 150 mM NaCl, 5 mM EDTA, 5 mM glucose and 5 mM MgCl2) for 15 min at room temperature. The cells were washed gently three times with PBS. PALM microscopy was carried out with a Nikon N-STORM microscope with a 100× oil immersion objective (1.49 NA) with a cylindrical lens for 3D localisation. A 488-nm laser beam was used for pre-converted Dendra-2 excitation, with 405 nm for photoconversion and a 561-nm beam for post-photoconverted Dendra-2. Localisations were exported to ViSP for visual examination and for generating depth colour-coded images (72).
Simulations. The endosome's surface was simulated as a bilayered Fibonacci sphere (a spherical grid in which neighbouring points are approximately equidistant). One layer consisted of Rab5 and the other of PI(3,4)P2 or PI(3)P. The agents (APPL1, INPP4A, and N- and C-terminally attached EEA1) were allowed to stochastically attach and detach according to the schematic shown in Fig. 5a. The attachment rates increased with the number of neighbouring agents of the same type (cluster attach), and detachment rates increased with the number of neighbouring agents of the same type that detached recently (cluster detach). In addition, INPP4A had a fixed probability of converting PI(3,4)P2 to PI(3)P.

[Figure legend, Fig. 4c,d: RPE1 wild-type cells and HeLa EEA1 knockout (KO) cell lines expressing wild-type EEA1 (blue) or an N-terminal mutant deficient in binding Rab5 (red) were imaged using LLSM; the total number of conversions and fusions was quantified, indicating that the initial N-terminal binding of EEA1 is essential for endosomal conversions. ns indicates a non-significant difference; * indicates p < 0.05; each mean was compared against the others using an ordinary one-way ANOVA. In HeLa EEA1 KO cells expressing the EEA1 N-terminal Rab5-binding mutant, no events were detected by the analysis workflow or by visual inspection. Single points indicate measured data; violin plots correspond to a normal distribution of all events; box plots correspond to the 25th to 75th percentile of events, with bars showing the total range and open squares the means.]

[Figure legend, Fig. 5 (conversion-time distributions): The grey curve shows the case of no collisions. Upon increasing the collision frequency (i.e., decreasing the average time interval between collisions), endosomes become more likely to encounter one or multiple collisions, which in turn leads to faster conversions; the weight of the conversion-time distribution therefore shifts towards shorter times (red). There are multiple modes in the conversion-time distribution corresponding to the number of collisions the endosome experienced before conversion: the leftmost mode at 20-50 s corresponds to two collisions, the middle mode at 60-100 s to a single collision, and the rightmost mode at 130-200 s to zero collisions.]

Fig. 6. Summary of the proposed EEA1 'trigger-and-convert' mechanism of maturation. Very early endosomes formed at the cell periphery (endosome 1) have PI(3,4)P2 (orange)-containing membranes and APPL1 (cyan) bound to Rab5 (grey). These vesicles collide with mature EEA1 vesicles (endosome 0), seeding N-terminally bound EEA1 and triggering the conversion process. This enables the production of PI(3)P (red) and the binding of C-terminal EEA1. These vesicles can in turn trigger conversions on nascent APPL1 vesicles (endosome 2) and participate in canonical endosomal tethering and fusion processes (bottom endosomes).
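For reference, the golden-angle construction of such a grid takes only a few lines; the sketch below is the standard recipe for a Fibonacci sphere and stands in for the grid described in the Simulations paragraph above, with the point count and radius chosen arbitrarily.

```python
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    """n approximately equidistant points on a sphere (golden-angle spiral)."""
    k = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))   # ~2.39996 rad
    z = 1.0 - 2.0 * (k + 0.5) / n                 # uniform spacing in z
    r_xy = np.sqrt(1.0 - z * z)                   # radius of each latitude ring
    theta = golden_angle * k
    pts = np.column_stack([r_xy * np.cos(theta), r_xy * np.sin(theta), z])
    return radius * pts

# Each point would carry a Rab5 node and a lipid node in the bilayered grid.
grid = fibonacci_sphere(500)
```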
VISUAL REPRESENTATIONS OF THE TROJAN WAR IN ATTIC CLAY VASES
Pottery is a primary source of evidence throughout the history of ancient Greece. Pervasive and almost indestructible, its generally predictable development means that it provides a basis to which other arts can be related.
Of all ancient Greek vases, those originating in Attica receive significant attention for several reasons. First, they outnumber the other Greek vases so far unearthed. Second, their painting techniques, especially the black figure1 and red figure2 techniques used in Attica, show some sophistication in the art and style which created them and thus reach the climax in the development of Greek vase painting. Moreover, the decorations on these vases contain imagery that represents diverse themes, ranging from scenes from the ancient Greek myths to those from daily life. The representations of Greek myth in both black and red figure vases range from mythological episodes referring to the Olympian deities to episodes associated with the epic cycles.
The quantity of academic research on this domain is notable and varies from general studies on Greek vases to more specialized studies on Athenian vases. J. D. Beazley's pioneering contribution to the study of ancient Greek vase paintings, through a series of articles and books on Attic vase painting, painters and their techniques from 1910 until his death in 1970, in both English and German, is invaluable. Athenian Black Figure Vases (1974, corrected version in 1991), Athenian Red Figure Vases: The Archaic Period (1975), Athenian Red Figure Vases: The Classical Period (1989) and The History of Greek Vases: Potters, Painters and Pictures (2001) of J. Boardman are also of commendable service to students of Greek vase painting. Apart from these, the detailed studies of T. H. Carpenter, Art and Myth in Ancient Greece (1991), and of H. A. Shapiro, Myth into Art: Poet and Painter in Classical Greece (1994), and the articles of A. M. Snodgrass, 'Poet and Painter in Eighth-Century Greece' (1979), and S. Lowenstam, 'The Uses of Vase-depictions in Homeric Studies' (1992) and 'Talking Vases: The Relationship between the Homeric Poems and Archaic Representations of Epic Myth' (1997), specifically discuss the connection between Greek myth and art.
The present study observes how certain scenes of the Trojan War illustrated in Attic black and red figure vases deviate from the Homeric representations in the Iliad, in order to re-examine possible causes for such deviations with a view to understanding the problems one may encounter when using visual representations as validations of poetic representations. Our attention here is limited to a sample of black and red figure vase paintings through which the deviations from the Homeric Iliad can be distinctly illustrated to facilitate the discussion.
It is generally known from the very outset of the study of Greek art3 that the ways of illustrating a story by an artist show basic differences from those of a poet. Moreover, the artist's repertoire was constantly changing, partly in response to external incentives and partly in reflection of the internal dynamics of an art form.4 It was the practice of naming figures and scenes from as early as 650-630 BC5 which bestowed an identity on a particular scene, such as warriors fighting over a fallen comrade, a warrior carrying a dead comrade over his shoulder6, or a warrior arming for battle, which otherwise would have remained generic.7 It is such developments that enabled us to use visual depictions on Greek vases as confirmations of Greek literary and archaeological information.
As for visual demonstrations of the Homeric Iliad, Athenian black figure artists were comparatively less attracted to depicting scenes from it. When they did, they mainly concentrated on the latter part of the poem, i.e. after Achilles' return to the battlefield, emphasizing Achilles' wrath towards Hector for killing Patroklos. Thus 'Patroklos' funeral' and 'Achilles dragging dead Hector behind his chariot' are popular representations, besides 'the ransom of Hector'. Some of the earlier depictions of Achilles refer to his arming, either at home when setting out to the battle at Troy, or at Troy (for the second time) when the first armour was lost with Patroklos.8 Yet these themes from the Iliad are better represented by red figure artists, who also showed themes such as 'The Greek mission to Achilles', 'The capture of Dolon', 'The death of Sarpedon', 'Duels' and 'Warriors departing for battle'.9 In general, there is apparently an early interest in scenes involving Hector's death. The new stimulus to depict themes from the Iliad later in the sixth century BC may be due to Hipparchos' promotion of Homeric recitals at the Great Panathenaia.10

3 In the eighteenth century.
4 Another example would be the 'Pantie' amphora (540-530 BC), on which two heralds walk with three other men. This scene may remind us of the embassy to Achilles mentioned in the Iliad. But the reverse of the same vase, which shows three heralds and two legates, indicates that the paintings do not correspond to specific scenes in myth. See Lowenstam 1992, note 6.
8 After the red figure technique was introduced, black figure artists depicted as many scenes as they …
As Shapiro11 points out, the manner and the strategy chosen by an artist to illustrate an episode, whether through a monoscenic12, synoptic13, cyclic14 or continuous15 depiction of it, differ very much from those of a poet, for a variety of reasons. These may range from the literacy of the artist, his social exposure and personal stimuli to the demands of the consumer. Besides these general variations between visual and poetic representations, there are also other reasons that could be adduced to explain the deviation of vase paintings from the Homeric versions. One such reason is that variant versions of the same myth or legend were known to the artist besides the Homeric poems. For instance, there could be poems highlighting different aspects or elements of a myth used by Homer. Thus, Homer was not the only source of influence for vase painters; dramatic representations (especially tragedy) and folktales had a similar impact.16 Cook has argued that folktales and stories told to painters may have been the primary source of epic subjects on vases before 530 BC.17 Moreover, Snodgrass has questioned artists' dependence on poetic sources, even though he believed that the early influence of the Homeric poems also had a considerable impact on them.18
Furthermore, it is possible that a painter may not have known or may have forgotten the Homeric or traditional story. Further reasons as to why the painted scenes and their parallel Homeric versions do not correlate could be summarized as artistic license (i.e. reasons that induce an artist to have his paintings deliberately deviate from an established poem), which could be further classified as follows. First, the very difference of their respective media acts as a kind of regulator in representing what is comfortably articulated in words. One example of this would be the multiple scenes on Achilles' shield as noted in the Iliad (18.478-608).
iconography by the artists also made their works unique.20 Finally, when painters added labels to a generic scene in order to bestow on the picture an individualized identity, details appropriate to the generic scene may clash with the new context.21
Having thus mentioned the reasons that may cause differences between the artistic and literary representations of Homeric accounts, we may next take individual vase paintings that show a variation from the Homeric version of which we are aware. In this study the photographs of vases follow the order of scenes in the Iliad.
• The Iliad begins with the dispute between Agamemnon and Achilles over Briseis, the concubine of Achilles. According to the Homeric version, though Agamemnon threatens to take her from Achilles, he does not actually lead her out of Achilles' lodging in person (Iliad 1.318-326). References to Agamemnon's seizure of Briseis are rare in ancient art. An Attic calyx-krater, datable to 490 BC and attributed to the Eucharides Painter, depicts the embassy to Achilles with a deviation from the Homeric version, as the artist replaces Ajax with Diomedes. According to the Homeric version, Ajax, Odysseus and Phoenix visited Achilles with Agamemnon's proposal to regain his help in fighting the Trojans. Diomedes, for his part, was one of the younger and most enthusiastic of the Achaean heroes, who showed steady loyalty to Agamemnon and the Greek mission, stirring the Greeks with his cheerful determination to fight to the end when Agamemnon, in despair, was proposing to abandon the siege and return home.
Homer also records Diomedes' consistent loyalty to Agamemnon and to the Greek cause. Though one may argue that this difference is due to the painter's ignorance of the Homeric version, Eucharides seems to be cautious with his replacement, since he retains the other two crucial figures: Odysseus (most renowned of the Achaeans for his cleverness and persuasive speech) and Phoenix (who is like a second father to Achilles). Though true to the Greek cause, Ajax, for his lack of guile, may not transmit the true sense of Agamemnon's offer. Thus, by bringing Diomedes into the scene, the painter may have used his artistic inventiveness to bring out the sincerity of Agamemnon's offer. Nonetheless, the true Homeric account is shown on one side of an Attic red-figure cup datable to 480 B.C.: it shows heralds leading Briseis away from Achilles, who sits mourning in his tent. A contemporary skyphos also depicts the mission to Achilles (Iliad 9.182ff) sent by Agamemnon in full correspondence with the Homeric account. Here, a heavily draped Achilles sits on a stool while Ajax, Odysseus and Phoenix stand on either side of him. Yet the image on the other side of this very same skyphos does not correlate with the Homeric narration (fig. 2, below).
Here, Agamemnon (the figure is clearly labelled thus) himself fetches Briseis, with Diomedes and the herald Talthybios as his companions. The herald is indicated by the kerykeion, and all figures are named; only the presence of the herald tallies with the Homeric version. Could it be that Makron was using an alternate version of the myth? Yet a close observation of elements in the Iliad itself may have guided Makron to depict his scene with Agamemnon. As just mentioned, although Agamemnon does not fetch Briseis himself in the Iliad, his initial threat to Achilles states that he himself would take her away (Iliad 1.137-139, 184-185). Furthermore, when Briseis was taken away, Achilles' complaint to his mother, Thetis, was that Agamemnon snatched his prize from him (Iliad 1.356). Similarly, when Nestor later advises Agamemnon to make amends, Agamemnon refers to the action as his own. G.S. Kirk, taking such evidence in the Iliad into consideration, argues that Agamemnon's threat may have created a great impact on the minds of some characters (The Iliad: A Commentary, vol. I, Cambridge, 1985, p. 72), and Makron was simply projecting such an impact in the vase painting. Based on this same evidence, Teffeteller ('AUTOS APOURAS Iliad 1.356', Classical Quarterly 40 (1990), pp. 17-19) also argues that involving Agamemnon in the act is merely arbitrary and refers to the injury it causes Achilles. Lowenstam further points out that one possible cause of the deviation in Makron's work, and in that of any other painter who depicts a man escorting a woman, is that they were misled by the ambiguity of the phrase αὐτὸς ἀπούρας [lit. 'himself having taken (her) away'] (Iliad 1.356; cf. Lowenstam, 1997, p. 43). But referring to the scene on the other side of the same vase by this painter, which is true to the Homeric version (i.e. the embassy to Achilles with Phoenix, Ajax and Odysseus), Lowenstam correctly concludes that Makron was showing an alternate version known to him in which Agamemnon fetches the girl away. Lowenstam further suggests that Homer was probably aware of both versions and that his poetic mastery enabled him to fuse both in one poem (ibid., 1997, p. 44: knowledge of the version in which Agamemnon snatched the girl is limited to verbal threats and communications, but in describing the actual action Homer omits Agamemnon, instilling more prestige and honour in his character).

Though Lowenstam does not discuss the inclusion of Diomedes in the former scene, one could suggest that Diomedes, the Greek warrior ever loyal to Agamemnon, may have appeared in an alternate version, if not in a previous illustration, used by the painter, though Homer excludes him from the Briseis episode (note, as discussed above, that a decade earlier the Eucharides Painter had already replaced Ajax with Diomedes; see also Shapiro, 1994, p. 16). Accordingly, Makron shows not only his scholarship but also his artistic excellence through this demonstration. However, it is difficult to recognize deviations from the Homeric version as examples of pure artistic license, as it is hard to discover whether a particular artist was inspired by a scene from a lost oral, visual or literary source.

A scene that is considerably different from the Homeric version is Euphronios' depiction of the removal of the dead body of the Lycian Sarpedon (a son of Zeus and one of the great defenders of Troy) from the battlefield. It is Apollo whom Zeus chooses to watch over the rescuing of Sarpedon's body, because he is well disposed to the Trojan side and also because he was the healing god of the Greeks. Apollo was to entrust the task to the twin brothers Hypnos (Sleep) and Thanatos (Death) (Iliad 16.667-683). Euphronios depicted this account twice, and each scene differs from the other. The earlier depiction, dated to c. 520 B.C., is modest in scale and without the divine aura felt in Homer, and Hypnos and Thanatos seem to struggle under the weight of the body beside a figure labelled Acamas (who plays no part in the Homeric version), who leads the company. The artist chooses Acamas here to symbolize that the body would be transported to distant Lycia, because the typical characteristic of Acamas is his interest in distant places.

Fig. 3. Attic red-figure cup signed by Euphronios, c. 520 B.C.

Noteworthy are the changes that accompany the second illustration of the same scene by Euphronios, a few years later, on a calyx-krater datable to 515-510 B.C. The moment depicted is just before the body is lifted by Hypnos and Thanatos. Yet the spirit is much closer to the Homeric model, as the divine twins seem much more relaxed. Presenting the duo with splendid wings, an element Homer never mentions, is apparently a logical inference of the artist to manage the transportation of Sarpedon's body to distant Lycia. The one who watches over the task here is Hermes, not Apollo, and his inclusion once again seems logical due to his double role as the messenger of Zeus and as the conductor of the souls of the dead (psychopompos). Thus his presence may indicate that the operation was carried out under the command of Zeus. The omission of Apollo may have permitted showing the uncleaned and unclothed body of Sarpedon, as these tasks were entrusted to Apollo in the Homeric account. Finally, by framing the scene with Leodamas and Hippolytos, two figures not mentioned in the Homeric version in this regard, the painter may simply have intended to indicate the battlefield from which the body was removed, because Leodamas and Hippolytos were Trojan warriors killed in battle before Sarpedon. Thus, in both illustrations, we observe that by introducing figures new to the corresponding scene Euphronios does not show ignorance or confusion. Instead, it demonstrates that he was not a mere illustrator of poetic narrations but possessed the skill to fuse his learning with his artistic originality to produce a masterpiece.
As noted above, the early black figure painters focused on the later part of the Iliad. A few Attic black-figure vase painters of the sixth century preferred to depict Achilles dragging, or preparing to drag, the body of Hektor (interestingly, this theme did not capture the attention of the red-figure vase painters; see Shapiro, 1994, pp. 27-31 and Lowenstam, 1992, pp. 177-178). An Attic black-figure hydria of the Leagros Group, dated to c. 520 BC, is notable in this regard, as the artist skillfully managed to condense, in his crowded yet clearly articulated work of art, many moments that spread across several books of the Iliad. Achilles is about to leap into his chariot, in which the charioteer already stands and to which the body of Hektor has been attached. As he looks back he confronts Priam and Hekuba, who stand beneath a Doric entablature watching this gruesome spectacle. Although Hektor's parents witness the mutilation of their son's body (Iliad 22.396-415), it is the artist who brings them into such terrifying closeness to the act. As the horses of Achilles disappear from the scene in the top right-hand corner, the soul of Patroklos flies away from his omphalos-shaped tomb, indicating that the body is dragged around the tomb of Patroklos (Iliad 24.14-17). The woman in the centre gesturing to stop could be identified as Iris, who in the Homeric version was sent by Zeus to Thetis to ask her to persuade Achilles to stop the gruesome act. Zeus then sends Iris again, to Priam, to encourage him to ransom the body of Hektor from Achilles (Iliad 24.104-187). The painter's aim here could be to combine both episodes in one scene by employing Iris to convey both messages, convening all the characters necessary to understand the episode. Thus the gesture of Iris signals to Achilles that he must end the dragging, while signaling to Priam that he must visit Achilles. The painter has compressed both time and space to tell his story through his artistic ingenuity.

A significant scene in the Iliad, yet rarely depicted by the vase painters of the first half of the sixth century, was the chariot race at the funeral games held for Patroklos (Iliad 23.261-270). It appears on two black-figure vases, decorated by Sophilos and Kleitias, which deserve attention in this study as they are far from the Homeric version. On a fragment of a dinos by Sophilos, dated to 580-570 BC, tiny men constituting the audience are seated while huge chariot-horses race towards them. Achilles' name appears, though he is missing, as the one presiding over the event, and an unusual inscription (Patroklos atla: 'Games in honour of Patroklos') identifies the scene as the funeral games of Patroklos. The preserved fragment contains the horses of the winning chariot and part of the winner's name, ending in '-os', which does not correspond to the name of the winner of this event in the Iliad, i.e. Diomedes, showing that Sophilos was not following the Homeric version we know, which perhaps was not yet established by the first quarter of the sixth century B.C. The painting of Kleitias is the best preserved and is on the neck of the Attic black-figure volute-krater known as the François vase. In this, chariots race, having passed the turning post, toward Achilles, who stands in front of a bronze tripod which, along with a dinos and another tripod in the background, was probably meant as a prize. As in the Homeric account, five charioteers compete at the event, but only Diomedes appears in correspondence with the Homeric narration (Iliad 23.352-361, 448-460, 506-513). Moreover, though Diomedes is the winner in Homer, he is put in third place by Kleitias, who shows Odysseus, not even mentioned in connection with the chariot race in the Iliad, as the winner.
Perhaps Odysseus' fame as a talented athlete persuaded Kleitias to make him the winner. These differences between the poetic and artistic versions could be due to the painter's ignorance of the Homeric account, his faulty memory of the Homeric version, his decision to base the drawing on one or a few independent traditions known to him, or his original creativity. The Homeric Iliad ends with the ransoming of Hektor's body by Priam. One out of several versions of this scene was chosen by some artists who decorated both black- and red-figure vases from 570-480 B.C. The vase paintings that depict the ransom of Hektor presented in this study provide fine examples of artistic license. In Homer, only Idaios attends Priam (Iliad 24.464-471), the treasure is left outside Achilles' tent in the wagon (Iliad 24.572-581), and Achilles has finished his meal and taken all precautions not to expose the body of Hektor to Priam (Iliad 24.584ff).
In the Attic black-figure hydria presented below, Priam is accompanied by Hermes and a servant bearing the ransom for Achilles, while the body of Hektor lies beneath Achilles' dining table. The woman to the right might be the one ordered to wash the body of Hektor. In the Attic red-figure cup from Vulci, Achilles, reclining on a couch beneath which is the body of Hektor, holds a drinking cup and looks at a woman, presumably Briseis, who places a wreath on his head. Priam approaches with a servant carrying a hydria and three phialai, and, having led Priam to Achilles' tent, Hermes departs. Also notable is the food (bread and meat) on the table beside Achilles.
[...] probably symbolizes his temper, authority and power. The interior of the cup also shows a private conversation between Achilles and Priam. Here, just as in the parallel illustrations above, the painter departs from the position of a mere illustrator of Homeric accounts. Instead, he uses his imagination to create a synopsis of the entire episode of the ransoming of Hektor's body by merging more than one sequential scene into a single frame.
Some identical features can be observed in all the vase paintings that depict the ransom of Hektor: Priam is accompanied by a group of men and women bearing rich treasures, while Achilles is shown feasting, reclined on a couch beneath which the body of Hektor lies, easily visible to Priam, who approaches from the left as a suppliant. The resulting picture is a busy conveyance of ransom and the shocking spectacle of Priam appealing to the killer, who feasts with the corpse of his prey within reach, to return the corpse of his son. Sometimes Priam is shown accompanied by Hermes, suggesting that it is by divine will that he achieves his purpose. Without the presence of the body of Hektor and the treasure, the scene might not be recognizable as the 'ransom of Hektor' unless it is labelled in the background of the vase. Thus this very artistic license has also enabled the artist to produce a comprehensive scene from the Iliad, even though such details make these scenes differ from the Homeric depiction.
In conclusion, what is apparent is that while some Greek vase painters illustrated scenes in correspondence with the Homeric version of the Iliad, others were influenced by multifarious factors, such as the availability of different versions of the myths, dramatic representations, and visual representations of scenes from the Iliad by their predecessors. Apart from this, the artists probably attempted to overcome the drawbacks of the art form, such as limited space and the difficulties posed by the medium, by using their imagination, inventive skills and the effective use of iconographic representations, which in turn led the artistic representation of a scene to differ from the parallel poetic representation. When this occurs, it is hard to rely on artistic representations on vases as validations of poetic representations, or vice versa. Such deviations, however, can be a valuable source of information for historians attempting to filter evidence about the level of learning of the artists and their contemporary society, about the artistic demands and interests of the time, and also about the independence of the artists to develop their skill and trade.
Notes

In the black-figure technique, decorations were painted in black and the background of the vase was left in the brownish-red colour of the clay; in the red-figure technique, decorations were left in the brownish-red colour of the clay and the background of the vase was painted in black.

J. Boardman's Athenian Black Figure Vases (1974; revised version in 1991), Athenian Red Figure Vases: The Archaic Period (1975), Athenian Red Figure Vases: The Classical Period (1989) and The History of Greek Vases: Potters, Painters and Pictures (2001) are also of commendable service to students of Greek vase painting. Apart from these, there are the detailed studies of T.H. Carpenter, Art and Myth in Ancient Greece (1991), and H.A. Shapiro, Myth into Art: Poet and Painter in Classical Greece (1994), and the articles of A.M. Snodgrass, 'Poet and Painter in Eighth-Century Greece' (1979), and S. Lowenstam, 'The Uses of Vase-depictions in Homeric Studies' (1992) and 'Talking Vases: The Relationship between the Homeric Poems...' (1997).

Side A of the cup: Embassy to Achilles; seated Odysseus talks to a sulking and heavily draped Achilles, with Phoenix and Diomedes framing the scene (Shapiro, 1994, fig. 9). Side B of this vase shows Hypnos and Thanatos carrying the body of Sarpedon; the picture is very fragmentary. The mission to Achilles is also shown on a stamnos by the Triptolemos Painter (see Boardman, 1975, fig. 304.1). Besides these, several other artists have decorated their vases with this theme.
Fig. 4. Attic red-figure calyx-krater signed by Euphronios (painter) and Euxitheos (potter) (Carpenter, 1991, fig. 310; Shapiro, 1994, fig. 13).

One of the ancient associations of the home was the dog, as seen in Exekias' famous depiction of the departure of the Dioscuri.
Fig. 1. The Eucharides Painter's calyx-krater. Side A: Embassy to Achilles.
Epidemiology of Cervical Cancer in the Caribbean
Cervical cancer (CvC) is considered a preventable disease; however, in the Caribbean it is still the fourth most common cause of cancer death in women. Efforts to overcome obstacles to the treatment and control of this preventable disease are being made by several countries within the Caribbean. However, no health issue can be readily managed without first understanding the dynamics relating to the severity of its impact on the target population, its clinical pathology, and the availability of treatment and/or preventative measures to control or halt its progression. To assess the status of CvC in the Caribbean, a review of the literature was conducted using PubMed. The Caribbean was defined in the review as comprising nations and islands whose coastlines are touched by the Caribbean Sea. This led to an assessment of the available literature on CvC for 33 Caribbean territories. The review showed a lack of published information on CvC and highlights the need for greater research. It also serves as a template for subsequent investigations.
Introduction And Background
In 2020, CvC was listed as the fourth most common malignancy in women globally, as well as being widespread and fatal [1,2]. The relationship between CvC and the human papilloma virus (HPV) is widely recognized as playing an important role in its prevalence and incidence. Treatment, prevention (e.g., vaccination) and management methods have improved through the years, and it is hoped that this ranking will decrease [3]. However, data for many Caribbean territories are lacking on incidence, mortality, patient demographics, clinicopathology, behavioral risks, genetics, and control and treatment [4].
CvC is considered preventable, and as such, early diagnosis is considered the mainstay of management [5]. This has led to greater focus being applied to the identification of abnormal cervical cells (abnormal Pap smear results) during screening. Globally, in the early 2000s, the incidence of CvC decreased noticeably as improved methods of screening were introduced and implemented. The dissemination of information regarding these methods and their perceived benefits became a priority, and the relevance of cervical cytology as an important tool of preventative care became better known [3]. To achieve this, at the GLOBOCAN Conference of the World Health Organization in 2018, health experts initiated talks to align HPV testing as a co-screening method alongside the cytological Papanicolaou smear procedure, in response to the growing mortality rates due to CvC. This resulted in a global call for the 'elimination' of CvC [2], which was subsequently reaffirmed at the GLOBOCAN 2020 conference. However, it was observed that, among lower-income countries, there was a disparity in the availability and accessibility of screening and prevention programs [4]. This disproportion existed between the provision of information and the actual utilization of screening methods at varying socioeconomic levels [4]. Additionally, there were noticeable hindrances brought on by cultural attitudes regarding the screening procedure itself, such as the lack of spousal support and anxiety about participation in the screening procedure [5].
To this end, a combination of primary and secondary prevention was considered most effective. This involved informed consent for the administration of the prophylactic HPV vaccine, and the introduction of screening using HPV assays and Pap smears as part of a woman's routine check-up, respectively [2]. The data showed that at least 84% of deaths due to CvC occurred in lower-income nations; as such, greater efficiency in treatment and prevention is needed in these regions by providing equal access to uniform methods of vaccination and screening [2,6].
There are three main HPV vaccines commercially available targeting HPV genotypes 16 and 18 [7-9], which have been identified as the main etiology of CvC. These are the bivalent (genotypes 16 and 18 only), quadrivalent (genotypes 6, 11, 16 and 18), and nonavalent (genotypes 6, 11, 16, 18, 31, 33, 45, 52 and 58) vaccines [8,10]. However, there are over 120 identified HPV types, 14 of which are considered high risk [9]. Therefore, further investigations into these HPV types may reveal causes directly or indirectly rooted in other forms of malignancy within the human reproductive system. By achieving a better understanding of these agents, a more direct investment can be made by targeting the root of the actual problem. This suggests that an overall epidemiologic study of CvC and its main contributor (HPV) is required to build a comprehensive model for its management. Understanding disease burden metrics, such as the incidence and prevalence of CvC in the Caribbean, survival and mortality rates, the years of life lost, and the disability-adjusted life years of those affected, served as the foundation for this. This literature review's objective was to gain a clear understanding of the presence and absence of these factors, while taking into account potential consequences such as the cost of fertility preservation and the affordability of treatment and vaccinations, in addition to the level of adherence in the Caribbean. Data were drawn from peer-reviewed publications on the topic of CvC, its contributing factors, development, progression, and prevention. The literature search covered the last 63 years (1958 to October 2022), using the key search words "cervical cancer in/and the Caribbean" on the PubMed database, which returned 379 records. Abstracts and full texts were reviewed against two eligibility criteria: (1) the publication must discuss the Caribbean in general, or a specific Caribbean territory (composed of all territories listed in Table 1 below), and (2) the publication must pertain to an aspect of CvC, as indicated above. All studies focused on CvC within the defined Caribbean population were included in Table 1, indicating the section of information covered. In addition, searches of each Caribbean territory by name with 'cervical cancer' as the search subject yielded 15 additional publications that fit the inclusion criteria. Finally, a total of 158 articles meeting the inclusion criteria were included in this review (Figure 1).
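The search flow described above can be sketched programmatically. The block below is a minimal, hypothetical reproduction using Biopython's Entrez module; the email address, retmax cap and exact query string are illustrative assumptions, not the review's actual script.

```python
# Minimal sketch of the PubMed query behind this review (assumed, not the
# authors' script). Requires Biopython; NCBI asks for a contact email.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address

handle = Entrez.esearch(
    db="pubmed",
    term="cervical cancer AND Caribbean",  # key words used in the review
    datetype="pdat",
    mindate="1958",
    maxdate="2022/10",
    retmax=500,
)
record = Entrez.read(handle)
handle.close()

print(f"Records returned: {record['Count']}")  # the review reports 379
pmids = record["IdList"]  # IDs to screen against the two inclusion criteria
```

The returned PMIDs would then be screened manually against the two eligibility criteria, as in Figure 1.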
Results
As shown in Table 1, eight measures of disease burden specific to CvC were identified [10], including incidence, mortality, patient demographics, clinicopathology, behavioral risks, genetics, and control and treatment, and the presence of each measure type was indicated. References for relevant findings and observations noted throughout the review are attached under the corresponding table. From Table 1, Puerto Rico and the Netherlands had the highest numbers of accessible publications, with 29 and 30 articles, respectively. Jamaica, Trinidad and Tobago, and Haiti were at less than half that level, with 13, 11 and 10 publications, respectively. The Bahamas, Suriname, Martinique, Barbados, and Cuba followed with just over five, with the other regions trailing behind with four or fewer relevant documents. Six disease burden measures were covered by Barbados, the Dominican Republic, the French West Indies, Jamaica, and Puerto Rico, while five disease burden measures were covered by Belize and Cuba (territory-specific sources: Bahamas [11-13], Barbados [13,14], Belize [2], Cuba [2,5,13,15], Dominican Republic [2,5,13], French Guiana [2,5,13], Grenada [16], Guadeloupe [2,5,13], Guyana [2,5,13], Haiti [17], Jamaica [2,5,13,18], Martinique [19,20], Netherlands Antilles [21,22], Puerto Rico [2,5,13], Suriname [23,24], Trinidad and Tobago [25,26]).

Age-standardized incidence rates (ASIR) were reported in 18 publications for 16 Caribbean territories (Table 3). The ASIR observed in the Netherlands showed a significant reduction, from approximately 15 per 100,000 women in 1989 to 13.6 per 100,000 women in 1998. At 148 per 100,000 women, Cuba had the highest incidence rate in 1990. The next year, it rose dramatically to 183.6 per 100,000 women, and it continued to grow until 2000, when it reached 225.9 per 100,000 women. From there, it showed a slight decline to 224.2 per 100,000 women in 2006. Trinidad and Tobago was ranked 18th in the region by Andall-Brereton for an ASIR of 27.1 per 100,000 women in 2002. No further information was provided until 2011, when this rate was indicated to be unchanged. Then, in 2018, the ASIR was determined to be 15.2 per 100,000 women, a considerable decrease. Table 2 shows an average of 119 cases annually for the years 1995 to 2009, which corresponds to the rate of 27.1 cases per 100,000 women previously mentioned. It can therefore be concluded that the population size had significantly increased by 2018, such that a larger number of newly diagnosed cases (140) corresponds to the much lower rate of 15.2 cases per 100,000 women. Comparatively, it is concerning that the ASIR for Suriname increased by 4.4 per 100,000 women over the subsequent eight years, to 26.8 per 100,000 women in 2018, following a reasonably stable period from 1981 to 2010 when its ASIR was 22.4 per 100,000 women. In a similar vein, Guadeloupe's ASIR was found to have grown significantly, from 3.3 per 100,000 women in 2018 to 7.9 per 100,000 women in 2020 [11,27]. Nevertheless, compared to the other nations, Suriname and Guadeloupe both exhibited comparatively low ASIR levels. On the other hand, Jamaica's rate ranged over the past 30 years between 0.4 and 1.0, with the lower rates of 0.6 being more recent. This suggests that, even though Jamaica's incidence rate is low, CvC prevention and control efforts have not been very successful.
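To make the Trinidad and Tobago back-calculation explicit, the snippet below inverts the rate formula. Note that an ASIR is age-standardized, so treating it as a crude rate, as done here, only approximates the underlying female population.

```python
# Rate arithmetic behind the Trinidad and Tobago example above. Treating the
# ASIR as a crude rate is a simplification made for illustration only.
def rate_per_100k(cases: float, population: float) -> float:
    return cases / population * 100_000

def implied_population(cases: float, rate: float) -> float:
    # Inverse of rate_per_100k: how many women would yield this rate?
    return cases / rate * 100_000

print(round(implied_population(140, 15.2)))  # 2018: ~921,000 women
print(round(implied_population(119, 27.1)))  # 1995-2009 average: ~439,000 women
```

The roughly doubled implied denominator is what reconciles a higher case count with a lower rate.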
Patient demographics
Across the 16 Caribbean territories, seven demographic factors were explored in the literature for CvC patients: age, marital status, ethnicity, income, highest level of education, socioeconomic status, and geographical residence (Table 5). Two-thirds of the articles from Puerto Rico were geared toward gauging the populace's knowledge of CvC, HPV infection, and the HPV vaccine. This can be a useful tool for creating strategic initiatives that target particular age groups across different socioeconomic strata in the Caribbean.
Marital status was seen as an indicator of safer sexual practices and responsibility. Puerto Rico had the largest proportion of married participants (80%) [64,65], followed by Trinidad and Tobago (72%) [68-70]. However, most studies involved half or fewer married participants. Educational attainment was also associated with knowledge, awareness, and a positive attitude towards learning. Qualitative results showed that educational attainment, awareness of the disease, and the desire to take part in CvC screening or review were related.
The participants' annual income was disclosed in the publications from the Dominican Republic and Cuba [30,36,39]; however, it was only given as a range of less than or greater than $15,000/$20,000 per year. It was found that socioeconomic status and income had no significant impact on whether a woman received a CvC diagnosis. However, the patient's ability to pay for specialized and private care was expected to affect access to treatment and recovery options such as uterine and fertility preservation.
More research on how socioeconomic status affects outcomes was found to be crucial for determining the impact of income and socioeconomic status on accessibility and its limitations. Healthcare, treatment, and a more active lifestyle are all more readily available to people with higher incomes. Problems with health insurance and other financial constraints are the root causes of restrictions for those in lower socioeconomic classes. Similarly, socioeconomic position denies access and privilege to some while imposing limitations and restrictions on others. Geographical influence and socioeconomic status are closely related, because a person's social standing typically determines where they live. Deprivation, luxury, and riches are all influenced by one's employment status, income, social class, and fortune [54].
Ethnicity was also shown to be associated with receiving an abnormal cervical smear, and as a result it is thought to play a role in the development of CvC [16]. Smits et al. [71] indicated that the HPV virus mutates at an extremely slow rate, which 'coincides with the evolution of man', and is therefore considered to have evolved from the origin of man. This suggests that as man traversed the geographic landscape and propagated over time, so too did the HPV virus. For the same reason, it can be deduced that the resulting diversification of man brought forth variations in HPV strains and their impact on man [67]. For example, mainly African women were affected by CvC in Curaçao [71] and Trinidad and Tobago [37]. In a study conducted in Trinidad and Tobago during the period 1995-2009, of 487 women, 243 were African, 125 were Indian, and 119 were of mixed descent [37]. Afro-Guyanese women are significantly more susceptible to CvC than the Indo-Guyanese and Amerindian ethnicities in Guyana [25]. The Maroon women in Suriname were found to have the highest prevalence of atypical squamous or higher-grade cytological abnormalities compared to the other ethnicities in the region. The Amerindians, although not as susceptible as the Maroons, were found to be more susceptible than the Hindustani and Javanese [16]. Jamaica further discussed ethnicity and cultural influence as barriers to participation in screening programs and the diagnosis of CvC [44].
Clinicopathology
The Bahamas, Curacao, the Dominican Republic, French Guiana, Guyana, Haiti, Jamaica, and Puerto Rico highlighted the need to investigate the impact of co-morbidities on the susceptibility of persons to developing an HPV infection, on HPV-positive persons, and on persons undergoing treatment for CvC [15]. The Caribbean has been reported as a common hub for sex tourism [71], which can result in the proliferation of STDs if left unchecked. In regions such as Curaçao, noted to have a lower male-to-female per capita ratio compared to other regions, the rate of illegal prostitution was expected to increase [71]. In addition, some regions utilize practices that are part of normal personal routines but have been noted to contribute to the development of infections that further increase the proliferation and severity of HPV infections. Haiti has reported the use of hygiene agents called twalet deba; these comprise plant- and chemical-based agents such as balsam, castor oils, and Borasol. These vaginal cleansing products were linked to the prevalence of HPV infections [17]. The co-morbidities of HIV (Bahamas, French Guiana, Puerto Rico, and Haiti), AIDS (Puerto Rico) [72], and syphilis and herpes (French Guiana) were mainly discussed. The survival of persons in the Netherlands with other primary cancers was also reported [29]. Of these diseases, patients with HIV as a co-morbidity are more likely to be screened using visual inspection with acetic acid application (VIA), as there is a greater probability of the expression of large lesions [73]. Four other factors identified as precursors to initiate referrals for cytologic screening are evidence of white vaginal discharge, the presence of dysplastic cells, postcoital and spontaneous bleeding, and discomfort during intercourse.
Behavioral risks
The age of first coitus is inversely related to the probability of having a cervical neoplastic disease; the relationship is propagated by early marriages and childbearing [74]. An active sexual lifestyle from an early age was found to have a statistically significant association with HPV prevalence [38]. The number of lifetime partners was found to be proportional to HPV seropositivity and, as a result, is directly correlated with the development of HPV infection [75]. In a study of 643 women in French Guiana, 19.1% of the surveyed population indicated that their age of first coitus was under 15 years. Of this population, 25.2% of the women tested positive for HPV infection, 20.5% of which were of a high-risk HPV type [40]. The use of hormonal contraceptives for more than a four-year period was noted to increase the probability of HPV-positive women developing CvC [76]. Hormonal contraception was said to act as 'an enhancer of the neoplastic growth' of cervical carcinoma; the estrogen present in the contraception was stated to bind to the specific DNA that controls the transcription regulatory regions of the HPV genome [77]. This effect was said to increase with parity. The lack of awareness of HPV and its role in CvC, as well as of screening and treatment options, has proven to be a great challenge in the Caribbean. Although many of the publications reviewed targeted assessing the level of awareness of persons (Table 6), as well as informing them by filling identified gaps in knowledge and clarifying misconceptions, it was still unanimously documented that there was little understanding among the public. Smoking and alcohol use were investigated; however, no study provided data showing a statistically significant relationship with HPV prevalence or CvC incidence.
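As a rough check on the French Guiana figures, the nested percentages can be converted into approximate head counts. Reading 'of this population' as referring to the early-coitus subgroup is an interpretive assumption.

```python
# Approximate counts implied by the percentages quoted above, rounded to
# whole women. The subgroup interpretation is an assumption, not stated data.
n_surveyed = 643
early_coitus = round(0.191 * n_surveyed)    # ~123 women with first coitus < 15
hpv_positive = round(0.252 * early_coitus)  # ~31 of them HPV-positive
high_risk = round(0.205 * hpv_positive)     # ~6 with a high-risk HPV type

print(early_coitus, hpv_positive, high_risk)
```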
Genetics
An exceptionally wide variation of HPV types exists within the Caribbean: HPV-16, 18, 33, 42, 44, 45, 51, 52, 53, 55, 56, 58, 59, 66, 68, and 70 [55]. The study of the genetic makeup of the HPV virus has led to the development of vaccines that target specific types of HPV [98,100]. This is significant, as different countries experience different HPV subtypes as the most common in their region [101]. In 2006, the Bahamas noted that HPV-18 was most prevalent in its region. HPV-16 was prevalent in Curaçao, HPV-45 in Jamaica, and HPV-52 in Tobago [101]. In 2011, Suriname reported the prevalence of HPV-16, 18, and 45 [23]. French Guiana found HPV types 31, 68, and 53 most common [40]. Table 7 presents the genetics and molecular analyses in the peer-reviewed publications.
Investigations into the gene sequencing of these molecules have allowed mapping of the phylogeny of the HPV virus, enabling a better understanding of its origin [67].
Treatment and control
CvC screening and vaccination are crucial in the control of CvC (Table 8). The HPV vaccine was licensed by the Food and Drug Administration for girls aged nine years and older on June 8, 2006, and recommended for use from age 11-12 by the Advisory Committee on Immunization Practices (ACIP). HPV-16 and HPV-18 were identified as the main types of HPV responsible for CvC [68]. As such, the quadrivalent HPV vaccine was recommended as the most effective, as it covers the most prevalent types as well as HPV types 6 and 11, which are low risk but responsible for over 90% of genital warts [104]. This provides vaccine-specific cross-protection and was predicted to reduce the overall annual HPV-16/HPV-18-related CvC incidence [41]. Barbados, Belize, Guyana, Trinidad and Tobago, and the US Virgin Islands mentioned introducing this vaccine into their routine female vaccination regime, while Puerto Rico made it mandatory for school entry [66].
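Cross-checking the regional genotype lists from the Genetics section against the three vaccine formulations shows which locally prevalent types each product covers. The sketch below simply encodes the lists quoted in this review; no external data are assumed.

```python
# Which regionally prevalent HPV types (Genetics section) each vaccine covers.
# Genotype sets follow the text of this review.
VACCINES = {
    "bivalent": {16, 18},
    "quadrivalent": {6, 11, 16, 18},
    "nonavalent": {6, 11, 16, 18, 31, 33, 45, 52, 58},
}
PREVALENT = {
    "Bahamas (2006)": {18},
    "Curacao": {16},
    "Jamaica": {45},
    "Tobago": {52},
    "Suriname (2011)": {16, 18, 45},
    "French Guiana": {31, 53, 68},
}

for region, types in PREVALENT.items():
    for name, covered in VACCINES.items():
        missed = sorted(types - covered)
        print(f"{region:>16} / {name:>12}: misses {missed if missed else 'none'}")
```

For example, the bivalent vaccine does not cover Jamaica's prevalent HPV-45, and none of the three formulations covers the HPV-53 and HPV-68 reported from French Guiana.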
(Territory-specific sources: Barbados [99], Belize [100,109], Curacao [37], Dominican Republic [39,77], Guyana [73,110], Haiti [46,81,111,112], Netherlands Antilles [113], Puerto Rico [64,65,84,86,89,90,114], Suriname [23,67], Trinidad and Tobago [115], US Virgin Islands [116].) In tandem with vaccination, efficient CvC screening leading to early detection of cytological abnormalities was identified as imperative to effective treatment and survival. Belize and Haiti offer all four screening methods mentioned in Table 8 to facilitate the varied needs of their populations. Haiti also introduced a self-testing HPV method to provide a cost-effective approach to screening [111], while the cervical Pap smear was mentioned as available in seven territories and described as "uncomfortable, but necessary" [89].
The only regions throughout the review that spoke of CvC treatment measures were the Bahamas, Belize, Guyana, the Netherlands Antilles, and Trinidad and Tobago (Table 9). The treatment options mentioned were surgery, such as the loop electrosurgical excision procedure (LEEP), irradiation therapy, and chemotherapy. The LEEP procedure and radiotherapy were noted to be the primary methods of treatment, although hysterectomy and chemotherapy were also mentioned occasionally. The Bahamas utilized a combination of external beam radiotherapy and brachytherapy irradiation techniques [113]; information was provided on numerous patient cases, and the application of irradiation as an initial course of treatment was the action of choice upon the first recurrent sign of cancer. Belize reported LEEP, radical hysterectomy, and chemotherapy as being accessible, but limited to specific regions [115]. Guyana reported mainly cryotherapy (usually carried out after VIA) and LEEP, followed by a referral for review in a year's time. The Netherlands utilized chemo-radiation, radiotherapy, and surgery as its treatments of choice, depending on the stage of cancer observed [21]; each stage is treated with a specified combination of radiotherapy and chemotherapy or other procedures, such as lymph node dissection, as needed. The journal article from Trinidad and Tobago provided statistics on patients within the sample population who received chemotherapy, radiotherapy, and surgery; these data were correlated with other variables such as geographic residence and ethnicity [73].
Conclusions
PubMed was found to be an excellent source of peer-reviewed publications on topics pertaining to CvC in the Caribbean. Although significant research has been conducted in the Caribbean, a substantial amount of data remains to be garnered. There are enormous gaps in clinicopathology and genetics, and a more detailed and specialized investigation specific to behavioral risks and patient demographics is paramount. It was found that, even though the studies examined in the papers collected a sizable amount of data, a large portion of the data was not utilized; access to source data would therefore allow further valuable insights and contributions. In addition to the measures indicated within this review, the disease burden of environmental impact was only remotely considered, being covered under behavioral risks by other studies, which identified the causative effect of smoking as a potential cause of cervical carcinogenesis. Some countries, such as the Bahamas, Barbados, the Dominican Republic, Haiti, the Netherlands, and Trinidad and Tobago, are well on their way to mapping a sound course for the prevention and management of CvC by exploring multiple disease burden measures. However, they too require a more thorough examination of these measures, with a focus on causative agents and attitudes, not limited to HPV, which must be evaluated and mitigated. Only then can a pool of data from all territories be created to equip the Caribbean region with the resources necessary to eliminate CvC.
FIGURE 1: The Progression of the PubMed Search Conducted to Procure the Publications for the Review.
Barbados had a gradual rise in cases, from an average of 24 instances per year during the period 1958-1964 to 38 cases in 2018. Although statistics for the intervening years were not provided to allow a more thorough examination of the trend, this represents a noteworthy rise of 14 instances over a span of more than 50 years. The annual detection of new instances of CvC appears to be actively monitored in the Netherlands. It was noted from Table 2 that the prevalence of CvC in the Netherlands generally remained high, averaging 731 newly diagnosed cases each year from 1989 to 1998. The number of instances climbed by 10 cases the year after a notable decrease of seven cases in 1994; there was then another drop of six instances, followed by another rise of 10 cases. Martinique experienced a significant decline, from an average of 45 cases per year (1981-2000) to an average of 25 cases per year (2008-2012), before increasing again to 32 in 2018, suggesting that whatever measures were implemented during that period were perhaps not sustainable. From the period 1980-2000 to 2018, Suriname saw an almost doubling of instances, from an average of 45 cases per year to 85 cases. Similarly, Trinidad and Tobago endured an increase from an average of 119 cases per year during 1995-2009 to 140 cases in 2018, and Puerto Rico increased from an average of 241 cases per year during 2007-2012 to 262 in 2018. Puerto Rico's growth is particularly concerning because more than 200 new cases were already being reported annually, a considerably high level.
TABLE 1: Information Coverage of the Publications Reviewed. (x indicates the presence of the disease burden measure under consideration.)
Incidence

CvC incidence statistics for the Caribbean were patchy and inconsistent. Although Table 2 listed new CvC cases from 1958 to 2018, much of the data for the intervening years was not accessible. The Bahamas underwent a continuous rise in newly diagnosed cases, from 19 cases in 1993 to 28 cases in 1994 and 40 cases in 1995. There were no further data until 2018, when the number of cases dropped to 29, around the same number as in 1994.
TABLE 3: The Cervical Cancer Incidence Data (Age-Standardized Rates) Provided by the Peer-Reviewed Publications.
Mortality rates also fell, but to a considerably lesser level, in Belize, Cuba, the Dominican Republic, Guyana, and Trinidad and Tobago. Belize dropped from a rate of 23 per 100,000 women to 16.2 per 100,000 women. The Dominican Republic decreased from 17.3 to nine per 100,000 women, whereas Cuba decreased from 8.3 to six per 100,000 women. Trinidad and Tobago decreased from 10.7 to 9.4 per 100,000 women, and Guyana decreased from 22.2 to 17.3 per 100,000 women. On the other hand, over the same time period, death rates rose significantly in Jamaica, Puerto Rico, and Suriname. The largest rise was seen in Jamaica, where the rate went from 12.2 to 20.1 per 100,000 women, while Suriname and Puerto Rico had considerably smaller increases, from 14 to 14.3 and from 2.8 to 3.5 per 100,000 women, respectively.
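To compare these shifts on a common scale, the snippet below converts the before/after rates quoted in this paragraph into percent changes; the numbers are taken directly from the text.

```python
# Relative (percent) changes in the mortality rates quoted above.
# Rates are per 100,000 women, as stated in the text.
changes = {
    "Belize": (23.0, 16.2),
    "Dominican Republic": (17.3, 9.0),
    "Cuba": (8.3, 6.0),
    "Guyana": (22.2, 17.3),
    "Trinidad and Tobago": (10.7, 9.4),
    "Jamaica": (12.2, 20.1),
    "Suriname": (14.0, 14.3),
    "Puerto Rico": (2.8, 3.5),
}

for territory, (before, after) in changes.items():
    pct = (after - before) / before * 100
    print(f"{territory:>20}: {before:5.1f} -> {after:5.1f}  ({pct:+.1f}%)")
```

On this scale, Jamaica's rise (about +65%) stands out even more clearly against the modest declines elsewhere.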
Adrenergic to mesenchymal fate switching of neuroblastoma occurs spontaneously in vivo resulting in differential tumorigenic potential
Neuroblastoma is a pediatric tumor that originates from cells of the adrenergic lineage. Here we investigated the balance between differentiation and dedifferentiation in relation to tumor-engraftment potential in preclinical mouse models. We analyzed intratumoral heterogeneity by comparing marker expression of normal adrenergic development with tumor marker expression, which showed the presence of sympathoadrenal as well as mesenchymal subtypes of neuroblastoma cells. Subsequently, we evaluated the long-term outgrowth capacity of these two (FACS-sorted) cell populations, which showed that adrenergic cells have a stronger long-term clonogenic potential. Engraftment of these sorted populations into mice revealed the emergence of heterogeneous populations. Modelling of the interconversion rate indicated that cell fate transitions from the adrenergic to the mesenchymal state occurred gradually and stochastically as the tumors grew in mice. We found that adrenergic cells have an increased tumorigenic potential in mice, without signs of beneficial cross-talk between the two lineage populations. These findings indicate that neuroblastoma contains two rivalling differentiation states that differ in long-term clonal and tumorigenic potential. We expect these states to be relevant for therapy resistance as a result of intratumoral heterogeneity.
Introduction
Neuroblastoma is a devastating childhood disease that affects the adrenal glands and peripheral nerves [1]. The disease leads to relapse in 40% of patients through therapy resistance [2], for which intratumoral heterogeneity is considered to be responsible.
The development of neuroblastoma parallels normal neuroendocrine system development. In 90% of cases, neuroblastoma occurs in the adrenal gland and retroperitoneal paraspinal ganglia, the region of the developing adrenal medulla and surrounding tissues; it is also found along the sympathetic ganglia [3,4]. It has been shown that high MYCN expression driven by sympathoadrenal lineage-determining genes, such as DBH and TH, can give rise to neuroblastomas. Therefore, it is believed that neuroblastoma may arise in differentiated as well as less differentiated cells of the adrenergic lineage.
Morphological and behavioural heterogeneity has been recognized in a growing number of malignant tumors, including neuroblastoma [5]. This heterogeneity is often associated with therapy resistance [6]. Histologically, neuroblastic tumors display different degrees of ganglioneuronal differentiation, ranging from highly malignant neuroblastic tumors with no signs of ganglioneuronal differentiation to benign tumors; the fully matured end of the spectrum consists of fully differentiated ganglioneuroma. In addition, neuroblastoma is known to be a heterogeneous tumor based on morphological differences. More recently, we showed that adrenergic-to-mesenchymal transition occurs in neuroblastoma, which could have important implications for therapy [6].
To gain a better understanding of neuroblastoma intratumoral heterogeneity and cell fate interconversion, we herein provide a systematic analysis. We show that cell identity interconversion occurs stochastically during tumor expansion and involves two relatively stable states. Furthermore, we show that the clonal expansion and tumorigenic potential of adrenergic neuroendocrine cells are much stronger than those of mesenchymal cells. These experiments provide insight into the balance of cell identity versus tumor engraftment in neuroblastoma.
tSNE clustering of pediatric tumor cell lines distinguishes between a neuronal and a mesenchymal lineage in neuroblastoma
As mentioned above, the heterogeneity of neuroblastoma is based on morphological differences between cell lines. Since pediatric tumors are often driven by aberrant activation of developmental pathways, we reasoned that neuroblastoma tumor heterogeneity might be reflected in the overlap with transcriptional programs present in other pediatric malignancies. We therefore performed tSNE clustering of cell lines derived from six pediatric tumor types: medulloblastoma, neuroblastoma, pediatric acute lymphoblastic leukemia (ALL), Ewing sarcoma, osteosarcoma and rhabdomyosarcoma. This resulted in distinct clusters according to organ of origin (Figure 1). These data were independently confirmed by K-means clustering (Figure S1A) and principal component analysis (PCA, Figure S1B). The majority of neuroblastoma cell lines clustered together with the neuroectodermally derived medulloblastoma cell lines, as expected. However, a number of neuroblastoma and medulloblastoma cases clustered together with cell lines of mesenchymal origin (Ewing sarcoma, osteosarcoma and rhabdomyosarcoma). These data indicate that neuroblastoma cells may have neuronal- but also mesenchymal-like features, and suggest that these cell lines may be heterogeneous.

Figure 1 (caption, partial): [...] and osteosarcoma (n = 9), and the blood-derived tumor type ALL (n = 7). The three main clusters observed are neuronal tumors (blue), mesenchymal tumors (green) and ALL (red). Note that some neuroblastoma and medulloblastoma cell lines cluster together with tumors of mesenchymal origin. Perplexity of the clustering was 28. Isogenic pair: SHEP2 and SY5Y.
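A minimal scikit-learn sketch of this clustering workflow is given below. The random matrix stands in for the real cell-line expression data; only the perplexity of 28 is taken from the figure legend, and everything else is an illustrative default.

```python
# Sketch of the tSNE / K-means / PCA workflow described above, assuming X is
# a (cell lines x genes) expression matrix. The random data are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2000))  # placeholder for the expression matrix

# Main embedding; perplexity 28 matches the figure legend
embedding = TSNE(n_components=2, perplexity=28, random_state=0).fit_transform(X)

# Independent confirmations used in the supplement: K-means and PCA
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
pcs = PCA(n_components=2).fit_transform(X)
```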
Lineage marker analysis shows that neuroblastoma cell lines represent multiple stages of neural crest development
To further investigate the relation between adrenergic/neuronal and mesenchymal cell fates within and between tumors of individual patients, we first analyzed the expression levels of markers of the most important stages of neural crest development. Next, we analyzed pan-neuronal and mesenchymal markers, summarized in Figure 2A [7,8]. Our data show that adrenergic neuroblastoma cells express markers of catecholamine biosynthesis, such as TH and Chromogranin A (Figure 2B and Figure 4D), indicating that the cells have matured to a sympathetic neuron or chromaffin state. Pan-neuronal markers, such as β-III tubulin, as well as the proneural specifier ASCL1, were positive in the neuronal subtype of cells, as expected. Furthermore, neurofilament light (NEFL) expression suggests a shift towards a sympathoadrenal rather than a chromaffin fate [8]. Interestingly, our subset of mesenchymal cell lines has high mRNA levels of markers reflecting migratory and post-migratory neural crest cells (Figure 2A), but lacks any markers of sympathoadrenal commitment. Immunostaining and RNA profiling of the isogenic cell lines 691T, 691B, SHEP2 and SY5Y confirmed these findings (data not shown). Furthermore, the mesenchymal cell line 691T expressed the neural crest stem cell markers SOX9 and p75NGFR (Figure 1E) and lacked expression of the neuroendocrine marker TH. These mesenchymal, neural crest stage-like cells also lack mRNA expression of WNT1, CDH1 and SOX10, markers of pre-migratory neural crest cells that have not yet delaminated (data not shown). This analysis formed the basis for the definition of the mesenchymal group of neuroblastoma cells that we previously published [6]. Here we additionally show a gene expression profile that highly overlaps with migratory and post-migratory neural crest cells of human embryos at stages CS12-CS18.
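As a toy illustration of how the marker sets above separate the two states, one can score a cell line by the mean expression of each panel. The threshold rule and the example values are invented for this sketch; the study itself used full expression profiles and immunostaining, not this rule.

```python
# Toy lineage call from the marker panels discussed above. Scoring rule and
# example values are illustrative; gene symbols: NGFR = p75NGFR, SNAI2 = SLUG.
ADRN_MARKERS = ["TH", "CHGA", "ASCL1", "PHOX2B", "DBH", "NEFL"]
MES_MARKERS = ["SOX9", "NGFR", "SNAI2", "VIM", "FN1"]

def call_lineage(expr: dict) -> str:
    adrn = sum(expr.get(m, 0.0) for m in ADRN_MARKERS) / len(ADRN_MARKERS)
    mes = sum(expr.get(m, 0.0) for m in MES_MARKERS) / len(MES_MARKERS)
    return "adrenergic" if adrn > mes else "mesenchymal"

# A 691T-like profile: high SOX9/p75NGFR, no TH -> mesenchymal
print(call_lineage({"SOX9": 8.1, "NGFR": 7.4, "VIM": 9.0, "TH": 0.2}))
```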
Mesenchymal and adrenergic lineage cells differ in their tumorigenic potential and can interconvert spontaneously in vivo
Mesenchymal and adrenergic cells are found together in almost all neuroblastoma tumors [6]. They share the same genetic defects, implying a shared descent. However, it is unknown whether cell-intrinsic or exogenous signals trigger a cell-fate interconversion. The AMC700B and AMC711T cell lines express both mesenchymal and neuroendocrine markers. By performing mRNA expression analysis of CD133/CD24 FACS-sorted cells, we found that the distinction between adrenergic and mesenchymal subpopulations is also present within cell lines: there was mutually exclusive expression of cell fate markers in the mesenchymal (CD133+) and adrenergic (CD133-) sorted populations, which was confirmed by qPCR analysis (Figure 3A).
To determine whether the mesenchymal and adrenergic sorted populations differed in clonal outgrowth capability, we performed single-cell clonal expansions of the 700B and 711T primary cell lines after FACS sorting. The CD133- lines resembled cells committed to a sympathoadrenal fate and accordingly expressed neuroendocrine (adrenergic) markers such as PHOX2B, DBH and TH. In contrast, the CD133+ lines resembled migratory neural tube precursor cells that lacked these markers but expressed mesenchymal markers such as p75NGFR, SLUG, VIM and FN1. The two populations did not markedly differ in their cell cycle profile (Figure 3B), but showed differences in clonal outgrowth capacity (Figure 3C). Sympathoadrenal cultures had a vital appearance (Figure 3C). To quantify the clonal expansion capacity, cells were plated as single cells in 384-well plates and serially passaged to a well with a larger surface area when they reached near-confluence, as shown in Figure 3D. Clonal outgrowth occurred at a low frequency (Figure 3E), and clonal expansion of adrenergic (CD133-) cells was efficient and led to long-term expandable cultures (LT, Figure 3F). Effects were independent of adherent or sphere growth (data not shown). Single-cell outgrowth of mesenchymal cells (CD133+) only led to short-term expandable cultures (ST, Figure 3F); accordingly, these sphere cultures showed a necrotic appearance (Figure 3C, lower panels). In titrations of cell densities toward clonal outgrowth conditions (1,000 cells/cm²), the adrenergic (CD133-) population showed more clonal expansion potential, as expected (Figure 3G).
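Clonal outgrowth frequencies from single-cell platings like these are usually reported with a confidence interval. The sketch below uses a Wilson interval via statsmodels; the well counts are invented for illustration and are not the study's data.

```python
# Summarizing a 384-well single-cell plating as a clonogenic frequency with a
# Wilson 95% confidence interval. Well counts below are hypothetical.
from statsmodels.stats.proportion import proportion_confint

def clonogenic_frequency(expanded_wells: int, seeded_wells: int):
    freq = expanded_wells / seeded_wells
    low, high = proportion_confint(expanded_wells, seeded_wells, method="wilson")
    return freq, (low, high)

# e.g., 9 long-term clones from 384 CD133- cells vs 1 from 384 CD133+ cells
for label, hits in [("CD133- (adrenergic)", 9), ("CD133+ (mesenchymal)", 1)]:
    f, ci = clonogenic_frequency(hits, 384)
    print(f"{label}: {f:.3%} (95% CI {ci[0]:.3%}-{ci[1]:.3%})")
```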
In vivo transplantations show spontaneous lineage conversion and differences in tumorigenesis
Since the in vitro experiments showed cell-autonomous differences in clonal outgrowth capacity, we tested the tumor-initiation capacity of the isogenic cell lines derived from patients AMC700 (Figure 4A), AMC691 and SKNSH. Consistent with the in vitro data, adrenergic-type cells have stronger tumorigenic potential (Figure 4B).
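A standard way to quantify tumor-initiation capacity is limiting-dilution analysis under a single-hit Poisson model, in which the fraction of non-engrafting injections at dose d is exp(-f*d) for a tumor-initiating cell frequency f. The dose-response numbers below are hypothetical and only illustrate the fitting step; the study reports engraftment differences, not this specific analysis.

```python
# Limiting-dilution estimate of tumor-initiating cell frequency under a
# single-hit Poisson model. Doses and non-take rates are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def fraction_negative(dose, f):
    # Probability that an injection of `dose` cells contains zero initiators
    return np.exp(-f * dose)

doses = np.array([1e3, 1e4, 1e5])
neg_fraction = np.array([0.90, 0.40, 0.001])  # hypothetical non-take rates

(f_hat,), _ = curve_fit(fraction_negative, doses, neg_fraction, p0=[1e-4])
print(f"Estimated tumor-initiating frequency: 1 in {1 / f_hat:,.0f} cells")
```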
We next tested whether the cell-autonomous heterogeneity of developmental states and clonogenic potential also occurred in vivo. Xenotransplantation of mesenchymal/adrenergic FACS-sorted cells into nude mice gave rise to heterogeneous tumors, as shown by VIM, SYN and CHGA expression, resulting in a patchwork-like pattern of adrenergic and mesenchymal areas in the tumors (Figure 4C, D). This indicates that a delicate balance exists between maintenance and loss of cell identity during clonal expansion.
Modelling of lineage conversion shows that interconversions occur spontaneously and randomly
We next investigated the kinetics of the interconversion in vivo by modelling the evolutionary process using a clonal evolution model (Figure 5A). Prior to this experiment, we determined the growth rate of adrenergic and mesenchymal cells, assuming this growth rate remained constant during the subsequent in vivo experiments. Tumors emanating from an adrenergic origin generated mesenchymal progeny and vice versa (Figure 5B). As we measured the size of each clonal population during the marker analysis, we could extrapolate the time point at which lineage interconversion occurred. None of the clones that arose after injection of sympathoadrenal cells had a size indicating that they existed at the moment of tumor initiation (Figure 5C, D). In contrast, tumors derived from mesenchymal lineage cells showed an early-onset appearance of sympathoadrenal clones, consistent with the higher tumorigenicity of sympathoadrenal cells observed in vitro (Figure 5B) as well as in vivo (data not shown). Given these assumptions, the findings indicate that stochastic events control the in vivo lineage identity of the tumor cells.
Subcutaneous co-injection of small cell lung cancer mesenchymal and neuroendocrine cell lines in immunodeficient mice has revealed a crucial role for mesenchymal cells in the formation of distant metastases [9]. Nonetheless, strong evidence also suggests that metastatic biopsies are often differentiated and do not express markers that differ from their primary tumors [10][11][12]. We therefore determined whether co-injection of sympathoadrenal and mesenchymal cells influenced the tumorigenic potential. The results showed no significant difference in survival between mice that received the two cell types injected into two different flanks and mice that received co-injected cells in one flank (Figure S2). Mesenchymal cells alone did not have the ability to grow out in vivo. These data show that co-injection of mesenchymal and sympathoadrenal neuroblastoma cells in vivo does not affect the tumorigenic potential.
Collectively, our results show that two cell populations exist in neuroblastoma cultures where the sympathoadrenal counterpart is more tumorigenic.Nonetheless, sympathoadrenal tumors interconvert partially to mesenchymal lineage cells with a low frequency and these clonal populations occur spontaneously after tumor initiation.
Discussion
Based on the expression of lineage markers in neuroblastoma cell cultures and xenografted tumors, we have generated a lineage model showing the developmental stages to which neuroblastoma is restricted (Figure 2A). This model shows that cells of the mesenchymal stage (positive for the delaminated neural crest markers p75NGFR and SOX9, negative for the neural crest markers WNT1 and NCAM; data not shown) can differentiate towards the sympathoadrenal neuroendocrine lineage (positive for the neuroendocrine markers ASCL1, PHOX2B and HAND1) and vice versa. Cancer cells with a mesenchymal-like undifferentiated phenotype are known to be potentially involved in the development of invasive, drug-resistant, aggressive tumors [10,[13][14][15].
Early studies evaluating neuroblastoma cell heterogeneity documented the presence of neuroblastoma cell lines lacking the neuroendocrine features commonly observed in tumors [16] (Ciccarone et al., 1989; Ross et al., 1995). These authors hypothesized the existence of a common ancestor cell representing a malignant neural crest stem cell (I-type) able to self-renew and give rise to either a neural (N) or a non-neural (S) daughter cell. Our results indicate that the developmental lineage heterogeneity observed in neuroblastoma is not the result of a hierarchical organization of cancer stem cells (i.e., the I, N, S model), but rather of stochastic oscillation between two lineage states.
In our mouse experiments, the initial lineage identity remained dominant during the in vivo evolution for both adrenergic and mesenchymal type cells. This tendency to maintain the initial state might be driven by self-reinforcing mechanisms and is apparently dominant over the stochastic evolutionary cell identity process. Our findings show that mesenchymal neuroblastoma cells have lower in vivo and in vitro tumorigenicity, in line with earlier observations that mesenchymal neuroblastomas are less tumorigenic [5,[17][18][19][20][21].
Taken together, our results show that neuroblastoma cell lines are heterogeneous and contain subpopulations of cells reminiscent of the delaminated mesenchymal cells that migrate from the neural tube, as well as cells that have adopted a sympathoadrenal fate, reminiscent of cells that have arrived at the dorsal root ganglion in order to differentiate.

Materials and methods

Clustering analysis

Hierarchical K-means clustering analysis was performed on MAS5.0-normalized U133p2 microarray datasets using an appropriate number of groups. For the clustering analysis of the ITCC consortium cell line panel, the top 100 genes were used for clustering into four groups. For K-means clustering of the 39 cell lines that include neuroblastomas and osteosarcoma, the top 60 genes were used. Source: R2.amc.nl/psitcc cellpanel87 u133p2.
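As a hedged sketch of this type of analysis, the following uses scikit-learn in place of the R2 platform used by the authors; the random matrix is only a placeholder standing in for the normalized expression data, and the gene counts, cluster number and perplexity mirror the values reported above and in Fig. 1:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder expression matrix: rows = cell lines, columns = genes.
# In the study this would be the MAS5.0-normalized U133p2 data.
expr = rng.normal(size=(87, 5000))

# Restrict to the most variable genes (the study used the top 100/60).
top = np.argsort(expr.var(axis=0))[::-1][:100]
X = expr[:, top]

# K-means into four groups, as described for the ITCC cell line panel.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# 2-D tSNE embedding with the perplexity reported in Fig. 1.
embedding = TSNE(n_components=2, perplexity=28, random_state=0).fit_transform(X)
print(labels[:10], embedding.shape)
```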
Cell lines
The primary cell lines were characterized by Bate-Eya et al., 2013 [22]. Cell lines were derived either from primary tumors (indicated by T) or bone marrow metastases (indicated by B) and cultured in DMEM/F12 supplemented with 10 ng/ml bFGF (Peprotech), 20 ng/ml EGF (BD), and penicillin and streptomycin (50 units of penicillin [base] and 50 µg of streptomycin [base]/ml, Thermo Fisher). Classical neuroblastoma cell lines were grown in DMEM without glutamine, supplemented with non-essential amino acids, 10% FCS, and penicillin and streptomycin (50 units of penicillin [base] and 50 µg of streptomycin [base]/ml, Thermo Fisher).
FACS sorting
CD133/CD24 sorting was performed using CD133/2 (293C3)-PE (Miltenyi Biotec #130-090-853) or APC antibodies at a 1:10 dilution, with isotype controls. CD24 is a neuronal marker that can be used to distinguish neural sub-fractions within a CD133-positive population (Pelicluster CD24, Sanquin M1605; isotype control Pelicluster IgG1 FITC, Sanquin M1453). The day after sorting, single-cell colony outgrowth was performed by dissociating sphere cultures. Individual cells were plated into 384-well black/clear, sterile, lidded tissue culture assay plates (BD Biosciences #353270), and plating of individual cells was visually confirmed. Cells were expanded into colonies that were serially passaged, using all grown cells, whenever near confluence was reached. Colonies were passaged consecutively from 384-well plates to 96-, 48-, 24-, 12- and 6-well plates, from which long-term cultures were started. Passage numbers are shown in Figure 3E.
In vivo experiments
Nude mice (Harlan Laboratories) were subcutaneously injected with FACS-sorted cell lines (200,000 cells in 200 µl [700B] or 5×10⁶ cells in 100 µl [691T and B]) in a 50% Matrigel (BD #354234)/PBS solution. After tumor outgrowth up to 1 cm³, mice were sacrificed by cervical dislocation under anaesthesia. Tumors were processed for paraffin embedding as well as snap-frozen for immunofluorescence microscopy. All experiments were conducted under approval of the ethical board of the Academisch Medisch Centrum in Amsterdam.
Phenotypic clonal evolution analysis and gene signatures
For mice injected with either CD133- or CD133+ 700B cells, define $Y_1, \ldots, Y_n$ as $n$ indicator functions, with $Y_i$ equal to 1 if a tumor has been detected in mouse $i$ and 0 otherwise. To compute the probability that $Y_i = 1$, we define $X_1, \ldots, X_{m_i}$ as $m_i$ indicator functions, with $X_j$ equal to 1 if, in mouse $i$, cell $j$ grows out to a tumor and 0 otherwise. We assume that $X_1, \ldots, X_{m_i}$ are independent, in the sense that if cell $j$ grows out to a tumor this has no effect on the outgrowth of the other cells, and define $p := P(X_j = 1)$. Then $P(Y_i = 0)$ and $P(Y_i = 1)$ can be computed in terms of $p$ as

$$P(Y_i = 0) = (1 - p)^{m_i}, \qquad P(Y_i = 1) = 1 - (1 - p)^{m_i},$$

for $i = 1, \ldots, n$. Supposing we observe the data $y_1, \ldots, y_n$, $p$ can be estimated with the method of maximum likelihood, with likelihood

$$L(p) = \prod_{i=1}^{n} \left[1 - (1 - p)^{m_i}\right]^{y_i} \left[(1 - p)^{m_i}\right]^{1 - y_i}.$$

No distinction between the cell types with respect to $p$ could be made, due to the injected mixture of cell types.
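For concreteness, a minimal numerical sketch of this maximum-likelihood estimate follows; the tumor-take data are hypothetical, and scipy's bounded scalar minimizer is one of several reasonable choices:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data: m[i] = number of injected cells, y[i] = 1 if mouse i grew a tumor.
m = np.array([2e5, 2e5, 2e5, 5e6, 5e6])
y = np.array([1, 0, 1, 1, 1])

def neg_log_likelihood(log10_p):
    """Negative log-likelihood of the per-cell outgrowth probability p."""
    p = 10.0 ** log10_p
    log_q = m * np.log1p(-p)              # log P(Y_i = 0) = m_i * log(1 - p)
    log_take = np.log1p(-np.exp(log_q))   # log P(Y_i = 1) = log(1 - (1 - p)^m_i)
    return -np.sum(y * log_take + (1 - y) * log_q)

res = minimize_scalar(neg_log_likelihood, bounds=(-12, -1), method="bounded")
print(f"MLE of p: {10 ** res.x:.3e}")
```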
To estimate the cell division rate, an exponential growth model was used. Hence, for mouse $i$ (injected with CD133+ tumor cells) the total number of cells at time $t$ (in days) is the fraction of injected cells that proliferates times 2 to the power of the growth rate times the elapsed time. In formula:

$$N_{i,\mathrm{CD133+}}(t) = p \, n^{\mathrm{init}}_{i,\mathrm{CD133+}} \, 2^{\, r_{\mathrm{CD133+}} t},$$

in which $N_{i,\mathrm{CD133+}}(t)$ is the number of tumor cells at time $t$, $p$ the probability of a tumor cell proliferating, $n^{\mathrm{init}}_{i,\mathrm{CD133+}}$ the number of injected CD133+ tumor cells, and $r_{\mathrm{CD133+}}$ the proliferation rate (per day). With the left-hand side observed in the experiment and the estimate of $p$ obtained above plugged in, the rate parameter may be estimated using a least squares approach:

$$\hat{r}_{\mathrm{CD133+}} = \arg\min_r \sum_i \left[ \log_2\!\left( \frac{N_{i,\mathrm{CD133+}}(t_i)}{p \, n^{\mathrm{init}}_{i,\mathrm{CD133+}}} \right) - r\, t_i \right]^2.$$

For the assessment of the uncertainty of the rate parameter estimate, it is re-estimated by simultaneously (i) sampling a $p$ from its constructed distribution and (ii) re-sampling the mice non-parametrically with replacement. For the mouse injected with CD133- tumor cells, a similar model is assumed and the proliferation rate $\hat{r}_{\mathrm{CD133-}}$ is estimated analogously. With only a single mouse injected with CD133- tumor cells, proper assessment of the uncertainty in the estimated rate parameter is impossible; to obtain a more realistic impression of this uncertainty, the noise introduced by bootstrapping in the estimation of $r_{\mathrm{CD133+}}$ is transferred to the estimate of $r_{\mathrm{CD133-}}$. In the above, the contribution of the clones to the total number of cells in the tumor is assumed negligible (which seems reasonable in light of the observed data). To reconstruct the time of arrival of each clone, the same exponential growth model is employed, and each clone is assumed to have a single progenitor cell. A clone of observed size $N^{\mathrm{clone}}_{i,k}$ at sacrifice time $t_i$ then has estimated time of arrival

$$\hat{t}^{\,\mathrm{arrival}}_{i,k} = t_i - \frac{\log_2 N^{\mathrm{clone}}_{i,k}}{\hat{r}},$$

with $\hat{r}$ the proliferation rate of the corresponding lineage.
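The following sketch illustrates the growth-rate estimate and the clone arrival-time back-calculation with hypothetical observations (not the study's data); the least-squares fit through the origin is one straightforward way to implement the minimization above:

```python
import numpy as np

p_hat = 1e-5  # per-cell outgrowth probability, e.g. the MLE from the sketch above

# Hypothetical observations: injected cell numbers, sacrifice times (days),
# and total tumor cell counts at sacrifice for CD133+ injected mice.
n_init = np.array([5e6, 5e6, 5e6])
t_obs = np.array([30.0, 42.0, 55.0])
N_obs = np.array([2e8, 9e8, 6e9])

# Least-squares estimate of the growth rate r (doublings per day):
# log2(N / (p * n_init)) = r * t  =>  regression through the origin.
x = np.log2(N_obs / (p_hat * n_init))
r_hat = np.sum(x * t_obs) / np.sum(t_obs ** 2)
print(f"estimated growth rate: {r_hat:.3f} doublings/day")

# Arrival time of a clone of size N_clone at sacrifice, assuming one progenitor cell.
def arrival_time(N_clone, t_sacrifice, r):
    return t_sacrifice - np.log2(N_clone) / r

print(f"clone of 1e6 cells at day 42 arose around day {arrival_time(1e6, 42.0, r_hat):.1f}")
```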
Fig. 1. A number of neuroblastoma cell lines resemble tumors of mesenchymal origin. tSNE clustering of mRNA expression data of 87 pediatric tumor cell lines comprising the neuronal tumors medulloblastoma (n = 14) and neuroblastoma (n = 26), the tumors of mesenchymal origin Ewing sarcoma (n = 21), rhabdomyosarcoma (n = 19) and osteosarcoma (n = 9), and the blood-derived tumor type ALL (n = 7). The three main clusters observed are neuronal tumors (blue), mesenchymal tumors (green) and ALL (red). Note that some neuroblastoma and medulloblastoma cell lines cluster together with tumors of mesenchymal origin. Perplexity of the clustering was 28. Isogenic pair: SHEP2 and SY5Y.
Fig. 2. Neuroblastomas express markers of normal sympathoadrenal differentiation. (A) Schematic visualization of neural crest to sympathoadrenal differentiation. Neuroblastomas show expression of markers of the mesenchymal and adrenergic stages but not of the premigratory neural crest stages. (B) Immunostainings of primary cultures of neuroblastoma patients. Cells were stained with antibodies suited for IHC of paraffin-embedded cells (upper panels) or immunofluorescence of cells grown onto glass slides (lower panels).
Fig. 3. Sympathoadrenal neuroblastoma cells have long-term clonal expansion capacity. (A) qPCR of AMC700B cells FACS-sorted on CD133 expression, showing that MES (CD133+) cells have lower expression of ADRN genes. (B) Cell cycle profile based on Click-iT FACS analysis, showing that MES and sympathoadrenal cells have a similar cell cycle profile. (C) Light microscopy of FACS-sorted mesenchymal and sympathoadrenal cells, showing that sympathoadrenal cells have a viable morphology in contrast to the mesenchymal cells. (D) Schematic outline of the clonal expansion experiment. Cells were FACS sorted on CD133/CD24 staining and plated for clonal outgrowth into 384-well plates. Each well was checked for the presence of at most one sorted cell. Cells were serially expanded and assayed for CD133 expression. (E) Clonal outgrowth and serial expansion of CD133- 700B cells (NE type) occurred more frequently than clonal outgrowth of CD133+ cells (MES type). NE cells could be serially expanded for multiple passages, leading to long-term cultures. (F) Histogram showing the relative amount of short-term versus long-term expanding cultures. CD133- NE-type neuroblastoma cells show long-term expanding cultures only. Short-term (ST) and long-term (LT) clonal capacity (in percentages) are the chances of generating a culture from a single cell that lasts for fewer or more than 5 passages, respectively. (G) Titration of the number of cells per well of a 6-well plate results in enrichment of CD133- cells at the lower seeding densities that require clonal outgrowth, after two weeks of expansion.
Fig. 4. Sympathoadrenal neuroblastoma cell cultures are more tumorigenic. (A) FACS plot of CD133/CD24-stained cells one day after sorting. (B) Average survival of mice comparing ADRN- and MES-type cells from three isogenic pairs (200,000 cells [AMC700B], 5×10⁶ [AMC691] and 1×10⁶ [SKNSH] were injected). (C) Macroscopic image of immunohistochemistry for vimentin showing a patchwork pattern of vimentin-positive and -negative domains. (D) Immunostainings of primary cultures of neuroblastoma patients. Cells were stained with antibodies suited for IHC of paraffin-embedded cells (upper panels) or immunofluorescence of cells grown onto glass slides (lower panels). *p < 0.05, log-rank test.
Fig. 5. Cell fate interconversion occurs stochastically in vivo. (A) Injection of CD133-low as well as CD133-high cells gave rise to heterogeneous tumors, as shown by expression of the MES identifier VIM (shown in brown) with mutually exclusive absence of SYN and CHGA expression in these clones. (B) Circos plots representative of the clonal evolution analysis, in which the moment of establishment of the intratumoral heterogeneity was estimated from clone size. When MES cells were injected, the occurrence of MES-negative clones overlapped with the time of tumor initiation, consistent with the tumor outgrowth phenotype determined in the previous in vitro and in vivo experiments. (C) Circos plots showing that many clones derived from sympathoadrenal cells converted to a mesenchymal fate. (D) These clones were formed significantly later than the time point of tumor injection, i.e., the occurrence of the mesenchymal population is uncoupled from tumor initiation. Significant cases are indicated by an asterisk.
Fig. S1. (A) K-means clustering shows that some neuroblastoma cell lines cluster together with tumors of a mesenchymal origin. (B) PCA plot showing that some neuroblastoma cell lines cluster together with tumors of a mesenchymal origin (purple arrows).
Fig. S2. Kaplan-Meier plot showing that co-injection or separate injection of 691 mesenchymal and adrenergic cells gives similar survival of the mice.
Genetic variability, phylogeny and functional implication of the long control region in human papillomavirus type 16, 18 and 58 in Chengdu, China
The long control region (LCR) of human papillomavirus (HPV) has multiple functions in regulating viral transcription. LCR variations associated with different lineages/sub-lineages have been found to affect viral persistence and cervical cancer progression differently. In this study, we focused on gene polymorphism of the HPV16/18/58 LCR to assess the effect of variations on transcription factor binding sites (TFBS) and to provide more data for further study of the LCR in Southwest China. LCRs of HPV16/18/58 were amplified and sequenced for polymorphic and phylogenetic analyses. Sequences of each type were aligned with the reference sequence in MEGA 6.0 to identify SNPs. Neighbor-joining phylogenetic trees were constructed using MEGA 6.0, and transcription factor binding sites were predicted with the JASPAR database. The prevalence of these three HPV types ranked as HPV16 (12.8%) > HPV58 (12.6%) > HPV18 (3.5%) in Chengdu, Southwest China. 59 SNPs were identified in the HPV16 LCR, 18 of them novel; 30 SNPs were found in the HPV18 LCR, 8 of them novel; and 55 SNPs were detected in the HPV58 LCR, 18 of them novel. In addition, an insertion (CTTGTCAGTTTC) was detected in the HPV58 LCR between positions 7279 and 7280. As shown in the neighbor-joining phylogenetic trees, most isolates of HPV16/18/58 clustered into lineage A; one HPV16 isolate was classified into lineage C and 3 HPV58 isolates were classified as lineage B. JASPAR results suggested that TFBS were potentially influenced by 7 and 6 mutations in the LCR of HPV16 and HPV18, respectively, and by the insertion and 5 mutations in the LCR of HPV58. This study provides more data for understanding the relation among LCR mutations, lineages and carcinogenesis. It also supports further studies to demonstrate the biological function of the LCR and to find potential markers for diagnosis and therapy.
Background
According to worldwide data from 2012, cervical cancer is the fourth most common cancer in women, both for new cases and deaths. The data show large differences between developed and developing countries: it is the second most common cancer in less developed regions, which account for 87% of cervical cancer cases worldwide, and ranks 11th in developed regions [1]. There were an estimated 98,900 new cases and 30,500 deaths in China in 2015, accounting for 18.7 and 11.5% of new cervical cancer cases and deaths worldwide, respectively [2].
Human papillomavirus (HPV) is a prevalent, globally distributed group of circular double-stranded DNA viruses that can infect cutaneous and mucosal epithelia throughout the human body [3]. Persistent infection with high-risk HPV is the most widely confirmed cause of invasive cervical cancer. Over 220 HPV types have been fully characterized (https://www.hpvcenter.se/human_reference_clones/). Most oncogenic or high-risk HPV types are members of several species of the Alphapapillomavirus genus [4,5], which are responsible for 90% of all cervical cancers worldwide [6]. Alphapapillomavirus 7 is mostly related to high-risk mucosal lesions and includes HPV18, HPV45 and HPV39, among others. Alphapapillomavirus 9 is the most important species for malignant mucosal lesions and includes HPV16, HPV33 and HPV58, among others [7]. HPV16/18 are associated with approximately 70% of invasive cervical cancers worldwide, making them the primary targets for research and vaccination alike [8]. Compelling data demonstrate that HPV16 is associated with persistence of infection, development of precancer, and the progression and histologic type of cervical cancer [9]. The relationship between HPV18 and precancerous lesions is not as compelling as for HPV16, but many studies have suggested the relevance of HPV18 to cancer [10,11]. HPV58 has an especially high prevalence in East Asia and ranks third among cervical cancer cases [12]. Furthermore, it also frequently appears in precancerous lesions, even more than HPV18, and takes the second slot (present overall in 21.1% of CIN2/3) [13].
The whole genome of HPV contains three regions: an early region (E1, E2, E3, E4, E5, E6, E7), a late region (L1, L2) and a regulatory region called the long control region or upstream regulatory region (LCR or URR) [14]. The LCR is an approximately 850 bp non-coding sequence that interacts actively with many cellular and viral factors. This region includes the viral early promoter and transcriptional enhancer, the viral origin of replication, the late polyadenylation site and the late (or negative) regulatory element (LRE/NRE). In this way, it can control late gene expression at various post-transcriptional levels [15].
The LCR has been shown to be the most variable region of the HPV genome, mainly because it does not encode any gene and is therefore able to accumulate and tolerate more mutations [16,17]. The mutations in this region divide HPV into different lineages and sub-lineages, which perform differently in viral persistence and progression of precancer/cancer. In HPV16, non-European (sub-lineage A4, B, C, D) variants carry a three-fold or higher risk of association with cervical cancer than European (sub-lineage A1-A3) variants. Non-European variants of HPV18 are also detected more commonly in cancer tissues and high-grade cervical lesions [18][19][20]. HPV58 is the second most common HPV type in Southwest China according to previous data [21,22], and its variants (C632T and G760A, located on E7) have been reported to be highly associated with cervical cancer [23]. LCR variants have been shown to differentially regulate the replication of HPV throughout the viral life cycle [13] and the transcriptional activity of E6 and E7 [14].
In this study, we collected samples positive for HPV16, 18 and 58. Polymorphism analysis, phylogenetic analysis and functional prediction were performed on the LCR, which has rarely been reported in Chengdu, Southwest China. These data help determine the prevalence of lineages/sub-lineages and novel mutations/isolates of each type, and are useful for epidemiological surveillance and research on the biological function of the LCR.
Ethical approval and consent to participants
This study was approved by the education and research committee and the Ethics Committee of Sichuan University, China (approval number SCU20100196494). All work followed the guidelines of the Ethics Committee of Sichuan University. Informed consent was obtained from patients at enrolment, and patient privacy was carefully protected.
Samples
All 8244 gynecological outpatients' cervical swab samples were collected between September 2017 and June 2019 in Chengdu SongZiNiao Sterility Hospital, Sichuan Reproductive Health Research Center Affiliated Hospital, Chengdu Western Hospital Maternity Unit, and Angel Women's and Children's Hospital. The samples were collected from women aged 20 to 59 who had normal cytology, low-grade squamous intraepithelial lesions or cervical intraepithelial neoplasia. Each sample was stored at −20°C in cell preservation fluid, and the interval from specimen collection to DNA extraction was within one week.
DNA extraction and HPV typing
DNA was extracted using a nucleic acid extraction kit (Health gene technologies, Ningbo, China) in accordance with the manufacturer's instructions. Extracted DNA was amplified by multiplex PCR and typed via capillary electrophoresis using an HPV nucleic acid assay and genotyping kit (Health gene technologies, Ningbo, China).
PCR amplification of HPV-LCR

LCR sequences of HPV16/18/58 were amplified with primers designed based on the reference sequences from GenBank (Table 1). PCRs were performed in a final volume of 30 μL containing 2× PCR buffer (200 mM Tris-HCl pH 8.3; 200 mM KCl), 2.5 mM dNTPs, 2 U of EasyTaq DNA Polymerase, and 0.5 μM of each primer of the pair. The PCR program was set as follows: an initial denaturation at 95°C for 5 min; 30 amplification cycles of 95°C for 30 s, primer annealing at 45-55°C (52°C for HPV18 and HPV58, 45°C for HPV16) for 30 s, and elongation at 72°C for 30 s; a final 7 min extension at 72°C; and a hold at 4°C. The PCR products were detected using a ChemiDoc XRS+ imaging system (Bio-Rad Laboratory, Mississauga, Canada) after electrophoresis through a 1.5% agarose gel. The positive DNA fragments were purified and sequenced by TSINGKE, China.
Analysis of DNA sequences
To identify single nucleotide polymorphisms (SNPs) in the LCR, HPV prototype reference sequences were used as the standard for comparison with the valid LCR sequences of each type in MEGA 6.0 [24]. The phylogenetic trees were constructed by the neighbor-joining method using the Kimura 2-parameter model, with the number of bootstrap replications set at 1000. Sub-lineage reference sequences of each HPV type were included in constructing the branches of the phylogenetic trees (Table 2). All sequences were analyzed using BLAST (Basic Local Alignment Search Tool) from NCBI (https://blast.ncbi.nlm.nih.gov/Blast.cgi) to detect novel sites or isolates.
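As an illustration of the distance and tree-building step of this workflow, the following sketch computes Kimura two-parameter distances and a neighbor-joining tree with Biopython; the aligned sequences are hypothetical placeholders, the study itself used MEGA 6.0, and a full analysis would additionally perform the 1000 bootstrap replicates by resampling alignment columns and rebuilding the tree per replicate:

```python
import math
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Hypothetical aligned LCR fragments (equal length, gap-free for simplicity).
seqs = {
    "REF": "ACGTACGTACGTACGTACGT",
    "P1":  "ACGTACGTACATACGTACGT",
    "P2":  "ACGCACGTACGTACGTATGT",
}

PURINES = {"A", "G"}

def k2p(s1, s2):
    """Kimura two-parameter distance between two aligned sequences."""
    transitions = transversions = 0
    for a, b in zip(s1, s2):
        if a == b:
            continue
        if (a in PURINES) == (b in PURINES):
            transitions += 1       # A<->G or C<->T
        else:
            transversions += 1
    L = len(s1)
    P, Q = transitions / L, transversions / L
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

names = list(seqs)
# Lower-triangular matrix (including the zero diagonal), as Biopython expects.
matrix = [[k2p(seqs[a], seqs[b]) for b in names[:i]] + [0.0]
          for i, a in enumerate(names)]
tree = DistanceTreeConstructor().nj(DistanceMatrix(names, matrix))
print(tree)
```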
Genomic polymorphisms of HPV-LCR
30 SNPs were identified in the HPV18 LCR. T7592C was found in all isolates. T7258A, C7529A, A7567C and A7670T were the next most common mutations in the HPV18 LCR, each with a frequency of 17.6%, and these four mutations always appeared together. No insertion or deletion was found. After BLAST on NCBI, 8 unique mutations and 9 novel variants were confirmed (Table 3).
59 SNPs were detected at 56 nucleotide positions of the HPV16 LCR. The most common mutations, G7193T and G7521A, were found in all isolates except HPV16 NC45. A7730C and G7842A were identified in 57.9% of the variants (33/57). A7175C, T7177C, T7201C and C7270T were detected in 32 isolates. No insertion or deletion mutation was found. After BLAST on NCBI, we found that 18 mutations and 9 variants had not been reported previously (Table 4).
In HPV58, 55 SNPs were found at 52 nucleotide positions. An insertion (CTTGTCAGTTTC) was detected between nucleotide positions 7279 and 7280. The most variable site was 7714 (25/67); all four nucleotides were found at this position, including 8 isolates with A7714C, 15 with A7714G and 2 with A7714T. The second most prevalent mutations are shown in Table 5.
Phylogenetic analysis
The neighbor-joining phylogenetic trees were constructed in MEGA using the patterns and sub-lineage reference sequences. 31 patterns and 9 sub-lineage reference sequences were used to build the HPV16 tree. The phylogenetic tree showed that all patterns clustered in lineage A, except NC43 (lineage C). The A branch contained 10 patterns (17 isolates) of sub-lineage A1, 4 patterns (7 isolates) of sub-lineage A3 and 18 patterns (32 isolates) of sub-lineage A4. None of the patterns belonged to sub-lineage A2 (Fig. 2).
The HPV18 tree was composed of 16 patterns and 9 sub-lineage reference sequences. All patterns were identified as lineage A, of which 8 patterns (24 isolates) were assigned to sub-lineage A1, 4 patterns (4 isolates) to sub-lineage A2, and one pattern each to sub-lineages A3 (3 isolates) and A4 (1 isolate). Two patterns (No. 33 and No. 34) were not clearly identified: No. 33 was closest to sub-lineage A5 and No. 34 closest to sub-lineage A3 in the tree (Fig. 3).
39 patterns and 7 sub-lineage reference sequences were used to build the HPV58 tree. All patterns fell into lineages A and B. Lineage A was the most prevalent, including 19 patterns (38 samples) of sub-lineage A1, 10 patterns (16 samples) of sub-lineage A2 and 8 patterns (9 samples) of sub-lineage A3. Three patterns were classified as lineage B: two as sub-lineage B1 and one as sub-lineage B2 (Fig. 4).
Prediction of transcription factor binding sites
The JASPAR database was used to investigate potential transcription factor binding sites (TFBS) in the HPV LCR and to assess whether the identified mutations affected these sites. (In Tables 3, 4 and 5, nucleotides identical to the reference sequence are marked with a dash (−); N denotes novel variants of this study; + denotes the insertion (CTTGTCAGTTTC) between positions 7279 and 7280; identical LCR sequences were assigned a specific pattern number; and the sample counts of each pattern are listed in the last column.) For the HPV16 LCR, seven variations showed potential effects on TFBS (Fig. 5a).
The JASPAR results for the HPV18 LCR indicated that 6 variations had potential effects on TFBS. T7592C was detected in all HPV18 isolates and might affect the binding sites of GATA3 and SRF. CEBPB was related to variations C7716T and C7720A, and C7716T also potentially affected the binding site of TEAD1. In addition, nucleotide sites 7857, 33 and 41 potentially affected the binding sites for HOXA5, FOXL1 and FOXC1, respectively (Fig. 5b).
In the HPV58 LCR, 5 variations and 1 insertion were found to potentially affect TFBS. The sequence CTTGTCAGTTTC was inserted into the potential binding site of the transcription factor ETS1 between positions 7279 and 7280. Additional TFs, including SOX10, SOX9, CEBPA and ESR2, showed potential binding changes owing to the mutations C7265G, A7304G, A7376G and T7791A/A7793C, respectively (Fig. 5c).
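As an illustration of this kind of in silico screen, the following sketch scans wild-type and variant sequence fragments against a JASPAR position weight matrix with Biopython's motif module; the motif file name, threshold and example sequences are hypothetical placeholders, not the study's actual inputs:

```python
from Bio import motifs
from Bio.Seq import Seq

# Load a JASPAR-format matrix, e.g. one downloaded for ETS1 (file name is a placeholder).
with open("ETS1.jaspar") as handle:
    motif = motifs.read(handle, "jaspar")

pssm = motif.pssm  # log-odds position-specific scoring matrix

# Hypothetical wild-type vs. variant LCR fragments around a candidate site.
wild_type = Seq("TTACAGGAAGTGGTT")
variant   = Seq("TTACACGAAGTGGTT")

for label, seq in (("wild-type", wild_type), ("variant", variant)):
    # Report every position scoring above an arbitrary log-odds threshold.
    hits = list(pssm.search(seq, threshold=3.0))
    print(label, "hits (position, score):",
          [(pos, round(score, 2)) for pos, score in hits])
```

A drop in the best match score between the wild-type and variant fragments would flag the mutation as potentially disrupting the binding site.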
Discussion
Persistent infection with high-risk HPV shows a significant link with cervical intraepithelial neoplasia (CIN) and invasive cervical cancer. HPV variants may have co-diversified with human populations and thus show intrinsic geographical differences in prevalence and infection [26]. Knowledge of the epidemiology of HR-HPV is vital for further study of vaccines and cervical cancer therapy in the local area.
These three types of HR-HPV showed different prevalence and mutation rates in our study. More HPV16 and HPV58 than HPV18 were detected. Only one HPV16 isolate was completely identical to the HPV16 reference sequence, while 16 (24.2%) isolates were identical to the HPV58 reference sequence. This may be attributable to the high prevalence of HPV16 and its active role in viral genome integration and cancer development, allowing its LCR to acquire and retain more variations. HPV18 is less common than HPV16/58 in Southwest China, and we did not find as many mutations in HPV18 as in those two types. T7592C was a general mutation for HPV18 in this area: it was found in all HPV18 isolates and was the only variation in 35.3% (12 isolates) of HPV18 LCR sequences. HPV58 shows different prevalence across geographical regions, with unusually high prevalence reported in East Asia, Africa and some other areas [27]. A few studies reported a high prevalence of HPV58 in Japan (8%) and Korea (16%) [28][29][30]. In China, the infection frequency of HPV58 has been reported as 9.4% in Zhejiang [31], 10% in Hong Kong [32] and 10% in Taiwan [33]. In this study, HPV58 accounted for 12% of the positive samples, suggesting that further vaccine studies need to consider this type as a target.

Fig. 3. Neighbor-joining phylogenetic tree of HPV18 patterns based on LCR sequences. Reference sequences of sub-lineages are marked with a red dot; *, marks the sub-lineage to which the pattern was not clearly identified but is closest.
The distribution of the sub-lineages was largely consistent with previously reported data. In this study, lineage A accounted for a large proportion of all three types; lineage A has been reported to be more related to pathogenicity than the other lineages.
For HPV16, a worldwide phylogenetic analysis indicated that the European lineages (A1-A3) are epidemic in many regions, whereas A4 is largely found in Eastern Asia [34]. In our study, 56% of HPV16 variants clustered into sub-lineage A4, which is significantly associated with an elevated risk of cervical cancer compared with A1-A3 [35].
All HPV18 isolates were identified as lineage A. Global data on lineage A show that sub-lineage A1 predominates in Eastern Asia and the Pacific, while sub-lineages A3/A4 are prevalent in many regions around the world; A5 isolates are detected principally in Africa [36]. Our data showed that HPV18 samples were mostly identified as A1, with small numbers of A2/A3/A4; 70.6% of HPV18 isolates clustered in sub-lineage A1. Amador-Molina et al. found that the variations of sub-lineage A1 affected Ori function significantly more than the other variants [37]. This effect may be related to changes in the keratinocyte enhancer (KE) region of the LCR, as mutations in this domain have been reported to affect HPV replication [38].

Fig. 4. Neighbor-joining phylogenetic tree of HPV58 patterns based on LCR sequences. Reference sequences of sub-lineages are marked with a red dot.
For HPV58, globally, sub-lineage A2 is the most widespread variant, whereas sub-lineages A1 and A3 are rarely found outside Asia. A1 was the most prevalent sub-lineage of HPV58 in our study; A2 and A3 were also found. 95.4% of the HPV58 isolates clustered in lineage A, which has shown a much stronger association with CIN2/CIN3+ than lineages B/C/D [30]. We also found 3 variants of lineage B, which is rarely observed in East Asia [39].
The HPV LCR, which contains the binding sites for both viral and cellular factors, has regulatory functions in HPV replication, the transcriptional activity of E6/E7 and other interactions throughout the viral life cycle [10,13,28]. Mutations in the LCR may influence these binding sites and LCR function. In our data, mutations of sub-lineages that are more related to pathogenicity showed potential effects on TFBS. In HPV16, A7730C, a variant of sub-lineage A4, potentially affected PHOX2A, a transcription factor involved in cell proliferation and migration in lung cancer [40]. CEBPB, FOXL1 and HOXA5 were the transcription factors potentially affected in sub-lineage A1 of HPV18. CEBPB is a leucine-zipper transcription factor that regulates growth and differentiation of hematopoietic and epithelial cells; a study in breast cancer found that CEBPB is a novel transcriptional regulator of CLDN4, and upregulation of CEBPB-CLDN4 signaling promoted cancer cell migration and invasion [41]. Homeobox A5 (HOXA5) is a member of the homeobox (HOX) family and is upregulated in many types of tumors [42]. Forkhead box L1 (FOXL1) is a member of the Forkhead box (FOX) superfamily and has been reported to be dysregulated in various types of cancers; upregulation of FOXL1 greatly inhibits cell proliferation, migration and invasion in vitro and tumorigenicity in nude mice [43]. In HPV58, nucleotide sites 7265 and 7266 were among the most variable sites in the LCR and are also potential binding sites of SOX10, a transcription factor of the sex-determining region Y (SRY)-related high-mobility group (HMG)-box gene family. SOX10 has been suggested as a useful marker for corresponding tumors [44], although it is usually silenced or downregulated in malignant tumors such as digestive cancers [45] and prostatic carcinoma [46]. The insertion (between 7279 and 7280), which is usually detected in sub-lineage A3, may affect the binding site of ETS1. ETS1 belongs to the large ETS domain family of transcription factors and is involved in cancer progression; ETS1 expression is mostly linked to poor survival and contributes to the acquisition of cancer cell invasiveness, EMT (epithelial-to-mesenchymal transition), the development of drug resistance and neo-angiogenesis [47].
Conclusion
In conclusion, this study investigated the gene polymorphisms, phylogeny and relevant functional predictions of high-risk HPV LCR from Southwest China. Although our study has some limitations in sample size and source, it provides more data for understanding the intrinsic geographical relatedness of HPV-16/18/58 variants and the complicated relations among HPV-16/18/58 LCR mutations, transcription factors and carcinogenesis. It also supports further studies to demonstrate the biological function of HPV-16/18/58 LCR variants and the effect of multiple infection with high-risk HPV on tumor progression. The TFBS we identified still need deeper exploration of their potential as markers for diagnosis and therapy.
Mini-Photoselective Vaporization of the Prostate for Difficult Intermittent Self-Catheterization
Bladder neck incision or transurethral incision of the prostate is a procedure described for men with bladder outflow obstruction associated with a gland size of less than 30 ml. We report a case of a man with detrusor dysfunction who was having increasing difficulty performing clean intermittent self-catheterization of the bladder. The successful use of the 120 W lithium triborate laser to perform a "mini-photoselective vaporization of the prostate" ("mini-PVP") enabled discharge of the patient on the same day as well as resolution of the patient's difficulties in performing self-catheterization. Mini-PVP has proven to be a simple and effective approach to resolution of a prostate configuration impeding the process of clean intermittent self-catheterization.
Bladder neck incision or transurethral incision of the prostate (TUIP) is a procedure described for men with bladder outflow obstruction and a gland size of less than 30 ml. Not uncommonly, a transurethral resection of the prostate (TURP) is performed instead, or a "mini-TURP" when only the occlusive median lobe is resected. Whether treated by mini-TURP or TURP, most patients undergoing this procedure require continuous bladder irrigation for 24 hours postoperatively and are discharged on the day following surgery. An uncommon indication for performing surgery is to allow for ease of self-catheterization when the configuration of the prostate interferes with this process. We report here the successful use of the 120 W lithium triborate laser to perform a "mini-photoselective vaporization of the prostate" ("mini-PVP"), which allowed for resolution of the patient's self-catheterization difficulties and discharge of the patient from the hospital on the same day.
CASE REPORT
A 75-year-old man was referred owing to difficulty in performing clean intermittent self-catheterization (CISC) of the bladder. Approximately 3 years earlier, he had been found to have chronic urinary retention and a noncontractile detrusor on urodynamic evaluation. The precise etiology behind his detrusor dysfunction has not been clarified. He had subsequently been performing CISC 4 times daily. On occasions, he reported voiding a small amount of urine between CISCs. He was otherwise in excellent health with no significant medical comorbidities. Over a period of 2 months, he described increasing difficulty in passing the catheter. He felt obstructed at the level of the bladder neck, and on occasions was unable to pass the catheter all the way into the bladder. An attempt to improve the success of CISC with the use of a Coude tip catheter was not successful.
Cystoscopic examination revealed a small prostate with only slight lateral lobe protrusion. The posterior lip of the bladder neck was "high-riding" (Fig. 1).
A concavity in the posterior bladder neck is indicated by the arrow in Fig. 2. This is consistent with indentation due to the catheter striking this area "end on." The ureteric orifices were noted to be particularly close to the bladder neck. Consequently, performing a standard bladder neck incision would potentially place the orifices at risk of injury. Using the 120 W lithium triborate laser at a setting of 80 W power, the ridge of the bladder neck tissue was vaporized at the midline. This straightened out the prostatic urethra to allow for easier catheter insertion (Fig. 3). A total of 25 kJ of energy was used, and the laser time was 4 minutes.
There was no bleeding associated with the procedure. A 16 Ch latex Foley catheter was placed at the completion of the procedure. The catheter was removed 2 hours after the procedure and the patient was discharged. The patient recommenced CISC that afternoon. There was no bleeding in the postoperative period. The catheters now pass easily and there have been no further episodes of difficult CISC. Since this procedure, the patient has observed a significantly increased level of spontaneous urethral voiding and now finds it necessary to perform ISC only twice daily.
DISCUSSION
Performing surgery for bladder outflow obstruction for reasons other than obstruction to the flow of urine is unusual. A detailed search of the literature failed to uncover a previous description of endoscopic prostate surgery being performed for the indication of difficulty in passing catheters for CISC.
In this case, there was no objective evidence that the prostate configuration was obstructive to urine flow. This would be difficult to establish in the presence of detrusor dysfunction. The prostate configuration did, however, impede the easy passage of a catheter for the purposes of CISC. This was evidenced by what was observed to be a clear indentation concavity on the surface of the "high-riding" bladder neck seen endoscopically.
This case demonstrates the efficacy of mini-PVP in the treatment of a high bladder neck. The term mini-PVP describes a minimal PVP with only as much vaporization as necessary, in this case the creation of an easy channel for the passage of a catheter. There were no differences in technique or patient preparation other than the short duration of the procedure. Owing to the small quantity of tissue vaporized, it was possible to perform this procedure as a day case. Consequently, the patient was minimally inconvenienced and significant inpatient treatment costs were saved. There are few cost comparison studies between PVP and TURP, although an Australian randomized controlled trial estimated cost savings of 22% in favor of PVP [1]. However, these data were based on a 24-hour admission for PVP; thus, day-case mini-PVP would have an even greater potential cost advantage over mini-TURP. Performing a TUIP or a bladder neck incision requires similar aftercare to TURP, although most patients are able to be discharged home the following day [2]. That procedure differs in that an incision is primarily made at the 5 or 7 o'clock position, which, given the anatomy in this particular case, could have placed the ureteric orifices at risk. With PVP, the tissue was removed by vaporization rather than by being incised, which in this case enabled safe removal of the tissue in the path of catheterization in the midline.
Alternative methods of treating this patient include using a Coude tip catheter, but this approach was not successful in this patient. Coude tip catheters can make catheterization easier, and this is a popular approach after failed conventional catheterization [3].
It was of interest that the patient experienced a greater degree of spontaneous urethral voiding subsequent to surgery. This implies that despite there being detrusor dysfunction, there was probably an element of bladder outlet obstruction as well. The relief of such obstruction has favorably tipped the balance to enable voiding, although the degree of detrusor function present remains sufficient to necessitate continued CISC.
The mini-PVP approach has been demonstrated in this case to be a simple and effective approach to resolution of a prostate configuration impeding the process of CISC.
Comparative study of Dot enzyme immune assay and Widal test in the diagnosis of Typhoid fever in a tertiary care hospital in south Kerala
Typhoid fever is an acute and often life-threatening febrile illness caused by systemic infection with the bacterium Salmonella enterica serotype Typhi. The disease is endemic in the Indian subcontinent including Bangladesh, as well as South-East Asia, the Middle East, Africa, and Central and South America 1 . The signs and symptoms of typhoid fever are non-specific, so diagnosis relies not only on the clinical features of the disease but also on investigative methods such as culture and sensitivity testing and detection of agglutinating antibodies to Salmonella Typhi by the Widal test. Serologic diagnosis of typhoid fever by immunochromatographic test (ICT) is a good alternative 2 . Aim: To compare the Dot enzyme immunoassay and Widal test for the detection of typhoid fever among febrile patients of the Medical College, Thiruvananthapuram. Materials and Methods: A cross-sectional study was conducted among febrile patients attending Govt. Medical College, Trivandrum, for a period of 6 months, from March to September 2015. Result: Out of 433 cases, 26 (6%) were positive and 407 (94%) were negative by the Widal test, whereas 21 (4.85%) were positive and 413 (95.15%) were negative by immunochromatography. The sensitivity and specificity of the immunochromatographic method were 80.8% and 100%, respectively, taking the Widal test as the standard. Conclusion: The Widal test has been used extensively as a laboratory tool for the diagnosis of typhoid fever in most laboratories, but it is laborious and time-consuming, may not be positive in early stages, and must be interpreted judiciously 3 . ICT is a simple and sensitive test for early diagnosis of typhoid fever. The results can be interpreted visually and are available within one hour.
Introduction
Typhoid fever, caused by Salmonella Typhi, is widely recognized as a major public health problem in many developing countries. India is the second most populous country in the world, with the majority inhabiting rural areas with little access to modern diagnostic tools. Typhoid fever is presumed to be a major health problem in all parts of the world where safe drinking water and sanitation are inadequate. It is a systemic infection transmitted through the faeco-oral route by the consumption of contaminated water and food, particularly raw or undercooked meat, poultry, eggs and milk. Chronic typhoid carrier status may be responsible for the endemicity and outbreaks of the disease in the region. The O and H antigens are the major antigens used to serotype Salmonella. The O antigens are similar to the O antigens of other Enterobacteriaceae, but the H antigens differ in that they are diphasic, i.e., they can exist in either of two major antigenic phases: phase 1 (specific phase) and phase 2 (non-specific phase). The O antigen is less immunogenic than the H antigen, and the titre of O antibody in serum after infection or immunisation is generally lower than that of H antibody 4 . S. Typhi produces a surface antigen enveloping the O antigen, referred to as the Vi antigen. The Vi antigen is poorly immunogenic and induces production of a low titre of antibody following infection. Vi antibody disappears in the early phase of convalescence; persistence of this antibody indicates the development of the carrier state 1 . The signs and symptoms of typhoid fever are non-specific, so a definitive diagnosis of the disease based on the clinical presentation alone is very difficult. Laboratory-based investigations are therefore essential for supporting the diagnosis of typhoid fever. The gold standard for the diagnosis of typhoid fever is the isolation of Salmonella Typhi from appropriate samples including blood, bone marrow, urine and stool 5 . Culture is not always available, and when it is, it takes 2 to 3 days; nevertheless, culture isolation of S. Typhi remains the most effective diagnostic procedure in suspected typhoid fever. Delayed and inaccurate diagnosis and treatment result in increased cost and higher rates of serious complications and deaths. Drug resistance in S. Typhi is a major problem for public health authorities. The emergence of antibiotic-resistant strains of the bacteria is closely linked to the irrational use of antibiotics in treating human infections. Resistance to commonly used antibiotics such as chloramphenicol, ampicillin and cotrimoxazole has been reported from different parts of the world including India 6 . In developing countries, facilities for isolation and culture are often not available, especially in smaller hospitals. A definitive diagnosis of the disease is required for treatment and to decrease morbidity, mortality and transmission. Other methods include detection of S. Typhi-specific antibodies by serological tests, antigen detection by immunological tests, and identification of nucleic acid by polymerase chain reaction 7 . The present study was designed to identify cases of typhoid fever using the Widal test and ICT.
The ICT method has been shown to be cheap, less time-consuming, applicable for field use, easy to perform, and highly sensitive and specific for the detection of antibodies in patients with typhoid fever. The ICT method was therefore applied for the detection of S. Typhi-specific IgM antibodies in blood samples.
Materials and Methods
A cross-sectional study was conducted among febrile patients attending Govt. Medical College, Trivandrum, for a period of 6 months, from March to September 2015. 5 ml blood samples were collected under aseptic precautions and serum was separated as soon as possible to avoid haemolysis. Samples were stored at 2-8°C for up to 48 hours; for long-term storage, serum was kept at −70°C. The samples were subjected to immunochromatography and the Widal test. Enterocheck-WB is a rapid, qualitative immunoassay for the detection of IgM antibodies to S. Typhi in human serum/plasma or whole blood specimens. It qualitatively detects IgM-class antibodies to lipopolysaccharide (LPS) specific to S. Typhi in an indirect solid-phase immunochromatographic assay. The specific Salmonella Typhi antigen is immobilized onto a cellulose nitrate membrane strip. The conjugate pad contains two components: anti-human IgM antibody conjugated to colloidal gold and rabbit globulin conjugated to colloidal gold. As the test specimen flows through the membrane test assembly, the anti-human IgM antibody-colloidal gold conjugate complexes with the S. Typhi-specific IgM antibodies in the specimen and travels along the membrane by capillary action. This complex moves further along the membrane to the test region (T), where it is immobilized by the S. Typhi-specific LPS antigen coated on the membrane, leading to the formation of a pink to pink-purple coloured band. The timing of the test is important, as antibodies begin to rise toward the end of the first week; the titre increases during the second, third and fourth weeks, after which it gradually declines. The test may be negative in the early part of the first week. A single test is usually of limited value; a rise in titre between two serum specimens is more meaningful than a single test. Positive results are reported only after correlating with clinical features.
Result
The study population included 433 patients attending various outpatient departments and those admitted in Medical College Hospital, Thiruvananthapuram, with complaints of fever. Table 1 (serological analysis of samples tested by the Widal test) shows that out of 433 cases, 26 (6%) were positive and 407 (94%) were negative by the Widal test. Table 2 (serological analysis of samples tested for Salmonella antibodies by immunochromatography) shows that out of 433 cases, 21 (4.85%) were positive and 413 (95.15%) were negative by immunochromatography. Table 3 shows that out of 26 Widal-positive cases, 5 were negative by immunochromatography. The sensitivity and specificity of the immunochromatographic method were 80.8% and 100%, respectively, taking the Widal test as the standard; the positive predictive value was 100% and the negative predictive value was 98.8%. Table 4 shows that positivity by both the Widal and ICT methods was highest in the 0-10 age group, followed by the 41-50 age group. Of the 26 Widal-positive cases, 5 showed co-infection with leptospirosis; in the Widal test these cases gave a TH titre only, and these leptospira-positive cases were negative for typhoid antibodies by the ICT method.
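The accuracy figures reported above follow directly from the 2×2 contingency table implied by Tables 1-3 (TP = 21, FN = 5, FP = 0, TN = 407, with the Widal test as the reference standard); a small sketch of the arithmetic:

```python
# 2x2 contingency table with the Widal test as the reference standard.
TP, FN = 21, 5     # ICT+/Widal+, ICT-/Widal+
FP, TN = 0, 407    # ICT+/Widal-, ICT-/Widal-

sensitivity = TP / (TP + FN)   # 21/26   = 80.8%
specificity = TN / (TN + FP)   # 407/407 = 100%
ppv = TP / (TP + FP)           # 21/21   = 100%
npv = TN / (TN + FN)           # 407/412 = 98.8%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.1%}")
```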
Discussion

In this study, the sensitivity and specificity of ICT were calculated taking the Widal test as the standard. The sensitivity and specificity of ICT in suspected typhoid cases were found to be 80.8% and 100%, respectively. Of the 26 Widal-positive samples, ICT positivity was seen in 21 cases; the cross-reactive samples were ICT-negative. ICT has been evaluated in many countries, where significantly higher sensitivity and specificity were found 11,12,13 . An evaluation of ICT in India found it to be 100% sensitive and 80% specific compared with blood culture as the gold standard 12,16 .
It has been observed that water and sewage pipelines lie close together in the slum areas of India and are prone to leakage and cross-contamination.
In the present study, both the Widal and ICT methods were used for the detection of antibodies to typhoid, and the sensitivity, specificity and usefulness of ICT were studied. ICT is a simple and sensitive test for early diagnosis of typhoid fever in children. The results can be interpreted visually and are available within one hour.
Conclusion

433 serum samples were analyzed for antibodies to Salmonella Typhi in this study, and typhoid prevalence in this area was found to be 6%. All the signs and symptoms of the disease are non-specific and common to other acute febrile illnesses, so a definitive diagnosis is required for treatment and to decrease morbidity, mortality and transmission. ICT can be used as a suitable method for rapid diagnosis of typhoid fever. Detection of IgM antibody from whole blood by the ICT method is easy, non-invasive, and highly sensitive and specific, and it is useful for small, less well-equipped laboratories. Since the antibody detection rate of the ICT method is quite satisfactory, this test can be applied at field level. Efforts should therefore be made to establish IgM antibody detection from whole blood by the ICT method at field level, especially in the endemic areas of developing countries like India, even though the standard test remains the Widal tube agglutination test.
Calculation of the pharmacogenomics benefit score for patients with medication-related problems
Unexpected poor efficacy and intolerable adverse effects are medication-related problems that may result from genetic variation in genes encoding key proteins involved in pharmacokinetics or pharmacodynamics. Pharmacogenomic (PGx) testing can be used in medical practice “pre-emptively” to avoid future patient harm from medications and “reactively” to diagnose medication-related problems following their occurrence. A structured approach to PGx consulting is proposed to calculate the pharmacogenomics benefit score (PGxBS), a patient-centered objective measure of congruency between medication-related problems and patient genotypes. An example case of poor efficacy with multiple medications is presented, together with comments on the potential benefits and limitations of using the PGxBS in medical practice.
Background
There is growing interest in using pharmacogenomics (PGx) broadly in medical practice to improve the chances of therapeutic success in individual patients by precision dosing (Polasek et al., 2018; Polasek et al., 2019). Clinical guidelines are available to instruct doctors on how to prescribe select medications based on patient genotypes (Relling and Klein, 2011). Ideally, this should be done prior to commencing treatment, which is called "pre-emptive" PGx testing. There are many examples in well-resourced healthcare systems of PGx services being implemented successfully, usually via electronic clinical decision support systems (CDSS) (Dunnenberger et al., 2016); patients are screened and almost all (>95%) are found to have genetic variants with so-called "actionable PGx guideline recommendations" that could influence future prescribing (Mostafa et al., 2019). Less frequently addressed in the PGx literature is the clinical scenario where patients have histories of medication-related problems at standard doses without an obvious explanation, either unexpected poor efficacy or intolerable adverse effects. "Reactive" PGx testing can be used in these patients to diagnose whether PGx is the potential cause. Pharmacogenomic testing is therefore a unique pathology test that has dual clinical utility depending on when the test is ordered and/or reviewed relative to the medication prescribed, i.e., a screening test to avoid future patient harm and a diagnostic test in the work-up of differential diagnoses. Whilst there is growing evidence for pre-emptive PGx testing to decrease adverse drug reactions (ADRs), by as much as 30% in some studies (Zhou et al., 2015; Cacabelos et al., 2019; Swen et al., 2023), the degree to which reactive PGx testing diagnoses the cause of medication-related problems is unclear.
In this report, a structured approach to PGx consulting by a clinical pharmacologist is described based on referrals of patients with current and/or past medication-related problems (Aronson, 2010). The pharmacogenomics benefit score (PGxBS) is proposed as a patient-centered objective measure of congruency between medication-related problems and patient genotypes. An example case of unexpected poor efficacy with multiple medications is presented to show how the PGxBS is calculated. Finally, consideration is given to the potential benefits and limitations of using the PGxBS in medical practice.
Categories of PGx
There are three categories to consider when diagnosing PGx as the potential cause of medication-related problems.
1) Exposure PGx. Is the patient at risk of extreme exposure to the medication at standard doses? Pharmacokinetic processes determine "how much" a medication is available at the sites of action, and therefore, assuming typical dose-exposure-response relationships, the magnitude of response. Extremely high medication exposures are associated with an increased risk of adverse effects, whereas persistently low medication exposures may result in subtherapeutic concentrations and poor efficacy. Although many genes influence pharmacokinetics, the cytochrome P450 (CYP) enzymes are the most important for PGx (Doogue and Polasek, 2013).
2) Response PGx. Does the patient have the correct molecular target for the medication? At a given exposure, genetic variability in the molecular target can determine the response (pharmacodynamics). This is best exemplified currently in hematology and oncology; patients are treated with targeted pharmacotherapy based on the results of genetic testing of the molecular targets expressed by cancer cells (Polasek et al., 2016). This category will expand in the future as genomic analyses identify novel pharmacodynamic biomarkers of response (Dawed et al., 2023).
3) Safety PGx. Is the patient at risk of a severe adverse drug reaction to the medication at standard doses? There is some overlap here with category 1 (Exposure PGx) and category 2 (Response PGx) but this category primarily includes rare severe cutaneous adverse drug reactions (SCARs) in patients with certain human leukocyte antigen (HLA) genotypes. In these cases, the patients' immune system carries genetic variants that significantly increase the likelihood of ADRs (Kloypan et al., 2021).
Structured PGx consult
A structured approach to PGx consulting by a doctor is suggested here because medication-related problems should be considered under differential diagnoses. This requires diagnostic skill and experience, and a broader understanding of the patient beyond simply medications and genotypes (Aronson, 2010) (Figure 1). The doctor may or may not have access to an electronic CDSS with PGx guidance (Wake et al., 2021). The following steps outline the information required to calculate the PGxBS. Binary responses to the main steps are required. A spreadsheet can be used to log answers and calculate scores.
1) List current and past medications. Current medications have priority. Past medications are also important to capture if time permits, since clues on how patients respond to medications more broadly may be garnered, further informing the PGxBS.
2) Determine availability of PGx guidelines. For each medication, determine whether Clinical Pharmacogenomics Implementation Consortium (CPIC®) level A or A/B evidence is available (www.cpicpgx.org). Assign 1 if the answer is "yes" and 0 for "no/unsure". If there are no medications with PGx guidelines, then the PGx consult is complete and the PGxBS for the patient is 0.
3) Assess adequacy of therapeutic trials. For each medication with PGx guidelines, determine whether the patient had an adequate therapeutic trial or not. Assign 1 if the answer is "yes" and 0 for "no/unsure". Inadequate therapeutic trials from underdosing or short durations of treatment are common and should be recognised, scoring 0. This section can also be completed for medications without PGx guidelines to improve the medication history, but those responses do not count towards the PGxBS.
4) Determine therapeutic outcomes. Two types of medication-related problems indicate negative therapeutic outcomes that could be explained by PGx: unexpected poor efficacy or intolerable adverse effects (Polasek et al., 2018). One is chosen here, scoring 1, with the alternative scoring 0. Medications with inadequate therapeutic trials (step 3) are ignored.
5) Determine congruency between therapeutic outcomes and PGx results. Is each medication-related problem consistent with the genotype-predicted phenotype? Again, this is a binary option, with congruent results scoring 1 and incongruent results scoring -1. The same PGx guidelines from step 2 (CPIC®) should be used. An example of a congruent result is a patient who experienced SCAR after starting allopurinol and who was subsequently shown to carry the HLA-B*5801 allele (Lucas and Droney, 2022). Alternatively, a chronic pain sufferer with a CYP2D6 poor metabolizer (PM) phenotype who experienced euphoria and intolerable dizziness and nausea with low-dose tramadol is an example of an incongruent result (Crews et al., 2021).
6) Calculate the PGxBS. Congruent and incongruent results are added. Scores ≥1 indicate a possible contribution of PGx to medication-related problems, whereas 0 and negative scores show that PGx is less likely to be important for the patient.
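To make the scoring arithmetic of steps 2-6 concrete, here is a minimal sketch of the tally; the function and field names are illustrative rather than part of the published method, and the example data reproduce the case presented in the next section.

```python
# Minimal sketch of the PGxBS tally from steps 2-6 (illustrative names).
def pgx_benefit_score(medications):
    """Sum congruency scores over qualifying medications.

    Each medication dict carries binary answers:
      has_guideline  - CPIC level A or A/B evidence available (step 2)
      adequate_trial - adequate therapeutic trial (step 3)
      congruent      - outcome matches genotype-predicted phenotype (step 5)
    """
    score = 0
    for med in medications:
        if not (med["has_guideline"] and med["adequate_trial"]):
            continue  # steps 2-3: medication does not count towards the score
        score += 1 if med["congruent"] else -1  # step 5
    return score  # step 6: >=1 suggests PGx may contribute

# Example case below: codeine (CYP2D6 UM, incongruent) and clomipramine
# (CYP2D6 UM / CYP2C19 NM, congruent) give a PGxBS of -1 + 1 = 0.
meds = [
    {"name": "codeine", "has_guideline": True, "adequate_trial": True, "congruent": False},
    {"name": "clomipramine", "has_guideline": True, "adequate_trial": True, "congruent": True},
]
print(pgx_benefit_score(meds))  # 0 -> PGx less likely to explain the problems
```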
Example case
A 42-year-old man with a 6-year history of depression, anxiety, insomnia, and chronic lower back pain was referred by his general practitioner to a multi-disciplinary ambulatory care clinic staffed by clinical pharmacologists for "poor responses to psychotropics and pain killers". His mental state had deteriorated over the previous 3 months, and he was awaiting psychiatrist review. Figure 2 shows the spreadsheet used to document the consult and calculate his PGxBS. Since the patient had two medications with PGx guidelines and no previous PGx testing, it was recommended, and the patient accepted the cost (~$100 USD). The PGx results were reconciled with the medication-related problems at the follow-up appointment. His CYP2D6 ultra-rapid metabolizer (UM) phenotype was incongruent with poor analgesic response to codeine (score = -1). However, there was congruency between CYP2D6 UM and CYP2C19 normal metabolizer (NM) phenotypes and no improvement in mental state with clomipramine (score = 1). The PGxBS was 0. Importantly, this patient held strong beliefs about being "abnormal" and "unable to be helped by drugs". Counselling was provided to explain that no known genetic cause for his poor responses was found. The patient was encouraged to be positive about medications in his overall treatment. The PGx spreadsheet was included in the medical consult note and forwarded to his treating general practitioner and psychiatrist (Figure 2).
Potential benefits of the PGxBS
The PGxBS is a clinically useful objective measure of congruency between medication-related problems and patient genotypes. The score is patient-centered rather than focused on individual medication-gene pairs ("this is your PGxBS"). The score is easy to understand for patients and non-expert PGx users: positive results indicate a possible role for PGx, whereas zero and negative scores mean that PGx is less likely to be important. The PGxBS may be applied to patients with single or multiple current and/or past medication-related problems. The PGxBS is dynamic and changes with time and changing medication regimens. Calculating a patient's PGxBS requires particular attention to the medication history, which alone has benefits for clinical care. Importantly, the structured PGx consult allows for patient education on the many factors that explain why different patients respond to medications differently, including pharmacokinetic drug-drug interactions that cause CYP phenoconversion (Mostafa et al., 2021; Mostafa et al., 2022). Although the emphasis in this report is on medical practice, pharmacists with expertise in PGx could calculate the PGxBS and integrate it into their clinical practice, ideally in close collaboration with the treating doctor (Polasek et al., 2015).
Limitations of the PGxBS
The PGxBS does not apply to pre-emptive PGx testing, where, at least in principle, almost all patients will benefit i.e., >95% have genetic variants with so-called "actionable PGx guideline recommendations" (Mostafa et al., 2019;Swen et al., 2023). In patients with medication-related problems who have not been tested, two or more medications with PGx guidelines is the suggested cut-off for reactive PGx testing. This is only a guide since the clinical need (indication) for reactive PGx testing depends on many factors, including disease status, differential diagnoses, severity of treatment outcomes, treatment alternatives, and test affordability. The PGxBS is not validated for clinical decision-making, including prescribing. To date, the score has not been applied beyond one clinical pharmacology referral stream in Australia. Whether a patient's present score reflects the future clinical utility of PGx for that patient is unknown. The PGxBS often depends on the recollection of subjective past experiences with medications, occurring years previously in some cases, and there may be intrinsic biases. Finally, there are nuances to the PGxBS that are debatable, such as the PGx guidelines and levels of evidence chosen (step 2).
Conclusion
Despite the promise of superior patient care and considerable academic and commercial interests, adoption of PGx in routine medical practice has been limited (Pearce et al., 2022). Whilst there is growing evidence for pre-emptive PGx testing to avoid ADRs, the degree to which reactive PGx testing diagnoses the cause of medication-related problems is less clear. Rather than details about individual medication-gene pairs, patients with histories of medication-related problems and their doctors are often more interested in whether PGx is "the answer". In such cases, a structured approach to PGx consulting is recommended to generate the PGxBS, a patient-centered objective measure of congruency between medication-related problems and patient genotypes.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Transcutaneous Slowly Depolarizing Currents Elicit Pruritus in Patients with Atopic Dermatitis
Slowly depolarizing currents applied for one minute have been shown to activate C-nociceptors and provoke increasing pain in patients with neuropathy. This study examined the effect of transcutaneous slowly depolarizing currents on pruritus in patients with atopic dermatitis. C-nociceptor-specific electrical stimulation was applied to areas of eczema-affected and non-affected skin in 26 patients with atopic dermatitis. Single half-sine wave pulses (500 ms, 0.2-1 mA) induced itch in 9 patients in eczema-affected areas of the skin (numerical rating scale 5 ± 1), but pain in control skin (numerical rating scale 6 ± 1). Sinusoidal stimuli (4 Hz, 10 pulses, 0.025-0.4 mA) evoked itch in only 3 patients in eczema-affected areas of the skin, but on delivering pulses for one minute (0.05-0.2 mA), 48% of the patients (n = 12) reported itch with numerical rating scale 4 ± 1 in areas of eczema-affected skin. The number of patients reporting itch in eczema-affected areas of the skin increased with longer stimulation (p < 0.005). These results demonstrate a reduced adaptation of peripheral C-fibres conveying itch in patients with atopic dermatitis. Sensitized spinal itch processing has been suggested before in atopic dermatitis patients, and this could be present also in our patients, who might therefore benefit from centrally acting antipruritic therapy.
Transcutaneously delivered rectangular-shaped electrical stimuli of high frequency (up to 200 Hz) have been shown to activate primary afferent skin nerve fibres and, when administered to the wrist and ankle, can evoke itch in healthy control subjects and patients with atopic dermatitis (AD) (1-4). The long pulse duration (2-8 ms) and the long delay between stimulation and sensation suggest that unmyelinated C-fibres are critically involved (1), but rectangular electrical pulses preferentially activate thick myelinated axons. We recently developed electrical stimulation paradigms that preferentially activate either mechano-sensitive (5) or both mechano-sensitive and -insensitive ("silent") C-nociceptors (6) in hairy human skin (7). Chemical activation of these 2 C-nociceptor classes in the skin has been shown to drive itch (8,9), but spinal circuits involved in chemical (and also mechanical) itch processing have to be considered (10). In particular, gastrin-releasing peptide (GRP) and GRP-receptor (GRPR) positive neurones (11,12), as well as natriuretic polypeptide b (Nppb) receptor-expressing neurones in the dorsal spinal cord (13,14), have been identified as major components of spinal itch circuits. Intriguingly, only repetitive burst activation of presynaptic GRP-positive neurones was sufficient to depolarize postsynaptic GRP-receptor-positive neurones and thereby relay pruritoceptive information (11).
In the current study patients with AD were stimulated with slowly depolarizing electrical stimuli that specifically activate unmyelinated C-fibres. In order to assess peripheral nociceptor accommodation and the potential "opening of the spinal gate for itch" (11), sinusoidal pulses were delivered continuously for 1 min to the patients' eczematous and control skin. This particular stimulation paradigm of ongoing sinusoidal stimulation was perceived as increasingly painful in patients with painful neuropathy (6) and it was hypothesized that it might evoke progressively increasing pruritus in patients with chronic itch in a similar fashion.
MATERIALS AND METHODS
The study procedure was approved by the local ethics committee of the University of Heidelberg and the study protocol was in accordance with the principles of the Declaration of Helsinki. All patients had AD, as diagnosed by an experienced dermatologist (EW), and were recruited at the Department of Occupational Dermatology (University of Heidelberg). A total of 26 patients (10 female, 16 male, mean age 48 ± 21 years) signed written informed consent and participated in the study. None of the patients were told not to scratch the areas that had been most itchy over the last few days. All patients were using non-medical skin care products or moisturizing ointments for eczema treatment at the time of investigation. One patient was on long-term cyclosporine treatment for AD. Two patients took oral non-sedative antihistamines prior to the investigation because of allergic rhinoconjunctivitis. None of the patients were told not to use steroid creams before the investigation.
Study protocol
Patients were informed about the aim of the study and familiarized with the electrode being used for transcutaneous electrical stimulation. A pair of rounded bipolar platinum electrodes (diameter 0.4 mm, distance 2 mm, Nørresundby, Denmark) were mounted in a 3D-printed applicator and attached to the subject's skin (Fig. 1). A training session was run to familiarize the patients with the slowly depolarizing electrical stimulation protocol and the use of the numerical rating scale (NRS) for stimulus-evoked itch or pain estimation. For electrical stimulation, sine wave and half sine wave pulses were generated by a constant current stimulator (Digitimer DS5, Welwyn Garden City, UK) connected to a Digital-Analogue Converter (DAQ NI USB-6221, National Instruments, Austin, TX, USA) controlled by Dapsys 8 software (www.dapsys.net). A single half sine wave pulse of 500-ms duration was applied to non-affected (normal) skin of the patient at increasing intensities of 0.2-0.4-0.8 mA. After each stimulus, the patient was requested to rate the intensity of itch or pain on the NRS with the endpoints 0 (no sensation felt) and 10 (maximum sensation that can be imagined). Subsequently, sine wave pulses of 4 Hz and 2.5-s duration (=10 sinusoidal cycles) were delivered to the same skin site, at increasing intensities of 0.05-0.1-0.2 mA, and maximum itch or pain was rated by the patient on the NRS. In addition, patients were instructed to report when the stimulation was no longer felt.
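For illustration, the stimulus profiles can be generated numerically as below; the 10 kHz output sampling rate is an assumption for the sketch, not a parameter reported for the Digitimer/Dapsys setup.

```python
import numpy as np

FS = 10_000  # Hz; assumed D/A output sampling rate (not from the paper)

def half_sine_pulse(amplitude_ma, duration_s=0.5):
    """Single half-sine depolarizing pulse, e.g. 500 ms at 0.2-1 mA."""
    t = np.arange(0, duration_s, 1 / FS)
    return amplitude_ma * np.sin(np.pi * t / duration_s)

def sine_train(amplitude_ma, freq_hz=4.0, duration_s=2.5):
    """4-Hz sinusoidal stimulation; 2.5 s = 10 cycles, 60 s = 1-min train."""
    t = np.arange(0, duration_s, 1 / FS)
    return amplitude_ma * np.sin(2 * np.pi * freq_hz * t)

pulse = half_sine_pulse(0.4)               # 500-ms half sine, 0.4 mA peak
burst = sine_train(0.2, duration_s=2.5)    # 10 cycles at 4 Hz
minute = sine_train(0.2, duration_s=60.0)  # continuous 1-min stimulation
```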
After the training session (data not included in the analyses) the eczema site for electrical stimulation was selected. To this end, the patients pointed to areas that had been most itchy over the last few days. Only intact sites within eczema-affected skin were selected as test areas. The investigated eczema areas were located on the lower (n = 8) and upper (n = 3) arm, the elbow region (n = 6) and the wrist (n = 2), as well as on the neck (n = 5) and lower leg (n = 2). Where possible, a contra-lateral, non-affected site was chosen for stimulating non-affected skin. If that was not applicable, a lesion-free site, preferably on the forearm, was selected as control.
The electrical stimulation protocols outlined below were applied to the control (no eczema) and eczematous skin site involving one repetition at each site. Mean values of the perceived intensities (NRS) were calculated for each stimulus (Fig. 1) at each site for analysis.
Half sine wave stimulation
Starting at the healthy control skin site, single half sine wave pulses of 500-ms duration were administered with a current intensity of 0.2-0.4-0.6-0.8-1 mA in randomized order. Between each stimulus, an interval of 10 s was maintained, allowing the patient to scale the perceived intensity of sensation (NRS 0-10) and to indicate whether itch or pain was felt. After a pause of 2 min, the half sine wave stimulation protocol was administered to the eczema site, and the NRS value as well as the quality of sensation (itch or pain) were recorded. The stimulation cycle (control/eczema skin) was repeated once.
Sensory electrical thresholds for sine wave stimuli
Next, the perception and pain thresholds of the patients' control and eczema skin to 4 Hz sine wave stimuli were evaluated. Sinusoidal pulses were administered for 2.5 s (10 sinusoidal cycles) with increasing current intensities of 0.005-0.01-0.025-0.05-0.1-0.2-0.4 mA, and the patients were requested to indicate when they first perceived the stimulus (perception threshold) and when it was felt unpleasant (painful or itchy). A time interval of 5 s was applied before the current was increased.
Dose-response to sine wave stimuli
A dose-response curve with 4-Hz sine wave stimulation was recorded (Fig. 1). Ten sinusoidal pulses (2.5 s) were applied with current intensities of 0.025-0.05-0.1-0.2-0.4 mA in randomized order (10-s time interval in between) and patients were asked to indicate the perceived intensity on the NRS (0-10), as well as to report whether the sensation was itchy or painful. The stimulation protocol started at the control skin site, followed by the eczema site. Stimulation at each site was performed twice.
Continuous sine wave stimulation
In order to record a potential accommodation of C-nociceptors, sinusoidal 4 Hz pulses were delivered continuously for 1 min (Fig. 1). The patients' sensation (NRS 0-10) was recorded at 5 and 10 s after stimulus onset and thereafter in 10 s intervals until the end of stimulation. First, the current intensity for continuous stimulation was set at the individually identified value at which 10 pulses were perceived as unpleasant (see above). Stimuli were delivered to control skin and patients were asked to estimate magnitude of perception (NRS) and whether the sensation became itchy during the 1-min stimulation period. In addition, patients were instructed to indicate as soon as the stimulation was no longer felt. After a resting period of 5 min the stimulation protocol was repeated on the eczema-affected skin. Secondly, the current intensity was set to 0.2 mA and the 4 Hz sine wave pulses delivered for 1 min, again starting on the control skin followed by the eczema site 5 min later. Similar to the measures described above, patients were instructed to rate their sensation (NRS) in regular time-intervals, indicate when perception became itchy, and when stimulation was no longer felt.
Statistical analysis
Data were analysed by analysis of variance (ANOVA) and Bonferroni post hoc tests, using Statistica 7.1 (StatSoft Inc., Tulsa, OK, USA) with p < 0.05 to identify significant differences between the factorial groups "skin site" -"current intensity" -"time of stimulation". Mann-Whitney U test was used as non-parametric comparison of 2 independent groups ("patient perceived itch" vs "patient perceived pain" on electrical stimulation). All values are depicted as mean ± standard error of mean (SEM).
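As a minimal illustration of the non-parametric group comparison used here, the sketch below runs a two-sided Mann-Whitney U test with scipy; the NRS arrays are hypothetical, not study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical NRS ratings for the two independent groups.
nrs_itch_group = [3, 4, 2, 5, 3, 4, 3, 2, 4]        # "patient perceived itch"
nrs_pain_group = [1, 2, 1, 0, 1, 2, 1, 1, 0, 1, 2]  # "patient perceived pain"

stat, p = mannwhitneyu(nrs_itch_group, nrs_pain_group, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # p < 0.05 would indicate a group difference
```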
RESULTS
All patients were diagnosed with atopic dermatitis and had a history of the disease for more than 8 years. At the time of investigation, no patient had acute itch. Electrical stimuli were delivered and corresponding NRS recordings obtained from unaffected (control) and eczematous skin sites. Both skin sites were tested twice in alternating order. Offline analysis revealed no significant difference for test repetition (ANOVA, n.s.), and thus mean NRS values were calculated from each site for statistical analysis.
Sensory thresholds to sine wave stimulation
Perception thresholds for stimulation with 4 Hz sinusoidal pulses were 0.05 ± 0.02 mA and not significantly different between control and eczema (ANOVA, n.s.). Current thresholds for inducing an unpleasant sensation of pain or itch were virtually identical at both sites (0.1 ± 0.08 mA; ANOVA, n.s.).
Half sine wave stimulation
The perceived intensity of sensation after single half sine wave pulse stimulation was stronger with increasing current intensity (ANOVA, p < 0.0001), but did not differ significantly between control and eczema (ANOVA, p > 0.4, Fig. 2A). Intriguingly, half sine wave pulses evoked an itch sensation in 9 patients, whereas 17 reported pain. A significant interaction was identified between the factorial groups "evoked itch", "current intensity", and "skin site" (ANOVA, p < 0.04), revealing that stronger half sine wave stimuli caused increasing itch. Maximum sensation on application of a 1-mA half sine pulse was, on average, NRS 4 ± 0.5 in control skin of patients without itch and NRS 5.8 ± 0.8 in patients perceiving itch (Mann-Whitney U test, p > 0.06, Fig. 2A). In eczema-affected skin, the mean intensity of sensation was NRS 4.5 ± 0.6 (no itch), and NRS 5.1 ± 0.9 in patients responding with itch (Mann-Whitney U test, p > 0.5). No significant sex differences were identified (ANOVA, p > 0.4).
Sine wave dose-response
Sinusoidal 4 Hz stimuli evoked a current intensity-dependent increase of pain (ANOVA, p < 0.0001). No significant difference was recorded between control and eczema sites (ANOVA, p > 0.8), revealing a mean NRS of 6.2 ± 0.4 (control) and 5.4 ± 0.4 (eczema) upon 2.5-s stimuli at 0.4 mA (Fig. 2B). Only 3 patients reported an itch during the 10 sine wave pulses, and no significant difference in the NRS values was calculated between the patient groups (ANOVA, p > 0.4). No significant sex differences were identified (ANOVA, p > 0.4).
Continuous sine wave stimulation for 1 min
In order to identify whether ongoing sine wave stimulation induces increasing itch in AD, the current study delivered sinusoidal pulses at intensities of 0.05, 0.1 and 0.2 mA for 1 min. The choice of whether a current intensity of 0.05 or 0.1 mA was delivered depended on the patients' individual sensory threshold of stimulus-perceived unpleasantness, which had been measured previously in control and eczematous skin. Accordingly, continuous sine wave stimuli of 0.05 mA were delivered to 23 and a current of 0.1 mA to 19 patients with AD. In addition, all but one patient
Continuous sine wave stimulation of 0.05 mA (n = 23) induced itch in 9 patients and pain (no itch) in 14 patients. Of the 9 patients, 2 reported itch in control skin and 8 reported itch in the eczema-affected skin (88%; Fig. 3A). The intensity of the sensation was significantly different between the patients' groups (itch vs non-itch, ANOVA, p < 0.02). In the eczema, a maximum NRS of approximately 3 ± 0.7 was recorded in itch patients compared with NRS 1 ± 0.3 in non-itch patients (Mann-Whitney U test, p < 0.04) at 40-60 s of stimulation (Fig. 3A). No significant sex differences were identified between patients (ANOVA, p > 0.1).
When delivering current intensities of 0.1 mA (n = 19), 8 patients reported itch and 11 patients reported pain (Fig. 3B). No significant difference of intensity was recorded between control and eczema (ANOVA, p > 0.1). At both skin sites, mean maximum intensities of NRS 4 ± 1.1 were recorded from patients with itch. In contrast, significantly lower NRS of 1.5 ± 0.3 were assessed in the non-itch group (ANOVA, p < 0.005). In particular, significant NRS differences were calculated during 10-60 s of stimulation (Mann-Whitney U test p < 0.05, Fig. 3B). No significant sex differences were identified (ANOVA, p > 0.3).
Finally, a sine wave current intensity of 0.2 mA was delivered for 60 s (n = 25), which evoked itch in 12 patients and burning pain (no itch) in 13 patients. Itch or pain intensity did not differ significantly between patient groups (itch vs no itch, ANOVA, p > 0.5) or the investigated skin sites (control vs eczema, ANOVA, p > 0.3). A significant interaction was identified between the factorial groups "itch patients", "skin site" and "duration of stimulation" (ANOVA, p < 0.005), revealing that pain sensation continuously declined in the eczema-affected skin, whereas itch remained significantly elevated until 60 s of stimulation (Mann-Whitney U test p < 0.04, Fig. 3C). No significant sex differences were identified between patients (ANOVA, p > 0.3). Note that the number of patients reporting itch increased progressively with increasing length of stimuli (depicted in columns, Fig. 3). Eventually, at the end of the stimulation period, the maximum number of itch-responders was recorded. Three individuals reported itch in non-affected control skin upon sine wave stimulation, i.e. 2 patients at 0.05 and 0.1 mA, and one patient at 0.2 mA. Also, the patients' sensation stopped almost immediately, within 2 s after termination of the 60-s sine wave stimulation (not shown).
Stimulus duration dependent itch development
Fig. 3 legend (recovered): (A) The intensity of sensation was significantly different between the patient groups (itch vs non-itch, ANOVA, #p < 0.02), particularly in the eczema-affected skin during 40-60 s of stimulation (Mann-Whitney U test, *p < 0.04); 2 patients also reported itch when stimulating control skin. (B) Sinusoidal currents of 0.1 mA evoked itch in 8 (squares) and pain in 11 (circles) patients with atopic dermatitis (AD). The intensity of sensation was significantly different between the groups (ANOVA, #p < 0.005) at 10-60 s of stimulation (Mann-Whitney U test, *p < 0.05) in both control skin (left panel) and eczema (right panel). (C) Continuous sine wave stimulation of 0.2 mA evoked itch in approximately 50% of patients (n = 12) and pain in 50% (n = 13). The intensity of sensation was not significantly different between the patient groups (itch vs no-itch, ANOVA, p > 0.05) or the skin sites (control vs eczema, ANOVA, p > 0.3), but pain declined continuously (solid circles), whereas itch remained significantly elevated in the eczema-affected skin (Mann-Whitney U test, *p < 0.04).

Increasingly more patients developed an itch sensation the longer the sine wave stimulation was delivered to the eczema (Fig. 4). Within 10 s of sinusoidal stimulation, 5 patients (19%) reported itch; at 30 s, 9 patients (35%) reported itch; and at the end of the stimulation period (60 s), itch was reported by 14 of the overall 26 patients (54%; Fig. 4A). Delivering a single half sine wave pulse of 500-ms duration evoked itch in 8 patients (30%) in the eczema and in one patient in control skin (Fig. 4B).
DISCUSSION
This study investigated somatosensory responses in patients with AD to slowly depolarizing currents, delivered transcutaneously, with 500-ms half sine wave pulses and 4-Hz sine wave stimuli, both delivered to eczematous and non-affected (control) skin. Half sine wave pulses induced itch in the eczema of approximately one-third of patients. Sine wave pulses delivered continuously for 1 min evoked itch in approximately 50% of the patients (all of them also perceived half sine wave itch). Intriguingly, the number of patients reporting itch upon sinusoidal stimulation increased progressively with increasing (ongoing) sinusoidal stimulation time. Employing this novel electrical stimulation protocol, we confirm that activation of polymodal nociceptors (half sine wave pulses (5)) as well as additional recruitment of silent nociceptors (sine wave pulses (6)) induces itch in affected skin in a subgroup of patients with AD. The progressively increasing occurrence of itch upon ongoing sinusoidal stimulation indicates that sustained peripheral input from unmyelinated primary afferent neurones may facilitate spinal itch transmission, for instance by activating GRPR neurones, as shown recently (11).
Itch upon electrical stimulation
Traditionally, itch is induced experimentally by the application of chemicals, for instance histamine (endogenously released from, for example, mast cells) or mucunain (cowhage spicules), which leads to consecutive activation of C-nociceptor subclasses characterized as mechano-insensitive (responding to, for example, histamine) or mechano-responsive (responding to, for example, cowhage spicules) (9). Indirect neuronal activation in the skin using itch-provoking chemical stimuli suggested a differential contribution of C-fibre classes in atopic itch (15,16). Such chemically induced nociceptor activation involves a receptor-mediated transduction mechanism. For direct identification of particular neuronal subclasses involved in pathological itch, axonal electrical stimulation protocols of primary afferent neurons would be needed in order to circumvent the aforementioned chemical signal transduction mechanisms. It was demonstrated decades ago that rectangular electrical pulses of high frequency (25-200 Hz) and up to 5-ms pulse duration can elicit itch in the wrist and ankle in humans (1-4). Recently, slowly depolarizing electrical stimulation profiles that specifically activate mechano-responsive and mechano-insensitive C-fibres have been determined (5,6). The current study found that, in eczematous skin of AD, this electrical stimulation paradigm caused itch in approximately 50% of patients, and thereby confirmed that both subclasses of C-nociceptors can provoke itch. It is notable that the electrically induced itch sensation disappeared immediately after termination of the stimulus. It is therefore assumed that the recorded itch is not a chemical response, for instance caused by the release of histamine from skin mast cells, in which case the itch sensation would have lasted for several minutes after electrical stimulus offset.
It may be considered that a reduced descending inhibitory control is present in some (i.e. those patients responding with itch), but not all, of our investigated patients with AD. The electrical stimulation paradigm caused intense burning pain in the skin of healthy subjects (5,6). Similarly, the majority of patients in the current study reported pain on electrical stimulation of non-affected skin. Given that itch can be suppressed by painful stimuli (17-20), pain is expected to be the dominant sensation rather than itch. However, some patients with AD perceived itch in the eczema, and this observation might be due to an altered itch inhibitory control mechanism comparable to the recently reported decreased conditioned pain modulation observed in subjects with chronic pruritus (21). Admittedly, a reduced descending inhibition of itch is difficult to control for. Sine wave stimuli delivered at threshold intensity (0.05-0.1 mA) caused itch in fewer patients than supra-threshold (0.2 mA) electrical stimulation did. This result appears rather contradictory, as lower-intensity pulses would provide a weaker painful counter-stimulus and thus should be more likely to be perceived as itch. On the other hand, threshold sine wave stimulation might be too low to evoke a substantial spinal synaptic input sufficient to drive central pruriceptive neurones.
One intriguing observation was the long-lasting and progressively increasing itch sensation during the 1-min electrical sine wave stimulation period. In patients with chronic pain a similar dynamic of (in this case) pain perception was observed previously upon continuous sinusoidal stimulation, particularly at neuropathically painful skin sites, but also in non-painful areas (6). In AD, the addressed "itch"-fibres (mechanically responsive and mechano-insensitive C-nociceptors) apparently reveal a comparable lack of adaptation, both in the affected and non-affected (control) skin from patients who reported itch upon slowly depolarizing stimulation. In these patients an axonal sensitization of peripheral pruritoceptors may be considered, but central (spinal or supra-spinal) mechanisms of itch sensitization, as discussed below (11), could also be involved.
Triggering spinal itch?
Approximately 30% of patients in the current study responded with pruritus to half sine wave stimulation in eczema-affected areas, and the occurrence of itch increased with higher current intensities (0.6-1 mA). Notably, stronger half sine wave currents enhance action potential discharges of polymodal nociceptors (5). The longer the C-nociceptors were stimulated by electrical sine wave stimulation (6), the more patients with AD felt this stimulation as an itch. The progressively increasing development of itch with electrical stimulation might be due to the increased spinal synaptic input that is required to trigger itch, as shown recently (11). The authors demonstrated that repetitive bursts of presynaptic GRP neurones induce progressive depolarization of postsynaptic GRP-sensing neurones sufficient to relay spinal pruriceptive information (11). It may thus be hypothesized that the supra-threshold half sine wave, as well as the ongoing sine wave stimulation in the eczema-affected areas in the current study, provides the peripheral input needed to trigger sufficient spinal GRP release to provoke itch in a subgroup of patients with AD. The electrical stimulation profile in the current study thus provides a simple and fast experimental tool to assess axonal peripheral sensitization or facilitated central itch processing in patients with chronic itch. Patients identified as likely to have facilitated spinal processing of itch might benefit from centrally acting antipruritic therapy.
Cartilage-hair hypoplasia–anauxetic dysplasia spectrum disorders harboring RMRP mutations in two Korean children: A case report
Rationale: Cartilage-hair hypoplasia (CHH, OMIM #250250) is a rare autosomal recessive disorder belonging to the cartilage-hair hypoplasia–anauxetic dysplasia (CHH-AD) spectrum disorders. CHH-AD is caused by homozygous or compound heterozygous mutations in the RNA component of the mitochondrial RNA-processing endoribonuclease (RMRP) gene. Patient concerns: Here, we report 2 cases of Korean children with CHH-AD. Diagnoses: In the first case, the patient had metaphyseal dysplasia without hypotrichosis (MDWH), diagnosed by whole exome sequencing (WES); he exhibited only skeletal dysplasia and lacked extraskeletal manifestations such as hair hypoplasia and immunodeficiency. In the second case, the patient had skeletal dysplasia, hair hypoplasia, and immunodeficiency, which were identified by WES. Interventions: The second case is the first CHH reported in Korea. The patients in both cases received regular immune and lung function checkups. Outcomes: Our cases suggest that the differential diagnosis for children with extremely short stature from birth, with or without extraskeletal manifestations, should include CHH-AD. Lessons: Clinical suspicion is most important, and RMRP sequencing should be considered for the diagnosis of CHH-AD.
Over 90 RMRP variants have been identified in CHH-AD spectrum disorders. [1] Affected individuals with CHH-AD have been reported in most populations, but especially high incidences were found in the Finnish and Amish populations, with a prevalence of 1:23,000 (carrier frequency of 1:76) and 1-2:1000 (carrier frequency of 1:10), respectively, and about 700 individuals are currently reported as having CHH. [2] CHH is an autosomal recessive disorder caused by a homozygous or compound heterozygous mutation in the RNA component of the mitochondrial RNA-processing endoribonuclease (RMRP) gene on chromosome 9p13. [3] CHH is characterized by metaphyseal chondrodysplasia with disproportionate short stature (reported adult heights range from 104 to 149 cm), fine and sparse hair, ligamentous laxity, immunodeficiency, hypoplastic anemia, cancer predisposition, neuronal dysplasia of the intestine (including congenital megacolon), and normal intelligence. [2] Short, thick long bones, metaphyseal flaring, and irregularities with globular epiphyses at the knees and ankles are typical radiologic features of CHH. [4] Immunodeficiency in CHH can present as either T-cell or B-cell deficiency and can manifest in infancy as severe combined immunodeficiency (SCID) or progress slowly and manifest in late adolescence/adulthood. [3,5] Anemia, usually hypoplastic, is also seen in over 80% of CHH patients and is usually mild and self-limited; however, some patients demonstrate severe, persistent anemia. [6] Metaphyseal dysplasia without hypotrichosis (MDWH) cases present only as metaphyseal chondrodysplasia with disproportionate short stature, without extraskeletal manifestations such as defective immunity or hypoplastic hair. [6,7] AD, described as a form of spondylo-meta-epiphyseal dysplasia, is characterized by the prenatal onset of extreme short stature (adult heights <85 cm), hypodontia, and mild mental retardation. [7,8] A diagnosis of a CHH-AD spectrum disorder is established by the clinical and characteristic radiologic findings. Following this, the identification of biallelic pathogenic or likely pathogenic variants in the RMRP gene by molecular genetic testing can confirm the diagnosis, and the patient can be considered for family studies. [9] To the best of our knowledge, there are no previous reports of CHH in Korea. In the present study, we describe 2 cases of Korean patients with CHH-AD, 1 of which is the first case of CHH in Korea. We report their clinical, radiographic, and diagnostic processes with genetic testing.
Case 1
A 13-month-old male visited our clinic owing to his short stature. The patient was born at the gestational age of 40 weeks with a birth height of 46 cm (z-score calculated by the 2017 Korean National Growth Chart for children, −2.04 standard deviation score [SDS]) and birth weight of 3040 g (−0.55 SDS) from nonconsanguineous healthy Korean parents. The patient had a younger male sibling of normal height and with no skeletal dysplasia. At the time of the visit, the patient's height was 61.4 cm (−6.47 SDS), and his weight was 6.1 kg (−3.35 SDS). The mid-parental height (paternal height, 180 cm; maternal height, 159 cm) was 176 cm. The patient had no history of recurrent infections or congenital megacolon. The patient did not have joint hypermobility, facial anomaly, hypodontia, hypotrichosis, or hepatosplenomegaly. The only cutaneous finding was a Mongolian spot on the posterior lower trunk. The patient had rhizomelic short stature, brachydactyly, bowing of the tibia, genu varum, short feet, and kyphosis. The patient had normal developmental milestones.
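Height SDS values such as those above are conventionally derived from age- and sex-specific growth-reference parameters, typically via the LMS method; the sketch below illustrates the calculation with placeholder L, M, S values, which are not the actual 2017 Korean National Growth Chart parameters.

```python
import math

# Height-for-age SDS via the LMS method used by most growth references.
# L (skewness), M (median), S (coefficient of variation) below are
# placeholders, NOT the actual 2017 Korean National Growth Chart values.
def height_sds(height_cm, L, M, S):
    """z = ((X/M)**L - 1) / (L*S) for L != 0, else ln(X/M)/S."""
    if L != 0:
        return ((height_cm / M) ** L - 1) / (L * S)
    return math.log(height_cm / M) / S

# Hypothetical reference for a 13-month-old boy:
print(round(height_sds(61.4, L=1.0, M=77.0, S=0.032), 2))  # about -6.3
```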
The patient's radiographic features revealed subtle cupping and widening at the metaphyses of both metacarpal bones, phalanges, and ulnae, as well as bilateral acetabular dysplasia, femur shortening (rhizomelia), and brachyphalangy. The chest X-ray revealed a normal thymus (Fig. 1). The complete blood count was normal, but the neutrophil ratio was low at 35.0% (reference range [RR], 41.5%-73.5%) and the lymphocyte ratio was high at 54.0% (RR, 19.9%-49.2%). The electrolytes, renal function, liver enzymes, and alkaline phosphatase were within the normal ranges. The thyroid and adrenal function tests and insulin-like growth factor-1 (IGF-1) levels were normal. The sella MRI and 2D echocardiography were normal. Because the patient was suspected of having skeletal dysplasia, a growth hormone (GH) stimulation test was not done and GH treatment was not tried.
At the age of 7 years and 4 months, the patient revisited our clinic due to progressively delayed growth. The patient's height was 87 cm (−7.21 SDS), and his weight was 13.5 kg (−2.65 SDS). Whole exome sequencing (WES) was performed to investigate the genetic cause of the patient's condition. After obtaining written informed consent, genomic DNA was extracted from peripheral blood; a cDNA library was prepared using a TruSight One Sequencing Panel (Illumina, Inc., San Diego, CA), which enriched a 12-Mb region spanning 62,000 target exons from a total of 4813 clinically relevant genes. Massively parallel sequencing was performed on an Illumina NextSeq platform. Sequence reads were mapped to the UCSC hg19 reference for comparative analysis.
The WES results revealed 2 variants in the RMRP gene (NR_003051.3:n.-22_-3dup and NR_003051.3:n.196C>T). The patient's father and male sibling were confirmed to carry NR_003051.3:n.-22_-3dup, a known pathogenic variant, while the patient's mother carries NR_003051.3:n.196C>T, also a known pathogenic variant (Fig. 2). Consequently, these 2 variants were confirmed to be compound heterozygous, and the patient was diagnosed with MDWH. At the time, the patient had a negative immunodeficiency test, and regular immunology tests have been performed since. Since the diagnosis, the patient has been followed in the outpatient clinic to monitor for the development of new symptoms. At 15 years and 4 months of age, the patient was 103.3 cm (−11.63 SDS) tall and weighed 28.8 kg (−3.20 SDS), showing no evidence of immunodeficiency or hair hypogenesis. Long-term follow-up is planned to determine whether CHH-related symptoms occur.
Case 2
An 8-year 9-month-old female was referred to our clinic for evaluation of her short stature and frequent episodes of infection. The patient was born at the gestational age of 40 weeks with a birth weight of 2700 g (−1.14 SDS) from nonconsanguineous healthy Korean parents. At the time of the visit, the patient's height was 99.9 cm (−5.45 SDS), and her weight was 16.3 kg (−2.21 SDS). The mid-parental height (paternal height, 178 cm; maternal height, 163 cm) was 164 cm. The patient suffered from recurrent pneumonia (1-2 times per year) and watery diarrhea. The patient had erythematous patches around her right elbow and on both knees and heels. The patient had thin hair and showed no evidence of breast development or other secondary sexual characteristics. The patient had normal psychomotor development. The patient's karyotype was 46,XX, and her bone age was 1 year and 8 months younger than her chronologic age. The patient has a 4-year-old healthy younger female sibling and had one younger male sibling who died from pneumonia with neutropenia at 18 months of age.
At 10 years and 1 month of age, the patient came to our dermatology clinic because of uncontrolled warts that she had acquired from her female sibling. At that time, fine, sparse hair was detected. WES was performed and revealed 2 variants in the RMRP gene, confirmed by Sanger sequencing, NR_003051.3:n.-22_-3dup (likely pathogenic variant) and NR_003051.3:n.171G>T (variant of uncertain significance), and the patient was diagnosed with CHH (Fig. 2). Parental genetic testing could not be performed, since neither parent consented to genetic testing.
Screening tests for immunodeficiency revealed both cellular and humoral immunodeficiencies. The results of a lymphocyte subset analysis revealed a T-cell count of 373/μL (RR, 700-4200/μL), CD4+ helper T-cell count of 192/μL (RR, 300-2000/μL), CD8+ T-cell count of 142/μL (RR, 300-1800/μL), B-cell count of 47/μL (RR, 200-1600/μL), and NK cell count of 377/μL (RR, 90-900/μL). However, the results of immunoglobulin tests, including IgG, IgA, IgM and IgE, were all within normal ranges. The patient had hemoptysis at the age of 11 years and 6 months, and a chest computed tomography revealed postinfectious bronchiolitis obliterans with bronchiectasis (Fig. 1). The patient received regular immune and lung function checkups; her last visit was at 11 years and 10 months of age. Since the patient has been lost to follow-up for 2 years and 4 months, her survival status is unknown.
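As a simple illustration, the reported subset counts can be flagged against their reference ranges programmatically; the values below are taken from the text.

```python
# Flag the reported lymphocyte subset counts (cells/uL) against their RRs.
results = {
    "T cells":      (373, (700, 4200)),
    "CD4+ T cells": (192, (300, 2000)),
    "CD8+ T cells": (142, (300, 1800)),
    "B cells":      (47,  (200, 1600)),
    "NK cells":     (377, (90, 900)),
}
for name, (value, (lo, hi)) in results.items():
    flag = "LOW" if value < lo else "HIGH" if value > hi else "normal"
    print(f"{name}: {value}/uL (RR {lo}-{hi}) -> {flag}")
# T, CD4+, CD8+, and B cells are below range; NK cells are within range.
```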
Discussion
This report describes 2 cases of CHH-AD spectrum disorders with RMRP variants (Table 1). Both patients had extremely short stature (below −4 SDS) on presentation. Knowing the birth length is an important prognostic factor for mortality in patients with CHH. Vakkilainen et al [10] reported that in patients with a birth length adjusted for gestational age below −4 SDS, mortality, which is directly related to immune dysfunction, was significantly higher. The patient in case 1 had a low birth height, but it was not below −4 SDS, and the patient in case 2 lacked birth length data. Skeletal dysplasia was present in both cases, but the radiographic findings of case 1 were subtle, whereas those of case 2 were typical. In case 1, bilateral acetabular dysplasia, femur shortening (rhizomelia), and brachyphalangy were the only findings of the skeletal survey at the age of 13 months. Although metaphyseal changes are the most distinctive feature of CHH, radiographic findings, including metaphyseal changes, can be subtle in infants. [11,12] In case 2, the skeletal survey demonstrated metaphyseal striation with irregular contours around both knees and ankles, as well as a compression fracture of the lumbar spine, which are typical findings in CHH.
Case 1 was diagnosed with MDWH among the CHH-AD spectrum disorders because the patient exhibited only skeletal dysplasia and lacked extraskeletal manifestations such as hair hypoplasia and immunodeficiency. However, according to a recent study, patients with MDWH features in childhood can develop late-onset extraskeletal manifestations, such as immunodeficiency or malignancy, in adulthood. [13] Considering that study, the patient in case 1 should be followed up for late-onset events. MDWH is a very rare disease; only about 20 cases have been confirmed by genetic analysis, and the first case of MDWH with compound heterozygous variants (NR_003051.3:n.81G>A and NR_003051.3:n.100C>A) in Korea was reported in 2021. [14] Case 2 was a classic example of CHH, characterized by skeletal dysplasia, hair hypoplasia, and immunodeficiency. Changes in hair are typically described as hair that is easily broken and lighter in color than in the patient's siblings. [15] Eighty-eight percent of CHH patients have abnormal cellular immunity, and their clinical symptoms are generally limited to early childhood. [2,16] However, some cases can progress to SCID, which is associated with an increased mortality risk. Because of the accompanying immunodeficiency, patients with CHH can suffer from recurrent respiratory infections, leading to structural lung diseases such as bronchiectasis. Furthermore, pneumonia in the first year of life or recurrent infections in adulthood is another risk factor for early death. [10] Autoimmune disease has been reported among individuals with CHH, including immune-mediated thrombocytopenia, autoimmune hemolytic anemia, enteropathy, thyroid disease, and juvenile idiopathic arthritis. The prevalence of autoimmune diseases in CHH was reported to be 10.6% in a Finnish cohort. [17] The 2 patients in our report did not show any autoimmune dysfunction. Reportedly, the most prevalent cancers in CHH patients are non-Hodgkin lymphoma and skin cancer, particularly basal cell carcinoma. [18] The typical age for a cancer diagnosis in CHH patients is 15 to 44 years, and multiple malignancies have also been reported. [19] Several studies have demonstrated that inappropriate control of infectious disease is associated with the pathogenesis of certain cancers in CHH, and a correlation between warts and the development of skin cancer in CHH patients has also been reported. [5] Several pathogenic variants in the RMRP gene, including insertions and duplications between the TATA box and the transcription start site, are associated with CHH. [1] The genotype-phenotype correlation and underlying mechanism are not entirely understood. However, the phenotype of the CHH spectrum can be affected by whether variants lie in the transcribed region or the promoter region, and whether or not they fall in the RNA-to-protein binding region. In addition, it has been shown that reduced cleavage of rRNA correlates with the severity of skeletal dysplasia, whereas reduced cleavage of mRNA correlates with milder skeletal dysplasia with additional extraskeletal features. [1] A schematic representation of the RMRP gene and the 3 variants discovered in our cases is shown in Fig. 3.
Two previously reported variants [1,20-22] (n.-22_-3dup and n.196C>T) were identified as pathogenic. One variant (n.-22_-3dup) involves the duplication of 20 nucleotides in the promoter region of the RMRP gene, located between the TATA box (−33 to −25) and the transcription initiation site. The other (n.196C>T) is located near the P12 domain and has been reported to result in a mild to intermediate decrease in RNA cleavage activity. [1] The remaining variant (n.171G>T) has not been reported previously, but a similar variant (n.171G>A) has been reported in association with the CHH spectrum. [23] Additional functional studies are needed to confirm the pathogenicity of the n.171G>T variant.
Conclusions
Here, we present 2 Korean cases of CHH-AD with RMRP gene mutations, 1 of which is the first Korean patient with CHH. Alongside skeletal dysplasia (typically metaphyseal abnormalities), the CHH-AD spectrum can present with various clinical manifestations depending on the presence or absence of extraskeletal features and immunodeficiency, which significantly impact the prognosis. As this report includes the first Korean case of CHH, our cases suggest that CHH should be included in the differential diagnosis for children with short stature and immunodeficiency. The RMRP gene is a noncoding, untranslated RNA gene and may not be included in the scope of next-generation sequencing or WES panels. The exome capture kit used for WES in our cases performed well at capturing the targets; however, coverage remains an important consideration. In diagnosing CHH, clinical and radiologic suspicion is most important, but the RMRP gene should also be considered in genetic testing.
JHP and MI contributed equally to this work.
Figure 1. Skeletal survey of both patients and chest computed tomography (CT) of case 2. (A) Radiographs of the patient in case 1, aged 13 mo; both hands and distal forearms show metaphyseal irregularities and sclerosis of the distal radius and ulnae, and mildly cone-shaped epiphyses of the phalanges. They also show kyphosis and genu varum. (B) Radiographs of the patient in case 1, aged 8 yr and 2 mo. (C) Radiographs of the patient in case 2, aged 8 yr and 9 mo; both knees and hips show metaphyseal irregularities and sclerosis. (D) Chest CT of the patient in case 2, aged 10 yr and 1 mo.
Figure 3. Schematic representation of the RMRP gene. RMRP = the RNA component of the mitochondrial RNA-processing endoribonuclease.
Design of Remote Elevator Monitor System Based on Coldfire Microcontroller
— The paper introduces the design of an elevator monitoring system that monitors the running status of the elevator and helps ensure its safety. The system is based on the Freescale MCF51AC128 Coldfire microcontroller, with the Free Real Time Operating System (FreeRTOS) as the software platform. We adopted General Packet Radio Service (GPRS) and Controller Area Network (CAN) bus technology to implement remote elevator monitoring, and users can choose between them according to their requirements. In addition, the system uses capacitive isolation for the CAN bus interface and software control-flow error checking to assure reliability. Application shows that the system runs stably, is very cost effective, and is easy to use.
INTRODUCTION
With social development, elevator safety problems attract more and more attention. Although elevator companies have their own monitoring systems, these systems are developed only for their respective brands and are not compatible with one another; in practical applications they are inconvenient to use and remain largely idle. This paper introduces the design of a remote elevator monitoring system based on a Freescale ColdFire microcontroller and the FreeRTOS real-time operating system. Compared with a conventional solution based on a Programmable Logic Controller (PLC), this design has the advantages of low cost and high cost-effectiveness. The system uses optical isolation technology to collect the input signals of the sensors; by analysing these signals, it accurately tracks the operating state of the elevator and records it in real time. When the elevator runs abnormally, the system raises a remote alarm over General Packet Radio Service (GPRS) and also supports wired remote monitoring over the Controller Area Network (CAN) bus. GPRS is an extension of GSM (Global System for Mobile Communications) that transmits data in packets. The complete system consists of Personal Computer (PC) management software and the elevator monitoring equipment.
This paper mainly describes the design of the elevator monitoring equipment.
II. SYSTEM FRAMEWORK
The elevator monitoring device is based on Freescale's industry-leading ColdFire microcontroller MCF51AC128 [1,2]. The device, placed inside the elevator, integrates GPRS wireless communication, Moving Picture Experts Group Audio Layer-3 (MP3) playback, Secure Digital (SD) card storage, CAN, Recommended Standard 232 (RS232) and sensor signal detection.
The sensor module includes two photoelectric switches, fixed on the elevator, that detect its position and motion state; a pyroelectric infrared (body-heat) sensor that detects whether someone is inside the car; and a sensor that detects whether the elevator door is closed. Two limit switches test whether the elevator has reached the top or the bottom of the shaft, and a number of additional sensors monitor the safety circuit. The outputs of the limit switches and sensors are all switch (on/off) signals.
The two photoelectric switches are fixed on the elevator, one above the other, and a baffle is installed at each floor. When the elevator passes a floor, the upper and lower photoelectric switches are blocked by the baffle in turn, producing two switch signals. By detecting these signals and their sequence, the software determines the floor the elevator is on as well as its direction and speed of travel.
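As a minimal sketch of this decoding logic (the pin-reading helpers are hypothetical, and which switch meets a baffle first depends on the mounting; the paper does not give the actual driver code), the order of the two switch edges yields the direction, and a floor counter is updated on each completed baffle pass:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical input helpers (assumptions, not from the paper): each
 * returns true while the corresponding photoelectric switch is blocked. */
extern bool upper_switch_blocked(void);
extern bool lower_switch_blocked(void);

typedef enum { DIR_IDLE, DIR_UP, DIR_DOWN } direction_t;

static int8_t current_floor = 1;   /* assumed starting floor */

/* Poll periodically (e.g. from the sensor task). Here we assume the
 * upper switch meets a baffle first when the car travels upward and
 * the lower switch first when it travels downward; the mapping is
 * mounting-dependent. */
direction_t update_floor(void)
{
    static bool prev_upper, prev_lower;
    static direction_t dir = DIR_IDLE;

    bool upper = upper_switch_blocked();
    bool lower = lower_switch_blocked();

    if (upper && !prev_upper && !lower)        /* upper edge first: up   */
        dir = DIR_UP;
    else if (lower && !prev_lower && !upper)   /* lower edge first: down */
        dir = DIR_DOWN;

    /* Both switches clear again: one baffle fully passed. */
    if (!upper && !lower && (prev_upper || prev_lower)) {
        if (dir == DIR_UP)        current_floor++;
        else if (dir == DIR_DOWN) current_floor--;
        dir = DIR_IDLE;
    }

    prev_upper = upper;
    prev_lower = lower;
    return dir;
}
```

Timing the interval between the two edges would additionally give the speed, since the spacing of the two switches is fixed.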
The system software consists of the FreeRTOS operating system, device drivers, the File Allocation Table File System (FATFS) and auxiliary libraries [3]. It contains a driver for each hardware module and the FATFS file system, which manages the SD card. In addition, it contains a concise standard C library providing string operations, memory allocation and other necessary functions. The application software that accomplishes the specific monitoring functions is developed on this basis.
III. HARDWARE DESIGN
The system adopts the 50 MHz MCF51AC128 microcontroller. Through its two integrated Serial Peripheral Interface (SPI) controllers it connects an SD card, for large-capacity storage of operating data and MP3 files, and a VS1003 MP3 decoder chip for MP3 decoding and playback. A PCF8563 real-time clock on the Inter-Integrated Circuit (I2C) bus records the system time. In addition, the system includes an AT24C256 Electrically Erasable Programmable Read-Only Memory (EEPROM) with an I2C interface to hold critical configuration data, such as the Internet Protocol (IP) address of the GPRS module and the gateway address.
The MCF51AC128 has two Universal Asynchronous Receiver Transmitter (UART) modules. One communicates with the GPRS module; the other provides an RS232 interface through which the user can connect a PC to configure system parameters, such as changing the alarm telephone number or setting the device address. General Purpose Input/Output (GPIO) pins are used to detect the sensor switch signals.
The system hardware block diagram is shown in Fig. 1. The power supply directly affects the reliability and stability of the overall system, because the external input supply voltage Vext has a wide range (10 V ~ 24 V) and poor quality, while the system hardware needs various supply voltages within the range of 2.5 V ~ 5 V. An LM2576 DC-DC switching power supply chip generates 5 V (VCC5) for the MP3 module's audio amplifier circuits. Considering the level matching between the MCF51AC128 and the GPRS module, the 5 V output of the LM2576 is regulated to a stable 3.1 V by an LM1117 Low Dropout Regulator (LDO) to supply the main chips of the system. The GPRS module requires a 3.3 V ~ 4.4 V supply, and its peak currents when sending and receiving data reach 2 A, so a Sipex SPX29302 LDO generates 4.0 V for the GPRS module directly from Vext. In addition, the 2.5 V supply for the VS1003 decoder chip is generated directly from VCC5.
To improve the stability and anti-jamming capability of the system, optocoupler isolation technology is used to collect the signals of the external switch-type sensors, including those detecting levelling, the safety circuit, the door locks and other states. The system can thus track and record the current operating status of the elevator accurately.
The system uses a Siemens MC52i GPRS module and a Texas Instruments (TI) ISO1050 isolated CAN transceiver to communicate with residential property management or other management centres.
The ISO1050 is a galvanically isolated CAN transceiver that meets or exceeds the specifications of the ISO 11898 standard. Its logic input and output buffers are separated by a silicon dioxide (SiO2) insulation barrier that provides galvanic isolation of up to 4000 V peak. As a CAN transceiver, the device provides differential transmit capability to the bus and differential receive capability to a CAN controller at signalling rates up to 1 megabit per second (Mbps) [4]. Fig. 2 shows the CAN interface circuit diagram. Because the ISO1050 offers 4000 V isolation, conforms to ISO 11898 and supports communication speeds up to 1 Mbps, it greatly simplifies the traditional optocoupler-based CAN bus isolation scheme [4]. The system adopts a Positive Temperature Coefficient (PTC) resistor HR60 for overcurrent protection and an ON Semiconductor NUP2105 bidirectional Transient Voltage Suppressor (TVS) for overvoltage protection. In addition, a ZY0505BS-1W micro-power isolated power supply module from Zhou Ligong Microcontroller Technology Co., Ltd. generates an isolated 5 V supply for the ISO1050 from the 5 V output of the LM2576.
MP3 decoding is performed by the VS1003, whose output drives the speakers through the audio amplifier. When an elevator fault occurs, the system automatically plays an MP3 audio file pre-stored on the SD card to calm trapped passengers. The audio power amplifier is shared by the VS1003 and the GPRS module, which are switched through a relay: when an MP3 file plays, the MCF51AC128 gates the VS1003 output to the amplifier through a GPIO pin; conversely, during a voice call, the GPRS voice output is gated to the amplifier. A microphone is installed in the elevator for voice calls.
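A minimal sketch of this source selection is shown below; the relay-control helper and its polarity are assumptions, since the paper does not give the actual pin assignment.

```c
#include <stdbool.h>

typedef enum { AUDIO_SRC_MP3, AUDIO_SRC_GPRS } audio_src_t;

/* Hypothetical helper that drives the relay-control GPIO pin. */
extern void relay_gpio_write(bool level);

void audio_select(audio_src_t src)
{
    /* Assume an energised relay (pin high) routes the VS1003 output to
     * the amplifier, and a de-energised relay routes the GPRS voice
     * output; the real polarity depends on the wiring. */
    relay_gpio_write(src == AUDIO_SRC_MP3);
}
```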
IV. SOFTWARE DESIGN
The software is divided into two layers: the system software layer and the application software layer. The system software layer includes FreeRTOS, the FATFS file system, a streamlined C standard library and the driver for each hardware module. The application software layer builds on the services provided by the system software layer to implement the specific functions of the elevator monitoring system.
A. System Software Design
FreeRTOS is a small embedded operating system with a compact kernel. Its features include task management, time management, semaphores, message queues, memory management, logging, etc., which are sufficient to meet the needs of smaller systems.
FreeRTOS is a lightweight real-time kernel. It supports priority-based scheduling and time-slice-based round-robin scheduling, and it can be configured as either a preemptive or a cooperative kernel. Compared with the domestically popular μC/OS-II, it is completely free and open source; its real-time performance is equivalent to μC/OS-II while consuming fewer resources.
This system ports FreeRTOS to the MCF51AC128 and uses an on-chip timer to generate the operating system clock tick, which is set to 20 ms. In addition, we implement a streamlined C library that provides only the C standard library functions the system needs, which greatly reduces the size of the final object code [5,6].
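A 20 ms tick corresponds to a tick rate of 50 Hz. A minimal sketch of the corresponding FreeRTOSConfig.h entries is shown below; only configTICK_RATE_HZ follows directly from the text (the macro names follow current FreeRTOS conventions), while the remaining values are illustrative assumptions rather than the authors' actual configuration.

```c
/* Excerpt of a hypothetical FreeRTOSConfig.h for this port. */
#define configCPU_CLOCK_HZ        ( 50000000UL )         /* 50 MHz MCF51AC128 */
#define configTICK_RATE_HZ        ( ( TickType_t ) 50 )  /* 1/50 s = 20 ms tick */
#define configUSE_PREEMPTION      1                      /* preemptive scheduling */
#define configMAX_PRIORITIES      7                      /* six tasks + idle */
#define configMINIMAL_STACK_SIZE  128
#define configTOTAL_HEAP_SIZE     ( 8 * 1024 )           /* bytes; assumed */
```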
B. Application Software Design
The application software relies on the services provided by the system software to realise the functions the customer requires. After system reset, the main function performs system initialisation, creates six FreeRTOS tasks and starts the FreeRTOS scheduler. Thereafter, FreeRTOS schedules the tasks by priority to complete the corresponding functions.
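A minimal sketch of this start-up sequence, using the standard FreeRTOS API, might look as follows. The six task names come from the descriptions below; hardware_init, the stack sizes and the priorities are our assumptions, not taken from the paper.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

extern void hardware_init(void);   /* hypothetical board/driver initialisation */

/* Task entry points, implemented elsewhere. */
extern void sensor_task(void *pv);
extern void warn_task(void *pv);
extern void gprs_task(void *pv);
extern void can_task(void *pv);
extern void mp3_task(void *pv);
extern void rs232_task(void *pv);

QueueHandle_t warn_queue;          /* message queue used to wake warn_task */

int main(void)
{
    hardware_init();

    warn_queue = xQueueCreate(4, sizeof(uint8_t));

    /* Higher number = higher priority in FreeRTOS. */
    xTaskCreate(sensor_task, "sensor", 256, NULL, 3, NULL);
    xTaskCreate(warn_task,   "warn",   256, NULL, 4, NULL);
    xTaskCreate(gprs_task,   "gprs",   256, NULL, 2, NULL);
    xTaskCreate(can_task,    "can",    256, NULL, 2, NULL);
    xTaskCreate(mp3_task,    "mp3",    256, NULL, 1, NULL);
    xTaskCreate(rs232_task,  "rs232",  256, NULL, 1, NULL);

    vTaskStartScheduler();         /* never returns if the scheduler starts */
    for (;;) {}
}
```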
Sensor_task queries and analyses the input signals once every 500 ms. If it finds that the elevator is operating abnormally, it wakes warn_task through a message queue to raise an alarm. If people are inside the elevator when an abnormality is detected, the system wakes mp3_task to play soothing music. If the emergency call button in the elevator is pressed, sensor_task checks whether the current state of the elevator is normal; if it is not, the system automatically calls the management centre telephone. In addition, the task regularly stores the running status of the elevator to the SD card.
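A hedged sketch of the polling loop consistent with this description is shown below; the fault-checking helper and event codes are hypothetical, while the 500 ms period and the queue-based wake-up of warn_task follow the text.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

extern QueueHandle_t warn_queue;
extern int read_and_check_sensors(void);   /* hypothetical: 0 = normal */

void sensor_task(void *pv)
{
    (void)pv;
    for (;;) {
        int fault = read_and_check_sensors();
        if (fault != 0) {
            uint8_t event = (uint8_t)fault;
            /* Wake warn_task; do not block if the queue is full. */
            xQueueSend(warn_queue, &event, 0);
        }
        vTaskDelay(pdMS_TO_TICKS(500));    /* poll every 500 ms */
    }
}
```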
Warn_task is responsible for sending out an alarm through GPRS and CAN when an elevator exception occurs; during normal operation it is in the pending state [7,8]. The functions of gprs_task and can_task are similar: the former monitors incoming GPRS data, and the latter monitors data on the CAN bus. When data are received, the task identifies them and, if they form a valid command, processes it. Commands include the host computer querying the current state of the elevator, telephone calls, setting system parameters and so on. When a telephone call from the management centre is received, gprs_task suspends mp3_task, switches the GPRS audio output to the audio amplifier circuit and connects the call so the management centre can talk with the people in the elevator.
Mp3_task is responsible for playing the audio files on the SD card. Sensor_task wakes mp3_task to play music when people are trapped in the elevator. On the one hand, this can help the trapped people save themselves; on the other, it helps appease them.
RS232_task lets users configure the system via RS232. This task is usually suspended; once data are received, the UART interrupt service routine wakes it up to process the configuration commands from the PC.
C. Program Control Flow Error Checking Technology
To improve the stability and reliability of the entire system, we use a purely software-based method called Relationship Signatures for Control Flow Checking (RSCFC) to detect program-flow errors. The basic idea is to divide each task into a number of basic blocks, each assigned a number. At the beginning and end of each basic block, error-detection code is inserted to judge whether the execution flow of the program has been corrupted by interference. A basic block is a code section without branching: except for its last instruction, it contains no jump, switch-case or call instruction that changes the control flow. The program control flow can then be described by the jump relations between basic blocks. For example, if basic block i can jump to blocks j1, j2, …, jm, then i is called the predecessor block of these blocks, and j1, etc., are the successor blocks of i [9,10].
Since the jump relations can be determined in advance, the block number is recorded at the end of each basic block. On entering the next basic block, the code checks whether that block is a valid successor of the current one; if not, a control-flow error has occurred. For example, number i is recorded at the end of basic block i; on entering the next basic block j, the code determines whether j is one of the valid successor blocks j1, j2, …, jm of i, and if not, an error must have happened.
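As a simplified illustration of this successor check (real RSCFC encodes the jump relations as compact bit-vector relationship signatures; the table layout and helper names below are our own), each block's legal successors are precomputed and tested on entry:

```c
#include <stdint.h>

#define NUM_BLOCKS 8

/* Successor table determined offline from the control-flow graph:
 * bit j of successors[i] is set if block j is a valid successor of
 * block i. The entries below are placeholders. */
static const uint8_t successors[NUM_BLOCKS] = {
    (1u << 1) | (1u << 2),   /* block 0 may jump to blocks 1 or 2 */
    (1u << 3),               /* block 1 may jump to block 3 */
    /* ... remaining entries filled per the real graph ... */
};

static uint8_t current_block;

extern void control_flow_error(void);  /* disable IRQs, alarm, log, restart */

/* Inserted at the end of basic block i: record where we are. */
static inline void cfc_exit(uint8_t i)
{
    current_block = i;
}

/* Inserted at the beginning of basic block j: check the jump was legal. */
static inline void cfc_enter(uint8_t j)
{
    if ((successors[current_block] & (uint8_t)(1u << j)) == 0)
        control_flow_error();          /* j is not a valid successor */
}
```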
Once a control-flow error is detected, the system immediately disables interrupts, sends an error alarm via GPRS and CAN, records the error in a log on the SD card and then restarts.
V. CONCLUSION
The elevator monitoring system enhances the safety of the elevator: when a failure occurs, it raises a timely alarm to protect the personal safety of users. This paper has described a system design based on the MCF51AC128 microcontroller and FreeRTOS that uses GPRS and the CAN bus to realise remote monitoring. The design fully considers the requirements of anti-interference and reliability. The system has been trialled in the field for half a year; it runs stably, and user evaluations are good.
ACKNOWLEDGMENT This work is supported by the China National Science and Technology major Project (2011ZX01034-001-002-003).
Cultural Variations in Evaluation of Creative Work: A Comparison of Russian and Emirati Samples
The study investigates how cultural variations influence evaluation of creative work. Russian and Emirati undergraduate college students were asked to judge alien creature drawings produced by their country mates in previous studies’ structured imagination test. We found cultural differences in creativity judgment. Emirati participants’ judgments were significantly lower than Russian participants’ judgments. We also found that Russians judged their compatriots significantly higher than the Emirati judged their compatriots. Russians also judged foreigners significantly lower than the Emirati judged foreigners. These findings were speculatively placed in the context of the cultural differences in the implicit theory of creativity.
INTRODUCTION
A while ago, the authors participated in the Science Film Festival in Russia, where they presented Herman Vaske's (2018) documentary "Why Are We Creative? The Centipede's Dilemma." In this film, Vaske asks prominent representatives of various creative professions why they are creative. Their answers differed to the extent that virtually no common denominator could be identified. The film clearly demonstrated that people, even those directly involved in creative professions, seem to be guided by very different notions of creativity.
This phenomenon is addressed in the literature as implicit theory of creativity, which is traditionally contrasted with explicit theory of creativity. The latter usually reflects the scientific study of a concept. Sternberg (1985, p. 607) defined explicit theories as "constructions of psychologists or other scientists that are based on or at least tested on data collected from people performing tasks presumed to measure psychological functioning." These theories are developed by professional scholars and shared through academic and semi-academic venues, such as journals, conferences, and talk shows (Runco et al., 1998).
In contrast, implicit theories or folk conceptions refer to constructs tacitly present in people's minds regardless of their expertise. These are sets of beliefs shared by a sociocultural group about the world, such as folk concepts of intentionality (Malle and Knobe, 1997), lay epistemics (Kruglanski, 1989), or implicit personality theories (Schneider, 1973). These are "the ideas held by laypeople that are usually not discussed, questioned, or consciously considered" (Paletz and Peng, 2008, p. 288). As became evident from Vaske's documentary, people hold very different opinions about the construct of creativity and consequently experience difficulties formalizing it. Therefore, an in-depth examination of implicit theories of creativity can help scholars obtain a more realistic and practical opinion about the construct (Runco and Bahleda, 1986). The question is how the implicit theory of creativity can be studied empirically. To answer this question, we need to understand how it is manifested in different facets of people's activity. Paletz et al. (2011) specified two ways in which people can express their implicit theory of creativity: externally (through interaction with others and generation of various products) and internally (by means of personal and inner processes). The external manner of expressing implicit theory of creativity could be addressed by looking at how creative production and performance is influenced by people's tacit conception of creativity. The internal approach develops in two directions. Some studies focused on how the implicit theory of creativity is realized in people's evaluation and assessment of their own and other people's creative abilities and personality traits (e.g., Lim and Plucker, 2001; Pizzingrilli and Antonietti, 2010; Hass, 2014; Stone and Hess, 2020). Others looked at how the implicit theory of creativity influenced people's judgment of creative work produced by others (e.g., Niu and Sternberg, 2001; Storme and Lubart, 2012; Long, 2014; Benedek et al., 2016; Loewenstein and Mueller, 2016; Stemler and Kaufman, 2020). The latter appears to be an important topic for investigation considering that people's subjective judgment of creative work could play a fatal role in one's creative aspirations and career. A number of studies demonstrated that one's career and creative achievements are predicted by self-efficacy (e.g., Tierney and Farmer, 2002; Baer et al., 2008), which, in turn, is related to the judgment of one's creative works and creative capacity (e.g., Bandura, 1997; Jaussi et al., 2007).
All these studies address the concept of implicit theory of creativity indirectly. That is, they assume the manifestation of the implicit theory in creative production and performance as well as evaluation of creative work produced by oneself and the others. The present study is grounded in the assumption that the evaluation of creative work of others is influenced by the implicit theory of creativity. Note that we do not intend to test this assumption empirically. Rather, we look at its derivation, namely, evaluation of creative work produced by the others, and cultural variations thereof. In doing so, we employ a judgment paradigm.
The judgment paradigm is widely used in both creative enterprises and creativity research. Expert judges form juries at film festivals, art fairs, and musical contests. Guided by their expertise, they provide a consensual opinion about the creative value of a product and/or performance. In empirical research, Amabile (1982) developed the Consensual Assessment Technique, which relies on the consensual assessment of creative production by several judges. This technique is widely used in creativity research (e.g., Kaufman et al., 2008; Baer and McKool, 2009; Freeman et al., 2015; Stefanic and Randles, 2015; see overview in Barth and Stadtmann, 2020). At the same time, Amabile pointed out that "These studies, too, often suffer from a definitional void by failing to explicitly articulate the definition of creativity…" (Amabile, 1982, p. 1000). Research shows that individuals' judgment of creative work varies with their own level of creativity (e.g., Caroff and Besançon, 2008; Guo et al., 2019) and their expertise (e.g., Kaufman et al., 2009, 2013; Plucker et al., 2009). Hence, we should expect a variation in creativity judgment between experts and laypeople. The former group is well studied, and the Consensual Assessment Technique has been widely used for assessing creativity in different domains (e.g., Yuan and Lee, 2014; Daly et al., 2016; see overview in Cseh and Jeffries, 2019). The latter, however, has not received sufficient attention. Our study focuses on laypeople who have neither expertise nor experience in creative enterprise. Therefore, their judgment can be assumed to be influenced by their implicit theory of creativity.
Contemporary creativity literature suggests that cultural aspects of the environment have considerable influence both on levels of creative potential and on how creativity is evaluated (e.g., Simonton, 1994;Lubart and Sternberg, 1998;Niu and Sternberg, 2001;Glăveanu, 2010;Glăveanu et al., 2014). Cultural psychologists often describe culture as a set of beliefs, moral norms, customs, practices, and social behaviors of a particular nation or a group of people whose shared beliefs and practices identify the particular place, class, or time to which they belong (e.g., Rohner, 1984;Peng et al., 2001;Paletz and Peng, 2008). A set of common mental models, cultural scripts, and "interpretive frames" (Pavlenko, 2000) characterizes these people and suggests strategies in solving problems and dealing with a variety of situations in a culture-specific way. Cultural values and norms are assumed to determine and shape the concept of creativity, which in turn may influence the manner in which creative potential is apprehended and incarnated (e.g., Rudowicz, 2003;Westwood and Low, 2003;Lubart, 2010). There is even a radical opinion that "no account of creativity can be satisfactory unless it is culture-inclusive" (Glăveanu, 2010, p. 151).
Hence, we should expect cultural variations in people's perception of the concept of creativity. That is, people in the same cultural group tend to share some defining aspects of creativity, whereas people from different cultural groups tend to differ in some defining aspects of creativity. This can be illustrated with a distinction in the view of creativity in the West and in the East (e.g., Niu, 2019). The literature distinguishes between the West and the East with respect to individualism and collectivism (Triandis, 1975, 1977) or with respect to an independent and interdependent perspective (Markus and Kitayama, 1991). The distinction between these two social systems is grounded upon the degree of subordination of an individual's personal goals to the goals of some collective (Triandis et al., 1985). The individualist society values the person's unique qualities, initiative, and achievement, whereas the collectivist one places more emphasis on consensus with the community, on being in line with the others. This distinction manifests itself in different perceptions of creativity.
Within the tradition of Western psychology, creativity is understood as a factor that determines the generation of novel and appropriate ideas or solutions to a problem. It is closely related to originality (novelty) and usefulness (appropriateness; see Sternberg, 1999, for an overview). By analyzing the anthropological and philosophical literature on creativity in Indian, East Asian, and African societies, Lubart (1990, 1999) identified differences between the Western and Eastern concepts of creativity: the Western concept of creativity is understood as having a finite beginning and end; in contrast, the Eastern understanding of creativity supposes development. So, the Western understanding of creativity emphasizes innovation, whereas the Eastern concept is more dynamic, assuming creative people's ability to reuse and reinterpret existing traditions (Lubart, 1999; Raina, 1999; see overview in Paletz and Peng, 2008). For example, contemporary Western art appreciates novelty and radical changes in existing paradigms or even rejection of them. In contrast, Confucian esthetics is related to the re-consideration of existing ideas, which reflects one's own values and beliefs (Tu, 1985). That is the case of traditional Arabic calligraphy or Chinese brush painting: the old ideas could be modified to reflect an artist's authentic perception. The latter could be regarded as a creative tool that is capable of catching the essence of the object. Instead of trying to establish a unique phenomenon by breaking up with old traditions, a person cultivates one's authentic approach, which can be applied to both old and new (Averill et al., 2001).
If representatives of different cultures may differ in their perception of the creativity construct, these differences may influence the evaluation of creative work produced by other people. A few cross-cultural studies investigated the agreement on creativity ratings of the work produced by the others. Participants were asked to judge creative work produced by other people. Chen et al. (2002) had American and Chinese college students evaluate drawings produced by their respective peers based on geometric shapes (circle, rectangle, and triangle). The agreement between the US and Chinese judges was nearly perfect (overall correlation was 0.97). Niu and Sternberg (2001) asked US and Chinese graduate students in psychology to make collages and to draw an alien creature (cf. Ward, 1994). Then, they asked the US and Chinese judges to evaluate these works. Americans were found to produce more creative artworks than did their Chinese peers, and this performance difference was recognized by both American and Chinese judges. Moreover, the difference between the use of criteria by American and Chinese judges was small. Very similar findings were reported in a study comparing German and Chinese participants' performance on occupational creative problem-solving task (Tang et al., 2015). The task performance was evaluated using the Consensual Assessment Technique mentioned above with judges from respective countries. That study revealed that both German and Chinese judges rated the German respondents' outcomes higher on most creativity dimensions. In a similar vein, Yi et al. (2013) reported that both German and Chinese judges found that German participants produced more creative and esthetically pleasing artwork than did their Chinese counterparts.
The Present Study
The present study takes a similar approach and looks at how cultural variations may influence the evaluation of creative work produced by other people.
We employed samples from Russia and the United Arab Emirates (UAE). Traditionally, the Western conception of creativity is ascribed to people from North America and Western Europe (e.g., Niu and Sternberg, 2001;Yi et al., 2013;Tang et al., 2015;Loewenstein and Mueller, 2016). A few studies demonstrated that people from Eastern and Central Europe also tend to reveal Western perspective on the creativity construct (e.g., Glăveanu and Karwowski, 2013;Hojbotă, 2013;Pavlović et al., 2013;Szen-Ziemiańska, 2013). At the same time, scientific literature lacks research on perception of creativity among Russians, although they appear to share a mindset with those residing in Central and Eastern Europe and there is an argument that Russia is inclined toward a more Western way of thinking (see discussion below). In a similar vein, the Eastern perspective on creativity is largely represented by Asian countries (specifically, China, see discussion above) and underrepresented by Middle Eastern countries. Hence, it appears to be plausible to test samples from the countries that are underrepresented in the literature.
Thus, we selected Russia as a representative of a European country (more West oriented) and the UAE as a representative of the Middle Eastern country (more East oriented). This selection is supported by the literature demonstrating Russia's rapid transition toward a less collectivist and more democratic society (e.g., Stetsenko et al., 1995;Ryan et al., 1999;Naumov and Puffer, 2000). The UAE, on the other hand, is considered a traditional collectivist Middle Eastern society (e.g., Hameed et al., 2016;Rao et al., 2021). Kharkhurin and Yagolkovskiy (2019) and Kharkhurin and Charkhabi (2021) used structured imagination test (Ward, 1994) to collect drawings of an alien creature from Russian and Emirati participants, respectively. The test evaluated participants' ability to surpass their "structured imagination" (cf. Ward, 1994), which presumably limits individuals' thinking outside the box. That is, people have difficulties violating the conceptual boundaries of a standard category when creating a new exemplar of that category. The drawings of the alien creature obtained from Russian and Emirati participants in those studies were judged by a different group of Russian and Emirati participants in the current study.
Considering reviewed evidence, we advanced a hypothesis that taps into cultural variations in creativity judgment. We expected to find that representatives of different cultural groups judge creative work produced by the others differently.
Procedure
The study consisted of two phases. Phase I involved selection of drawings of an alien creature produced in the test of structured imagination (see description below). These drawings were randomly selected from the ones produced in Russia and reported by Kharkhurin and Yagolkovskiy (2019) and in the UAE and reported by Kharkhurin and Charkhabi (2021), respectively. To save space in the present article, we refer the reader to those studies for a detailed description of the methods and present here only the information relevant to the purpose of the present study, namely, the description of the randomly selected samples and the test of structured imagination. Thus, in Phase I, we formed a pool of 100 drawings of alien creatures: 50 drawings were randomly selected from the Russian sample and another 50 were randomly selected from the UAE sample. To eliminate any language related bias, the drawings were cleaned from any text using Adobe Photoshop CS5.1.
A total of 100 drawings of alien creatures were used in Phase II's creativity judgment procedure (see description below). They were presented using the open-source survey platform LimeSurvey 2.06. The order of presentation was random.
In Phase II, after signing the consent form, participants received the creativity judgment procedure.
Participants
Participants from Russia and the UAE were recruited in both phases of the study. To reduce potential sampling biases, we recruited undergraduate students from highly reputable universities in the respective regions: HSE University was ranked ninth in Russia and the American University of Sharjah was ranked seventh in the Middle East according to the QS World University Rankings. In Phase II (the current study), the Russian sample consisted of 53 participants (13 male). Participants were invited to participate in the study through the Introduction to Psychology subject pool powered by the SONA systems and received a course credit for participation. The drawings selected in Phase I (previous studies) were produced by 50 (20 male and 30 female) HSE University (Russia) undergraduate students aged between 17 and 21 (M = 18.18, SD = 0.75), who were randomly selected from the sample used by Kharkhurin and Yagolkovskiy (2019), and by 50 (20 male and 30 female) American University of Sharjah (UAE) undergraduate students aged between 17 and 23 (M = 20.10, SD = 1.42), who were randomly selected from the sample used by Kharkhurin and Charkhabi (2021).
Instruments
Emirati participants received all tests in English and Russian participants received all tests in Russian. The Russian versions of the tests were produced from the original English versions using back-translation (Brislin, 1970).
The Test of Structured Imagination
Structured imagination was assessed using a modified version of the Invented Alien Creatures task (cf. Ward, 1994; Kozbelt and Durmysheva, 2007). The task was reduced from the original version to suit the purpose of the present study. The participants were asked to imagine, draw, and describe a creature living on a planet very different from Earth. They were encouraged to be as imaginative and creative as possible and not to worry about how well or poorly they draw. They had 12 minutes to complete the task. An invariant coding system (Kozbelt and Durmysheva, 2007) was used to categorize each drawing on three invariants, the features that commonly appear in most participants' responses: bilateral symmetry, two eyes, and four limbs. The chosen invariants were similar to the ones extracted in Ward's (1994, p. 1) original study, in which he found that "the majority of imagined creatures were structured by properties that are typical of animals on earth: bilateral symmetry, sensory receptors, and appendages." Each invariant had five categories, each of which was assigned the value indicated in parentheses below. For bilateral symmetry, the categories were: clearly bilaterally symmetric (0), bilaterally symmetric if the creature was rotated (0), superficially violating bilateral symmetry (e.g., an extra limb on one side; 1), clearly not bilaterally symmetric (2), and unclear (0). For eyes and limbs, the categories were: clearly following the invariant (two eyes or four limbs; 0), drawing more features than the invariant (more than two eyes or four limbs; 2), drawing fewer features (one eye or one to three limbs; 2), drawing no relevant features (1), and unclear (0). So, each drawing received a value for each invariant ranging from 0 (not violated) to 2 (clearly violated). Subsequently, the total invariants violation score was calculated as the sum of the three invariant scores. The invariants violation score ranged from 0 to 6; a higher score suggested a greater tendency to violate the standard invariants in the drawing. Figure 1 presents two cases (before language-related material was removed) illustrating (a) no violation of invariants and (b) some violation of invariants. The creature in Figure 1A is bilaterally symmetric (score 0), has two eyes (score 0) and four limbs (score 0); its invariants violation score is 0. The creature in Figure 1B is bilaterally symmetric (score 0), has more than two eyes (score 2) and no limbs (score 2); its invariants violation score is 4. The invariants violation score was used in the present study as an indicator of creative production.
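To make the scoring arithmetic explicit, the short sketch below (our own illustration, not code from the study) computes the total score from the three per-invariant violation values:

```c
#include <assert.h>

/* Total invariants violation score: the sum of the three per-invariant
 * violation values (each coded 0, 1, or 2 per the rubric above),
 * giving a 0-6 scale where higher means greater violation. */
int invariants_violation_score(int symmetry, int eyes, int limbs)
{
    assert(symmetry >= 0 && symmetry <= 2);
    assert(eyes >= 0 && eyes <= 2);
    assert(limbs >= 0 && limbs <= 2);
    return symmetry + eyes + limbs;
}

/* Worked example from the text: the Figure 1B creature is bilaterally
 * symmetric (0), has more than two eyes (2) and no limbs (2) -> 4. */
```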
Creativity Judgment Procedure
Participants received 100 drawings of alien creatures (see Procedure above). They were asked to judge the level of creativity of each drawing and indicate it on a five-point Likert-type scale. A higher creativity judgment score indicated greater perceived creativity of the drawing.
RESULTS
For each alien creature drawing produced in Phase I, we separately calculated a mean judgment score produced by Russian and Emirati participants in Phase II. The rate of agreement between Russian and Emirati judgment scores was high (r = 0.89, p < 0.001). Figure 2 presents the mean judgment scores of the two cultural groups (Phase II, judgment) for the drawings produced by Russian and Emirati participants (Phase I, production). From now on, we call the cultural groups in Phase I drawing production groups and those in Phase II drawing judgment groups. As Figure 2 demonstrates, the Russian judgment group gave higher ratings to both production groups in comparison with their Emirati peers. Also, both judgment groups gave higher ratings to the Russian production group than to the Emirati production group. These observations were tested in the following analysis.
Note that we found an effect of cultural groups on drawing production. That is, in the Phase I, Russian production group obtained significantly greater invariants violation scores than the Emirati production group [ΔM = 0.62, SE = 0.27, t(98) = 2.53, p < 0.05]. Since the production groups differed in their invariants violation scores, its potential confounding effect should be taken into account. In other words, to ensure that the differences in cultural groups' judgments were not stipulated by the differences in the quality of the drawings produced by respective cultural groups, we controlled the potential effect of the latter.
DISCUSSION
We explored whether cultural variations influence perception of creative merit in the work produced by the others. Our hypothesis that the representatives of different cultural groups judge creative work differently was confirmed. The main finding demonstrated variations in the cultural groups' judgment of the work produced by the others. Emirati participants' judgments were significantly lower than Russian participants' judgments.
Why did the Emirati tend to rate the alien creature drawings lower than the Russians? A possible answer to this question taps into perception of the creative value of the violation of standard category boundaries as assessed by the structured imagination test. Recall our finding that in the structured imagination test presented in Phase I of the study, Emirati participants were less likely to violate invariants than their Russian counterparts. The reluctance to violate invariants in creative production can be related to disfavoring creative work that violates those invariants. That is, Emirati participants may find the drawings with more invariant violation less attractive and judge them as less creative. This idea is supported by an additional analysis demonstrating that invariant violation scores obtained in Phase I correlated significantly with Russian judgment scores (df = 48, r = 0.26, p < 0.05) and insignificantly with Emirati judgment scores (df = 48, r = 0.18, p = 0.07). Thus, Emirati participants may perceive the drawings that violate invariants as less creative, whereas Russian participants may perceive those drawings as more creative. The violation of invariants in the structured imagination test (Ward, 1994) presumes violating the conceptual boundaries of a standard category when creating a new exemplar of that category. In general, unstructured imagination, thinking outside the box, is considered to be an important criterion of creative thinking. People rate the drawings of the alien creatures that reveal more violations of a standard set of properties characterizing a category as more creative (e.g., Marsh et al., 1996; Kozbelt and Durmysheva, 2007; Kharkhurin, 2009). However, it is entirely possible that unstructured imagination is a defining property of creativity in the Western, but not in the Eastern tradition. Kharkhurin (2014) claimed that the Western creative tradition places more emphasis on novelty and originality in thinking, whereas the Eastern tradition values esthetics, goodness, and authenticity. Li (1997) distinguished between horizontal and vertical traditions in the production of art. According to the former (typical for Western cultures), the symbols, methods, and aims of art are subject to modification and even radical change. In contrast, the latter tradition (more characteristic of Eastern cultures) constrains both the content and the techniques of the artistic work and places more emphasis on the esthetic values of the product. Hence, the Western perspective on creativity encourages violation of standards, whereas the Eastern perspective assumes conformity to the standard. In the present study, the Russians, guided by the Western creative tradition, may value unstructured imagination more than the Emirati. As a result, the Russian participants produced higher judgment scores than their Emirati counterparts.
Further, similar to the above mentioned cross-cultural studies of the judgment agreement (Niu and Sternberg, 2001;Chen et al., 2002;Yi et al., 2013;Tang et al., 2015), we found a high rate of agreement between the Russian and Emirati participants evaluating the alien creature drawings. Both cultural groups gave higher rating scores to the drawings produced by the Russians. A possible explanation of this agreement is rooted in compatibility of evaluation criteria used by both cultural groups. This idea was expressed by Haritos-Fatouros and Child (1977) who found a consistency in evaluation of esthetic qualities of artwork by American and Greek judges. Interpreting their findings, they proposed that judges from different countries use similar criteria in their assessment of artworks, and esthetic components of these judgments have transcultural stability. Note that we cannot make a parallel between that study and ours, because Greek and American cultural settings appear to be closer to each other than the Russian and the Emirati ones. However, we appreciate Haritos-Fatouros and Child's conclusion about the transcultural stability of esthetic components of creative work.
However, some scholars provide a counter argument claiming that people tend to reproduce their own cultural systems in their artistic expression and evaluation of artwork (e.g., Shweder, 1991;Morling and Lamoreaux, 2008). This argument was supported by a number of studies demonstrating that people prefer to judge creative work produced by their country mates higher than the one produced by the foreigners (e.g., Wang et al., 2012;Ishii et al., 2014;Bao et al., 2016). For example, Bao et al. (2016) presented Chinese and international students from Western countries with traditional Chinese paintings and Western classicist paintings. They found a significant interaction between the cultural origin of the painting and the cultural background of the judges. Western participants rated Western paintings higher than Chinese paintings, whereas Chinese participants evaluated traditional Chinese paintings higher compared to Western paintings. Similarly, Ishii et al. (2014) revealed differences between European Americans and Japanese participants in the preference for unique and harmonious colorings. Wang et al. (2012) found differences between East Asians and European Canadians in their preferences for web page complexity.
An additional finding of the present study revealed that the Russian participants judged the drawings produced by their country mates as more creative than those produced by the foreigners, whereas the Emirati judged the drawings of their country mates as less creative than those produced by the foreigners. An obvious explanation of this finding can be inferred from the cultural differences in performances on structured imagination test and judgment procedure. The Russian participants in Phase I obtained higher invariant violation scores than their Emirati peers. At the same time, the Emirati participants in Phase II tended to provide lower creativity judgment scores than the Russian peers. Hence, lower ratings by the Emirati judgment group of the lower performance of the Emirati production group resulted in the lowest judgment scores by the Emirati judgment group of the Emirati production group.
CONCLUSION
In conclusion, we would like to place our findings in a broader context pertinent to contemporary research in creativity as well as to potential developments of the project.
First, we consider the methodological implications of our findings. The present study demonstrated that culture-specific variations may have an impact on the evaluation of creative work. As was stated in the introduction, culture-specific mental models and interpretive frames suggest common strategies in solving creative problems and thereby influence creative behavior. Our study demonstrated that these cultural aspects may also create a bias in the evaluation of creative performance of the representatives of alien cultural groups. Although this issue has not received sufficient coverage in empirical research, it appears to be quite an urgent matter.
For example, Kharkhurin (2014) argued that the Western creative tradition places more emphasis on novelty and originality in thinking. In contrast, in the manifestation of creative abilities in the Eastern tradition, esthetics (e.g., Kay, 1996;Zuo, 1998), goodness (e.g., Chan, 1967;Lao, 1983;Lao-Tzu, 1992), and authenticity (e.g., Tu, 1985;Averill et al., 2001) rather than originality play a pervasive role. Most of the existing assessment techniques adopted a Western construct of creativity, which emphasizes originality in thinking. Therefore, they could be biased toward typical Western creative behavior and disregard creative principles inherent to non-Western cultural groups. This bias in creativity assessment could explain the empirical findings of the Western dominance pervasive in creativity research.
Second, we would like to go back to our initial idea and place our findings of cultural differences in judgment in the context of the implicit theory of creativity. Professionals in creative domains base their evaluation of a creative product on specific knowledge and expertise, i.e., on explicit theory of creativity. In contrast, lay people tend to use popular knowledge about creativity without necessarily precise specifications, i.e., implicit theory of creativity. These people's judgments are instructed by their own (often naïve) understanding of the concept of creativity. This is the case of our study's participants. Therefore, we could suppose that the judgment of the alien creature drawings in the present study was guided by participants' implicit theories of creativity. In other words, there is a link between implicit theory of creativity and judgment of creative work. Our study revealed that the specifics of the sociocultural environment could have an impact on the judgment of creative products. Hence, we could make an inference that sociocultural context may influence implicit theory of creativity. This assumption finds support in the literature demonstrating cultural effects on the implicit theory of creativity (e.g., Lim and Plucker, 2001;Paletz and Peng, 2008;Lan and Kaufman, 2012).
Third, we extend our considerations about the implicit theory of creativity to a newly developed conception of creative perception. Let us imagine ourselves mingling with a museum crowd. It is implicit rather than explicit theory of creativity that instructs our opinion about the merit of exhibited artworks. Variations in implicit theory of creativity influence not only how creativity is incarnated, but also how it is appreciated. Our evaluation of a creative product is instructed by our perception of the construct of creativity. This idea brings us to an emerging creative perception paradigm. In a first sketch of this paradigm, Kharkhurin and Charkhabi (2021, p. 10) proposed that "creative perception can be defined as an individual's ability to identify creative elements in oneself, others, and the environment." These creative elements appear to be essential constituents of phenomenal reality that reflect the fundamental truth of nature (Kharkhurin, 2014). The ability to identify these elements encourages an individual to engage in the process of expressing them in one's creative act. In fact, perception of the creativity construct refers to implicit theory of creativity. Creative perception may not only facilitate expression of phenomenal reality in a creative work. It can also instruct the beholder's perception of the creative work produced by others. This is a well-known phenomenon in contemporary art. One needs to be prepared (and even educated) to appreciate an artwork.
Finally, we would like to propose several directions for future research. Cultural variations in implicit theory of creativity imply that people with different cultural backgrounds may vary in their explicit definitions of creativity. It would be interesting to see how those distinctions are reflected in differences in creative perception, production, or judgment. A plausible methodology would analyze the definitions of creativity provided by the representatives of different cultural groups and relate them to their performance on various assessments of creativity. This could be a theme for the next study. Another study could explore the specific criteria used by people from different cultural groups to evaluate creative work produced by others.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board of American University of Sharjah. The participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
AK designed the study, collected data in UAE, and drafted the manuscript. SY collected data in Russia and edited the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
The article was prepared in the framework of a research grant funded by the Ministry of Science and Higher Education of the Russian Federation (grant ID: 075-15-2020-928).
Co-administration of iloprost and eptifibatide in septic shock (CO-ILEPSS)—a randomised, controlled, double-blind investigator-initiated trial investigating safety and efficacy
Background Part of the pathophysiology in septic shock is a progressive activation of the endothelium and platelets leading to widespread microvascular injury with capillary leakage, microthrombi and consumption coagulopathy. Modulating the inflammatory response of endothelium and thrombocytes might attenuate this vicious cycle and improve outcome. Method The CO-ILEPSS trial was a randomised, placebo-controlled, double-blind, pilot trial. Patients admitted to the intensive care unit with septic shock were randomised and allocated in a 2:1 ratio to active treatment with dual therapy of iloprost 1 ng/kg/min and eptifibatide 0.5 μg/kg/min for 48 h or placebo. The primary outcomes were changes in biomarkers reflecting endothelial activation and disruption, platelet consumption and fibrinolysis. We compared groups with mixed models, post hoc Wilcoxon signed-rank test and Mann-Whitney U test. Results We included 24 patients, of which 18 (12 active, 6 placebo) completed the full 7-day trial period and were included in the per-protocol analyses of the primary outcomes. Direct comparison between groups showed no differences in the primary outcomes. Analyses of within-group delta values revealed that biomarkers of endothelial activation and disruption changed differently between groups, with increasing levels of thrombomodulin (p = 0.03) and nucleosomes (p = 0.02) in the placebo group and decreasing levels of sE-selectin (p = 0.007) and sVEGFR1 (p = 0.005) in the active treatment group. Platelet count decreased during the first 48 h in the placebo group (p = 0.049) and increased from baseline to day 7 in the active treatment group (p = 0.023). Levels of fibrin monomers declined in the active treatment group within the first 48 h (p = 0.048) and onwards (p = 0.03). Furthermore, there was a significant reduction in SOFA score from 48 h (p = 0.024) and onwards in the active treatment group. Intention-to-treat analyses of all included patients showed no differences in serious adverse events including bleeding, use of blood products or mortality. Conclusion Our results could indicate benefit from the experimental treatment with reduced endothelial injury, reduced platelet consumption and ensuing reduction in fibrinolytic biomarkers along with improved SOFA score. The results of the CO-ILEPSS trial are exploratory and hypothesis generating and warrant further investigation in a large-scale trial. Trial registration ClinicalTrials.gov, NCT02204852 (July 30, 2014); EudraCT no. 2014-002440-41 Electronic supplementary material The online version of this article (10.1186/s13054-019-2573-8) contains supplementary material, which is available to authorized users.
Introduction
Septic shock is a leading cause of death in the intensive care unit (ICU) with mortality rates above 40% [1,2]. Treatment strategies consist of early recognition and diagnosis to facilitate timely initiation of antibiotic therapy and supportive care [3]. A series of pathogenic events are responsible for the transition from sepsis to septic shock. The initial reaction to infection is a neurohumoral, generalised pro- and anti-inflammatory response [4,5] resulting in mobilisation and/or "spill over" of plasma substances and excessive cellular, coagulation and endothelial activation. The proinflammatory response induces widespread endothelial and microvascular injury resulting in disseminated intravascular coagulation with microvascular thrombosis, consumptive thrombocytopenia, coagulopathy, bleeding and a loss of endothelial integrity ultimately leading to capillary leakage, tissue oedema, tissue ischaemia and shock [5-7]. In the later stages of sepsis, immunodeficiency is a critical component of the pathology that causes multiple organ failure and death [8].
There are three major pathogenic pathways associated with the coagulopathy in sepsis: (1) tissue factor-mediated thrombin generation, (2) dysfunctional anticoagulant pathways and (3) blocked fibrinolysis [9]. Treatment strategies aimed at reducing coagulation activation with antithrombin [10], tissue factor pathway inhibitor [11] and activated Protein C [12,13] have all failed to show improved survival in large clinical trials refuting this as a pathophysiological explanation.
The platelets and endothelium are interdependent in the vicious cycle of endothelial damage, microcirculatory failure, consumptive thrombocytopenia, coagulopathy, bleeding, immunodeficiency, tissue ischaemia, shock, organ failure and death in patients with severe sepsis/septic shock. Selective targeting of either the platelets or the endothelium may therefore be sufficient to interrupt the progressive activation and damage of the endothelium and the activation of the platelets [14].
Prostacyclin is an endogenously produced molecule with anti-platelet, vasodilatory and cytoprotective properties released from the healthy endothelium as part of the natural anticoagulation system [15]. Intravenous prostacyclin in doses of 0.5-2.0 ng/kg/min has been reported to be successful at achieving endothelial modulating/preserving effects in patients with traumatic brain injury, without significant haemodynamic or platelet aggregation complications [16,17].
Eptifibatide is a platelet glycoprotein (GP) IIb/IIIa receptor inhibitor that prohibits clot development in a predictable and easily controllable way. Inhibition of the GPIIb/IIIa receptor does not alter the paracrine function of platelets, which is considered a crucial part of maintaining vascular integrity and preventing haemorrhage in conditions with inflammation [18,19]. Animal studies have reported that treatment with GPIIb/IIIa inhibitor protects against endothelial dysfunction in experimental endotoxemia [20,21]. Furthermore, casuistic findings have shown that GPIIb/IIIa inhibition leads to clinically relevant thrombolysis in patients with mechanical prosthetic mitral valve thrombosis [22,23].
The objective of the CO-ILEPSS trial was to investigate the safety and efficacy of a combined infusion of low-dose prostacyclin (iloprost) and GPIIb/IIIa inhibitor (eptifibatide) for 48 h in patients with septic shock. We hypothesised that this dual treatment with iloprost and eptifibatide would deactivate the endothelium and restore vascular integrity, reduce formation of microvascular thrombosis and dissolve existing thrombi in the microcirculation and maintain platelet counts leading to improved platelet-mediated immune function and reduced risk of bleeding. Compared to the standard treatment (placebo), this was expected to translate into reduced organ failure and improved outcome in patients with septic shock.
Design
The CO-ILEPSS trial was an investigator-initiated, single-centre, randomised, placebo-controlled, double-blind phase 2a trial in patients with septic shock.
The trial was conducted from October 2014 to May 2016, and there were no significant changes to the trial protocol during the course of the trial. The trial is reported in accordance with the CONSORT statement [24], and a populated CONSORT checklist is available in Additional file 1. The trial was approved by the regional ethics committee, and all patients and/or their next of kin gave informed consent to participate. The full trial protocol is available in Additional file 2.
Participants
Patients were allocated in a 2:1 ratio, with 15 intention-to-treat (ITT) patients allocated to active treatment and 9 ITT patients allocated to control treatment (placebo). Patients who dropped out or were withdrawn from the trial prior to day 7 were replaced to ensure adequate data points, and 12 active treatment and 6 placebo patients, respectively, were treated per protocol (PP). To replace withdrawn patients, unblinded trial personnel added envelopes containing the same allocation as those of the withdrawn patients, and new patients were recruited and randomised.
We screened patients admitted to the intensive care unit (ICU) at Nordsjaellands Hospital (NOH) during the inclusion period. Patients were screened within 24 h of admission against the trial's inclusion criteria; a full description of the inclusion and exclusion criteria is provided in Additional file 3.
Randomisation
The random allocation sequence was computer generated, and allocation pages were packed in sealed opaque envelopes. The envelopes were prepared by the principal investigator (SRO) at Rigshospitalet (RH) and delivered at the trial site NOH. At NOH, the envelopes were stored in a locked office at the post-anaesthesia care unit (PACU) located in a separate building from the ICU. The local investigators (REB and MHB) did not have access to this office. When a patient fulfilled inclusion criteria and consent had been obtained, randomisation was done by placing a phone call from the ICU to a nurse at the PACU. The nurse then opened the next envelope in line and prepared the trial drug or placebo according to the instructions. Syringes containing trial drug or placebo drug were then delivered to the investigator (REB) at the ICU where trial treatment was initiated.
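To make the allocation procedure concrete, the sketch below shows how a blocked 2:1 allocation sequence of the kind packed into the envelopes could be generated. This is a minimal illustration, not the trial's actual code; the block size of 3 and the function name are assumptions.

```python
import random

def allocation_sequence(n_blocks: int, seed: int = 42) -> list:
    """Blocked 2:1 (active:placebo) allocation sequence.

    Each block of 3 holds two 'active' and one 'placebo' assignment,
    shuffled independently, so the 2:1 ratio is exact at every block
    boundary. Block size 3 is an assumption for illustration only.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["active", "active", "placebo"]
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# 8 blocks -> 24 envelopes: 16 active, 8 placebo
print(allocation_sequence(n_blocks=8))
```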
Intervention
Patients allocated to the active treatment arm received dual infusions of prostacyclin (iloprost) 1.0 ng/kg/min and GPIIb/IIIa inhibitor (eptifibatide) 0.5 μg/kg/min for 48 h. Iloprost and eptifibatide were both diluted in saline to a concentration with which the targeted treatment was achieved with an infusion of 4 ml/h. Treatment in the placebo group consisted of dual infusions of normal saline 4 ml/h for 48 h. The infusions of both active and placebo treatment were given either in two separate lumens of a central venous catheter or in two separate peripheral venous catheters.
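The weight-based dosing at a fixed 4 ml/h infusion rate implies a simple calculation of the concentration each syringe must contain. The following sketch reproduces that arithmetic; the 80 kg example weight is illustrative and the function name is ours.

```python
def syringe_concentration(dose_per_kg_per_min: float, weight_kg: float,
                          infusion_rate_ml_h: float = 4.0) -> float:
    """Concentration needed so that `infusion_rate_ml_h` delivers the
    target weight-based dose (result is in the input mass unit per ml)."""
    dose_per_hour = dose_per_kg_per_min * weight_kg * 60.0
    return dose_per_hour / infusion_rate_ml_h

weight = 80.0  # kg, illustrative patient
print(syringe_concentration(1.0, weight))  # iloprost, ng/ml -> 1200.0
print(syringe_concentration(0.5, weight))  # eptifibatide, ug/ml -> 600.0
```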
Outcomes
The primary outcome of the CO-ILEPSS trial was divided into three measures:

1) Change in biomarkers indicative of endothelial activation and damage (sE-selectin, syndecan-1, soluble thrombomodulin (sTM), sVE-cadherin, nucleosomes, vascular endothelial growth factor (VEGF) and soluble vascular endothelial growth factor receptor 1 (sVEGFR1)) from baseline to 48 h post-randomisation
2) Change in platelet count from baseline to 48 h post-randomisation
3) Change in D-dimer and fibrin split products indicative of fibrinolysis (fibrin monomer complex, fibrin degradation products, D-dimer) from baseline to 48 h post-randomisation.
The reason for having three primary sub-endpoints was that they reflect different effects of active treatment on the vascular system that we wished to evaluate, i.e. endothelial activation, platelet consumption and fibrinolysis.
Secondary outcomes included severe bleeding (intracranial or clinical bleeding with the use of 3 RBC units or more/24 h); use of blood products in the ICU post-randomisation; mortality at days 7, 30 and 90; changes in Sequential Organ Failure Assessment (SOFA) score from baseline; and days of vasopressor, mechanical ventilation and renal replacement therapy (RRT) post-randomisation.
Sample size/power calculation
The sample size for the CO-ILEPSS trial was not based upon a power calculation because there were no available data on the specific active dual drug therapy vs placebo in patients with septic shock.
However, in a previous study of the safety and efficacy of prostacyclin vs placebo in patients undergoing Whipple surgery, post-operative levels of sVE-cadherin increased by 1978 ± 461 pg/ml in the placebo group [25]. Based on this, we would be able to detect a difference of 33% in the sVE-cadherin increase between groups with 12 patients in the active treatment group vs 6 patients in the placebo group, with a power of 0.8 and an alpha of 0.05.
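This detectable-difference claim can be checked with a standard two-sample power calculation. The sketch below, which assumes the placebo-group SD of 461 pg/ml as the common SD and a two-sided test, solves for the smallest standardized effect detectable at power 0.8 with 12 vs 6 patients; it recovers a difference of roughly one-third of the reported placebo increase, consistent with the 33% quoted above.

```python
from statsmodels.stats.power import TTestIndPower

# Placebo-group increase in sVE-cadherin after Whipple surgery [25]:
# mean 1978 pg/ml, SD 461 pg/ml (used here as the common SD).
mean_placebo, sd = 1978.0, 461.0

# Smallest standardized effect detectable with 12 vs 6 patients,
# alpha = 0.05 (two-sided) and power = 0.8; ratio = n2/n1 = 6/12.
d = TTestIndPower().solve_power(effect_size=None, nobs1=12, ratio=0.5,
                                alpha=0.05, power=0.8)
print(f"detectable difference: {d * sd:.0f} pg/ml "
      f"({d * sd / mean_placebo:.0%} of the placebo increase)")
```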
Blinding
The CO-ILEPSS trial was a double-blind, placebo-controlled trial, and all participants, next of kin, caregivers, investigators and sponsors were blinded for the trial allocation.
Both trial medications were colourless when diluted in saline, and it was impossible to distinguish the syringes with trial medicine from those with saline. Since the number of patients in the different groups was unequal, it was not possible to maintain blinding during the statistical analyses, but these were conducted according to the statistical analysis plan generated as part of the trial protocol.
Statistical analysis
Summary statistics of continuous variables are presented as median with interquartile range (IQR). Summary statistics of frequency tables are presented as n (%). p values < 0.05 are considered significant.
The primary outcomes were analysed for efficacy in PP analyses. The difference between treatment groups for continuous data was evaluated with the analysis of variance (mixed model) and post hoc pairwise comparisons of means. Furthermore, delta values (numerical change in variables between time points) within and between groups were compared by paired (Wilcoxon signed-rank test) and non-paired (Mann-Whitney U test) non-parametric tests.
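As a concrete illustration of the within- and between-group delta comparisons described above, the sketch below applies the Wilcoxon signed-rank and Mann-Whitney U tests to biomarker values; the numbers are invented for demonstration and do not come from the trial.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Illustrative (made-up) biomarker values at baseline and 48 h.
active_base  = np.array([9.1, 7.4, 11.0, 8.2, 10.3, 6.9])
active_48h   = np.array([8.0, 7.1,  9.5, 7.8,  9.9, 6.5])
placebo_base = np.array([8.8, 10.1, 7.9])
placebo_48h  = np.array([9.9, 11.4, 8.6])

# Within-group change (paired): Wilcoxon signed-rank test on deltas.
delta_active = active_48h - active_base
print(wilcoxon(delta_active))

# Between-group comparison of delta values: Mann-Whitney U test.
delta_placebo = placebo_48h - placebo_base
print(mannwhitneyu(delta_active, delta_placebo))
```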
Biomarker measurements are presented as absolute values in Figs. 2 and 3 and as relative changes in percentage from baseline in Additional file 4.
Secondary outcomes were analysed on an ITT basis. The differences between treatment groups for categorical data were evaluated with McNemar's test (change over time), frequency tables and chi-square statistics. The difference between treatment groups for continuous data was evaluated using the analysis of variance (mixed model) followed by post hoc pairwise comparisons of means. If the assumption of normality was not fulfilled, non-parametric tests (Wilcoxon rank-sum test) were used. Statistical analysis was performed using SAS 9.1.3 SP4 (SAS Institute Inc., Cary, NC, US).
Results
During the study period, we screened 509 patients and included 24. Most patients were excluded due to the absence of septic shock or completed/scheduled surgery within ± 48 h. Of the included patients, two patients were withdrawn prior to initiation of trial treatment, and four patients were withdrawn prior to day 7 (Fig. 1). These six patients were replaced in the trial. Reasons for withdrawal were incorrect inclusion (1), emergency surgery (1), transfer to another ICU (1), therapeutic anticoagulation therapy (2) and treatment with inhaled prostacyclin (1). Table 1 shows baseline characteristics, use of organ supportive therapy and outcomes of patients included in the PP analyses. Only alkaline phosphatase was significantly different between groups at baseline, and it is worth noting that the disease severity was considerable with SOFA scores of 8-10, SAPS II scores of 46-48 and an observed 90-day mortality of 25-50%.
Primary outcomes
The PP primary analysis included data from the 18 patients (12 active and 6 placebo) who completed the full 7 days of the trial.
Endothelial disruption biomarkers
At baseline, sVE-cadherin was significantly higher in the placebo group (p = 0.047) (Fig. 2a). Apart from this, there were no differences in the measured biomarkers between groups at baseline or at any time point during the 5-day follow-up (Fig. 2a-f). There were, however, differences in the within-group changes over time: at 6 h, there was a significant increase in both sTM (p = 0.03) and nucleosomes (p = 0.02) (Fig. 2b, c) only in the placebo group. Furthermore, in the placebo group, there was a tendency towards increasing levels of nucleosomes for up to 72 h (p = 0.06) (Fig. 2c).
At 48 h and throughout day 5, there was a significant decrease in sE-Selectin (p = 0.007) and sVEGFR1 (p = 0.005) only in the active treatment group (Fig. 2d, e).
Platelet count
The platelet count did not differ significantly between groups at any time point during the trial. Similarly to the endothelial disruption biomarkers, there were differences in the within-group changes over time with a decline from baseline to 48 h only in the placebo group (p = 0.049) and an increase from baseline to day 7 only in the active treatment group (p = 0.023) (Fig. 3a).
Fibrinolytic biomarkers
D-dimer and fibrinogen degradation products were similar in both groups (Fig. 3b, c). Levels of fibrin monomers were higher in the active treatment group than in the placebo group at baseline. Comparison of within-group delta values showed a significant decline within the first 48 h (p = 0.048) and onwards only in the active treatment group (time effect p = 0.04) (Fig. 3d).
Secondary outcomes
In the ITT secondary analyses, a total of 24 patients (15 active and 9 placebo) were included.
All secondary endpoints including safety measures of bleeding, use of blood products in the ICU and mortality were comparable between groups (Additional file 5). Additionally, there was no difference in the occurrences of serious adverse events (SAEs) between groups and there were no suspected unexpected serious adverse reactions (SUSARs) in either group. Occurrences of SAEs and reasons for withdrawal and exclusions are summarised in Table 2.
Comparison of within-group changes over time revealed a significant reduction in SOFA score at 48 h (p = 0.024) and onwards in the active treatment group, but not in the placebo group.
Main findings
Dual therapy with iloprost and eptifibatide for 48 h in patients with septic shock had no significant effect on absolute values of biomarker levels compared to placebo treatment.
Endothelial injury
Taken together, the observed changes in sTM, nucleosomes, sE-selectin, VEGF and sVEGFR1 could reflect reduced levels of endothelial activation, disruption and cell damage. Prostacyclin doses, corresponding to the low dose chosen for this trial (1.0 ng/kg/min), have previously been demonstrated not to increase bleeding risk or haematoma size in patients with traumatic brain injury [17] and to reduce the need for blood transfusion in patients undergoing Whipple surgery due to pancreatic cancer [25]. Our results are in alignment with these former trials which also demonstrated beneficial effects of iloprost on vascular integrity in critically ill patients reflected by similar changes in sE-selectin, sTM and nucleosomes [17,25,26].
Endothelial protection could be ascribed to the cytoprotective effects of prostacyclin, which in its endogenous form induces a reduction in inflammation and stabilisation of lysosomal and cell membranes [27]. The dose of 1.0 ng/kg/min is approximately five-to tenfold higher than the normal endogenous production of prostacyclin from the healthy endothelium [28], and we expected this to restore vascular integrity in septic patients with endothelial injury and dysfunction. The clinical impact of within-group changes in these biomarkers remains to be seen, but observational data have linked increased levels of sTM to reduced survival in patients with septic shock [29].
Platelet protection and thrombotic activity
The increasing platelet count in the active treatment group could indicate protective effect against platelet consumption. In a previous study by Link et al., it was demonstrated that administration of a platelet GPIIb/IIIa receptor inhibitor in combination with unfractionated heparin for 96 h was tolerated in patients with cardiogenic shock and need for dialysis. Importantly, in this study, treatment with the GPIIb/IIIa receptor inhibitor was not associated with increased bleeding but was rather associated with a significantly lower number of platelet transfusions and a higher, maintained platelet count, as compared to controls anticoagulated with heparin alone [30]. This finding of preserved platelet count is in accordance with the finding in our study.
In addition to preserved platelet counts, the inhibition of platelet-monocyte aggregation might have an anti-inflammatory effect, and thus serve to protect the endothelium and microvasculature [31]. A reduced thrombotic activity was reflected in declining levels of fibrin monomers in the active treatment group.
Safety
The individual doses of iloprost and eptifibatide selected for this trial are lower than the recommended doses for their respective approved indications. The dosages chosen for the current trial are in alignment with doses that have been reported to result in the desired effect for each agent without causing significant adverse side effects [17,25,30].
The safety of the co-administration of eptifibatide and iloprost in a dose comparable to the dose applied in the present study (eptifibatide 0.5 μg/kg/min + iloprost 1.0 ng/kg/min infused for 24 h) is supported by a completed phase I/II trial in patients undergoing primary PCI due to ST-elevation myocardial infarction [26]. This trial found no bleeding-related adverse events, and no other treatment-related adverse events occurred.
Limitations
Our main inclusion criterion was septic shock defined as the use of norepinephrine in patients with sepsis. This ensured that the screening and inclusion process was rather pragmatic and easy to perform, but we might have limited the potential effect of our intervention, since it is not a given that all patients with septic shock have equal degrees of endothelial dysfunction and/or consumptive coagulopathy. If we had used one or more specific markers in our screening, we might have been able to show higher efficacy of our trial treatment.
The CO-ILEPSS pilot trial is exploratory and hypothesis-generating. Our small sample size and the single-centre design limit both external validity and the ability to draw definitive conclusions from our results. The lack of a power calculation might have limited our ability to detect a difference in our primary outcome. Furthermore, our primary outcome is a composite of three categories of biomarkers with a total of 11 sub-components, which poses a problem of multiplicity.
HIGD1B, as a novel prognostic biomarker, is involved in regulating the tumor microenvironment and immune cell infiltration; its overexpression leads to poor prognosis in gastric cancer patients
Background HIGD1B (HIG1 Hypoxia Inducible Domain Family Member 1B) is a protein-coding gene linked to the occurrence and progression of various illnesses. However, its precise function in gastric cancer (GC) remains unclear. Methods The expression of HIGD1B was determined through the TCGA and GEO databases and verified experimentally. The association between HIGD1B and GC patients' prognosis was analyzed via the Kaplan-Meier (K-M) curve. Subsequently, ROC curves were used to assess the diagnostic capacity of HIGD1B, and Cox analysis was employed to investigate risk factors for GC. The differentially expressed genes (DEGs) were then subjected to functional enrichment analysis, and a nomogram was generated to forecast the survival outcome and probability of GC patients. Additionally, we evaluated the interaction between HIGD1B and immune cell infiltration and predicted the susceptibility of GC patients to therapy. Results HIGD1B is markedly elevated in GC tissue and cell lines, and patients with high HIGD1B expression have a poorer outcome. In addition, HIGD1B is related to distinct grades, stages, and T stages. The five-year survival ROC AUCs of HIGD1B and the nomogram were 0.741 and 0.735, suggesting appropriate diagnostic efficacy. According to Cox regression analysis, HIGD1B represents an independent risk factor for the prognosis of gastric cancer (p<0.01). GSEA demonstrated that HIGD1B is closely related to cancer formation and progression pathways. Moreover, patients with high HIGD1B expression exhibited a higher level of tumor-infiltrating immune cells (TIICs) and were more likely to experience immune escape and drug resistance after chemotherapy and immunotherapy. Conclusion This study explored the potential mechanisms and the diagnostic and prognostic utility of HIGD1B in GC, identifying HIGD1B as a valuable biomarker and possible therapeutic target for GC.
Introduction
Gastric cancer (GC) is one of the most widespread and fatal diseases in the world. In 2020, there were over 1 million new cases of GC worldwide, placing it fourth in terms of mortality among malignant tumors and fifth in terms of morbidity (1). In recent years, patients with GC have had an improved outlook thanks to advancements in endoscopic and surgical procedures, as well as the application of adjuvant therapies such as chemotherapy, targeted therapy, and immunotherapy (2, 3). Nonetheless, due to the substantial molecular and phenotypic heterogeneity of GC (4), most patients with advanced gastric cancer still have a dismal prognosis, with a 5-year survival rate of less than 30% (5, 6). Therefore, searching for new, highly sensitive, and specific biomarkers and therapeutic targets is imperative to improve the present treatment approaches for GC.
Hypoxia is one of the crucial stress modes that cause cell damage and even death (7), and it is intimately linked to conditions including cancer, heart disease, and stroke (8). It aids in the remodeling of the tumor microenvironment (TME) and facilitates the growth and metastasis of malignancies. The HIG1 hypoxia inducible domain (HIGD) gene family is a putative anti-apoptotic factor, since it is elevated during hypoxia and can influence several critical biological processes (9, 10). For instance, in hypoxic settings, hypoxia-inducible factor 1α (HIF-1α) induces HIGD1A expression, which exerts anti-apoptotic properties by blocking the release of cytochrome C (Cc) and diminishing caspase activity (11-13). In addition, by controlling AMPK activity and cellular reactive oxygen species (ROS) levels, HIGD1A can lessen tumor cell death and contribute to the development and spread of malignancy (14).
HIGD1B is a significant member of the HIGD family; the HIGD1B gene is located on chromosome 17q21.31. The gene encodes the protein HIG2A, composed of 99 amino acids and abundantly expressed in the brain, heart, lung, and subcutaneous adipose tissue (9, 15). Other homologous proteins include HIGD-1A, -1C, -2A, and -2B. With more than 40% homology, HIGD-1B and HIGD-1A are highly analogous in the transmembrane domain (15). Studies have shown that, by postponing the cleavage of OPA1, HIGD1B can inhibit hypoxia- or CCCP-induced mitochondrial rupture and cell death. Its mechanism of governing mitochondrial fusion is similar to that of HIGD1A, and knocking down HIGD-1B can accelerate apoptosis of myocardial cells in hypoxic surroundings (16). Furthermore, HIGD1B is involved in the onset and advancement of intracranial aneurysms (IA), growth hormone-secreting pituitary adenomas (GHomas), and lung cancer (17-19). Nevertheless, little is known about the expression and mechanism of HIGD1B in GC, and its diagnostic and prognostic value in GC is not fully understood.
In this article, we analyzed the expression of HIGD1B in GC and normal gastric tissues across multiple independent cohorts from public databases, and we verified our findings with cell experiments. We assessed the possible roles of HIGD1B in the genesis and progression of GC through various enrichment analysis methods. We then explored the relationship between HIGD1B and clinicopathological features, the TME, and immune cell infiltration in GC; thoroughly and systematically evaluated the diagnostic and prognostic value of HIGD1B in GC; predicted the effectiveness of chemotherapy and immunotherapy; and ultimately identified HIGD1B as a novel prognostic biomarker for GC.
Data collection
Transcriptome data (TPM) and matching clinical data for gastric cancer and adjacent tissues were downloaded from The Cancer Genome Atlas (TCGA, https://portal.gdc.cancer.gov/) and Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/) databases (Supplementary Table S1). The TCGA-STAD cohort has 36 normal and 410 cancer specimens; 439 of these samples provide prognostic data. The GSE29272 cohort comprises 134 normal and 134 cancer samples. The GSE54129 cohort includes 21 normal and 111 cancer samples. The GSE84437 and GSE62254 cohorts contain 433 and 300 gastric cancer patients, respectively, together with their prognostic information. HIGD1B expression was validated using GSE29272 and GSE54129, while GSE84437 and GSE62254 were used for outcome prediction. The Cancer Immunome Atlas (TCIA, https://tcia.at/patients) provides information on immunotherapy. Somatic mutation data were derived from the UCSC Xena database (https://xenabrowser.net/datapages/).
qRT-PCR

Relative HIGD1B expression was quantified by qRT-PCR with GAPDH as an internal reference, employing the 2^-ΔΔCt method to determine the relative expression level of HIGD1B. The experiment was repeated three times.
Western blot
After washing the cultured cells with PBS, total protein was extracted using RIPA buffer (KWB002; KIGENE Biotech, China). The supernatant was collected after centrifugation at 4°C for 10 minutes, and the protein concentration was measured using a BCA assay kit (KWB011; KIGENE Biotech, China). The protein was separated on a 10% SDS-PAGE gel and transferred to a PVDF membrane (KWB047; KIGENE Biotech, China). The membrane was incubated at room temperature in 5% skim milk for 1-2 hours, and then the primary antibodies anti-HIGD1B (ABIN2175800, antibodies-online, China; 1:1000) and anti-β-actin (KWB040-R; KIGENE, China; 1:1000) were incubated overnight at 4°C. The membrane was then washed in TBST for 30 minutes and incubated with an HRP-conjugated secondary antibody at 37°C for 1 hour. After washing again with TBST, protein bands were visualized using an ECL assay kit (KWB032; KIGENE Biotech, China).
The relationship between HIGD1B and clinicopathological characteristics of GC
Patients in the TCGA-STAD, GSE62254, and GSE84437 cohorts were divided into high and low expression groups based on the median expression of HIGD1B, and survival curves were derived using Kaplan-Meier analysis (21). In the TCGA-STAD dataset, we studied the association between HIGD1B and clinicopathological indicators, as well as the correlation between HIGD1B and GC prognosis among various clinical subgroups. The accuracy of HIGD1B in predicting survival time and survival rate in GC patients was then assessed using receiver operating characteristic (ROC) curves (22). Further, using Cox analysis, the expression of HIGD1B and other clinical features (such as age, gender, cancer grade, and stage) were evaluated with respect to the overall survival of GC patients (23).
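A minimal sketch of this median-split survival analysis is shown below, using the Python lifelines library rather than the R packages cited in this paper; the input file name and the column layout ('time' in months, 'event' = 1 for death) are assumptions for illustration.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient table: HIGD1B expression plus survival data.
df = pd.read_csv("tcga_stad_clinical.csv")  # columns: HIGD1B, time, event

high = df["HIGD1B"] >= df["HIGD1B"].median()

# Fit one Kaplan-Meier curve per expression group.
for label, grp in (("high", df[high]), ("low", df[~high])):
    KaplanMeierFitter().fit(grp["time"], grp["event"],
                            label=f"HIGD1B {label}").plot_survival_function()

# Log-rank test for a survival difference between the two groups.
res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   df.loc[high, "event"], df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.4f}")
```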
Differential analysis and functional enrichment analysis
Initially, we used the expression of HIGD1B to categorize GC patients in the TCGA cohort into high and low expression groups and applied the "limma" package (|logFC| > 1 and FDR < 0.05) to identify differentially expressed genes (DEGs) between the two groups (24). Next, using the "enrichplot" and "clusterProfiler" packages, GO and KEGG enrichment analyses were undertaken to discover biological processes and signaling pathways linked to HIGD1B (q-value < 0.05) (25). Likewise, Gene Set Enrichment Analysis (GSEA) was applied to clarify the possible mechanisms and pathways of HIGD1B in GC. The selected reference molecular database was "c2.cp.kegg.Hs.symbols.gmt", and |NES| > 1 and p-value < 0.05 were regarded as significantly enriched (26).
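The DEG selection step reduces to a simple threshold filter on the differential-expression results table. The sketch below shows an equivalent filter in Python on a limma-style output; the file name and the 'logFC'/'adj.P.Val' column names follow the usual limma topTable layout and are assumptions here.

```python
import pandas as pd

# limma-style results table with columns 'logFC' and 'adj.P.Val' (FDR);
# the file name and column labels are assumed for illustration.
res = pd.read_csv("limma_results.csv", index_col=0)

degs = res[(res["logFC"].abs() > 1) & (res["adj.P.Val"] < 0.05)]
up, down = degs[degs["logFC"] > 0], degs[degs["logFC"] < 0]
print(len(up), "up-regulated,", len(down), "down-regulated")
```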
Establish and assess a nomogram
To better align with clinical practice, we designed a nomogram (27) with the "rms" and "survival" packages (28), based on all independent prognostic risk factors identified by Cox regression analysis, to estimate the survival time and survival rate of GC patients. ROC curves were then used to compare the nomogram's predictive power with that of other clinicopathological parameters, and the reliability of the nomogram was evaluated with calibration curves.
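The independent predictors feeding the nomogram come from a multivariate Cox model. A hedged Python equivalent with lifelines is sketched below; the input file and the numeric encoding of stage (I-IV as 1-4) are assumptions, not the authors' code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table with survival data and the nomogram covariates;
# 'stage' is assumed to be numerically encoded (I-IV -> 1-4).
df = pd.read_csv("tcga_stad_clinical.csv")

cph = CoxPHFitter()
cph.fit(df[["time", "event", "age", "stage", "HIGD1B"]],
        duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs, as in Figure 4F

# Per-patient relative risk score, the quantity a nomogram visualizes.
risk = cph.predict_partial_hazard(df)
```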
Analysis of tumor microenvironment and immune cell infiltration
We computed the corresponding scores using the ESTIMATE methodology (29) to assess the fractions of immune, stromal, and tumor cells in the tumor microenvironment of individuals with gastric cancer. For patients in the high and low HIGD1B expression groups, the infiltration proportion and abundance of TIICs were evaluated via the CIBERSORT and single-sample gene set enrichment analysis (ssGSEA) methods (30-32). Additionally, the correlation between HIGD1B expression and particular TIICs was examined using Spearman analysis.
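The Spearman step is a straightforward rank correlation between HIGD1B expression and each estimated immune-cell fraction. A minimal Python sketch follows; the file names and the cell-type labels are placeholders for whatever the deconvolution output actually contains.

```python
import pandas as pd
from scipy.stats import spearmanr

expr = pd.read_csv("higd1b_expression.csv", index_col=0)["HIGD1B"]
cells = pd.read_csv("cibersort_fractions.csv", index_col=0)
cells = cells.loc[expr.index]  # align samples before correlating

# Cell-type labels are illustrative; use the deconvolution output's own.
for cell_type in ["Tregs", "MDSC", "Macrophages M2",
                  "T cells CD4 memory activated"]:
    rho, p = spearmanr(expr, cells[cell_type])
    print(f"{cell_type:>28}: rho={rho:+.2f}, p={p:.3g}")
```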
Prediction of immunotherapy and drug sensitivity
We measured each GC sample's Dysfunction, Exclusion, and TIDE scores through the TIDE website (http://tide.dfci.harvard.edu/), and the connection between HIGD1B and MSI-related indicators was analyzed. Subsequently, the tumor mutation burden (TMB) score was determined from somatic mutation data, and a waterfall plot presented the somatic mutation landscape. The prognostic differences between TMB groups were evaluated using K-M curves. To forecast the clinical effectiveness of immunotherapy in patients with GC, we also examined the relationship between HIGD1B and CTLA-4 in conjunction with PD-1 immunotherapy. Sensitivity to standard chemotherapeutic medicines was assessed using the "oncoPredict" package (33). Drug sensitivity was expressed in terms of the half-maximal inhibitory concentration (IC50).
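TMB itself is usually computed as the number of somatic mutations per megabase of sequenced exome. The sketch below shows one common convention (total mutations per sample divided by roughly 38 Mb); the input layout and the exome size are assumptions, not details given by the authors.

```python
import pandas as pd

# MAF-style somatic mutation table with one row per mutation and a
# 'sample' column; the layout and the ~38 Mb exome size are assumed.
maf = pd.read_csv("tcga_stad_mutations.csv")
EXOME_MB = 38.0

tmb = maf.groupby("sample").size() / EXOME_MB  # mutations per Mb
high_tmb = tmb >= tmb.median()  # H-TMB vs L-TMB split used in Figure 7E
print(tmb.describe())
```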
Statistical analysis
R software (version 4.2.2) was used for bioinformatics statistics and plotting. The "timeROC" and "survival" packages of R were employed for the ROC curve and Cox regression analyses, respectively. The Wilcoxon test was used for intergroup comparisons, Kaplan-Meier curves for survival analysis, and Spearman's method for correlation analysis. The experimental data were analyzed using GraphPad Prism (version 9.3.1), and one-way analysis of variance (ANOVA) was used to compare the relative expression levels of HIGD1B. Results were deemed significant when p < 0.05 (*P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001).
Expression of HIGD1B in pan-cancer and gastric cancer
According to the analysis of pan-cancer data downloaded from the TCGA database (Supplementary Table S2), HIGD1B was significantly upregulated in the tumor tissues of COAD, ESCA, HNSC, KIRC, LIHC, STAD, and THCA, but lowered in BRCA, KICH, LUAD, LUSC, and UCEC (Figures 1A, C). In the TCGA-STAD cohort, HIGD1B was considerably higher in gastric cancer tissue than in normal gastric tissue (p < 0.001) (Figure 1B). Likewise, gastric cancer tissue had higher expression of HIGD1B than 27 paired adjacent tissues (p < 0.05) (Figure 1D). To further examine the expression of HIGD1B in GC and adjacent tissues, we downloaded the GSE29272 (134 GC and 134 adjacent samples) and GSE54129 (111 GC and 21 adjacent samples) cohorts from the GEO database. The findings confirmed that HIGD1B in gastric cancer tissue was considerably greater (p < 0.01) than in adjacent tissue in both cohorts (Figure 1E). Additionally, we performed qRT-PCR and Western blot assays to determine the expression of HIGD1B mRNA and protein in GC cell lines (Supplementary Table S3), showing that the expression of HIGD1B in HGC-27 and AGS cells was substantially greater than in GES-1 cells (Figures 1F, G).
The relationship between HIGD1B and clinical pathological characteristics of gastric cancer
In the TCGA-STAD, GSE62254, and GSE84437 cohorts, all patients were classified into high and low groups based on the median expression of HIGD1B. K-M curves were applied to investigate the association between HIGD1B and overall survival (OS); the results revealed that patients with high expression of HIGD1B had shorter survival times in all cohorts (p < 0.05) (Figure 2A). To gauge the diagnostic worth of HIGD1B, we constructed receiver operating characteristic (ROC) curves using GC patients from the TCGA database. The AUC values for 1-, 3-, and 5-year survival were 0.562, 0.598, and 0.741, respectively, indicating that the diagnostic efficacy of HIGD1B in predicting GC survival is acceptable (Figure 2B). In the TCGA-STAD cohort, HIGD1B expression was greater in the population reaching PFS (p < 0.05), while there was no significant difference for DFS and DSS (Figure 3A). In addition, patients with GC who expressed high levels of HIGD1B also showed shorter PFS, DFS, and DSS (Figure 3B), implying a worse prognosis for this population. The correlation between HIGD1B and the clinicopathological parameters of gastric cancer was then evaluated: there was no statistically significant variation in HIGD1B expression across age, gender, N-stage, and M-stage subgroups (Figures 2D, E, I, J), but expression was higher in the deceased population (fustat=1) and in the higher pathological grade (G3), later stage, and later T-stage groups (Figures 2C, F-H). Further, based on clinicopathological stratification, we investigated the predictive capacity of HIGD1B for OS in GC patients. K-M analysis revealed that in the age (>65/<=65), gender (male and female), high grade, Stage I-II, T3-T4, N0, and M0 subgroups, patients with low HIGD1B expression had a considerably better prognosis than those with high HIGD1B expression (Figures 3C-I).
The potential mechanism of HIGD1B affecting gastric cancer
Initially, GC patients in the TCGA-STAD cohort were split into high and low expression groups, and the differentially expressed genes (DEGs) between the two groups were identified (3 down-regulated and 802 up-regulated) (Figure 4A; Supplementary Table S4). The possible processes and pathways of HIGD1B were then explored by performing GO and KEGG analyses on these genes (Supplementary Table S5). GO analysis revealed that these DEGs mainly involve biological processes and molecular functions such as "muscle system," "integrin binding," and "extracellular matrix and structure organization" (Figure 4B). KEGG analysis indicated that these DEGs were enriched in cell-matrix pathways such as "Cell adhesion molecules" and "ECM-receptor interaction," as well as cell signaling pathways such as "cAMP," "cGMP-PKG," "Rap1," and "PI3K-Akt" (Figure 4C), which are closely connected to hypoxia, inflammation, and the occurrence and development of cancer (34-37). Additionally, GSEA was employed to examine the functional distinctions between the groups with different HIGD1B expression. The low HIGD1B group was linked to cellular metabolic processes such as "Cell cycle," "DNA replication," "Ribosome," and "Oxidative phosphorylation," whereas the pathways of the high HIGD1B group were clearly linked to the "Calcium," "Hedgehog," "TGF-β," "Wnt," and "Focal adhesion" signaling pathways (Figure 4D; Supplementary Table S6). Thus, we speculate that these tumor and stromal signaling pathways are connected to the poor prognosis of patients with elevated HIGD1B.
Construct and evaluate a clinical nomogram
To investigate the potential of HIGD1B as an independent predictor, we first performed univariate Cox analysis and found that age (HR=1.020, p=0.018), stage (HR=1.618, p<0.001), and HIGD1B (HR=1.221, p=0.002) were significantly correlated with prognosis (Figure 4E). Multivariate Cox analysis showed that age (HR=1.031, p<0.001), stage (HR=1.678, p<0.001), and HIGD1B (HR=1.190, p=0.009) independently predicted the outcome of GC patients (Figure 4F; Supplementary Table S7). Subsequently, we produced a nomogram using the parameters with p<0.05 from the Cox analysis to further enhance clinical practicality. Figure 5A indicates that the nomogram predicts the 1-, 3-, and 5-year survival rates of a patient in the TCGA-STAD cohort to be 0.812, 0.528, and 0.396, respectively. The calibration curve (C-index: 0.658) illustrated the consistent predictive capacity of the nomogram (Figure 5B). Moreover, the AUC values of the ROC curves for the 1-, 3-, and 5-year survival rates of the nomogram were 0.675, 0.689, and 0.735, respectively (Figure 5C), superior to conventional clinical features in predicting the prognosis of GC patients (Figures 5D-F). In summary, we have demonstrated the efficiency and precision of the nomogram from various perspectives.
The relationship between HIGD1B and immune cell infiltration
The tumor microenvironment (TME) is known for its immunosuppression and induction of drug resistance (38, 39), which can promote tumor cell proliferation and invasion, thereby adversely influencing the prognosis (40, 41). In addition, tumor-infiltrating immune cells (TIICs) are an important component of the tumor microenvironment, and a growing body of reports has confirmed that TIICs are involved in cancer progression and recurrence (42-45); both the TME and TIICs are crucial to the initiation and development of cancer. We first employed the ESTIMATE method to determine the proportions of tumor, stromal, and immune cells in the TME of GC patients. The findings implied that the high HIGD1B group had higher stromal, immune, and ESTIMATE scores, while the tumor purity score was lower (Figure 5G). Subsequently, using the CIBERSORT technique, we assessed the percentages of all TIICs in the TCGA cohort samples (Figures 6A, B). The low HIGD1B group showed higher infiltration of activated CD4 memory T cells, which have anti-tumor effects (46), while the high HIGD1B group had more M2 macrophage infiltration; studies have indicated that M2 macrophage infiltration is associated with high expression of TGF-β, IL-4, IL-13, and IL-10, suppressing the inflammatory response and encouraging tumor angiogenesis and distant metastasis (47, 48). Furthermore, we investigated the association between TIICs and HIGD1B using ssGSEA and discovered that the majority of immunosuppressive cells were highly infiltrated in the high HIGD1B group (Figure 6C). Spearman analysis revealed that HIGD1B has a negative correlation with activated CD4 T cells but a positive correlation with Tregs, MDSCs, and mast cells (Figure 6D). According to these findings, HIGD1B may regulate the infiltration and differentiation of TIICs to form a highly inhibitory TME, thereby inhibiting the immune response, promoting immune escape, and worsening the prognosis for patients with gastric cancer. All immune infiltration-related data are in Supplementary Table S8.
Prediction of immunotherapy efficacy
As immunotherapy continues to advance, cancer patients' survival times have lengthened and their quality of life has improved substantially, demonstrating its enormous application prospects in tumor treatment (49, 50). However, not all cancer populations are sensitive to immunotherapy, because of individual variance. Thus, we need to identify more targets to expand the options for immunotherapy. We computed the TIDE score of GC patients in the TCGA dataset and investigated its connection with HIGD1B (Supplementary Table S9). The high HIGD1B group exhibited higher TIDE, exclusion, and dysfunction scores (Figure 7A), which means that the high HIGD1B group may be more prone to immune escape and less responsive to immunotherapy (51). In addition, microsatellite instability (MSI)/DNA mismatch repair (MMR) status is of great significance for the diagnosis, prognostic assessment, and treatment selection of various malignancies (52), especially digestive tract tumors such as gastric cancer and colorectal cancer (53, 54). Figure 7B illustrates that HIGD1B expression in the MSI-H subgroup is considerably lower (p < 0.01) than in the MSS subgroup, indicating that patients with low HIGD1B expression have a greater chance of benefiting from immunotherapy. Research has indicated that most cancer mutations are somatic, and approximately 90% of oncogenes exhibit somatic mutations, such as the TP53 and TERT mutations that frequently occur in cancer lineages; these mutations also play a significant role in developing treatment strategies for tumors (55). We downloaded the TCGA-STAD cohort's somatic mutation data from the UCSC website for analysis. The waterfall plot revealed that the mutation incidence was higher in the group with low HIGD1B expression (92.67% vs. 80.87%), with missense mutations being the most common type; the three genes with the highest prevalence of mutations were TTN, TP53, and MUC16 (Figure 7C). We then calculated the TMB score of each GC patient. As shown in Figures 7D, E, the TMB value was significantly higher (p < 0.001) in the low HIGD1B expression group, and patients in the H-TMB subgroup had longer survival times (p < 0.01). In the combined analysis of TMB and HIGD1B, the H-TMB+L-HIGD1B group had the best prognosis (Figure 7F). Immunotherapy provides a new approach to tumor treatment with unique advantages and enormous potential. Immune checkpoint inhibitors (ICIs) are a vital component of immunotherapy (56), and we forecast the immune response of GC by examining ICIs (Figure 8A). Analysis of ICIs (PD-1 combined with CTLA-4) demonstrated that the PD-1-positive/CTLA-4-negative treatment combination showed superior efficacy in the population with low HIGD1B expression, whereas there was no significant difference in the other three groups (Figure 7G).
Drug sensitivity analysis
Drug sensitivity analysis revealed that the half-maximal inhibitory concentrations (IC50) of several clinically standard first- or second-line chemotherapy drugs (including oxaliplatin, 5-fluorouracil, cisplatin, and irinotecan) and targeted drugs are positively correlated with HIGD1B expression (Figures 8B-G). This indicates that individuals with low HIGD1B expression are more responsive to these drugs and have a higher likelihood of benefiting from them.
Discussion
Gastric cancer is one of the malignant tumors with the highest incidence rates in the world, and most patients are already at an advanced stage when diagnosed. At present, only chemotherapy, targeted drugs (like trastuzumab), and some immune checkpoint inhibitors (such as nivolumab and pembrolizumab) are available in clinical practice, and the prognosis is poor. Therefore, exploring novel biomarkers has excellent prospects for the early detection of gastric cancer, prognostic assessment, and prediction of therapeutic efficacy. The relationship between hypoxia and tumors is inseparable; hypoxia is one of the main characteristics of cancer (57). Cancer cells have traits such as vigorous metabolism, rapid proliferation, and high energy demand. A hypoxic environment forms when the requirement for oxygen exceeds its supply, which causes metabolic alterations. On the one hand, hypoxia induces neovascularization by stimulating cells to release erythropoietin (EPO) and angiogenic factors (58-60). On the other hand, it promotes the activation and proliferation of stromal cells, reshapes the tumor microenvironment, and exacerbates tissue hypoxia (61). These changes help the tumor progress and make the patient more resistant to treatment. In addition, hypoxia can generate large amounts of reactive oxygen species (ROS), damage the DNA of healthy cells, increase the frequency of gene mutation, and ultimately cause cancer (62). According to current research, the HIGD gene family is induced by hypoxia-inducible factor-1α (HIF-1α) under hypoxic conditions, participates in the assembly of mitochondrial complexes, regulates mitochondrial homeostasis, affects a range of physiological and pathological processes, and is a significant factor in numerous illnesses (particularly cardiovascular diseases, diabetes, and cancer).
It is worth noting that HIF is a transcription factor extensively distributed in the human body during hypoxia. The activation of HIF, which contributes to the metabolic reprogramming of tumor cells (like the renowned Warburg effect) and supports the formation of an immunosuppressive microenvironment (by inhibiting CTLs and recruiting Tregs), is one of the primary mechanisms by which tumor cells survive in hypoxic environments (63). Moreover, HIF is modulated by multiple pathways, including the PI3K-mTOR, JAK-STAT3, and Notch signaling pathways (64-66). Its overexpression is intimately linked to the growth, metastasis, and recurrence of cancer and may lead to tumor resistance to chemotherapy and immunotherapy.
The HIGD family includes HIGD1A, -1B, -1C, -2A, and -2B. The most studied gene is HIGD1A, a mitochondrial inner membrane protein that plays a crucial role in regulating cellular metabolic homeostasis and anti-apoptosis (67, 68). HIGD1A has a dual effect of promoting and inhibiting cancer and is regarded as a target gene of HIF-1α. HIGD1A weakens oxidative stress during hypoxia and glucose deficiency by activating the AMPK pathway, inhibiting mitochondrial respiration, reducing ROS generation, and mediating cell dormancy, thereby alleviating tumor cell death (14, 15, 69). HIGD1A has been shown to be a meaningful biomarker in pancreatic cancer and glioma (70, 71). The HIGD2A-encoded protein mainly exists in nuclei and mitochondria and is essential for the assembly of human mitochondrial complex IV, which can prolong the cell's lifespan in hypoxic environments. HIGD2A expression is markedly elevated in several cancer tissues, including LUAD, DLBCL, LIHC, and BRCA (72). In this article, we initially analyzed and corroborated the expression of HIGD1B in cohorts (TCGA-STAD, GSE54129, and GSE29272) from public databases. These analyses implied that HIGD1B was significantly upregulated in human GC tissues, and qRT-PCR and Western blot also confirmed that this gene is more highly expressed in human GC cell lines, suggesting that HIGD1B may have carcinogenic and tumor-promoting effects in GC. Next, we downloaded prognostic data for GC patients, and the K-M curves revealed that the OS, PFS, DFS, and DSS of the population with high HIGD1B expression in the TCGA cohort were shorter (p < 0.05). The dependability of this gene in predicting overall survival was verified in the GSE62254 and GSE84437 cohorts, and the ROC curve indicated that HIGD1B may reasonably predict the survival rate of GC patients. In addition, HIGD1B expression was elevated in GC patients with G3 grade, later stage, and later T stage in the clinical subgroups. Cox regression analysis revealed that age, stage, and HIGD1B expression are independent factors for predicting the outcome of gastric cancer. Subsequently, a nomogram was created using these indicators to predict the outcome of GC patients, and its efficiency and reliability were examined through ROC and calibration curves. Moreover, differential analysis of the high and low HIGD1B groups yielded 805 DEGs for enrichment analysis. According to the GSEA results, the "Hedgehog," "TGF-β," "MAPK," and "Wnt" signaling pathways, as well as matrix activation and adhesion, are linked to the HIGD1B high-expression group.
In recent years, rapid development in immunotherapy has drawn attention to the tumor microenvironment (TME), which comprises several components, including cells, extracellular matrix, and blood vessels. Among these, immune cells play a dual role in promoting and combating cancer. This study clarified the relationship between HIGD1B, the TME, and TIICs and discovered a positive correlation with immune scores and with infiltration of tumor-promoting immune cells (such as Tregs, MDSCs, and M2 macrophages). It is pertinent to note that indicators connected to immunotherapy are critical in formulating treatment plans for gastric cancer. Consequently, we thoroughly assessed the potential association between HIGD1B and ICIs, TMB, TIDE, and MSS. This research demonstrated that individuals with high HIGD1B had higher TIDE scores and a higher risk of immune evasion. In contrast, individuals with low HIGD1B had higher TMB values and MSI-H ratios in gastric cancer and had better efficacy for CTLA-4 immunotherapy. Finally, drug sensitivity analysis also revealed that the group with low HIGD1B expression exhibited lower IC50 values and better sensitivity to chemotherapeutic medicines commonly used in clinical practice.
It is critical to recognize the limitations of this study. Firstly, the data used in the research were all retrieved from public databases; although multiple independent cohorts were involved, there may still be sample bias. Second, although we have validated the differential expression of HIGD1B in gastric cancer cells and normal gastric epithelial cells through initial experiments (qRT-PCR and Western blot), further validation in human tissues is still needed. Additionally, more study is required to determine the exact mechanism by which HIGD1B encourages the onset and progression of gastric cancer, and additional experimental data are required to bolster our hypothesis. Finally, this gene has shown considerable potential in predicting immunotherapy response and clinical outcomes, but it needs to be validated in a clinical cohort of gastric cancer patients. Therefore, in-depth research is crucial for understanding the exact mechanism of this gene.
Conclusions
This article performed a comprehensive study of the expression pattern and prognostic relevance of HIGD1B in gastric cancer using bioinformatics analysis, elucidated its potential involvement in critical pathways, explored the effects of HIGD1B on the tumor microenvironment (TME) and tumor-infiltrating immune cells (TIICs), and projected the immuno- and chemotherapeutic effects in GC based on HIGD1B expression. In addition, we confirmed the differential expression of HIGD1B between gastric cancer cells and gastric epithelial cells through initial experiments. Thus, there is reason to believe that HIGD1B may be a promising biomarker for predicting the outcome of gastric cancer and guiding clinical immunotherapy and personalized treatment.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
FIGURE 1 Analyzing and validating the expression of HIGD1B in pan-cancer and gastric cancer. (A) Expression of HIGD1B in pan-cancer non-paired samples. (B) Expression of HIGD1B in GC and adjacent tissues (non-paired) of the TCGA-STAD cohort. (C) Expression of HIGD1B in pan-cancer paired samples. (D) Expression of HIGD1B in GC and paired adjacent tissues of the TCGA-STAD cohort. (E) Expression of HIGD1B in GC and adjacent tissues in the GSE54129 and GSE29272 cohorts. (F) Detection of HIGD1B expression in gastric epithelial cells (GES-1) and GC cell lines (AGS and HGC-27) by qRT-PCR assay. (G) Detection of HIGD1B expression in gastric epithelial cells (GES-1) and GC cell lines (AGS and HGC-27) by Western blot assay. *P < 0.05; **P < 0.01; ***P < 0.001. TCGA, The Cancer Genome Atlas; STAD, stomach adenocarcinoma; GC, gastric cancer.
FIGURE 2 Systematic evaluation of the relationship between HIGD1B and clinicopathological features. (A) Kaplan-Meier curves of the high and low HIGD1B expression subgroups in the TCGA-STAD, GSE62254 and GSE84437 cohorts. (B) ROC curve of HIGD1B for predicting 1-, 3-, and 5-year survival in the TCGA-STAD cohort. (C) Expression levels of HIGD1B in the surviving (fustat=0) and deceased (fustat=1) populations. (D) Expression of HIGD1B in age subgroups. (E) Expression of HIGD1B in gender subgroups. (F) Expression of HIGD1B in different pathological grading populations. (G) Expression of HIGD1B in staging subgroups. (H-J) Expression of HIGD1B in T-stage, N-stage, and M-stage subgroups. TCGA, The Cancer Genome Atlas; STAD, stomach adenocarcinoma; ROC, receiver operating characteristic; GC, gastric cancer.
FIGURE 3 The relationship between HIGD1B and the prognosis of GC. (A) The relationship between HIGD1B expression and PFS, DFS, and DSS. (B) K-M curves of PFS, DFS, and DSS in the high and low HIGD1B expression subgroups. (C-G) K-M curves of OS between HIGD1B groups stratified by age, gender, pathological grading, stage, and T stage. (H) K-M curve between HIGD1B subgroups in the N0 population. (I) K-M curve between HIGD1B subgroups in the M0 population. GC, gastric cancer; PFS, progression-free survival; DFS, disease-free survival; DSS, disease-specific survival; K-M, Kaplan-Meier; OS, overall survival.
FIGURE 4 Functional enrichment analysis and Cox regression analysis. (A) Volcano plot of all DEGs between the high and low HIGD1B expression groups. (B) GO analysis of DEGs between the high and low HIGD1B subgroups. (C) KEGG analysis of DEGs between the high and low HIGD1B subgroups. (D) GSEA of the primary enriched pathways in the high and low HIGD1B groups. (E) Univariate Cox regression analysis of HIGD1B and clinical parameters in the TCGA cohort. (F) Multivariate Cox regression analysis of HIGD1B and clinical parameters. DEGs, differentially expressed genes; GO, Gene Ontology; KEGG, Kyoto Encyclopedia of Genes and Genomes; GSEA, gene set enrichment analysis.
FIGURE 7 Prediction of immunotherapy for GC. (A) TIDE, dysfunction, and exclusion scores in the high and low HIGD1B groups. (B) Analysis of HIGD1B and microsatellite status (MSI). (C) Waterfall plot of somatic mutations. (D) TMB levels in the high and low HIGD1B expression groups. (E) Kaplan-Meier curve of OS in the high and low TMB groups. (F) Kaplan-Meier curves showing survival differences among the four groups combining TMB with HIGD1B. (G) Analysis of the combined application of anti-PD-1 and anti-CTLA-4 antibodies in distinct HIGD1B groups. ***P < 0.001. GC, gastric cancer; TIDE, tumor immune dysfunction and exclusion; TMB, tumor mutational burden; OS, overall survival.
FIGURE 8 ICIs and drug sensitivity analyses. (A) Expression of ICIs in the high and low HIGD1B expression groups. (B-E) Sensitivity analysis of chemotherapy drugs used in standard clinical treatment of gastric cancer: differences in sensitivity to chemotherapy drugs (oxaliplatin, irinotecan, cisplatin, and 5-fluorouracil) among HIGD1B subgroups, and correlation between chemotherapy drug IC50 values and HIGD1B expression. (F, G) Sensitivity analysis of targeted drugs (sorafenib and savolitinib) in populations with high and low expression of HIGD1B. *P < 0.05; **P < 0.01; ***P < 0.001. ICIs, immune checkpoint inhibitors.
Temperature dependence of the pressure broadening of spectral lines
The aim of this work is to obtain a formula relating the pressure broadening coefficient of the spectral line, β, to the temperature T when the difference potential ΔV(R) between the upper and lower states of the emitting atom is represented by a Lennard-Jones potential. The obtained formula is a power-law dependence of β on T. This formula is applied to calculate β for different interactions of Ar, Ne, Tl, Hg, Cd and Zn with the inert gases (Xe, Kr, Ar, Ne and He) at different temperatures. The results of these calculations are in good agreement with the corresponding values obtained previously by numerical methods. The obtained formula is considered very important in astrophysical problems.
Introduction
In some applications there is a tendency to represent the dependence of the pressure broadening coefficient β = γ/N on the temperature T in the form of a power law:

β ∝ T^k    (1)

where γ is the half-width of the spectral line and k is the temperature dependence index [1,2]. In the impact region, γ is proportional to the density N of the perturbing atoms. The power dependence can be derived theoretically from the classical phase-shift theory of line broadening in the impact limit if the interatomic potentials V(R) for the upper and lower levels of the emitting atom are approximated by V(R) = C_p R^-p, where R is the interatomic separation and C_p is a constant. In such a case the impact theory predicts for the index k the formula [3]:

k = (p - 3) / [2(p - 1)]    (2)

This means that k depends only on the power p characterizing the type of interaction. Bielski et al. [4] obtained the power-law dependence of β for a Lennard-Jones potential through the function B(α), which was obtained by Hindmarsh et al. [5]. The formula obtained by Bielski et al. [4] did not give any explicit form for the power-law dependence of β on temperature; moreover, the calculation of β in [4] was performed numerically. The authors [6-8] proposed algebraic formulas for the broadening (ρ_ob) and shift (ρ_oδ) impact parameters of a spectral line based on the assumption that the ranges of the interaction responsible for the broadening and for the shift are different. In this work the algebraic formula for the broadening is solved, and the broadening coefficient β is obtained as a power law of the temperature T.
If V′(R) and V″(R) represent respectively the interaction potentials for the lower and upper states of the emitting atom, and both are of Lennard-Jones form, then the difference potential ΔV(R) = V″(R) − V′(R) can be written as

ΔV(R) = ΔC12 R^-12 − ΔC6 R^-6    (3)

where ΔC6 and ΔC12 are respectively the interaction parameters between the colliding particles for the long-range attractive and repulsive van der Waals potentials. In previous papers [6-8], the authors obtained a formula (equation 4) relating the impact broadening parameter ρ_ob to the temperature T using the potential of equation 3; in it, α_p is a constant depending on the power p, and µ is the reduced mass of the interacting particles. The parameter ρ_ob is related to the pressure broadening coefficient β and to the relative velocity in angular frequency units by equation 5, in which C is the velocity of light. The first and second terms on the left-hand side of equation 4 represent respectively the phase shifts of the broadening due to the repulsion (ΔC6 = 0) and the attraction (ΔC12 = 0). Comparing equations 1 and 5, and using equation 2, the parameter ρ_ob is related to T by equation 6. It is seen from equation 2 that for the long-range attractive van der Waals potential (p = 6) and for the pure repulsive part of the potential in equation 3 (p = 12), the temperature index k is equal to 0.3 and 9/22, respectively. These are the cases of equation 6 when formula 4 is used in its attractive form (ΔC12 = 0) or in its repulsive form (ΔC6 = 0).
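As a quick numerical check of the limiting cases quoted above, the sketch below evaluates the impact-theory index k = (p − 3)/[2(p − 1)] for p = 6 and p = 12 and shows how a known broadening coefficient would scale between temperatures under equation 1. The reference values in the example are illustrative only, and the function names are ours.

```python
def temperature_index(p: float) -> float:
    """Impact-theory index k = (p - 3) / (2 (p - 1)) for V(R) = C_p R^-p."""
    return (p - 3) / (2 * (p - 1))

print(temperature_index(6))   # van der Waals attraction -> 0.3
print(temperature_index(12))  # pure repulsion -> 9/22 = 0.40909...

def scale_beta(beta_ref: float, t_ref: float, t: float, k: float) -> float:
    """Scale a broadening coefficient beta ~ T^k to another temperature."""
    return beta_ref * (t / t_ref) ** k

# Illustrative numbers: a coefficient known at 300 K, extrapolated to 600 K.
print(scale_beta(beta_ref=1.0, t_ref=300.0, t=600.0, k=9 / 22))
```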
The Fitting formula
To solve equation 4 for ρ_ob, we introduce the substitution variable x defined in equation 7. The function F₁(x) was then fitted to a power law in x. The values of the fitting constants L₁ and L₁′ depend on the range of x, as shown in table 1. From this we conclude that, if the temperature T and the interaction parameters ΔC6 and ΔC12 are known, then F₁(x) is obtained, and consequently x from table 1 and relation 11. Once x is known, ρ_ob can be obtained using equation 8, and consequently β from equation 5.
The broadening parameter β
Starting from equation 7a, the broadening parameter β is expressed in terms of the function F₂(x); this relation is given by equation 16.
The relations between the two functions F 1 (x) and F 2 (x)
From equation 13 we obtain the relation between F₁(x) and F₂(x), leading to equation 24. From equation 30, we see that, on applying the (L-J) potential, the broadening parameter β is a power law of the temperature, as in the cases when the difference potential ΔV(R) for the upper and lower levels of the emitting atom is given by ΔV(R) = C_p R^-p.
Results
The broadening coefficients were calculated for different interactions of Ar, Ne, Tl, Hg, Cd and Zn atoms with the inert gases (Xe, Kr, Ar, Ne and He). The results are presented in table 2 together with the corresponding values β_H of Hindmarsh and β of Roston and Helmi [7,8], the temperature dependence index k, and the values of β′₀ [7,8] in units of 10^-20 cm^-1/(atom cm^-3), for Ar, Ne, Tl, Hg, Cd and Zn perturbed by inert gases. The values of ΔC6 (in units of 10^-32 cm^6 rad s^-1) and ΔC12 (in units of 10^-74 cm^12 rad s^-1) are taken from the labelled references.
Conclusions
From the foregoing discussion and relations we conclude the following:
1. If T and (ΔC6, ΔC12) are known, then the broadening parameter β can be obtained using equations 9, 8 and 5.
2. If β and (ΔC6, ΔC12) are known, then F₂(x) is obtained using equation 16, which leads to the function F₁(x) from equation 24 and then to T using equation 9.
3. If β and T are known, then ΔC6 and ΔC12 can be obtained using equations 9 and 16.
4. The broadening parameters β were calculated for different interactions. These calculations are presented in table 2 together with the corresponding values of Hindmarsh and of the authors' previous work [7,8]. Our calculated results for β are in good agreement with the values obtained before, especially for interactions involving light particles. The discrepancy between the values of β for heavy particles can be attributed to the change of the sign of the phase value ±0.63.
5. The obtained analytical formulas can be used simply to obtain information about far-source parameters in astrophysical problems.
6. The values of the temperature dependence index k for all interactions are very near the value 9/22 = 0.4091 obtained when only the repulsive part of the potential is applied.
Patient-and Family-Centered Care and Patient Safety: reflections upon emerging proximity
Rev Bras Enferm. 2020;73(6):e20190672. http://dx.doi.org/10.1590/0034-7167-2019-0672

ABSTRACT Objective: To present reflections upon conceptual and pragmatic relationships between the Patient-and Family-Centered Care and patient safety. Method: A discussion about constructs related to the Patient-and Family-Centered Care and patient safety, which shows their interface with pragmatic issues of clinical nursing practice. Results: Considering patients and families as partners and agents promoting safe care is mandatory for the safety culture. Final considerations: Decreasing errors and adverse health care events can be accomplished by understanding ways to incorporate the principles of Patient-and Family-Centered Care into issues related to patient safety. Descriptors: Family Nursing; Patient Safety; Nursing Care; Patient-Centered Care; Quality of Health Care.
INTRODUCTION
Patient safety is dedicated to the study of interactions occurring in the health care system that can result in errors and adverse events, in order to analyze, develop and re-evaluate the inclusion of strategies that mitigate the occurrence of health care-related failures. Patient-and Family-Centered Care (PFCC) is defined as an approach to health care planning, delivery, and assessment based on mutually beneficial partnerships among health care providers, patients, and families. The central construct of this care approach is partnership, which implies that nurses recognize equality among individuals involved in the care process (professionals, patients and families) (1) . Implementation of PFCC in care enacts several actions and strategies that can contribute to patient safety, by emphasizing the need for a relational practice based on partnership, the principles of dignity and respect, sharing of information, collaboration and participation (1) as guidelines for institutional policies and professional practice.
OBJECTIVE
To present reflections upon conceptual and pragmatic relationships between the Patient-and Family-Centered Care and patient safety.
CONCEPTUAL AND PRAGMATIC DISCUSSION
First, let us consider that, even before patient safety achieved wide visibility in health care almost two decades ago with the publication of To Err Is Human: Building a Safer Health System, and before the formal definition of the PFCC model in 1987, an approximation between both ideas could already be identified in the Ottawa Charter, published in 1986, in which the World Health Organization (WHO) recognized the active participation of the family in care as a strategy for promoting the health of individuals.
Participation is the core concept in publications related to patient safety that consider PFCC as one of the vital strategies for promoting safe care. According to this model, the principle of participation includes providing support and encouragement to patients and families to participate in care and decision-making at the level they choose (1) .
Great importance has been assigned to the participation of patients and families in initiatives for their own safety in the WHO program Patients for Patient Safety (PFPS) (2). This program is still one of the priorities in the health care area; its proposal encourages families, health professionals and managers to work together in the development of action plans, policies and programs aimed at safety and at the promotion of a culture of participatory learning.
We emphasize the proximity of the description above with the principle of collaboration outlined by the PFCC, which characterizes it as a collaborative process between health care professionals, managers and families in the development, delivery and evaluation of institutional policies and programs, health care facility planning, professional education process, as well as in the provision of patient care.
The interface between the family participation concept and issues related to patient safety was marginalized in government documents in Brazil until the publication of the reference document for the National Patient Safety Program (NPSP) (3) in 2014, when the involvement of patients and family members in patient safety actions was established as one of its goals. However, no recommendations or guidelines are identified in the programs mentioned above that guide patients and families on how to be involved in the care practice in order to promote safe care. Patients and families are mainly seen as a learning resource when errors occur, rather than as active participants in the prevention of errors in clinical practice.
Active involvement of the patient and family in care situations is recommended in the guide "Your Health Care - Be Involved" published by the Ontario Hospital Association (OHA) (4). According to this publication, patients and families are considered members of the health care team. The PFCC also recognizes patients and families as essential members of the health care team and as essential to ensure quality and patient safety (5).
By analyzing the OHA guide (4) , through the PFCC conceptual lens, it is possible to derive that shared information is the construct related to the recommendations, encouraging patients and families to obtain information about their diagnosis, treatment and prognosis, as well as to provide accurate information about their current and previous health status, and use of medication.
As advocated by the PFCC, information sharing involves communicating and exchanging complete, truthful, impartial and useful information with patients and their families in a timely manner, so that they can effectively participate in care and decision making (1) .
Studies show that one of the main needs of families experiencing illness and hospitalization is to receive information about the care delivered and the health status of the patient (6)(7)(8), and that more and more families have been seeking information about the patient's health conditions online (9). Therefore, it is important to reflect upon how this virtual information, often not evidence-based, can compromise aspects related to the safety of care, and how the lack of information sharing by health professionals with patients and families can lead them to access unsafe and low-quality sources of information.
The following questions have arisen from the reflections above: Does the family have a voice to ask its questions? Do professionals give voice to the family? What makes the family unable to question and talk about its needs? What prevents professionals from providing answers to the family's needs? Therefore, families need to be empowered, and professionals must be skilled, so that questions focus on the needs of patients and their families, listening to them and meeting their demands.
We realize that if the answers to such questions were as simple as they seem, they would not be so complex to apply in practice. We will not always have all the answers to the questions of patients and families, and the answers will not always be precise; Cartesian thinking does not extend to all questions of health and illness, nor to the situations of suffering experienced by patients and families. However, we believe that the better informed patients and families are, the more they will be able to collaborate in promoting safe care.
By using the concept of information sharing, we want to introduce the idea that failure to obtain information from the patient and family is a contributing factor to the occurrence of the error or adverse event. In contrast, sharing information can be a strategy both to prevent the error or the adverse event, and to understand the failures when they occur.
Although there is evidence that failure in interprofessional communication is one of the most common causes of adverse events in health care (10)(11), failure of professionals to communicate with the patient and family is barely associated with issues related to patient safety. Giving information to patients and families about their diagnosis, treatment and prognosis is considered a threatening situation by some professionals. They give only the information they consider necessary, attributing to the institution a policy of not providing information to the patient. For nurses who have adopted such conduct, we recommend that their moral judgment of what is necessary, appropriate and important should be the basis for deciding what will be shared with the patient and family, and when (if at all). Thus, a reflection is proposed on two aspects related to the same outcome, from different perspectives: How do health care professionals welcome the communication, by the family, of information that reveals a health care-related error? How do health care professionals report the occurrence of a health care-related error to the patient and family?
To the first question, we propose answers based on the symbolic meaning the family has for the health care professional. If he/she believes that the family is an essential member of the team and, consequently, a partner, the communication of an error by the family is understood as something constructive, which enables early intervention and presents a learning opportunity to review processes and mitigate future errors. On the other hand, if the family is seen as an observing and judging element, the revelation of a mistake can be interpreted with hostility, as denoting a lack of professional competence, implying inertia following the sharing of information.
Regarding the second question, reflection is proposed on three possible answers that reflect the absence of the PFCC principles in the professional actions: a first one in which the adverb "how" of the question would not be applicable, because professional silence would imply the non-disclosure of the error to the family; a second one in which some explanation would be given by the professional, in a quick and superficial manner, without providing an opportunity for a dialogue with the family; and a third one, in which the use of inappropriate language by the health care professional would make the communication of the error incomprehensible to the family.
Given the complexity of the systems and processes inherent to care practice, obtaining the collaboration of those who are the recipients of care and who have a privileged view of the situations experienced is an intelligent strategy for preventing errors. It is necessary to recognize that patients and families, provided with accurate information about the care and health status of the patient, and considered as partners and collaborators of the health care team, have much more chances to recognize failures in the care process than health care professionals, mainly because of the work overload and the shortage of human resources.
A study conducted in the pediatric setting found that families detected more errors and adverse events than health care professionals. Families identified communication failures as the main contributing factor to the occurrence of errors. Given these results, the study made recommendations that are directly in line with PFCC: a) to improve communication between physicians, nurses and families on safety issues; b) to consider families' knowledge; c) to actively involve families in the process of care supervision (to help prevent errors and/or identify them early); d) to involve families in care planning, to update them on changes, to encourage them to talk and participate (to be effective partners in preventing errors); e) to educate families on how to report possible errors.
Regarding pragmatic issues involving family participation and collaboration in patient safety during nursing care, we will return to the first global WHO challenge, which was oriented to the prevention and management of health care-associated infections (HAIs) in health care services. Its focus has been on the practice of hand hygiene as a primary measure for infection prevention, with the theme Cleaner Care is Safer Care.
On this theme, a study showed that families, when aware of the importance of hand hygiene and HAIs, would accept invitations to participate in actions promoting hand hygiene. However, some family members reported that they might feel uncomfortable having to remind health care professionals about hand hygiene, fearing a possible negative impact on the family-professional relationship. Regarding the professionals' perceptions of this collaborative action with families, they also believed that such intervention by the family could have a negative impact on their relationships with families (13).
Given the nurses' perception shown in the study above, a self-reflection on the meaning of family for professionals must be performed. We have already presented the conception of family as an observing and judging agent, but there are still some professionals who perceive it as an opponent. The fact is that these definitions, as well as considering patients and families incapable of contributing to the reduction of errors in health care, are restrictive beliefs that need to be modified. It is vital to the safety culture to consider patients and families as partners and agents for safe care.
FINAL CONSIDERATIONS
The constructs discussed in this paper should be understood as a set of principles that guide the professional practice and, therefore, need to be absolutely present in any communication and relationship established with the patient and the family in the professional practice. When considering that health care professionals should also offer ethical care, based on the principles of autonomy, beneficence, non-maleficence and justice, and that these are intrinsically related to the principles postulated by the PFCC, their absence in clinical practice may characterize unethical care. Moreover, initiatives to promote the participation of patients and families in health care in our country may be much more frequent in the private sector than in the public one, because some are seen as consumers and others merely as "users" who benefit from free health care, with the possibility of errors thus implicitly accepted.
As research advances, the approximation between PFCC and patient safety has been increasingly encouraged and has gained prominence in different health care systems. Thus, it is necessary to think about the incorporation and articulation of the common concepts of PFCC and patient safety as professional competencies of nurses. The association of these two components in a grounded, conscious and intentional manner is capable of raising the level of care, promoting the provision of quality care that results in a better experience of the patient and family in contact with the health care system. As far as educational and health care institutions are concerned, it is necessary to recognize that the lack of training of health care professionals in establishing effective relationships with patients and family members is an obstacle to be surmounted, so that knowledge translation about the interface between the PFCC and patient safety can be accomplished.
Deep learning for the diagnosis of suspicious thyroid nodules based on multimodal ultrasound images
Objectives This study aimed to differentially diagnose thyroid nodules (TNs) of Thyroid Imaging Reporting and Data System (TI-RADS) 3–5 categories using a deep learning (DL) model based on multimodal ultrasound (US) images and explore its auxiliary role for radiologists with varying degrees of experience. Methods Preoperative multimodal US images of 1,138 TNs of TI-RADS 3–5 categories were randomly divided into a training set (n = 728), a validation set (n = 182), and a test set (n = 228) in a 4:1:1.25 ratio. Grayscale US (GSU), color Doppler flow imaging (CDFI), strain elastography (SE), and region of interest mask (Mask) images were acquired in both transverse and longitudinal sections, all of which were confirmed by pathology. In this study, fivefold cross-validation was used to evaluate the performance of the proposed DL model. The diagnostic performance of the mature DL model and radiologists in the test set was compared, and whether DL could assist radiologists in improving diagnostic performance was verified. Specificity, sensitivity, accuracy, positive predictive value, negative predictive value, and area under the receiver operating characteristics curves (AUC) were obtained. Results The AUCs of DL in the differentiation of TNs were 0.858 based on (GSU + SE), 0.909 based on (GSU + CDFI), 0.906 based on (GSU + CDFI + SE), and 0.881 based (GSU + Mask), which were superior to that of 0.825-based single GSU (p = 0.014, p< 0.001, p< 0.001, and p = 0.002, respectively). The highest AUC of 0.928 was achieved by DL based on (G + C + E + M)US, the highest specificity of 89.5% was achieved by (G + C + E)US, and the highest accuracy of 86.2% and sensitivity of 86.9% were achieved by DL based on (G + C + M)US. With DL assistance, the AUC of junior radiologists increased from 0.720 to 0.796 (p< 0.001), which was slightly higher than that of senior radiologists without DL assistance (0.796 vs. 0.794, p > 0.05). Senior radiologists with DL assistance exhibited higher accuracy and comparable AUC than that of DL based on GSU (83.4% vs. 78.9%, p = 0.041; 0.822 vs. 0.825, p = 0.512). However, the AUC of DL based on multimodal US images was significantly higher than that based on visual diagnosis by radiologists (p< 0.05). Conclusion The DL models based on multimodal US images showed exceptional performance in the differential diagnosis of suspicious TNs, effectively increased the diagnostic efficacy of TN evaluations by junior radiologists, and provided an objective assessment for the clinical and surgical management phases that follow.
Introduction
Thyroid cancer has become the most common endocrine malignancy, with an increasing incidence of approximately 7%-15% annually (1,2). Ultrasound (US) is widely used as a first-line screening tool for the clinical examination of thyroid lesions, with the advantages of no exposure to radiation, real-time dynamic imaging, and simplicity of procedure (1,3). Multiple versions of the Thyroid Imaging Reporting and Data System (TI-RADS) have been proposed for US imaging to standardize and improve the diagnostic consistency and accuracy of thyroid lesions, and each risk stratification system has its advantages (1,(3)(4)(5)(6).
Nevertheless, US diagnosis of thyroid nodules (TNs) is subjective to a certain extent. Various diagnostic results of US evaluation of TNs were obtained from different observers, especially less-experienced radiologists, who showed relatively lower accuracy. In previous studies, moderate variability in the interobserver agreement was found among different TI-RADS scores (7). There was fair agreement in margin, echotexture, and echogenicity (k = 0.34, 0.26, and 0.34, respectively) for interobserver variability (8)(9)(10). Clinically, there is a wide range of malignant risks (approximately 2%-90%) and some overlapping US features for the TNs of TI-RADS 3-5 categories; therefore, it was difficult for radiologists to accurately differentiate between benign and malignant TNs (11)(12)(13), resulting in overdiagnosis or misdiagnosis.
Fine-needle aspiration (FNA) is a relatively effective method for the preoperative diagnosis of TNs (14). The radiologists assess the malignant probability of TNs and then recommend patients for FNA or US follow-up according to TI-RADS. However, FNA is an invasive procedure with some possible complications, such as bleeding, and FNA results are also dependent on the size, composition of TNs, and skills of radiologists. Moreover, approximately 20% of the FNA results were rendered inconclusive, which led to uncertainty in the next course of clinical treatment (15)(16)(17). The development of artificial intelligence (AI) technology has shown great potential in reducing the influence of subjectivity and improving the consistency of diagnosis.
In the past two decades, machine-learning methods have been used in TN characterization, which is usually known as "radiomics" (18, 19). Radiomics can automatically extract features in the region of interest (ROI), which tends to be difficult to discern with the naked eye. It should be noted that high-throughput features extracted by radiomics from the ROI are easily affected by the segmentation strategy and imaging parameters. Deep learning (DL) is a machine-learning concept that has shown strong capability in medical image characterization and outperforms traditional machine-learning methods. With the help of artificial neural networks, DL has been widely applied to differentiate breast, thyroid, and liver lesions with good performance (20)(21)(22). However, radiologists cannot be completely replaced with AI technology. It is crucial to integrate DL methods into clinical practice; therefore, they can aid radiologists in diagnosis, evaluation, and decision-making (23). In this study, the diagnostic performances of junior and senior radiologists with and without a DL assistant were compared.
Most previous studies using DL for the diagnosis of TNs have concentrated on grayscale US (GSU) imaging. However, beyond conventional GSU, some new US technologies such as color Doppler flow imaging (CDFI), elastography, and contrastenhanced ultrasonography are commonly used to assist in the diagnosis of GSU for TNs, which have been proven to improve the diagnostic accuracy in the clinical evaluation (13,24,25). This indicated that the features of blood flow and hardness also played an important role in thyroid US diagnosis. Therefore, in our study, new DL models based on multimodal US imaging were proposed to explore their application value in improving the diagnostic accuracy of suspicious thyroid lesions and the role of auxiliary diagnosis for radiologists.
Patients
This retrospective study was approved by the Ethics Committee of The Second Affiliated Hospital of Harbin Medical University, and the requirement for informed consent was waived (approval number KY2021-152). Consecutive patients who had undergone thyroid surgery at The Second Affiliated Hospital of Harbin Medical University between September 9, 2020, and June 6, 2021, were enrolled. The inclusion criteria of the enrolled patients were as follows: lesions with (1) complete or high-quality transverse and longitudinal section images (2), complete surgical records and pathological results (3), no preoperative operation such as FNA and ablation or surgical treatment of TNs, and (4) US examination in our hospital within 1 week before surgery. Finally, 1,138 TNs of TI-RADS 3-5 categories from 781 patients were included in the study. The postoperative pathological results were used as the gold standard. The mean diagnostic age of patients was 47.74 ± 10.60 years (range, 21-79 years). According to the pathological results, there were 550 (48.33%) malignant and 588 (51.67%) benign TNs. The workflow of the selection is shown in Figure 1.
Ultrasound image acquisition and analysis
Preoperative thyroid US examinations were performed by two radiologists with 10 years of experience (Q.D. and H.K.) using a US device (Hitachi HI VISION Avius, Hitachi Medical Corporation, Tokyo, Japan) equipped with a 5- to 13-MHz linear probe. Thyroid scanning and the adjustment of imaging parameters were guided by the Chinese TI-RADS (C-TIRADS) issued by the Chinese Society of Ultrasound in Medicine in 2020 (6). The GSU, CDFI, and strain elastography (SE) images of the TNs were acquired in transverse and longitudinal sections that showed obvious characteristics and were saved in BMP format.
The ultrasonographic features were evaluated for all 1,138 TNs in our study. To maintain consistency, the images were independently analyzed by two experienced radiologists (L.Z. and W.Y.) in a double-blind manner, and discrepancies were resolved by consensus. GSU features, including the maximum diameter, position, echotexture, echogenicity, composition, orientation, margin, punctate echogenic foci, halo, and posterior features, were evaluated visually according to the C-TIRADS. CDFI indicated tumor blood flow characteristics through the vascular distribution pattern and Adler grade. Four modalities of TN images were included in our research: GSU, CDFI, SE, and ROI mask (Mask) images. Each modality was captured in both transverse and longitudinal views. The multimodal and double-view US images of a TN in the right lobe of a 65-year-old female patient with pathologically proven papillary carcinoma are illustrated in Figure 2. The Masks of the TNs were manually segmented using ImageJ (version 1.48, National Institutes of Health, USA) by two radiologists (Q.D. and Y.T.). The total data set was separated into training, validation, and test data sets in a ratio of 4:1:1.25.
Deep residual learning with attention block
Deep networks can extract more abstract information from low-level feature maps, which enables them to perform better than shallow networks. The residual strategy provides a skip connection to solve the degradation problem, making it possible to train a very deep network. To make full use of the multimodal image features, ResNet-50 (28) was used as the backbone for feature extraction in our method. ResNet-50 consists of one convolutional layer and 16 residual blocks. As the essential component of ResNet-50, a residual block is defined as

y = F(x) + x,

where x and y denote the input and output feature maps of the residual block, respectively. F refers to the residual function, which is learned by stacked convolutional layers with different kernel sizes in the residual block. The right side of the equation is computed by feedforward neural networks with skip connections, which allow gradients to propagate through the network. All available multimodal images were preprocessed to a size of 224 × 224 × 3 pixels, where 224 denotes the width and height and 3 denotes the number of channels. The training and validation data sets were randomly divided into five parts for fivefold cross-validation. Multimodal US images of the same patient were sent to the training, validation, or testing data set as one sample. During the training process, the parameters of the model were optimized by forward and backward propagation until the prediction reached a high accuracy relative to the ground truth. The feedforward process can be mathematically expressed as

h_l = R(W_l h_{l−1} + b_l),

where l denotes the layer index, h_l represents the output feature map of layer l with h_{l−1} as its input, W_l and b_l denote the weights and biases of the convolutional filter bank, respectively, and R is the rectified linear activation (ReLU) function. In back propagation, the parameters of the network are updated by minimizing the binary cross-entropy loss

L = −[y log ŷ + (1 − y) log(1 − ŷ)],

where y is the ground-truth label and ŷ the predicted probability. Because of the low contrast and small area of TNs in thyroid US images, it is necessary to obtain effective feature information. However, the key channels and spatial positions of the lesion cannot always be identified, because the information obtained by the convolution operation with a local kernel in ResNet may fail to capture effective features from the global image. To solve this problem, we combined the convolutional block attention module (29) and ResNet-50 to learn weights for our feature maps (Figure 3). Two attention units were inserted, before the first and after the last residual block, to obtain abstract features from both the higher and lower layers, as shown in Figure 3A. There are two types of attention mechanisms in the attention unit: spatial attention and channel attention, as shown in Figure 3B. Channel-wise attention was used to select the features with the strongest channel-wise activation values. Spatial attention performs average pooling and max pooling along the channel axis of the feature map to obtain an activated feature map with a local receptive field in the spatial dimension. To complement channel attention, spatial attention was applied to find the informative regions of the input feature map in the spatial dimension.
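As an illustration of the attention unit described above, the following is a minimal Python sketch (PyTorch and torchvision assumed; all class names and layer sizes are our own illustrative choices, not the authors' code) of channel and spatial attention wrapped around a pretrained ResNet-50 backbone, with one attention unit before the first and one after the last residual block:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Shared MLP (1x1 convolutions) applied to average- and max-pooled
        # channel descriptors; their sum gates the channels via a sigmoid.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Average- and max-pool along the channel axis, then convolve to a
        # single spatial gate.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.max(dim=1, keepdim=True).values
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class AttentionUnit(nn.Module):
    # Channel attention followed by spatial attention (cf. Figure 3B).
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

class AttentionResNet50(nn.Module):
    # ResNet-50 backbone with attention units before the first and after the
    # last residual block (cf. Figure 3A); returns a 2048-d feature vector.
    def __init__(self):
        super().__init__()
        net = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # pretrained init
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.att_in = AttentionUnit(64)
        self.blocks = nn.Sequential(net.layer1, net.layer2, net.layer3, net.layer4)
        self.att_out = AttentionUnit(2048)
        self.pool = net.avgpool

    def forward(self, x):  # x: (batch, 3, 224, 224)
        f = self.att_out(self.blocks(self.att_in(self.stem(x))))
        return torch.flatten(self.pool(f), 1)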
Implementation
To establish the DL model, we used 588 benign and 550 malignant TNs with multimodal and double-view images as the data set. Furthermore, fivefold cross-validation was applied to the data sets.
To evaluate the performance of the four types of sonography in thyroid cancer diagnosis, we performed experiments with multimodal inputs (i.e., GSU, CDFI, SE, and Mask). The four streams in Figure 4 correspond to the four modalities. All four modalities (Figure 4), as well as one or multiple modalities of the same patient, were taken as the inputs. Popular ResNet-50 was used as the feature extraction backbone ( Figure 3A). The features obtained by multiple network streams from the different modalities and views were averaged and then applied to fully connected layers to predict the classification result. In our experiments, each network stream had its own independent parameters.
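Continuing the sketch above (same assumptions), the fusion step could look as follows: one backbone stream per modality and view, each with independent parameters, whose feature vectors are averaged and passed to a fully connected layer for the benign/malignant output.

class MultimodalFusionNet(nn.Module):
    # One independent backbone stream per modality/view (e.g. 4 modalities x
    # 2 views = 8 streams); stream features are averaged and classified.
    def __init__(self, n_streams=8):
        super().__init__()
        self.streams = nn.ModuleList(AttentionResNet50() for _ in range(n_streams))
        self.classifier = nn.Linear(2048, 2)  # benign / malignant

    def forward(self, images):  # images: list of n_streams (batch, 3, 224, 224) tensors
        feats = torch.stack([s(x) for s, x in zip(self.streams, images)])
        return self.classifier(feats.mean(dim=0))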
The framework was implemented on a Dell-T7920 workstation equipped with an NVIDIA GeForce RTX3090 GPU and 64 GB of memory. The Adam optimization algorithm for minibatch gradient descent was used for training with a batch size of 32. The learning rate was initially set to 0.00001 and reduced by 0.1 every 30 epochs. A pretrained model was used for parameter initialization. The models with the smallest loss values within 100 training epochs were selected as the final models to generate classification results. We set the same epochs for training every modal, including the double-and single-view modes.
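A minimal sketch of the stated optimization schedule for one cross-validation fold (PyTorch data loaders train_loader and val_loader are assumed; the checkpoint criterion follows the text's "smallest loss within 100 epochs"):

import copy

model = MultimodalFusionNet().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
criterion = nn.CrossEntropyLoss()  # two-class equivalent of the binary cross-entropy

best_loss, best_state = float("inf"), None
for epoch in range(100):
    model.train()
    for images, labels in train_loader:  # assumed DataLoader, batch_size=32
        optimizer.zero_grad()
        loss = criterion(model([x.cuda() for x in images]), labels.cuda())
        loss.backward()
        optimizer.step()
    scheduler.step()
    model.eval()
    with torch.no_grad():
        val_loss = sum(criterion(model([x.cuda() for x in images]), labels.cuda()).item()
                       for images, labels in val_loader)
    if val_loss < best_loss:  # keep the checkpoint with the smallest loss
        best_loss, best_state = val_loss, copy.deepcopy(model.state_dict())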
Comparing the diagnosis of the deep learning model and radiologists
In this section, we investigate the diagnostic performance of the DL models and radiologists using 228 cases from the test set. According to a survey, the diagnostic accuracy of radiologists increased when they classified the final category into either a dichotomous prediction or a malignant risk (9). In our study, radiologists diagnosed the TNs of the test set based on multimodal US images, and the results were compared with those of the DL method. Five senior radiologists with 5-10 years of experience and five junior radiologists with 1-3 years of experience independently evaluated the TNs and were blinded to the postoperative pathological results. The radiologists then performed a second diagnosis based on the results of the DL and arrived at the final diagnosis. The diagnostic performance of the radiologists alone and in combination with DL assistance was compared.
Statistical analysis
R software (version 1.8) and MedCalc (version 11.2, Ostend, Belgium) were used to analyze the data. The data set was randomly divided into five non-overlapping groups, with no data intersection for the same subject between groups. After fivefold cross-validation, the accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristics curve (AUC) were obtained to evaluate the performance of the presented DL model on the test set. The DeLong test on the AUCs for the test data set was used to evaluate the statistical differences between DL models based on different combined US images and between radiologists of variable levels. A 95% confidence interval was used to estimate the range of these evaluation values; p-values of less than 0.05 (two-tailed) were considered statistically significant.
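For the test-set metrics, a small Python sketch using scikit-learn (an assumption for illustration; the DeLong comparison of AUCs was performed in R/MedCalc according to the text and is not reproduced here):

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_prob, threshold=0.5):
    # y_true: 0/1 pathology labels; y_prob: predicted malignancy probability.
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"AUC": roc_auc_score(y_true, y_prob),
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "PPV": tp / (tp + fp),
            "NPV": tn / (tn + fn)}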
The US characteristics of the 1,138 TNs were statistically analyzed, and the results are listed in Table 1.
Diagnostic performance of deep learning models
The performances of the various DL models for differentiating TNs are summarized in Table 2 and Figure 5. In our study, a total of eight DL models were established based on multimodal US imaging. We found that the feature fusion of images from both transverse and longitudinal sections could achieve better performance than that from a single section (Supplementary Text S1).
The diagnostic metrics of the individual models are listed in Table 2. The highest accuracy of 86.2%, sensitivity of 86.9%, and PPV of 87.8% were achieved by the DL model based on (G + C + M)US, and the highest specificity of 89.5% and NPV of 87.7% were achieved by the model based on (G + C + E)US. The DL model using (G + C + E + M)US images achieved the best performance (AUC of 0.928), an increase of 10.3% compared with the model using a single GSU (p< 0.001).
Deep learning performance compared with radiologists
The diagnostic performance of radiologists with different levels of experience in differentiating malignant from benign TNs is shown in Table 3 and Figure 6. When independently evaluating the TNs without DL assistance, senior radiologists showed higher accuracy, specificity, and AUC than juniors (80.6% vs. 72.7%, p = 0.008; 81.7% vs. 72.0%, p = 0.018; 0.794 vs. 0.720, p = 0.002, respectively). The sensitivity of US diagnosis by senior radiologists (79.5%) was also better than that of junior radiologists (73.5%) (p = 0.079). When the DL method was added for a second diagnosis in the test set, the diagnostic performance of junior radiologists increased significantly, from 0.720 to 0.796 (p< 0.001). The AUC of the junior radiologists in the second diagnosis was similar to that of the first diagnosis by senior radiologists (0.796 vs. 0.794) but remained below that of senior radiologists with DL assistance (0.796 vs. 0.822); these differences were not statistically significant (p > 0.05). Moreover, the DL model also had an auxiliary diagnostic effect for senior radiologists, slightly improving their accuracy (from 80.6% to 83.4%), which was higher than that of DL based on GSU (83.4% vs. 78.9%, p = 0.041). However, the AUC of senior radiologists with DL assistance was only comparable to that of DL based on a single GSU (0.822 vs. 0.825, p = 0.512) and significantly less than that of DL based on multimodal US images (0.822 vs. 0.858-0.928, p< 0.05).

FIGURE 5 The ROC curves of DL based on single GSU and on multimodal US images. ROC, receiver operating characteristics; DL, deep learning; GSU, grayscale ultrasound.
FIGURE 6 The ROC curves of DL and radiologists with different degrees of experience.
Discussion
Thyroid cancer has recently become one of the most common malignancies in Chinese women (2). US was the first choice for the examination of thyroid lesions, and TNs were diagnosed on US imaging by radiologists according to TI-RADS. Each guideline has its strengths and weaknesses; for example, the American Thyroid Association 2015 guideline showed better diagnostic efficiency in evaluating TNs >1 cm, the TIRADS issued by the American College of Radiology in 2017 had more advantages in reducing unnecessary biopsy operations, and TNs were well diagnosed by radiologists according to C-TIRADS, achieving a higher performance (6)(7)(8). However, the diagnostic results were susceptible to operator dependency, probe, and US equipment variability. FNA is a comparatively accurate method for differentiating TNs preoperatively, but it was reported that approximately 20% of FNA samples obtained had ambiguous results (15)(16)(17). AI not only solves the complex problem of the US risk stratification system but also reduces intra-and interobserver variability in US diagnosis (23,30).
Nevertheless, most applications of DL in the diagnosis of TNs have been based on single GSU imaging or single-view sections, limiting access to image information to a certain extent (21,(31)(32)(33). In addition to GSU, radiologists also refer to CDFI and elastography for blood flow and hardness information of TNs to assist the GSU diagnosis clinically and make a diagnosis after a comprehensive analysis. In the statistical analysis of our study, a higher elastic score was markedly correlated with malignant TNs, confirming that malignant TNs tend to be hard. The differences in vascular distribution pattern and Adler grade were statistically significant between TNs, and malignant TNs tended to have reduced or absent blood flow. In addition, many studies have verified the effectiveness of combined or multimodal US imaging in the visual differentiation of TNs (11,13,24,25). Therefore, based on multimodal US images of TNs obtained from transverse and longitudinal sections, new DL models were used to distinguish benign from malignant TNs in our study.
In our study, the diagnostic performance of GSU (0.825) was comparable to that of previous studies (AUC of 0.788 and 0.829) (32,33). However, the DL models using combined or multimodal US images achieved a better performance (0.858-0.928) than those using GSU alone (0.825) (p< 0.05). Notably, the AUC of the DL model based on GSU alone was also greatly improved after adding CDFI (0.825 vs. 0.909, p< 0.001). In a related study, Baig et al. quantified the regional blood flow indices of TNs, and the diagnostic accuracy of GSU features was increased from 58.6% to 79.3% when combined with CDFI (p< 0.05) (34). The DL models in our study provided consistent and repeatable results and outperformed conventional machine learning-based methods with a specificity of 86.9%, a PPV of 85.2%, and an accuracy of 83.8%. As for the improvement in diagnostic efficiency after adding CDFI, we found that it may be due to the attention mechanism algorithm applied in this study, which, by learning the information in the CDFI images autonomously, could obtain richer and more objective features that were previously unrecognized visually. We also demonstrated that SE imaging helped improve the diagnosis of DL based on GSU (0.825 vs. 0.858, p< 0.05). However, the AUC of the DL model based on (G + E)US was markedly less than that based on (G + C)US (0.858 vs. 0.909, p = 0.001). Additionally, there were no significant differences between (G + C + E)US and (G + C)US (0.906 vs. 0.909, p = 0.294). Therefore, our study confirmed that CDFI played a more substantial role than SE in distinguishing TNs, and the less obvious advantage of SE may be associated with the subjectivity of the SE image acquisition process.
Adding a Mask containing the contour information of the TNs was found to help improve the diagnostic performance of DL models based on GSU (from 0.825 to 0.881), (G + C)US (from 0.909 to 0.918), (G + E)US (from 0.858 to 0.889), and (G + C + E)US (from 0.906 to 0.928), indicating that effective delineation of the nodular boundaries in US images played an important role in characterizing TNs. The best AUC of 0.928 was achieved by DL using (G + C + E + M)US. The highest specificity (89.5%) and PPV (87.8%) were achieved by DL based on (G + C + E)US, which could play a primary role in avoiding overdiagnosis and helping reduce unnecessary biopsies for the diagnosis of TNs, whereas the highest sensitivity (86.9%) and NPV (87.7%) of great clinical significance for screening out malignant TNs and avoiding misdiagnosis were achieved by DL based on (G + C + M)US. In summary, the performance of DL models based on multimodal US imaging was superior to that based on a single GSU, which supports our assumption that multimodal US could provide more comprehensive and effective information for TN diagnosis.
In clinical practice, US diagnosis by radiologists cannot be completely replaced by AI technology, and a final diagnosis should be made by radiologists. Therefore, we compared the performance of the DL method for differentiating TNs with that of visual diagnosis by radiologists and further explored the auxiliary role of DL for radiologists' diagnosis. Compared with the first diagnosis of TNs visually by junior radiologists, there was a significant improvement in the second diagnosis with DL assistance (0.720 vs. 0.796, p< 0.001), which could be comparable to that of the seniors in the first diagnosis (0.796 vs. 0.794, p > 0.05). Moreover, the DL method could also provide an auxiliary diagnosis for senior radiologists in terms of accuracy (from 80.6% to 83.4%), which was superior to DL based on GSU alone (83.4% vs. 78.9%, p = 0.041). It has been proven that DL can assist clinical radiologists in improving diagnostic ability and increasing confidence, especially for juniors with less experience. In a study by Peng et al. (35), the DL-assisted method also improved the AUC of radiologists in diagnosing TNs from 0.837 to 0.875 (p< 0.001).
Nevertheless, there were no significant diagnostic differences with and without DL assistance for senior radiologists (0.794 vs. 0.822, p = 0.141). It seems that DL-aided diagnosis was less effective for senior than junior radiologists, which may be related to the fact that senior radiologists were more likely to rely on their own clinical experience. Our analysis may also be due to the fact that, by using DL models, we sacrificed interpretability for robust and complex imaging features with greater generalizability. Furthermore, DL technology obtained results based on features that were learned and extracted independently rather than on predefined handcrafted features, where the process was abstract and incomprehensible, leading to distrust by the radiologists. To resolve the visualization of DL learning and decision processes, Kim et al. (36) applied Grad-CAM to generate output images overlaid with heat maps to achieve visual interpretability. Meanwhile, Zhou et al. (20) found that the adjacent parenchyma of TNs is critical for classification by visual interpretability of DL.
By comparing the radiologists' and DL's diagnostic efficacy, we found that senior radiologists with DL assistance achieved an AUC only comparable to that of the DL model based on GSU (0.822 vs. 0.825, p = 0.512) and below that of DL based on multimodal US imaging (0.822 vs. 0.858-0.928, p< 0.05), effectively demonstrating the excellent clinical value of the DL method, especially with multimodal US imaging, and its potential for further development and application. Radiologists may be affected by fatigue and other factors in daily work, whereas AI can run on its own, indefatigably, with stable and high diagnostic efficiency.
Our study has several advantages. To our knowledge, this is the first study in which the attention mechanism-guided residual network was used to construct a variety of DL models based on different US imaging combinations. The objects of our study were TNs of the C-TIRADS 3-5 categories, which correspond to the diagnostically difficult cases in the clinic and thus extend the scope of clinical application. We have verified that DL models based on multimodal US can assist radiologists in improving diagnostic performance, especially those with less experience, and postoperative pathological results were used as the gold standard for statistical analysis in this study, which is more objective than studies using cytological results (20,31). Our case set included relatively more samples and achieved a balance between benign and malignant TNs, which could effectively reduce diagnostic bias compared with a previous study (18).
This study had some limitations. First, the main limitation of our study was that the data were retrospectively derived from a single center, and additional external validation or multicenter studies are needed to refine our study. Second, the images in this study were static images stored in a compressed format, which may have led to some potential image features not being mined. Therefore, dynamic images or raw radiofrequency signals should be included in future studies. Third, the visualization of the DL proposed in this study was not achieved. Visualization of the DL process could be conducted to make the results more reliable in subsequent studies. More technologies could be included, such as shear wave elastography, superb microvascular imaging, and contrast-enhanced US.
In conclusion, the DL model based on multimodal US images can achieve a high diagnostic value in the differential diagnosis of benign and malignant TNs of C-TIRADS 3-5 categories, aid second-opinion provision, and improve the diagnostic ability for radiologists, which is of great significance for clinical decision-making.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of the Second Affiliated Hospital of Harbin Medical University. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
Author contributions
YY and JT conceived and designed the study. YT, QD, and HK collected the clinical and image data and performed image preprocessing. YT, LZ, and WY analyzed and evaluated the ultrasonographic features of thyroid nodules. YY and WQ provided a deep learning algorithm and built the models. YY analyzed the image data and performed statistical analysis. YT, YY, and TW wrote the manuscript. JT, XX, and XL reviewed and edited the manuscript. All authors have contributed to the manuscript and approved the submitted version.
Spacetime structures of continuous time quantum walks
The propagation by continuous time quantum walks (CTQWs) on one-dimensional lattices shows structures in the transition probabilities between different sites reminiscent of quantum carpets. For a system with periodic boundary conditions, we calculate the transition probabilities for a CTQW by diagonalizing the transfer matrix and by a Bloch function ansatz. Remarkably, the results obtained for the Bloch function ansatz can be related to results from (discrete) generalized coined quantum walks. Furthermore, we show that here the first revival time turns out to be larger than for quantum carpets.
Simple theoretical models have always been very useful for our understanding of physics. In quantum mechanics, next to the harmonic oscillator, the particle in a box provides much insight into the quantum world (e.g. [1]). Recently, the problem of a quantum mechanical particle initially characterized by a gaussian wave packet and moving in an infinite box has been reexamined [2,3,4]. Surprisingly, this simple system shows complex but regular spacetime probability structures which are now called quantum carpets.
In solid state physics and quantum information theory, one of the simplest systems is associated with a particle moving in a regular periodic potential. This can be, for instance, either an electron moving through a crystal [5,6] or a qubit on an optical lattice or in an optical cavity [7,8,9]. For the electron moving through a crystal, the band structure and eigenfunctions are well known. In principle, the same holds for the qubit. However, in quantum information theory, the qubit on a lattice or, more generally, on a graph is used to define the quantum analog of a random walk. As in the classical case, there is a discrete [10] and a continuous-time [11] version. Unlike in classical physics, these two are not translatable into each other.
Here we focus on continuous-time (quantum) random walks. Consider a walk on a graph which is a collection of connected nodes. Lattices are very simple graphs where the nodes are connected in a very regular manner. To every graph there exists a corresponding adjacency or connectivity matrix A = (A ij ), which is a discrete version of the Laplace operator. The non-diagonal elements A ij equal −1 if nodes i and j are connected by a bond and 0 otherwise. The diagonal elements A ii equal the number of bonds which exit from node i, i.e., A ii equals the functionality f i of the node i.
Classically, a continuous-time random walk (CTRW) is governed by the master equation [12,13]

dp_jk(t)/dt = Σ_l T_jl p_lk(t),   (1)

where p_jk(t) is the conditional probability to find the CTRW at time t at node j when starting at node k. The transfer matrix of the walk, T = (T_jk), is related to the adjacency matrix by T = −γA, where we assume the transmission rate γ of all bonds to be equal for simplicity. Formally, this approach can be generalized to continuous models like the Lorentz gas [14].
The formal solution of Eq. (1) is

p_jk(t) = ⟨j| exp(Tt) |k⟩.   (2)

The quantum-mechanical extension of a CTRW is called continuous-time quantum walk (CTQW). These are obtained by identifying the Hamiltonian of the system with the (classical) transfer operator, H = −T [11,15,16]. Then the basis vectors |k⟩ associated with the nodes k of the graph span the whole accessible Hilbert space. In this basis the Schrödinger equation (SE) reads

i d|Ψ(t)⟩/dt = H |Ψ(t)⟩,   (3)

where we have set m ≡ 1 and ℏ ≡ 1. The time evolution of a state |k⟩ starting at time t₀ is given by |k(t)⟩ = U(t, t₀)|k⟩, where U(t, t₀) = exp(−iH(t − t₀)) is the quantum mechanical time evolution operator. Now the transition amplitude α_jk(t) from state |k⟩ at time 0 to state |j⟩ at time t reads

α_jk(t) = ⟨j| exp(−iHt) |k⟩.   (4)

Following from Eq. (3), the α_jk(t) obey

i dα_jk(t)/dt = Σ_l H_jl α_lk(t).   (5)

The main difference between Eq. (2) and Eq. (4) is that classically Σ_j p_jk(t) = 1, whereas quantum mechanically Σ_j |α_jk(t)|² = 1 holds. In principle, for the full solution of Eqs. (1) and (5) all the eigenvalues and all the eigenvectors of T = −H (or, equivalently, of A) are needed. Let λ_n denote the nth eigenvalue of A and Λ the corresponding eigenvalue matrix. Furthermore, let Q denote the matrix constructed from the orthonormalized eigenvectors of A, so that A = QΛQ⁻¹. Now the classical probability is given by

p_jk(t) = ⟨j| Q exp(−γΛt) Q⁻¹ |k⟩,   (6)

whereas the quantum mechanical transition probability is

π_jk(t) = |α_jk(t)|² = |⟨j| Q exp(−iγΛt) Q⁻¹ |k⟩|².   (7)

The unitary time evolution prevents π_jk(t) from having a definite limit for t → ∞. In order to compare the classical long time probability with the quantum mechanical one, one usually uses the limiting probability distribution [17]

χ_jk = lim_{T→∞} (1/T) ∫₀^T dt π_jk(t).   (8)

In the subsequent calculation we restrict ourselves to CTQWs on regular one-dimensional (1d) lattices. Then the adjacency matrix A takes on a very simple form. For a 1d lattice with periodic boundary conditions, i.e. a circle, every node has exactly two neighbors. Thus, for a lattice of length N, with the boundary condition that node N + 1 is equivalent to node 1, we have

A_jj = 2, A_{j,j±1} = −1 (with A_{1N} = A_{N1} = −1), and A_jk = 0 otherwise.   (9)

For a lattice with reflecting boundary conditions the adjacency matrix A is analogous to Eq. (9), except that A₁₁ = A_NN = 1 and A_{1N} = A_{N1} = 0, because the end nodes have only one neighbor. Solving the eigenvalue problem for A, which is a real and symmetric matrix, is a well-known problem, also of much interest in polymer physics [18,19]. A different ansatz describing the dynamics of a quantum particle in 1d was given by Wójcik and Dorfman, who employ a quantum multibaker map [20]. The structure of H = γA suggests an analytic treatment. For a 1d lattice with periodic boundary conditions and γ = 1 the Hamiltonian acting on a state |j⟩ is given by

H |j⟩ = 2|j⟩ − |j − 1⟩ − |j + 1⟩,   (10)

which is the discrete version of the Laplacian −∆ = −∇². Eq. (10) is the discrete version of the Hamiltonian for a free particle moving on a lattice. It is well known in solid state physics that the solutions of the SE for a particle moving freely in a regular potential are Bloch functions [5,6]. Thus, the time independent SE is given by

H |Φ_θ⟩ = E_θ |Φ_θ⟩,   (11)

where the eigenstates |Φ_θ⟩ are Bloch states and can be written as a linear combination of states |j⟩ localized at nodes j,

|Φ_θ⟩ = (1/√N) Σ_{j=1}^{N} e^{−iθj} |j⟩.   (12)

The projection on the state |j⟩ then reads Φ_θ(j) ≡ ⟨j|Φ_θ⟩ = e^{−iθj}/√N, which is nothing but the Bloch relation Φ_θ(j + 1) = e^{−iθ} Φ_θ(j) [5,6]. Now the energy is obtained from Eqs. (11) and (12) as

E_θ = 2 − 2 cos θ.   (13)

For small θ the energy is given by E_θ ≈ θ², which resembles the energy spectrum of a free particle.
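As a numerical check of Eqs. (6) and (7), the following minimal Python sketch (numpy assumed, γ = 1) builds the adjacency matrix of the N-cycle, diagonalizes it, and returns the classical and quantum transition probabilities:

import numpy as np

def cycle_adjacency(N):
    # Eq. (9): A_jj = 2, A_{j,j+-1} = -1, periodic (node N + 1 = node 1).
    A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    A[0, -1] = A[-1, 0] = -1
    return A

def probabilities(N, t, gamma=1.0):
    lam, Q = np.linalg.eigh(cycle_adjacency(N))    # A = Q Lambda Q^T (A symmetric)
    p = Q @ np.diag(np.exp(-gamma * lam * t)) @ Q.T           # classical, Eq. (6)
    alpha = Q @ np.diag(np.exp(-1j * gamma * lam * t)) @ Q.T  # quantum amplitudes
    return p, np.abs(alpha) ** 2                              # p_jk(t), pi_jk(t)

p, pi = probabilities(21, 5.0)
print(p[:, 0].sum(), pi[:, 0].sum())  # both columns sum to 1, cf. Eqs. (2) and (4)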
With this ansatz we calculate the transition amplitudes α_kj(t). The state |j⟩ is localized at node j and may be described by a Wannier function [5,6], i.e. by inverting Eq. (12),

|j⟩ = (1/√N) Σ_θ e^{iθj} |Φ_θ⟩.   (14)

Since the states |j⟩ span the whole accessible Hilbert space, we have ⟨k|j⟩ = δ_kj and therefore via Eq. (12) also

⟨Φ_θ′|Φ_θ⟩ = δ_θ′θ.   (15)

Then the transition amplitude reads

α_kj(t) = ⟨k| exp(−iHt) |j⟩ = (1/N) Σ_θ e^{−iE_θ t} e^{−iθ(k−j)}.   (16)

The periodic boundary condition for a 1d lattice of size N restricts θ to the discrete values θ_n = 2πn/N, with integer n ∈ [0, N). For small θ, this result is directly related to the results obtained for a quantum particle in a box [2,3,4], because then we have E_n ∼ n².
In the limit N → ∞, Eq. (16) translates to

α_kj(t) = e^{−i2t} i^{k−j} J_{k−j}(2t),   (17)

where J_k(x) is the Bessel function of the first kind [21].
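A short numerical sketch (numpy and scipy assumed) comparing the finite-N Bloch sum of Eq. (16) with the N → ∞ Bessel limit, at a time short enough that boundary interference has not yet set in:

import numpy as np
from scipy.special import jv

def pi_bloch(N, k, j, t):
    # Eq. (16) with theta_n = 2*pi*n/N and E_n = 2 - 2 cos(theta_n).
    n = np.arange(N)
    E = 2 - 2 * np.cos(2 * np.pi * n / N)
    alpha = np.exp(-1j * E * t - 1j * 2 * np.pi * n * (k - j) / N).sum() / N
    return abs(alpha) ** 2

t = 3.0  # short time: no boundary interference yet (t < N/2)
print(pi_bloch(21, 1, 1, t), jv(0, 2 * t) ** 2)  # finite ring vs. [J_0(2t)]^2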
The same result has also been obtained with a functional integral ansatz [22]. From Eq. (17) we also see that the first maxima of the transition probabilities are related to the maxima of the Bessel function, since we have lim N →∞ π kj (t) = [J k−j (2t)] 2 . However, for an infinite lattice there is no interference due to either backscattering at reflecting boundaries or transmission by periodic boundaries. For higher dimensional lattices the calculation is analogous. We note that the assumption of periodic boundary conditions is strictly valid only in the limit of very large lattice sizes where the exact form of the boundary does not matter [5,6].
Very recently it has been found by Wójcik et al., [23], that the return probability for a 1d generalized coined quantum walk (GCQW), which is a variant of a discrete quantum walk, has the functional form p kk (tτ ) = [J 0 (2t √ D)] 2 , where τ and D are variables specified in [23], which indeed is of the same form as the return probability calculated from Eq.(17). We interpret this as an indication that CTQWs and GCQWs, although not directly translatable into each other, can lead to similar results. However, in [23] the return probability is calculated for a particle on a very large circle such that interference effects are not seen on the short time scales considered there. By looking ahead at Fig. 1, we see that, indeed, on short time scales this is also approximately true in our case of the CTQW on the finite lattice. Nevertheless, without going into further detail at this point, we note this remarkable similarity between CTQWs and GCQWs.
For a CTQW on a 1d circular lattice we calculate the quantum mechanical transition probabilities π jk (t). Figure 1(a) shows the return probability π kk (t) for a CTQW on a circle of 21 nodes first evaluated in a straightforward way by diagonalizing the matrix A numerically, then by using the Bloch function ansatz described above. Both results coincide. For comparison we also have computed the return probability for the infinitely extended system, see Eq.(17). On small time scales all the results coincide. At later times waves propagating on the finite lattice start to interfere; then the results diverge and for a finite lattice one observes an increase in the probability of being at the starting node. This happens around the time t ≈ N/2. In Fig. 1(b) the probability to go from a starting node to the farthest node on the circle, here to go from node 1 to node 11 (or 12), is plotted. Again the calculations by the eigenvalue method and by the Bloch function ansatz are indistinguishable. As before, also the probabilities for the infinite and for the finite systems differ. The difference is more pronounced because in time t ≈ N/4 counterpropagating waves from the starting node interfere at the opposite node.
The probabilities to go from a starting node to all other nodes in time t on a circle of length N = 21 is plotted in Fig. 2(left). (For a CTQW on a circle the starting node is arbitrary.) For small times, when there is no interference, the waves propagate freely. After a time t ≈ N/4 the waves interfere but the pattern remains quite regular. The same holds for N = 20, but the structures are more regular, see Fig. 2(right). This is due to the fact that the number of steps to go form one node to another is even or odd in both directions for the even- For N = 4 we have where π jj (t) and π j,2j (t) are only shifted by a phase factor of π/2 but equal in magnitude. The limiting probability distributions are for N = 3, χ 11 = 5/9 and χ 12 = χ 13 = 2/9 and for N = 4, χ 11 = χ 13 = 3/8 and χ 12 = χ 14 = 1/8, and thus support the findings for bigger lattices, e.g. Fig. 3.
The occurrence of the regular structures is reminiscent of the so-called quantum carpets [2,3,4]. These were found in the interference pattern of a quantum particle, initially prepared as a Gaussian wave packet, moving in a 1d box. The spreading and self-interference due to reflection of the wave packet at the walls lead to patterns in the spacetime probability distribution. Furthermore, after some time, the so-called revival time, the whole initial wavefunction gets reconstructed. For a particle in a box, these quantum revivals are (almost) perfect and the revival time T follows from the energy E_n = (nπℏ/L)²/2m = n² 2πℏ/T, where L is the width of the box [3]. For very long times, Fig. 4 shows a contour plot of the probability for a CTQW on a circle of length N = 21 (left) and N = 20 (right). There is obvious structure in the interference pattern. Furthermore, there are areas on this quantum carpet where there is a very high probability, visualized by dark regions, to find the CTQW at its starting point. Thus, quantum revivals also occur for the discrete lattice; however, these are not perfect.
The revival time τ is given by α_{kj}(τ) = α_{kj}(0). Since the transition amplitudes are given as a sum over all modes n, see Eq. (16), we cannot give a universal revival time which is independent of n. Nevertheless, from Eq. (16) we get for each mode n its revival time

τ_n = 2πr/E_n = πr/[1 − cos(2πn/N)],   (20)

where r ∈ ℕ (without any loss of generality we set r = 1). From Eq. (20) we find that τ_n > τ_{n+1} for n ∈ ]0, N/2] and τ_n < τ_{n+1} for n ∈ ]N/2, N]. For certain values of n, τ_n will be of order unity; e.g., for n = N/2 we get τ_n = π/2. However, for n ≪ N, Eq. (20) yields τ_n = N²/(2πn²) ≡ τ_0/n², which is analogous to the particle in the box and where τ_0 is a universal revival time. Thus, the revival times τ_n have large variations in value. To make a sensible statement about at least the first revival time, we need to compare it to the actual time needed by the CTQW for travelling through the lattice. As mentioned earlier, interference effects in the return probability π_{1,1}(t) are seen after a time t ≈ N/2. The first revival time has to be larger than this, because there cannot be any revival unless the wave reaches its starting node again. Our calculations suggest that the first revival time will be of order τ_0. From Fig. 4 we see that the first (incomplete) revival occurs for N = 20 at t ≈ 70 > 20²/2π and for N = 21 at t ≈ 75 > 21²/2π.
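A few lines of Python (ours) make the spread of the mode revival times concrete, again assuming the circular-lattice Bloch eigenvalues E_n = 2 − 2 cos(2πn/N):

```python
import numpy as np

N = 21
n = np.arange(1, N)                    # mode index (n = N gives E = 0, no finite tau)
E = 2 - 2 * np.cos(2 * np.pi * n / N)  # assumed Bloch eigenvalues of the circle
tau = 2 * np.pi / E                    # per-mode revival times, r = 1

tau0 = N**2 / (2 * np.pi)              # universal revival time tau_0 = N^2 / (2 pi)
print(f"tau_1 = {tau[0]:.2f}   vs   tau_0 = {tau0:.2f}")            # n << N regime
print(f"tau_(N//2) = {tau[N // 2 - 1]:.2f}   vs   pi/2 = {np.pi / 2:.2f}")
```

The first line of output illustrates τ_1 ≈ τ_0, and the second the order-unity revival time at n = N/2 quoted in the text.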
In conclusion we have shown that CTQWs on regular 1d lattices show regular structures in their spacetime transition probabilities. By employing the Bloch function ansatz we calculated quantum mechanical transition probabilities (as a function of time t) between the different nodes of the lattice. These results are practically indistinguishable from the ones obtained by diagonalizing the transfer matrix. We note that the results obtained via the Bloch function ansatz can be related to recent results for GCQWs. The spacetime structures are reminiscent of quantum carpets, but have their first revival at later times than what is found for quantum carpets. Support from the Deutsche Forschungsgemeinschaft (DFG) and the Fonds der Chemischen Industrie is gratefully acknowledged.
Osteoblast Cell Response on the Ti6Al4V Alloy Heat-Treated
In an effort to examine the effect of microstructural changes of the Ti6Al4V alloy, two heat treatments were carried out below (Ti6Al4V800) and above (Ti6Al4V1050) its β-phase transformation temperature. After each treatment, globular and lamellar microstructures were obtained. Saos-2 pre-osteoblast human osteosarcoma cells were seeded onto Ti6Al4V alloy discs and immersed in cell culture for 7 days. In situ electrochemical assays were performed using OCP and EIS measurements. Impedance data show a passive behavior for the three Ti6Al4V alloys; additionally, enhanced impedance values were recorded for the Ti6Al4V800 and Ti6Al4V1050 alloys. This passive behavior in culture medium is mostly due to the formation of TiO2 during sterilization. Biocompatibility and cell adhesion were characterized using SEM; the Ti6Al4V as received and Ti6Al4V800 alloys exhibited polygonal and elongated cell morphology, whereas the Ti6Al4V1050 alloy displayed a spherical morphology. Ti and O were identified by EDX analysis, due to the TiO2, together with signals of C, N and O related to the formation of organic compounds from the extracellular matrix. These results suggest that cell adhesion is more likely to occur on TiO2 formed in discrete α-phase (hcp) regions, depending on the microstructure (grains).
Introduction
Commercially pure titanium (CP Ti) and titanium-based alloys are used in dental applications, joints, orthopedic trauma and reconstruction surgery and attachment systems due to their mechanical properties, resistance to corrosion and biocompatibility [1–6]. The corrosion resistance of Ti-based alloys is a result of a titanium oxide film formed on their surface at room temperature, which provides them with protection against biological fluids. However, due to its low thickness of between 1 and 4 nm, this film is very susceptible to fracture, leaving the metallic substrate exposed to body fluids and thus giving rise to base metal pitting and its later passivation [7–10]. Successive fracture-repassivation events of the layer lead to the release of metal ions and oxide particles that may affect important properties, such as Young's modulus of the oxide and the substrate, the hardness and thickness of the oxide film, and its adherence [11]. Moreover, the surface morphology and chemistry of the oxide film may be affected as well [12]. The electrochemical properties of the oxide film and its long-term stability in biofluids play an important role in the biocompatibility of titanium and its alloys [13–15]. Titanium, aluminum and vanadium ions are released in the corrosion process, inhibiting the formation of apatite on the material and giving rise to a non-harmonious behavior between the implant and the bone [16–18].
In order to improve the corrosion resistance of these materials, which implies reduction in releases of toxic metal elements such as vanadium, present in Ti6Al4V alloy, as well as their biocompatibility, resistance to fatigue and appropriate Young's modulus, they are subjected to a variety of treatments. These include mechanical, chemical, physical, thermal and heat treatments [19,20], thermomechanical [21] and deep cryogenic treatments [22], coatings [23][24][25], alkali-plus-heat [26], ion implantation [27][28][29], plasma spray [30], laser metal deposition (LMD) [31], selective laser melting and laser remelting (SLM) [32]. Heat treatments, in particular, may cause changes in the microstructure of the material depending on both temperature and cooling velocity, as well as on aging and alloy elements.
Ti6Al4V is an alpha-beta titanium alloy, where Al and V act as stabilizers of the α and β phases, respectively, modifying the Ti transformation temperature; this temperature is 980 ± 20 °C [33,34] and the alloy may present two different microstructures, globular and lamellar, which provide mechanical and corrosion resistance to the Ti alloy [19,20,27,35–37]. Different structural morphologies may be transferred to or may have influence on the surface layers of the alloy, which could lead to different biological behaviors from those of unmodified materials. This study evaluated the effect of microstructural changes generated in Ti6Al4V alloy by two heat treatments, at 800 °C and 1050 °C (temperatures that are below and above the transformation temperature of the Ti6Al4V alloy, respectively), on its biocorrosion behavior and its biocompatibility in the presence of osteoblastic cells in a culture medium.

Microstructural Characterization

Figure 1 shows the microstructural characterization of Ti6Al4V as received, and heat-treated at 800 °C (Ti6Al4V800) and 1050 °C (Ti6Al4V1050), respectively. The Ti6Al4V as received and Ti6Al4V800 alloys (Figure 1a,b) show β-phase globular grains (dark regions) sized between 2 and 4 μm in diameter, dispersed in the α-phase matrix (bright regions) of 5 to 8 μm in diameter. The α-phase acts as a barrier that prevents the grain size from increasing [33–35]. Meanwhile, for Ti6Al4V1050, Figure 1c shows a Widmanstätten-type microstructure with acicular α-phase, or fine α-phase plates, surrounded by β-phase on grain edges [38,39]; the plate thickness is approximately 1 μm.

X-ray Diffraction Analysis (XRD)

Figure 2 shows the diffraction patterns obtained for the three alloys tested. All reflections of αTi and βTi can be observed for Ti6Al4V as received and Ti6Al4V800, whereas α' (acicular α) is generated for the Ti6Al4V1050 alloy [40]. Also, β-phase retained after the treatment is observed to a smaller extent for the three materials, at 38.88°. This phase remains stable in the alloy as a result of the redistribution of the alloy elements (Al and V) during cooling [41,42]. In general, the composition of the Ti6Al4V alloys after the different heat treatments is mainly α-phase with a small amount of β-phase [43], with these alloys exhibiting different microstructural features (Figure 1): an acicular type for the Ti6Al4V1050 alloy, as opposed to grains for the Ti6Al4V as received and Ti6Al4V800 alloys.

X-ray Photoelectron Spectroscopy Analysis (XPS)

Figure 3 compares high-resolution XPS spectra of Ti 2p, O 1s and Al 2p obtained on the surface of the Ti6Al4V alloy, as received and with the different heat treatments. The Ti 2p spectra (Figure 3a,c,f) can be fitted with four doublets with different binding energies.
The first doublet, located at 453.7 and 460.3 eV, is associated with the presence of Ti in the metallic state (Ti metallic); the second, at 454.7 and 460.2 eV, may be assigned to the presence of TiO (Ti²⁺), and the third, at 457.4 and 464.2 eV, reveals the presence of Ti2O3 (Ti³⁺). The doublet with the highest intensity is observed at 458.4 and 463.6 eV, which could be attributed to the presence of TiO2 (Ti⁴⁺). These titanium oxides form part of a thin passive layer (a few nm thick) formed during sterilization at the outermost surface of the alloy.
The O 1s spectrum (Figure 3b,d,g) could be fitted with two components of similar intensity. The first one is located approximately at 529.6 eV and is normally assigned to the presence of Ti–O bonds and related to TiO2. The second component in the O 1s spectrum is located at 531.5–532 eV and is attributed to the presence of OH⁻ groups, or to adsorbed water, while a component with a binding energy of 531.8 eV is associated with the presence of oxygen in the form of aluminum oxide (Al2O3) [44]. This indicates that the oxide surface is mainly composed of TiO2 that is hydrated and probably forms an oxy-hydroxide. The presence of aluminum in the chemical composition of the film was detected as Al metallic at 71 and 71.5 eV, and as Al2O3 at 74.2–74.8 eV, only for the Ti6Al4V800 and Ti6Al4V1050 alloys (Figure 3e,h); for the Ti6Al4V as received, this signal is absent, likely due to the smaller thickness or contribution of its oxide. It is important to note that vanadium was not detected under the employed conditions [45–48]; it had, however, been reported at low concentrations compared to oxygen.

The thickness of the oxide film on the metallic surfaces is calculated using the Strohmeier equation (Equation (1)) [49]:

d_o = λ_oxide sin(θ) ln[(N_m λ_metal I_oxide)/(N_o λ_oxide I_metal) + 1],   (1)

where d_o is the thickness of the TiO2 layer (in nm); θ is the photoelectron output angle; I_metal and I_oxide are the intensities of the titanium components in the metallic state and as Ti⁴⁺ from the Ti 2p peak; λ_metal and λ_oxide are the mean free paths of photoelectrons in the substrate and the oxide layer; and N_m and N_o are the volume densities of titanium atoms in the metal and the oxide. The values of λ_metal and λ_oxide are 1.73 and 3.08 nm, respectively [50]. Table 1 shows the oxide film thickness calculated by Equation (1): Ti6Al4V800 and Ti6Al4V1050 exhibit an increase by a factor of 2 as compared to Ti6Al4V as received; this increase is observed for the surfaces heat-treated at both 800 °C and 1050 °C.
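As an illustration, the thickness calculation in Equation (1) can be scripted directly. In this sketch only the λ values quoted above are taken from the paper; the peak intensities, take-off angle and the N_m/N_o ratio are made-up placeholders:

```python
import math

def strohmeier_thickness(I_metal, I_oxide, theta_deg,
                         lam_metal=1.73, lam_oxide=3.08, nm_no_ratio=1.0):
    """Oxide thickness (nm) from the Strohmeier equation, Eq. (1).

    lam_metal, lam_oxide: photoelectron mean free paths (nm), values from [50].
    nm_no_ratio: N_m / N_o, ratio of Ti atom volume densities in metal vs. oxide
                 (set to 1.0 here as a placeholder; use tabulated densities).
    """
    theta = math.radians(theta_deg)
    ratio = nm_no_ratio * (lam_metal / lam_oxide) * (I_oxide / I_metal)
    return lam_oxide * math.sin(theta) * math.log(ratio + 1.0)

# Hypothetical Ti 2p peak intensities (arbitrary units) at a 45 deg output angle:
print(f"d_o = {strohmeier_thickness(I_metal=1.0, I_oxide=8.0, theta_deg=45):.2f} nm")
```

A doubling of the Ti⁴⁺/Ti-metal intensity ratio translates, through the logarithm, into a roughly constant increment of d_o, which is why modest thickness changes are still well resolved by XPS.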
Figure 4 shows the evolution of the open circuit potential (OCP) for Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 over the immersion time in a biological solution with osteoblastic cells (DMEM at 10% FBS + cells). At t = 0 (culture medium without cells), the OCP values are seen to be more negative after the heat treatment of the alloy. The OCP values tend to shift in the negative direction and remain constant as of the 4th day for the Ti6Al4V as received and Ti6Al4V800 alloys, showing similar activity of the passive oxide layer during immersion. Conversely, the initial OCP of the Ti6Al4V1050 alloy is more negative, but its evolution during the test goes in the positive direction, improving over time without stabilizing by the end of the test (7 days). This trend shows that the surface layer formed on this alloy evolves towards a higher passivity due to the increase in the oxide thickness, a greater hydration and/or a positive interaction with proteins from the medium, and likely a faster cell growth.

Figures 5 and 6 show the Nyquist (Z_imag vs. Z_real) and Bode plots (modulus |Z| vs. frequency and phase angle vs. frequency) obtained for Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 over the immersion time (0, 1 and 7 days) in the culture medium with osteoblastic cells (DMEM at 10% FBS + osteoblastic cells). A similar electrochemical behavior can be observed for the different alloys during their immersion in the cell culture, which indicates a steady state over time due to the passivity provided by the oxide layer, mostly TiO2 (Figure 3). The capacitive response corresponds to an increase in the imaginary impedance, Figure 5, related to the high corrosion resistance of the materials. This increase is higher for Ti6Al4V800 and Ti6Al4V1050 (Figure 5b,c), as compared to Ti6Al4V as received (Figure 5a), mainly due to the increase in the oxide thickness (see Table 1).

Bode plots of impedance modulus vs. frequency (Figure 6a–c) reveal a plateau at high frequencies, associated with the resistance of the solution. The decrease in frequency gives rise to a slope between 0.887 and 0.936, which is attributed to a capacitive behavior; this is consistent with the phase angle vs. frequency plots (Figure 6d–f), in which the angle increases from 0° to close to −90°. This capacitive response is associated with the presence of TiO2; furthermore, for 1 and 7 days of immersion a slight increase in the phase angle values is observed (see inset), likely due to the adsorption of proteins and cells on the surface of the materials [51–54]. This may modify the relaxation of different time constants (possibly overlapped) in the interval from 10⁻² to 10³ Hz for the Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 alloys, allowing the identification of at least two time constants. According to the literature [55], these constants are associated with the formation of the film made of oxides, mainly TiO2, as well as with the proteins adsorbed on the oxide, the extracellular matrix and the cells adhered to the latter.
Taking into consideration that the cell interaction up to confluence on the metallic surface (quasi-total surface coverage by cells) occurs locally, the impedance responses might be simulated considering a partially covered surface. Impedance plots were fitted considering the equivalent circuits shown in Figure 7. For the initial time, Figure 7a, we have considered an RC circuit composed of the electrolyte resistance Re; a constant phase element Qf, which simulates the non-linear behavior of the capacitor due to the passive film formed by the oxide and adsorbed proteins; and the resistance associated with this film, Rf. Figure 7b exhibits the equivalent circuit used to simulate the EIS plots for 1 and 7 days of immersion in the osteoblast culture. In this case, the elements associated with the resistance and non-ideal capacitance of the extracellular matrix and osteoblasts, R_extra and Q_cell, have been considered as well. The fitting of the EIS diagrams was carried out and the results are given in Table 2. This table shows that, in the absence or in the presence of cells, the heat treatment has no effect on the values of the solution resistance (~56.7 Ω cm²) over the immersion time. The Qf values at 0 days are of the same order of magnitude (10⁻⁵ F cm⁻²), typical of passive films, and slightly decrease as the oxide thickness is enhanced.
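A minimal sketch of the circuit response can help the reader connect the fitted elements to the measured spectra. The element values below reuse the orders of magnitude quoted in the text (Re ≈ 56.7 Ω cm², Rf ≈ 10⁷ Ω cm², Qf ≈ 10⁻⁶, Q_cell ≈ 10⁻⁵ F cm⁻², CPE exponents near the reported slope of 0.9); the series arrangement of the two parallel branches is our assumption, since Figure 7 itself is not reproduced here:

```python
import numpy as np

def z_cpe(omega, Q, n):
    """Constant phase element impedance: Z = 1 / (Q * (j*omega)**n)."""
    return 1.0 / (Q * (1j * omega) ** n)

def z_parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

def z_circuit(freq_hz, Re=56.7, Rf=1e7, Qf=1e-6, nf=0.9,
              Rextra=1e4, Qcell=1e-5, ncell=0.9):
    """Re + (Qf || Rf) + (Qcell || Rextra); arrangement and values illustrative."""
    omega = 2 * np.pi * freq_hz
    return (Re
            + z_parallel(z_cpe(omega, Qf, nf), Rf)
            + z_parallel(z_cpe(omega, Qcell, ncell), Rextra))

freqs = np.logspace(5, -2, 71)   # 10^5 Hz down to 10^-2 Hz, as in the EIS protocol
Z = z_circuit(freqs)
for f, z in zip(freqs[::35], Z[::35]):
    print(f"f = {f:9.3g} Hz   |Z| = {abs(z):9.3g} ohm cm2"
          f"   phase = {np.degrees(np.angle(z)):6.1f} deg")
```

The printout reproduces the qualitative Bode behavior described above: a high-frequency plateau at Re and a phase angle approaching −90° as the frequency decreases.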
Also, this parameter decreases by one order of magnitude (10⁻⁶ F cm⁻²) for Ti6Al4V, and by two (10⁻⁷ F cm⁻²) for the heat-treated samples immersed for 1 and 7 days; these results could be due to variations in the hydration of the TiO2, its thickness and/or protein adsorption. This chemical adsorption modifies the non-ideal capacitance of the titanium oxide, which becomes more resistive through the immersion. In relation to the Rf parameter, values of 10⁷ Ω cm² are reached for the three samples at the different immersion times, these results being similar to those reported in the literature [56].
Addition of cells to the culture modifies the interface of the Ti alloys tested in this biological medium; thereby, the non-ideal capacitance (Q_cell) and the resistance of the extracellular matrix (R_extra) excreted by the cells on the TiO2/adsorbed proteins can be analyzed. It is important to note that the Q_cell values are of the same order of magnitude (10⁻⁵ F cm⁻²) as those reported for the TiO2 (0 days). These findings can be explained by considering a weak adhesion and/or minor coverage of the osteoblast cells on the Ti6Al4V alloys (see below). Thus, it can be assumed that there is a poor interaction between the extracellular matrix excretion (mainly type I collagen) and the TiO2, likely due to its adsorption (via oxygen) through oxygen vacancies, as has been reported in the literature [16,26,31,53]. Regarding the R_extra values, a slight increase is seen at 1 day for the Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 alloys, followed by a decrease after 7 days of immersion. This resistive contribution is related to the coverage of cells adhered to these surfaces, i.e., a greater cell adhesion would lead to an increase in this resistive term. Under this assumption, an enhanced coverage of the extracellular matrix and a higher proliferation of cells would be expected for Ti6Al4V as received, and to a minor extent for the Ti6Al4V800 and Ti6Al4V1050 alloys (see below).

Morphological Observation

Figure 8 shows SEM images of the Ti surfaces after 7 days of immersion in the osteoblast culture medium. The three surfaces are partially covered by cells, which in the case of the Ti6Al4V as received and Ti6Al4V800 alloys are polygonal, elongated and fully spread (Figure 8a,c), whereas the Ti6Al4V1050 alloy (Figure 8e) presents round, non-spread cells accumulated in some areas of the surface. This morphology could be due to the difficulties of cell adsorption on the oxide, which depends on the microstructure (lamellar microstructure) and the distribution of the alloyed elements (Al and V) after the heat treatment. The oxide film on the three alloys is partially covered by cells and extracellular matrix (Figure 8g–i), and there is an increase in osteoblast anchoring and proliferation in the following order: Ti6Al4V1050, Ti6Al4V800 and Ti6Al4V as received; this is consistent with the EIS analysis.
The element composition at the surface level (concentration in weight, %w) of the different alloys was determined by EDX analysis after 7 days of immersion in the cell culture medium. In the analysis performed on cell-free areas of the three tested surfaces (Figure 8b,d,f), signals of Ti and O associated with the formation of TiO2 were detected, although the O quantification was not evident for Ti6Al4V as received because of the predominant Ti signal and the low oxide thickness. Other metallic oxides (of Al and V) may also form there; however, their contribution to the oxide is minor, because most of these signals come from the metallic substrate, consistent with the XPS analysis. In the cell-covered areas, however, signals of C and O were identified, whose proportions were quite similar for the Ti6Al4V as received and Ti6Al4V800 alloys, whereas for Ti6Al4V1050 the C signal was larger than the O signal. As the presence of these elements is related to cell adhesion, this process can be deemed to occur in a similar manner for Ti6Al4V as received and Ti6Al4V800, while for Ti6Al4V1050 the adhesion seems to take place differently, perhaps due to the synthesis of other carbon compounds from the extracellular matrix and/or their orientation on the metallic surface [57,58]. To explain the reduced O signal, it can be suggested that O²⁻ adsorption from the chemical compounds could be retarded by the hydroxide or water species on the TiO2 surface, which is more hydrated for Ti6Al4V1050. Besides, this adsorption process is affected by the formation of a less defective oxide on this alloy (fewer oxygen vacancies); conversely, for Ti6Al4V as received and Ti6Al4V800, the formation of other sub-oxides (TiO and Ti2O3) takes place to a greater extent. Another difference between these alloys results from the presence of N, which is most evident for Ti6Al4V1050, followed by Ti6Al4V800 and Ti6Al4V as received, and results from the presence of proteins, mainly type I collagen, but also from phosphorylated glycoproteins, osteocalcin and matrix Gla proteins. According to these results, it seems that the presence of organic compounds may vary during the process of cell adhesion due to the differences in the microstructure of the materials.
On the other hand, Ca (<0.33 %w) and P (0.28 %w) were detected on the surface of the Ti6Al4V800 alloy, whereas on the Ti6Al4V1050 surface only Ca (0.11 %w) was identified. Ca and P precipitation on these surfaces suggests that they are precursors of bone mineralization in in vivo assays and that, therefore, they play an important role in the interaction with metallic surfaces and the consequent cell adhesion [58–63]. P and Ca were not detected on Ti6Al4V, likely due to a lower oxide hydration after 7 days of immersion, according to the XPS results (see Figure 3). This assumption is consistent with studies on the biocompatibility and osteointegration of these materials, where oxide hydration had been reported to favor calcium phosphate precipitation [56,57,59,64].
The differences obtained during cell adhesion on Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 (Figure 8b,d,f) may be explained by the interaction of proteins and/or cells with the oxide at the surface level. Protein adsorption is the first stage prior to cell adhesion, whereas the quality of adhesion has an influence on the morphology, the capacity for proliferation and cell differentiation [37,58,60,65]. So, taking into consideration the dominant phase in titanium alloys and the morphology of cell adhesion, the adsorption process is considered to take place predominantly in domains where there is α-phase (hcp), likely due to the presence of Al, which is widely known to be easily hydrated [47,54,66–69], enhancing cell adhesion. Thus, depending on the microstructure, the proteins can orient on the oxide surface [42,58] and the morphology of the cell adhesion differs; e.g., the grains observed in Figure 1 for the Ti6Al4V as received and Ti6Al4V800 alloys match the polygonal morphology of the cells. Conversely, for the Ti6Al4V1050 alloy, an enlarged cell adhesion is observed, related to its lamellar features.
The proportion of sub-oxides that make up the passive oxide film after the sterilization process is an important point to highlight for the three tested alloys, because they are related to a less passive behavior of the Ti6Al4V as received; conversely, a greater passivation is obtained for the Ti6Al4V800 and Ti6Al4V1050 alloys. Another important factor to consider is film hydration, which is greater for Ti6Al4V1050 (Figure 3g) compared to the other two alloys; this may be due to a reordering of the alloy elements (Al and V) in the lamellar array obtained after the heat treatment at 1050 °C [21]. The above facts may be related to Al enrichment in the α-phase [37,45,50] and to Ca adhesion on the surface of the Ti6Al4V1050 alloy [40,57,62,63]. Furthermore, the presence of V in the oxide had been reported to inhibit the formation of calcium phosphate [69]. Figure 8g–i show that osteoblast adhesion and proliferation depend on the microstructure and the nature of the passive oxide on the substrate. These results indicate that cell adhesion takes place despite the microstructural differences [70,71], depending both on the cells' capacity for extra- and intracellular matrix excretion and on the microstructural differences between the Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 alloys.
Heat Treatments
Annealed Ti6Al4V alloy bars (Goodfellow Materials Ltd, Huntingdon, UK), 12.7 mm in diameter and 20 mm in length, were encapsulated in quartz under an argon atmosphere to prevent their oxidation upon subjection to temperatures below (800 °C) and above (1050 °C) the β-phase transformation temperature (980 ± 20 °C) [33] for six hours, and were later air-cooled (approximately 35 °C min⁻¹). Afterwards, the heat-treated bars were cut to obtain 2 mm-thick discs.
Microstructure Revealing
The samples were prepared for microstructural observation following a standard metallographic technique: grinding with 400-, 600-, 1200- and 1500-grit SiC paper, polishing with 0.3 µm alumina to a mirror finish and chemical etching with Kroll's reagent (HF, HNO3 and distilled water in 1:3:96 proportions) [34]. The microstructure was characterized using a Nikon EPIPHOT 300 optical microscope coupled to a Nikon FDX-35 camera (Nikon Instruments Europe B.V., Amsterdam, The Netherlands).
Electrochemical Cell
A glass electrochemical cell was specially designed for the cell culture and consisted of a base, two Teflon plates, and a Ti6Al4V working electrode in between (with and without heat treatment). The electrochemical cell was screwed into the upper Teflon piece and sealed with a silicone o-ring [64], and it was maintained at 37 °C in a water bath. A saturated calomel electrode was used as the reference electrode, and a platinum wire as the counter electrode (Goodfellow Cambridge Ltd, Huntingdon, UK). Luer fittings were employed to control CO2 entry and exit (5%). The electrochemical cell, the polished Ti6Al4V alloy discs, as received and heat-treated, and the counter electrode were sterilized in an autoclave for 30 min at 120 °C and 1.2 kg cm⁻². The reference electrode was sterilized using UV light for 10 min.
In Vitro Assays
Saos-2 pre-osteoblast human osteosarcoma cells (from the cell bank of the Center for Biological Research, CSIC, Madrid, Spain) were used for the in vitro assays. After 24 h of immersion in culture medium, 10,000 cells were seeded on discs of the Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 alloys, in order to cover the exposed metallic surface (A = 38.2 mm²). The assays lasted 7 days, without and with electrical perturbation, at least in triplicate. In the first case, cells were only deposited and, after the immersion, they were fixed for analysis using a scanning electron microscope with microanalysis (SEM/EDX); in the second case, open circuit potential (OCP) and electrochemical impedance spectroscopy (EIS) measurements were made over time and the samples were later prepared and observed by SEM/EDX (not shown). It is worth mentioning that the electrochemical measurements made at the initial time (0 h) were performed in the absence of cells, and that the cell culture medium was renewed every 48 h to ensure an adequate supply of nutrients to the cells and to remove their waste products. EIS characterization was carried out by applying a sinusoidal signal of 5 mV amplitude within the frequency interval from 10⁵ Hz to 10⁻² Hz with 10 points per decade, using a Gamry 600 Potentiostat-Galvanostat coupled to a PC (Gamry Instruments Inc, Warminster, PA, USA) for data acquisition and control.
Cell Fixation
To carry out the morphological studies, cells seeded on metal discs immersed in 24-well plates for 7 days were fixed on the metal surface by adding 1 mL of 2% glutaraldehyde and keeping the samples at 4 °C for 24 h. Cells were then dehydrated by immersion in a series of ethanol solutions ranging from 35% to 100%. To dry the surface-adhered cells, a trimethylsilane solution (TMS, Sigma-Aldrich®) at 50% (0.5 mL of TMS in 0.5 mL of 100% ethanol) was added for 10 min. This solution was removed and 1 mL of TMS at 100% was added for another 10 min. Lastly, the TMS was removed and the samples were left to air-dry for 30 min.
Surface Characterization
Surface analyses were carried out using the following instruments. XRD: a Bruker AXS D8 Focus X-ray diffractometer (Bruker AXS GmbH, Karlsruhe, Germany) with Cu Kα radiation, fitted with an iron fluorescence filter, scanning the range 20°–90° at a rate of 8° min⁻¹ with a 0.02° step. XPS: a spectrometer (Fison Instruments, East Grinstead, UK) with an Mg/Al X-ray anode (CLAM 2) operating at 300 W and at low pressures (<10⁻⁸ Torr); peaks were fitted with Gaussian-Lorentzian curves after subtracting a Shirley-type background using Leibol software (homemade, v. 2.0, CENIM, Madrid, Spain). SEM: a Hitachi S4800 scanning electron microscope (Hitachi High-Technologies Corporation, Tokyo, Japan), operating at 15 kV and coupled to an EDX detector, was used for the morphological observation.
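For readers unfamiliar with the Shirley-type background mentioned above, a minimal iterative implementation might look as follows. This is our sketch, not the Leibol software; the "Ti 2p-like" spectrum is a made-up placeholder:

```python
import numpy as np

def shirley_background(y, tol=1e-6, max_iter=50):
    """Iterative Shirley background for an XPS peak region.

    y: intensities ordered from low to high binding energy; the endpoints
       y[0] and y[-1] are taken as the background anchor levels.
    """
    y = np.asarray(y, dtype=float)
    i0, i1 = y[0], y[-1]
    bg = np.full_like(y, i0)
    for _ in range(max_iter):
        # Background rises in proportion to the cumulative peak area
        # above the current background estimate.
        area = np.cumsum(y - bg)
        new_bg = i0 + (i1 - i0) * area / area[-1]
        if np.max(np.abs(new_bg - bg)) < tol * max(abs(i1 - i0), 1.0):
            break
        bg = new_bg
    return bg

# Placeholder spectrum: two Gaussian doublet peaks on a stepped background
x = np.linspace(452, 468, 400)
y = (np.exp(-(x - 458.4)**2 / 0.8) + 0.5 * np.exp(-(x - 463.6)**2 / 1.0)
     + 0.05 + 0.1 * (x > 460))
corrected = y - shirley_background(y)
print(f"max of background-corrected spectrum: {corrected.max():.3f}")
```

Once the background is subtracted, the Gaussian-Lorentzian doublet fitting described above can be performed on the corrected spectrum.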
Conclusions
This work has studied the properties of the oxide film formed on the surface of Ti6Al4V as received, Ti6Al4V800 and Ti6Al4V1050 alloys immersed in osteoblast culture, using XRD, XPS and EIS techniques. The results show that, depending on the temperature of the heat treatment, the Ti6Al4V alloy can have two microstructures: globular and lamellar. The XRD analysis of these alloys allowed the identification of the α-phase as predominant in both microstructures. The XPS analysis determined that the passive oxide is mostly composed of TiO2, without discarding the presence of other sub-oxides (TiO, Ti2O3 and Al2O3). An important issue to emphasize is the decrease in sub-oxides, barely perceived in the case of Ti6Al4V1050. An increase in film thickness was seen for the heat-treated samples, whose oxide is double the thickness of that on the Ti6Al4V as received alloy; this may be associated with the change in microstructure. The EIS technique allowed the observation of a stable passive behavior of the three materials immersed in cell culture for 7 days at 37 °C. As far as cell adhesion and proliferation are concerned, it was seen that cells are more dispersed on the surfaces of Ti6Al4V as received and Ti6Al4V800, whereas on Ti6Al4V1050 cells accumulated on the intermediate part of the surface. The above may be related to the nature of the substrate and the growth of the passive oxide on it. The EDX analysis revealed Ca precipitated on Ti6Al4V800 and Ti6Al4V1050; however, the largest amount (0.3 %w) was observed on the surface of Ti6Al4V1050 and in regions with accumulation of cells. This is associated with hydration and with the presence of Al in the outermost layer of the passive film. The microstructural changes in the Ti6Al4V alloy due to the heat treatments have an effect on the growth of the passive oxide, which leads to variation of its chemical composition in relation to the presence of other sub-oxides, such as TiO, Ti2O3 and Al2O3. The above is directly related to cell adhesion and morphology as well as to cell proliferation. The presence of Ca and P on Ti6Al4V800 and Ti6Al4V1050 could indicate an important effect of the heat treatment on biocompatibility. Therefore, these heat-treated surfaces could provide the Ti6Al4V alloy with an improvement in its performance as a biomaterial for the manufacture of orthopedic implants.
Semilinear elliptic Schrödinger equations involving singular potentials and source terms
Let $\Omega \subset \mathbb{R}^N$ ($N>2$) be a $C^2$ bounded domain and $\Sigma \subset \Omega$ be a compact, $C^2$ submanifold without boundary, of dimension $k$ with $0\leq k<N-2$. Put $L_\mu = \Delta + \mu d_\Sigma^{-2}$ in $\Omega \setminus \Sigma$, where $d_\Sigma(x) = \mathrm{dist}(x,\Sigma)$ and $\mu$ is a parameter. We study the boundary value problem (P) $-L_\mu u = g(u) + \tau$ in $\Omega \setminus \Sigma$ with condition $u=\nu$ on $\partial \Omega \cup \Sigma$, where $g: \mathbb{R} \to \mathbb{R}$ is a nondecreasing, continuous function and $\tau$ and $\nu$ are positive measures. The interplay between the inverse-square potential $d_\Sigma^{-2}$, the nature of the source term $g(u)$ and the measure data $\tau,\nu$ yields substantial difficulties in the research of the problem. We perform a deep analysis based on delicate estimate on the Green kernel and Martin kernel and fine topologies induced by appropriate capacities to establish various necessary and sufficient conditions for the existence of a solution in different cases.
1. Introduction

1.1. Motivation and aim. The study of Schrödinger equations is an active topic in the area of partial differential equations because of its applications in encoding physical properties of quantum systems. In the literature, a large number of publications have been devoted to the investigation of stationary Schrödinger equations involving the Laplacian with a singular potential. The presence of the singular potential gives the analysis distinctive features and leads to new phenomena.
The borderline case where the potential is the inverse-square of the distance to a submanifold of the domain under consideration is of interest since in this case the potential admits the same scaling (of degree −2) as the Laplacian and hence cannot be treated simply by standard perturbation methods. Several works have been carried out to investigate the effect of such a potential in various aspects, including a recent study on linear equations.
The present paper originated in attempts to take a step forward in the study of elliptic nonlinear Schrödinger equations involving an inverse-square potential and a source term in measure frameworks.
Various works on problem (1.5) and related problems have been published in the literature, including the excellent papers of Dávila and Dupaigne [9,7,8], where important tools in function settings are established and combined with a monotonicity argument to derive existence, nonexistence and uniqueness of solutions with zero boundary datum. Afterwards, deep nonexistence results for nonnegative distributional supersolutions were obtained by Fall [10] via a linearization argument. Recently, a description of isolated singularities in the case Σ = {0} ⊂ Ω was provided by Chen and Zhou [6].
In the present paper, the interplay between the dimension of the set Σ, the value of µ, the growth of the source term and the concentration of the measure data causes the invalidity, or quite restrictive applicability, of the techniques used in the papers mentioned above and leads to the involvement of several critical exponents for the solvability of problem (1.5). Therefore, our aim is to perform a further analysis and to establish effective tools which allow us to obtain existence and nonexistence results for (1.5).
The notion of weak solution of (1.5) is given below. By Theorem 4.8, u is a weak solution of (1.5) if and only if it satisfies the corresponding integral identity in Ω \ Σ.
Our main results disclose different scenarios, depending on the interplay between the concentration and the total variation of the measure data and the size of the set Σ, in which the existence of a solution to (1.5) can be derived. In the following theorem, we show the existence, as well as weak Lebesgue estimates, of a solution to (1.5) provided that the nonlinearity g has mild growth and the measure data have small norm; here g satisfies an integral subcriticality condition (1.10) for some q ∈ (1, ∞), and

|g(s)| ≤ a|s|^{q̃} for some a > 0, q̃ > 1 and for any |s| ≤ 1.   (1.11)

Assume one of the following conditions holds.
and (1.10) holds with q = (N+γ)/(N+γ−2); then the conclusion of Theorem 1.2 holds true with this q. When (1.13) holds instead, the conclusion of Theorem 1.2 holds true with q as in (1.13); likewise, under (1.14), the conclusion of Theorem 1.2 holds true with q as in (1.14).
Assume ν has compact support in ∂Ω with ‖ν‖_{M(∂Ω∪Σ)} = 1, and (1.10) holds with q = (N+1)/(N−1). Then the conclusion of Theorem 1.2 holds true with q = (N+1)/(N−1). We remark that condition (1.14) is not sharp. When g is a pure power function, condition (1.14) can be improved to be sharp, as pointed out in the remark following Theorem 1.5.
When g is a power function, namely g(u) = |u|^{p−1}u for p > 1, problem (1.5) becomes problem (1.15). We will point out below that the exponents (N+γ)/(N+γ−2), (N−α₋)/(N−α₋−2) and (N+1)/(N−1) are critical exponents for the existence of a solution to (1.15). Moreover, by performing a further analysis, we are able to provide necessary and sufficient conditions in terms of estimates of the Green kernel and Martin kernel, as well as in terms of appropriate capacities.
We first consider (1.15) with σν = 0. Let us introduce suitable capacities for α ≤ N − 2; here 1_E denotes the indicator function of E. By [1, Theorem 2.5.1], we have the following. Theorem 1.4. We assume that µ < ((N−2)/2)². This is shown in Theorem 6.18. Next we investigate (1.15) with τ = 0. To this end, we make use of a different type of capacities, whose definition is introduced in (6.33). These capacities are denoted by Cap^Γ_{θ,s}, where Γ = ∂Ω or Γ = Σ; they allow us to measure Borel subsets of ∂Ω ∪ Σ in a subtle way. For ν with compact support in Σ, the following statements are equivalent.
Existence results in the case where the boundary data are concentrated on ∂Ω are stated in the next theorem. When p ≥ (N+1)/(N−1), for any σ > 0 and any z ∈ ∂Ω, equation (1.18) with ν = δ_z does not admit any positive solution (see Remark 6.19). It will also be pointed out that when µ > 0 and p ≥ (2+α₋)/α₋, for any σ > 0 and any ν ∈ M⁺(∂Ω ∪ Σ) with compact support in ∂Ω, problem (1.18) does not admit any positive weak solution. This is discussed in Lemma 6.15.
Organization of the paper. In Section 2, we present the main properties of the submanifold Σ and recall important facts about the first eigenfunction, the Green kernel and the Martin kernel of −L_µ. In Section 3, we establish sharp estimates on the Green operator and the Martin operator, which play an important role in proving the existence of a solution to (1.5). We then discuss the notion of boundary trace and several results regarding linear equations involving −L_µ in Section 4. Section 5 is devoted to the proof of Theorems 1.2 and 1.3. In Section 6, we focus on the power case and provide the proof of Theorems 1.4–1.6. In Appendix A, we give an estimate which is useful in the proof of several results in Section 3.
Notations.
We list below notations that are frequently used in the paper.
• Let φ be a positive continuous function in Ω \ Σ and κ ≥ 1. Let L^κ(Ω; φ) be the space of functions f such that ∫_Ω |f|^κ φ dx < ∞. The weighted Sobolev space H¹(Ω; φ) is the space of functions f ∈ L²(Ω; φ) such that ∇f ∈ L²(Ω; φ); this space is endowed with the norm ‖f‖²_{H¹(Ω;φ)} = ∫_Ω (|f|² + |∇f|²) φ dx.

• For a measure ω, denote by ω⁺ and ω⁻ the positive part and the negative part of ω.
• We denote by c, c₁, C, ... constants which depend on the initial parameters and may change from one appearance to another.
• The notation A ≳ B (resp. A ≲ B) means A ≥ c B (resp. A ≤ c B), where the implicit constant c is positive and depends on some initial parameters. If A ≳ B and A ≲ B, we write A ≈ B. Throughout the paper, most of the implicit constants depend on some (or all) of the initial parameters such as N, Ω, Σ, k, µ, and we will omit these dependencies in the notations (except when necessary).
• For x ∈ R^N, write x = (x′, x″) with x′ = (x₁, ..., x_k) ∈ R^k and x″ = (x_{k+1}, ..., x_N) ∈ R^{N−k}. For β > 0, we denote by B^k_β(x′) the ball in R^k with center x′ and radius β. For any ξ ∈ Σ, the submanifold is represented locally as a graph of functions Γ^ξ_i : R^k → R, i = k+1, ..., N. Since Σ is a C² compact submanifold of R^N without boundary, there is β₀ such that the following hold.
Weak Lebesgue estimates
3.1. Auxiliary estimates. We first recall the definition of weak Lebesgue spaces (or Marcinkiewicz spaces). Let D ⊂ R^N be a domain. Denote by L^κ_w(D; τ), 1 ≤ κ < ∞, τ ∈ M⁺(D), the weak L^κ space defined as follows: a measurable function f in D belongs to this space if there exists a constant c such that λ_f(a) ≤ c a^{−κ} for all a > 0, where λ_f is the distribution function of f (relative to τ). For κ ≥ 1, the associated quantity ‖f‖_{L^κ_w(D;τ)} is not a norm, but for κ > 1 it is equivalent to a norm; see the display below. We also denote by L̃^κ_w the weak-type L^κ space with the corresponding norm. When dτ = ϕ dx for some positive continuous function ϕ, for simplicity we use the notation L^κ_w(D; ϕ). Notice that L^κ_w(D; ϕ) ⊂ L^r(D; ϕ) for any r ∈ [1, κ). From (3.2) and (3.3), one can derive an estimate, valid for any f ∈ L^κ_w(D; ϕ), which is useful in the sequel. Let us recall a result from [4] which will be used in the proof of the weak Lebesgue estimates for the Green kernel and the Martin kernel.
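The displays defining these quantities did not survive extraction; the standard conventions, which we reconstruct here for the reader's convenience (they should be checked against the original), read:

```latex
% Standard Marcinkiewicz-space conventions (reconstruction, not verbatim from the paper)
\lambda_f(a;\tau) := \tau\big(\{x \in D : |f(x)| > a\}\big), \qquad
\|f\|_{L^\kappa_w(D;\tau)} := \sup_{a>0}\, a\,\lambda_f(a;\tau)^{1/\kappa},
```

and, for κ > 1, the equivalent norm

```latex
\|f\|^{*}_{L^\kappa_w(D;\tau)} := \sup\Big\{ \tau(E)^{-1+1/\kappa}\!\int_E |f|\,d\tau \;:\; E\subset D \text{ Borel},\ \tau(E)>0 \Big\}.
```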
Let β₁ be as in (2.1). We write the decomposition accordingly and estimate successively the terms on the right-hand side of (3.10); we consider only the case H < N − 2. We split the first term on the right-hand side of (3.10) as in (3.11), and we note (3.14). Next, by (3.13), we obtain, for every λ ≥ 1, the corresponding bound. Combining (3.11), (3.14) and (3.15) yields, for any λ ≥ 1, the desired estimate. Next we estimate the second term on the right-hand side of (3.10). By (3.12), this yields, for λ ≥ 1, (3.18), where C = C(N, Ω, Σ, α, γ). By applying the proposition above with ω = 1_Σ τ and using (3.18), we finally derive (3.9). By using a similar argument as in the proof of Lemma 3.2, one can obtain the following lemma.

Proof. For y ∈ ∂Ω ∪ Σ, set the quantities as above. We may assume that ν has compact support in ∂Ω and, without loss of generality, that ν ≥ 0. Let y ∈ ∂Ω. Consequently, the first bound holds for all λ > C̃. Next we treat the second term on the right-hand side of (3.30). By using the estimate d(x) ≤ |x − y|, we see that the corresponding bound holds for λ ≥ 1. Combining (3.31) and (3.32), and taking ω = ν, we obtain (3.54).

We put H̃ as above, where D_Ω = 2 sup_{x∈Ω} |x| (3.38). The implicit constants in the above estimates depend only on N, Ω, α. Here the weak Lebesgue spaces L̃^p_w are defined in (3.4).
3.2. Weak Lebesgue estimates on the Green kernel. In this subsection, we will use the results of the previous subsection to establish estimates of the Green kernel. Let ϕ_{α,γ} be as in (3.6). For a measure τ on Ω \ Σ, the Green operator acting on τ is G_µ[τ](x) = ∫_{Ω\Σ} G_µ(x, y) dτ(y). The implicit constant depends on N, Ω, Σ, µ, γ.
Proof. Without loss of generality we may assume that τ is nonnegative. We consider the following cases.
Proof. For y ∈ Ω \ Σ and λ > 0, set the level sets as above. Let β₀ be as in Subsection 2.1. We write the decomposition accordingly and note that, for Γ = ∂Ω or Σ, the corresponding estimates hold. The implicit constants in the above estimates depend on N, Ω, Σ, µ.
Boundary value problem for linear equations
In this section, we first recall the notion of boundary trace, which is defined with respect to harmonic measures related to L_µ. Then we provide the existence, uniqueness and a priori estimates of the solution to the boundary value problem for linear equations. We refer the reader to [17] for the proofs. Let z ∈ Ω \ Σ and h ∈ C(∂Ω ∪ Σ), and denote L_{µ,z}(h) := v_h(z), where v_h is the unique solution of the Dirichlet problem (4.1). Here the boundary value condition in (4.1) is understood in the sense that v_h attains h on ∂Ω ∪ Σ. The mapping h ↦ L_{µ,z}(h) is a positive linear functional on C(∂Ω ∪ Σ). Thus there exists a unique Borel measure on ∂Ω ∪ Σ, called the L_µ-harmonic measure in ∂Ω ∪ Σ relative to z and denoted by ω^z. Let x₀ ∈ Ω \ Σ be a fixed reference point. Let {Ω_n} be an increasing sequence of bounded C² domains as above. For each n, set O_n = Ω_n \ Σ_n and assume that x₀ ∈ O₁. Such a sequence {O_n} will be called a C² exhaustion of Ω \ Σ. Then −L_µ is uniformly elliptic and coercive in H¹₀(O_n), and its first eigenvalue λ^{O_n}_µ in O_n is larger than its first eigenvalue λ_µ in Ω \ Σ.
For h ∈ C(∂O_n), the corresponding problem admits a unique solution, which allows one to define the L_µ-harmonic measure ω^{x₀}_{O_n} on ∂O_n by v(x₀) = ∫_{∂O_n} h(y) dω^{x₀}_{O_n}(y).
Let G^{O_n}_µ(x, y) be the Green kernel of −L_µ on O_n. Then G^{O_n}_µ(x, y) ↑ G_µ(x, y) for x, y ∈ Ω \ Σ, x ≠ y.
We recall below the definition of the boundary trace, which is defined in a dynamic way; the boundary trace of u is denoted by tr(u). Let u be a nonnegative L_µ-superharmonic function. Then u ∈ L¹(Ω; φ_µ) and there exist positive measures τ ∈ M⁺(Ω \ Σ; φ_µ) and ν ∈ M⁺(∂Ω ∪ Σ) associated with u. Let φ be a concave, nondecreasing C² function on [0, ∞) such that φ(1) ≥ 0. Then the function φ′(w/ψ)ϕ belongs to L¹(Ω; φ_µ) and the corresponding inequality holds in the weak sense. Proof. The proof is the same as that of [18, Proposition 3.1] and we omit it.
General nonlinearities
In this section, we provide various sufficient conditions for the existence of a solution to (1.5). Throughout this section we assume that g : ℝ → ℝ is continuous, nondecreasing and satisfies g(0) = 0. We start with the following result: for any s_0 > e^{2m q}, estimates (5.3)-(5.4) hold. Proof. We note that g(|v|) ≥ g(0) = 0. Let s_0 > 1 be a constant to be determined later. Using the fact that g is nondecreasing, we obtain the claimed bound; thus we have proved estimate (5.3). By applying estimate (5.3) with g replaced by h(t) = −g(−t), we obtain (5.4).
Assume one of the following conditions holds.
(ii) The case 1_{∂Ω}ν ≡ 0, where (5.8) holds with q = (N+1)/(N−1), can be treated similarly to case (i) with minor modifications, and hence we omit it.
Proof of Theorem 1.3. The proof of statements (i), (ii) and (iv) is similar to that of Theorem 1.2 and we omit it. As for statement (iii), the point that requires attention is the use of Theorem 3.8 (for µ > 0) and Theorem 3.10 (for µ ≤ 0) for Q_1(S(v)), as in the first estimate in (5.13). In particular, for µ ≤ H², the estimate is valid for q = min{(N+1)/(N−1), ...}. The rest of the proof of statement (iii) proceeds as in the proofs of Lemma 5.2 and Theorem 1.2 and is left to the reader.
6.1. Partial existence results. We provide below necessary and sufficient conditions, expressed in terms of the Green kernel and the Martin kernel, for the existence of a solution to (6.1).
Let Z be a metric space and ω ∈ M⁺(Z). Let J : Z × Z → (0, ∞] be a positive Borel kernel such that J is symmetric and J⁻¹ satisfies a quasi-metric inequality, i.e. there is a constant C > 1 such that the quasi-metric bound holds for all x, y, z ∈ Z. For t > 1, the capacity Cap^ω_{J,t} in Z is defined for any Borel set E ⊂ Z. We will point out below that N_α defined in (6.9), with dω = d(x)^b d_Σ(x)^θ 1_{Ω\Σ}(x) dx, satisfies all the assumptions on J in Proposition 6.3 for some appropriate b, θ ∈ ℝ. Let us first prove the quasi-metric inequality.
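For the reader's convenience, the standard form of this capacity in the quasi-metric framework of Kalton and Verbitsky reads as follows; this is a sketch in our notation, not a verbatim restatement of (6.9) or of the paper's displayed definition.

```latex
% Capacity associated with a symmetric kernel J and measure \omega
% (standard Kalton--Verbitsky form; a reference sketch, not the source's display).
\[
  \mathbb{J}f(x) \;:=\; \int_Z J(x,y)\, f(y)\, d\omega(y), \qquad f \ge 0,
\]
\[
  \operatorname{Cap}^{\omega}_{J,t}(E) \;:=\;
  \inf\Big\{ \int_Z f^{t}\, d\omega \;:\; f \ge 0,\ \ \mathbb{J}f \ge 1 \ \text{on } E \Big\},
  \qquad E \subset Z \ \text{Borel},\ t > 1.
\]
```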
There exists a positive constant C = C(Ω, Σ, α) such that (6.10) holds for all x, y, z ∈ Ω. Proof. Let 0 ≤ b ≤ 2. We first claim that there exists a positive constant C = C(N, b, α) such that inequality (6.11) is valid. In order to prove (6.11), we consider two cases.
Next we give sufficient conditions for (6.7) and (6.8) to hold.
for any y ∈ B(x, s), therefore (6.16) follows easily in this case.
In addition, this implies that A ⊂ B(x, s). Consequently, the integral of d_Σ(y)^θ dy can be taken over B(x, s), and hence (6.16) follows by Case 1.
From the above observations and Proposition 6.3, we obtain the desired results.
In order to study the boundary value problem with measure data concentrated on ∂Ω∪Σ, we make use of specific capacities which are defined below.
For α ∈ ℝ we define the Bessel kernel of order α in ℝ^d via its Fourier transform B̂_{d,α}(ξ). It is known that if 1 < κ < ∞ and α > 0, then L_{α,κ}(ℝ^d) = W^{α,κ}(ℝ^d) if α ∈ ℕ. If α ∉ ℕ, then the positive cones of their duals coincide, i.e. (L_{−α,κ′}(ℝ^d))⁺ = (B_{−α,κ′}(ℝ^d))⁺, always with equivalent norms. The Bessel capacity is defined for compact subsets K ⊂ ℝ^d. We then define the Cap^Γ_{θ,s}-capacity of a compact set E ⊂ Γ analogously. By using the above capacities and Proposition 6.3, we are able to prove Theorem 1.5.
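A sketch of the classical definitions behind this paragraph (Bessel kernel via its Fourier transform, Bessel potential space, and Bessel capacity); these are the standard formulas, recorded here for reference rather than quoted from the source.

```latex
% Classical Bessel kernel, Bessel potential space and Bessel capacity
% (standard definitions, stated as a reference sketch).
\[
  \widehat{B_{d,\alpha}}(\xi) \;=\; \bigl(1 + |\xi|^{2}\bigr)^{-\alpha/2}, \qquad \xi \in \mathbb{R}^{d},
\]
\[
  L_{\alpha,\kappa}(\mathbb{R}^{d}) \;:=\; \bigl\{ u = B_{d,\alpha} * f \;:\; f \in L^{\kappa}(\mathbb{R}^{d}) \bigr\},
  \qquad \|u\|_{L_{\alpha,\kappa}} := \|f\|_{L^{\kappa}},
\]
\[
  \operatorname{Cap}^{\mathbb{R}^{d}}_{\alpha,\kappa}(K) \;:=\;
  \inf\bigl\{ \|f\|_{L^{\kappa}}^{\kappa} \;:\; f \ge 0,\ \ B_{d,\alpha} * f \ge 1 \ \text{on } K \bigr\}.
\]
```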
Proof of Theorem 1.5. First we note that (6.24) holds; by using a similar argument as in the proof of Theorem 1.4, together with (6.24) and (6.35), we deduce the corresponding property of the equation. Therefore, as in the proof of Theorem 1.4, in light of Lemmas 6.4, 6.6 and 6.7, we may apply Proposition 6.3 with J(x, y)^{p+1} dx and λ = ν. Estimate (6.6) is satisfied thanks to Lemma 6.4, while assumptions (6.7)-(6.8) are fulfilled thanks to Lemmas 6.6-6.7, respectively, with b = p + 1 and θ = −α − (p + 1). We note that the condition p < (2 + α₊)/α₊ ensures that b and θ satisfy the assumptions in Lemmas 6.6-6.7. Therefore, by employing Proposition 6.3, we can show that statements 1-3 of Proposition 6.3 are equivalent to statements 1-3 of the present theorem, respectively.
When ν is concentrated on ∂Ω, we also obtain criteria for the existence of a solution to problem (6.1). We will treat the case µ < (N − 2)²/4 by applying Proposition 6.3 with J(x, y)^{p+1} dx and λ = ν, in order to show that statements 1-3 of Proposition 6.3 are equivalent to statements 1-3 of the present theorem, respectively.
Next we show that statement 4 of Proposition 6.3 is equivalent to statement 4 of the present theorem. More precisely, we show that (6.40) holds for any subset E ⊂ ∂Ω. Indeed, by a similar argument as in the proof of (6.38), under the stated assumptions on p, we can show that the corresponding estimate holds for any λ ∈ M⁺(∂Ω ∪ Σ) with compact support in ∂Ω. Therefore, in view of the proof of [3, Proposition 2.9] (with α = β = 2, s = p′ and α_0 = p + 1) and (6.23), we obtain (6.40). The proof is complete.
For simplicity, we assume that 0 ∈ Σ. Then, for x near 0, the local expansion holds, which together with (2.7) implies G_{H²}(x, y) ≲ G_{H²,ε}(x, y) for all x, y ∈ Ω \ {0}, x ≠ y. Proceeding as in the proof of Theorem 1.4, we obtain the following result. Proceeding as in the proof of Theorem 6.18, we obtain the desired result.
Remark 6.19. If p < (N+1)/(N−1), by using (6.34) we obtain that inf_{z∈∂Ω} Cap^{∂Ω} is positive. Appendix A. Some estimates. In this appendix, we give an estimate which is used several times in the paper.
Sorafenib as an Inhibitor of RUVBL2.
RUVBL1 and RUVBL2 are highly conserved ATPases that belong to the AAA+ (ATPases Associated with various cellular Activities) superfamily and are involved in various complexes and cellular processes, several of which are closely linked to oncogenesis. The proteins were implicated in DNA damage signaling and repair, chromatin remodeling, telomerase activity, and in modulating the transcriptional activities of proto-oncogenes such as c-Myc and β-catenin. Moreover, both proteins were found to be overexpressed in several different types of cancers such as breast, lung, kidney, bladder, and leukemia. Given their various roles and strong involvement in carcinogenesis, the RUVBL proteins are considered to be novel targets for the discovery and development of therapeutic cancer drugs. Here, we describe the identification of sorafenib as a novel inhibitor of the ATPase activity of human RUVBL2. Enzyme kinetics and surface plasmon resonance experiments revealed that sorafenib is a weak, mixed non-competitive inhibitor of the protein’s ATPase activity. Size exclusion chromatography and small angle X-ray scattering data indicated that the interaction of sorafenib with RUVBL2 does not cause a significant effect on the solution conformation of the protein; however, the data suggested that the effect of sorafenib on RUVBL2 activity is mediated by the insertion domain in the protein. Sorafenib also inhibited the ATPase activity of the RUVBL1/2 complex. Hence, we propose that sorafenib could be further optimized to be a potent inhibitor of the RUVBL proteins.
Introduction
Human RUVBL1 (also known as pontin in mammals or Rvb1 in yeast) and its paralogue human RUVBL2 (also known as reptin in mammals or Rvb2 in yeast) share 41% sequence identity and 64% sequence similarity and belong to the AAA+ (ATPases associated with diverse cellular activities) superfamily of ATPases, a lineage of the P-loop NTPases. This class of ATPases is present in all kingdoms of life and is divided into numerous groups, clades, and families based on structural and sequence analyses [1][2][3]. AAA+ proteins usually form oligomeric assemblies. [...] the oligomeric state of RUVBL2 and led to a shift of RUVBL1/2 complex localization from the cytoplasm to the nucleus in cancer cells [31].
The biotechnology company Daiichi Sankyo (Japan) filed a patent in 2015 (WIPO Patent Application WO/2015/125786) for an aminopyrazolone derivative that inhibits the ATPase activity of the RUVBL1/2 complex. They reported promising efficacy in several mouse xenograft models [32]. Also, Cleave Biosciences (CA, USA) described a compound, CB-6644, which is a derivative of the compounds described by Daiichi Sankyo that inhibited the ATPase activity of the RUVBL1/2 complex [33]. The compound showed antitumor activity when assessed in SCID-beige mice bearing human tumor xenografts derived from either Burkitt's lymphoma (Ramos) or multiple myeloma (RPMI8226) cell lines that were among the most sensitive to CB-6644 treatment in a cell panel screen.
Both Daiichi Sankyo and Cleave Biosciences described inhibitors for the RUVBL1/2 complex only and not the individual proteins. Therefore, in this study, we concentrated on identifying inhibitors of RUVBL2, which has higher ATPase activity than RUVBL1. We performed high-throughput screening for inhibitors of the ATPase activity of human RUVBL2 that led to the discovery of sorafenib, a drug already being used in the treatment of liver and kidney cancer. Sorafenib was found to be a mixed non-competitive inhibitor of RUVBL2 with a K d value of about 22 µM. Sorafenib also inhibited RUVBL1/2 ATPase.
Recombinant Protein Expression and Purification
The plasmids and strains used to express and purify all the RUVBL proteins are given in Table S1. The Profinity eXact pPAL7 expression vector is from Bio-Rad (Berkeley, CA, USA). Point mutants were generated using the QuikChange kit (Stratagene, Berkeley, CA, USA). Primers and the respective restriction cut sites are listed in Table S2. All constructs were verified by DNA sequencing at The Centre for Applied Genomics (TCAG) facility at the Hospital for Sick Children.
To express the relevant proteins, strains were grown in Lysogeny Broth (LB) medium at 37 °C to OD600 = 0.6, and expression was induced with 1 mM IPTG overnight at 18 °C. Constructs with an N-terminal Profinity eXact tag were expressed in E. coli BL21(DE3) pRIL and purified using Profinity eXact resin according to the manufacturer's protocol. Eluted proteins were then dialyzed in buffer A (25 mM TrisHCl, pH 7.5, 50 mM NaCl, 10% glycerol, 1 mM DTT) for 4 h and injected onto a MonoQ 5/50 GL column (GE Healthcare, Chicago, IL, USA) connected to either an AKTA FPLC system or a BioLogic DuoFlow system (Bio-Rad) and equilibrated with buffer A prior to the application of a segmented gradient from buffer A to buffer B (25 mM TrisHCl, pH 7.5, 500 mM NaCl, 10% glycerol, 1 mM DTT) over 50 mL (50 column volumes). This MonoQ step was repeated once more for fractions containing the relevant protein.
N-terminal His6-TEV fusion constructs were expressed in E. coli BL21(DE3) pRIL and purified using Ni-nitrilotriacetic acid resin (Ni-NTA, Qiagen, Hilden, Germany) according to the manufacturer's protocol. Eluted proteins were incubated overnight with TEV protease at a 10:1 molar ratio of protein to TEV, dialyzed in buffer C (25 mM TrisHCl, pH 7.5, 50 mM KCl, 10% glycerol, 5 mM β-mercaptoethanol) for 4 h, and passed through Ni-NTA resin to separate cleaved from uncleaved protein. Proteins were then injected onto a Mono Q 10/100 GL column (GE Healthcare) connected to either an AKTA FPLC system (GE Healthcare Life Sciences) or a BioLogic DuoFlow system (Bio-Rad) equilibrated with buffer C, with a gradient from buffer C to buffer D (25 mM TrisHCl, pH 7.5, 500 mM KCl, 10% glycerol, 1 mM DTT) over 120 mL (15 column volumes). The Mono Q step was repeated once more.
For co-expression of the eXact tag-RUVBL2/RUVBL1-TEV-His 6 complex, a pCOLA-Duet1 vector encoding both eXact tag-RUVBL2 and RUVBL1-TEV-His 6 was transformed into BL21(DE3) pRIL E. coli. The complex was purified first using the Profinity eXact resin and then dialyzed into buffer E (100 mM sodium phosphate, pH 7.2, 10% glycerol) for 4 h. The complex was subsequently purified using the Ni-NTA as described above, then dialyzed into buffer C and further purified using Mono Q 10/100 GL column.
All proteins were dialyzed and stored in buffer F (40 mM TrisHCl, pH 7.5, 200 mM KCl, 5 mM MgCl 2, 10% glycerol, 1 mM DTT). The concentrations of the purified proteins were determined by absorbance at 280 nm.
ATPase Assays
The ATPase activity of the RUVBL proteins was determined using the ATP/NADH coupled ATPase assay [34]. In this assay, the regeneration of hydrolyzed ATP is coupled to the oxidation of NADH. The ATP hydrolysis rate was determined by measuring the decrease in NADH absorbance at 340 nm in a 150 µL reaction volume. Samples were placed in 96-well flat-bottom microplates and the absorbance change was monitored using a SpectraMax 340PC 384 microplate reader (Molecular Devices). Typically, the reaction consisted of 3 mM phosphoenolpyruvate, 0.2 mM NADH, 40 units/mL pyruvate kinase, and 58 units/mL lactate dehydrogenase in ATPase reaction buffer (20 mM TrisHCl, pH 7.5, 200 mM KCl, 8 mM MgCl2, 10% glycerol), plus 5 mM ATP (or a range of ATP concentrations). The reaction components without ATP and the ATP were incubated separately at 37 °C for 10 min, and the reaction was then started by adding ATP to the rest of the reaction components. The assay was performed at 37 °C and readings were taken over an hour at 20-s intervals. The rates were corrected for background signal and averaged over selected time intervals during which the absorbance decrease was linear.
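As a worked illustration of the rate calculation, the slope of the A340 trace can be converted into an ATP hydrolysis rate using the standard NADH extinction coefficient at 340 nm (6220 M⁻¹ cm⁻¹, one NADH oxidized per ATP regenerated); the path length assumed for a 150 µL well and the synthetic trace below are assumptions for illustration, not measured values.

```python
# Sketch: convert the linear decrease in A340 into an ATP hydrolysis rate for the
# ATP/NADH coupled assay. EPSILON_NADH is the standard extinction coefficient of
# NADH at 340 nm; PATH_LENGTH for 150 uL in a 96-well plate is an assumed value.
import numpy as np

EPSILON_NADH = 6220.0   # M^-1 cm^-1, NADH at 340 nm (standard value)
PATH_LENGTH = 0.45      # cm, approximate for 150 uL in a flat-bottom well (assumed)

def atpase_rate(a340, t_min, enzyme_uM):
    """Return the ATP hydrolysis rate (min^-1 per protomer) from A340 vs time (min)."""
    slope = np.polyfit(t_min, a340, 1)[0]          # AU/min; negative during hydrolysis
    nadh_M_per_min = -slope / (EPSILON_NADH * PATH_LENGTH)
    return nadh_M_per_min / (enzyme_uM * 1e-6)     # ATP/min per enzyme protomer

# Example with a synthetic trace read every 20 s over a linear interval:
t = np.arange(0, 10, 1 / 3)                        # minutes
a340 = 1.0 - 0.004 * t                             # assumed linear decay
print(f"k_obs = {atpase_rate(a340, t, enzyme_uM=10.0):.3f} ATP/min/protomer")
```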
For ATPase assays with chemical compounds, compounds were incubated with the protein on ice for 30 min and spun down prior to use in the assay. The final compound concentration used in each assay is specified in the figures. All molecules were dissolved in DMSO; the final amount of DMSO was 1%.
Analysis of the Kinetic Parameters of RUVBL ATPase
To determine the IC50 value for sorafenib on the ATPase activity of RUVBL2, the ATPase activity of 10 µM RUVBL2 at different sorafenib concentrations (0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 51.2, and 60 µM) was measured and the percent inhibition was obtained. The IC50 value was computed using the OriginPro software by fitting to the dose-response function, where y is the measured percent inhibition, I_min is the minimum percent inhibition, I_max is the maximum percent inhibition, [sorafenib] is the molar concentration of sorafenib, and h is the Hill coefficient.
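A sketch of the corresponding fit in Python (scipy), assuming a four-parameter logistic form of the dose-response function; the inhibition values are illustrative placeholders, not the published data, and the zero-compound point serves only as the uninhibited baseline, so it is omitted from the fit.

```python
# Sketch: fit percent inhibition vs. sorafenib concentration to a four-parameter
# logistic (Hill) curve to estimate IC50, analogous to the OriginPro fit in the
# text. Data points are placeholders, not the published measurements.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, i_min, i_max, ic50, h):
    """Percent inhibition as a function of inhibitor concentration (uM)."""
    return i_min + (i_max - i_min) / (1.0 + (ic50 / conc) ** h)

conc = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 51.2, 60.0])  # uM
inhib = np.array([3, 6, 11, 20, 33, 50, 64, 74, 82, 83])                # % (illustrative)

popt, pcov = curve_fit(dose_response, conc, inhib, p0=[0.0, 85.0, 3.0, 1.0])
i_min, i_max, ic50, h = popt
print(f"IC50 = {ic50:.2f} uM, Hill coefficient = {h:.2f}")
```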
To obtain the kinetic parameters of the RUVBL proteins, the ATPase activities of the proteins were measured at different ATP concentrations. The monomeric concentration of the protein was 10 µM, and the concentration of ATP titrated was in the range of 0.1 mM to 7 mM. Each experiment was repeated in triplicate. The Michaelis-Menten constant K_M and the maximal velocity V_max were obtained by fitting the experimental initial velocity values V_0 at different ATP concentrations to the Michaelis-Menten equation. The ATPase assay was used to measure the ATPase activity of 10 µM RUVBL2 at different sorafenib concentrations (0, 1, 3, 4, 5, and 6 µM) while titrating ATP (ranging from 0 to 6 mM). Each experiment was repeated in triplicate. K_i and K_i′ values were calculated using the Lineweaver-Burk equation for mixed inhibition, where V_0 is the initial velocity, V_max is the maximal velocity, and α = 1 + [I]/K_i.
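The same analysis can be expressed as a global nonlinear fit of the mixed-inhibition rate equation rather than a Lineweaver-Burk linearization; a sketch with synthetic data, where the parameter values are assumptions chosen near the constants reported later in the paper.

```python
# Sketch: global fit of initial velocities at several sorafenib concentrations to
# the mixed (non-competitive) inhibition model
#     v = Vmax*[S] / (alpha*Km + alpha'*[S]),
#     alpha = 1 + [I]/Ki,  alpha' = 1 + [I]/Ki',
# equivalent to the Lineweaver-Burk analysis in the text. Data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def mixed_inhibition(X, vmax, km, ki, ki_prime):
    s, i = X                                   # [ATP] in mM, [sorafenib] in uM
    alpha = 1.0 + i / ki
    alpha_prime = 1.0 + i / ki_prime
    return vmax * s / (alpha * km + alpha_prime * s)

# Synthetic grid: ATP 0.1-6 mM at sorafenib 0-6 uM, with mild noise.
s = np.tile(np.array([0.1, 0.5, 1.0, 2.0, 4.0, 6.0]), 6)
i = np.repeat(np.array([0.0, 1.0, 3.0, 4.0, 5.0, 6.0]), 6)
v_true = mixed_inhibition((s, i), vmax=1.0, km=0.6, ki=0.9, ki_prime=1.9)
rng = np.random.default_rng(0)
v_obs = v_true * (1 + 0.03 * rng.standard_normal(v_true.size))

popt, _ = curve_fit(mixed_inhibition, (s, i), v_obs, p0=[1.0, 0.5, 1.0, 2.0])
print("Vmax={:.2f}, Km={:.2f} mM, Ki={:.2f} uM, Ki'={:.2f} uM".format(*popt))
```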
ATPlite TM Luminescence Assay for High-Throughput Screening
The PerkinElmer ATPlite™ 1step Luminescence ATP Detection Assay System was used to screen for inhibitors of the ATPase activity of human RUVBL2. ATPlite is an ATP monitoring system based on firefly (Photinus pyralis) luciferase: light is produced by the reaction of ATP with luciferase and D-luciferin, and the emitted light is proportional to the ATP concentration within certain limits. The intensity of the emitted light decreases as ATP is hydrolyzed. Typically, 10 µM RUVBL2 protomer was incubated with 100 µM ATP at 37 °C for 3 h in the presence or absence of compounds (5 µM). RUVBL2 storage buffer F was similarly incubated with 100 µM ATP to serve as a control. The final concentration of DMSO was 0.01%. Aliquots were taken from the reaction at different time intervals to monitor ATP hydrolysis. Reaction samples after 3 h of incubation were mixed with an equal volume of ATPlite substrate and the luminescence was read after 2 min. B-scores were calculated to remove positional errors [35].
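For reference, B-score normalization (Brideau et al.) applies a two-way median polish to each plate and scales the residuals by the plate's median absolute deviation; a minimal sketch, with a randomly generated plate standing in for real luminescence data.

```python
# Sketch of B-score normalization for plate-based HTS data: a two-way median
# polish removes row/column positional effects, and residuals are scaled by the
# median absolute deviation (MAD). Plate data are assumed to be a 2-D array of
# raw luminescence signals; the example plate below is synthetic.
import numpy as np

def median_polish(plate, n_iter=10):
    """Iteratively remove row and column medians; return the residual matrix."""
    r = plate.astype(float).copy()
    for _ in range(n_iter):
        r -= np.median(r, axis=1, keepdims=True)   # row effects
        r -= np.median(r, axis=0, keepdims=True)   # column effects
    return r

def b_scores(plate):
    resid = median_polish(plate)
    mad = np.median(np.abs(resid - np.median(resid)))
    return resid / (1.4826 * mad)                  # 1.4826 makes MAD ~ sigma

# Example on a random 8x12 (96-well) plate with an artificial edge effect:
rng = np.random.default_rng(1)
plate = 100 + rng.normal(0, 5, (8, 12))
plate[:, 0] += 20                                  # simulated column artifact
print(np.round(b_scores(plate), 1))
```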
Surface Plasmon Resonance
SPR measurements were performed at 25 °C using a ProteOn XPR36 instrument (Bio-Rad). Samples were buffer-exchanged into HEPES buffer (25 mM HEPES, pH 7.5, 200 mM KCl, 10% glycerol, 1 mM DTT) prior to the experiment. Proteins were then immobilized by amine coupling to GLH sensor chip surfaces (Bio-Rad). RUVBL2 was coupled to the chip after being diluted with acetate buffer, pH 4.5, to a final concentration of 25 µg/mL. Sorafenib and its analogs, all dissolved in DMSO, were run over the chip using the interaction buffer (10 mM HEPES, pH 7.4, 150 mM NaCl, 5 mM Mg2+, 0.005% TWEEN 20, 3% DMSO).
For equilibrium analysis of RUVBL2-sorafenib binding, sorafenib was diluted as a series of 2-fold dilutions ranging from 150 µM to 0.29 µM in interaction buffer. Sorafenib concentrations and buffer control were injected in the analyte channels with a contact time of 120 s, dissociation time of 120 s, and a flow rate of 30 µL/min.
To perform kinetic analysis of RUVBL2-sorafenib binding, the compound was applied in a series of 2-fold dilutions ranging from 100 µM to 6.25 µM in interaction buffer. Injections of sorafenib and buffer control onto the analyte channels were performed using the following parameters: 40 s contact time, 600 s dissociation time, and 100 µL/min flow rate. Binding kinetic values for k_on, k_off, and K_d are average values calculated by fitting each sensorgram of an SPR data set to a 1:1 Langmuir binding model using ProteOn Manager Software (Bio-Rad). Errors were derived from standard deviations of the values calculated from fitting each binding curve.
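A sketch of the 1:1 Langmuir model underlying these fits: during association the response rises exponentially toward an equilibrium level with observed rate k_on·C + k_off, and it decays exponentially with k_off during dissociation, with K_d = k_off/k_on. The rate constants below are assumptions chosen only to reproduce a K_d near the reported ~22 µM, not fitted values.

```python
# Sketch: 1:1 Langmuir binding model used for the SPR kinetic analysis.
# R(t) = Req*(1 - exp(-(kon*C + koff)*t)) during association; exponential decay
# during dissociation; Kd = koff/kon. Parameter values below are assumptions.
import numpy as np

def langmuir_association(t, conc_M, kon, koff, rmax):
    """Association-phase response (RU) for a 1:1 interaction."""
    kobs = kon * conc_M + koff
    req = rmax * conc_M / (conc_M + koff / kon)    # equilibrium response
    return req * (1.0 - np.exp(-kobs * t))

def langmuir_dissociation(t, r0, koff):
    """Dissociation-phase response (RU) starting from r0."""
    return r0 * np.exp(-koff * t)

kon, koff = 1.0e3, 2.2e-2                          # M^-1 s^-1 and s^-1 (assumed)
print(f"Kd = {koff / kon * 1e6:.1f} uM")           # -> ~22 uM
t = np.linspace(0, 40, 5)                          # s, within the contact time
print(np.round(langmuir_association(t, 100e-6, kon, koff, rmax=50.0), 2))
```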
For the screen of other molecules binding to RUVBL2, sorafenib and its analogs were diluted as previously described to a final concentration of 100 µM and were injected in the analyte channels with a contact time of 120 s, dissociation time of 600 s, and a flow rate of 30 µL/min. These experiments were done in duplicates. Hit molecules that showed binding were diluted as a series of 2-fold dilutions ranging from 100 µM to 0.2 µM and injected over the analyte channels for equilibrium analysis with a contact time of 120 s, dissociation time of 180 s, and a flow rate of 30 µL/min.
All reported data were channel and double referenced, whereby 'no ligand immobilized' and 'interaction buffer' signals were both subtracted from raw data.
Small Angle X-ray Scattering Experiments
Small angle X-ray scattering data were collected at the Brazilian Synchrotron Light Laboratory (CNPEM-LNLS, Campinas/SP, Brazil) using a Pilatus 300K detector (Dectris) and a monochromatic 1.488 Å wavelength X-ray beam. The sample-to-detector distance was ~1000 mm, corresponding to a q-range from 0.01 to 0.50 Å⁻¹. Human RUVBL2 samples at 0.8 mg/mL in buffer G were exposed to the X-ray beam for six frames of 10 s and one frame of 300 s. The same was done for samples containing RUVBL2 + DMSO and RUVBL2 + 30 µM sorafenib (in DMSO). After data inspection for X-ray damage, aggregation, and interparticle interference using the ATSAS 2.7.2 package [36], averaged final curves were generated.
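The radii of gyration reported in the Results can be obtained from such curves by Guinier analysis, in which ln I(q) is linear in q² at low q with slope −R_g²/3; a sketch on an idealized synthetic profile generated with R_g = 47.7 Å (an assumed test value, not the measured data).

```python
# Sketch: Guinier analysis to estimate the radius of gyration Rg from a SAXS
# curve, using ln I(q) = ln I0 - (Rg^2 / 3) * q^2 for q*Rg below ~1.3.
import numpy as np

def guinier_rg(q, intensity, q_rg_max=1.3, rg_guess=50.0):
    """Fit the low-q Guinier region and return Rg (same length units as 1/q)."""
    mask = q * rg_guess < q_rg_max                 # restrict to the Guinier region
    slope, _ = np.polyfit(q[mask] ** 2, np.log(intensity[mask]), 1)
    return np.sqrt(-3.0 * slope)

q = np.linspace(0.01, 0.5, 200)                    # A^-1, matching the q-range above
i_q = 1.0 * np.exp(-(47.7 ** 2) * q ** 2 / 3.0)    # idealized Guinier-only curve
print(f"Rg = {guinier_rg(q, i_q):.1f} A")          # -> ~47.7 A
```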
Sorafenib as an Inhibitor of Human RUVBL2 ATPase Activity
As shown in Table 1, the ATPase activity (initial ATP hydrolysis rate) of human RUVBL2 is about eight-fold higher than that of human RUVBL1. If the Walker B motif in RUVBL1 or RUVBL2 is mutated by replacing the conserved aspartic acid residue (DEVH) with asparagine [RUVBL1(D302N) and RUVBL2(D299N)], then no significant ATPase activity is observed (Table 1). The RUVBL1/2 complex formed by mixing the individually purified proteins exhibited higher ATPase activity than RUVBL1 or RUVBL2 alone. The ATPase activity obtained by mixing the individual proteins was found to be the same as that of the complex obtained by coexpressing RUVBL1 and RUVBL2 (Table 1; see Methods). To test the contribution of each RUVBL subunit to the ATPase activity of the complex, complexes containing one WT and one inactive protein were formed. A RUVBL1/2 complex in which either subunit carried a mutated WB motif showed reduced ATPase activity (Table 1). Mutating the WB of RUVBL2 caused a more significant reduction in the complex ATPase activity than mutating the WB of RUVBL1, and a RUVBL1/2 complex with WB mutations in both proteins had no significant ATPase activity (Table 1). Based on the above results, a high-throughput screen (HTS) based on the ATPlite assay (see Methods) was developed to screen the DIVERSet™ collection from ChemBridge Corp. (San Diego, CA), composed of 10,000 highly diverse drug-like molecules, and a small library of kinase inhibitors (200 compounds) for inhibitors of RUVBL2 ATPase activity. B-scores [35] were calculated for all the screened compounds (Figure 1A) and hits above three standard deviations were selected. Forty-nine compounds were retested using the ATP/NADH ATPase assay (see Methods). Sorafenib and sorafenib p-toluenesulfonate salt (Figure 1B) were identified as validated hits. As shown in Figure 1C, the ATPase activity of 10 µM RUVBL1, RUVBL2, or RUVBL1/2 complex (protomer concentration) was measured in the presence of DMSO or 20 µM sorafenib; DMSO or sorafenib was incubated with the RUVBL proteins for 30 min on ice prior to the assay. Sorafenib was able to inhibit the ATPase activity of RUVBL2 and RUVBL1/2 by about 60% and 40%, respectively. The compound had no effect on RUVBL1; however, the ATPase activity of RUVBL1 is already quite low. We also found that sorafenib can inhibit RUVBL2 from Saccharomyces cerevisiae (ScRvb2), although less efficiently (Figure 1D; 10 µM protein monomer and 30 µM sorafenib). To demonstrate that sorafenib is not a general inhibitor of AAA+ proteins, we tested the effect of the compound on the bacterial Escherichia coli AAA+ proteins ClpX and RavA. No significant inhibition was observed (Figure 1D; 1 µM AAA+ protein protomer and 30 µM sorafenib). These results also demonstrate that sorafenib has no effect on the ATP/NADH or ATPlite assays being used.
To identify sorafenib analogs that might be better inhibitors of RUVBL2, 17 analogs were purchased and tested (structures shown in Figure S1). None of the analogs inhibited the ATPase activity of RUVBL2 except for regorafenib (structure shown in Figure 1B). However, regorafenib was a less efficient inhibitor than sorafenib. At 15 µM regorafenib and 10 µM protein protomer concentration, regorafenib did not inhibit the ATPase activity of RUVBL1 or RUVBL1/2 and inhibited RUVBL2 by about 40% (Figure 1E).
Sorafenib Does Not Affect the Oligomerization of RUVBL2 nor Induce Its Aggregation
To determine whether sorafenib inhibits RUVBL2 ATPase activity by affecting its oligomerization, 10 µM RUVBL2 incubated with either DMSO or 30 µM sorafenib for 30 min was analyzed by size exclusion chromatography (SEC). The SDS-PAGE gels showed that sorafenib did not have a significant effect on the elution profile of RUVBL2 (Figure 2A and Figure S2A). Furthermore, since it has been reported that sorafenib might induce the precipitation or aggregation of proteins [37], 30 µM RUVBL2 was incubated with DMSO or 87 µM sorafenib for 30 min and then spun down. No precipitation was observed. Supernatants were run on SDS-PAGE gels, which did not show a decrease in the levels of soluble RUVBL2, suggesting that sorafenib does not cause precipitation of the protein (Figure 2B and Figure S2B). Furthermore, the ATPase activity of RUVBL2 with increasing concentrations of sorafenib was measured in the presence of Triton X-100, which should prevent or reduce potential protein aggregation. The inhibition by sorafenib persisted under these conditions, with higher sorafenib concentrations leading to lower RUVBL2 ATPase activity (Figure 2C).
Subsequently, small angle X-ray scattering (SAXS) analysis was carried out to determine whether sorafenib causes gross changes in the structure or conformational equilibrium of RUVBL2. Averaged final curves of RUVBL2, RUVBL2 + DMSO, and RUVBL2 + sorafenib were overlapped and no significant differences between their scattering profiles were observed (Figure 2D, upper panel). Calculation of the radii of gyration resulted in values of 47.7 Å for apo RUVBL2, 47.9 Å for RUVBL2 + DMSO, and 47.2 Å for RUVBL2 + sorafenib. These values agree with the previously observed dimensions for human RUVBL2 from X-ray crystallography (52 Å) [14]. To further investigate differences in the overall fold and flexibility of RUVBL2, dimensionless Kratky analysis was performed. Figure 2D, lower panel, shows that the apo RUVBL2, RUVBL2 + DMSO, and RUVBL2 + sorafenib curves have the same Kratky profile. Taken together, these results indicate that neither DMSO nor sorafenib induces significant changes in RUVBL2 structure or oligomerization.
Sorafenib Is a Mixed Non-Competitive Inhibitor of RUVBL2
To characterize the inhibitory effect of sorafenib on RUVBL2, the ATPase activity of the protein was measured with increasing concentrations of sorafenib, and the half-maximal inhibitory concentration (IC50) of sorafenib was found to be 3.1 ± 0.8 µM (Figure 3A).
The type of inhibition caused by sorafenib on RUVBL2 ATPase activity was determined by measuring the rate of ATP hydrolysis using ATP concentrations ranging from 0.1 to 6 mM in the presence of different sorafenib concentrations (0, 1, 3, 4, 5, and 6 µM). The Lineweaver-Burk plots describing the inhibition of RUVBL2 at different inhibitor concentrations are shown in Figure 3B. Analysis of the data revealed that sorafenib acts via a mixed non-competitive inhibition mechanism with a K_i of 0.9 ± 0.3 µM and a K_i′ of 1.9 ± 0.3 µM (Figure 3B). These values are close to the IC50 value obtained (Figure 3A). The binding of sorafenib to RUVBL2 was also assessed using SPR. RUVBL2 was immobilized on a sensor chip and exposed to increasing concentrations of sorafenib (ranging from 0.29 to 150 µM in 2-fold dilutions). The SPR profile indicated an interaction between RUVBL2 and sorafenib with fast association and dissociation rates (Figure 3C). Upon equilibrium analysis using data from all sorafenib concentrations, the dissociation constant (K_d) was calculated to be 22.7 ± 3.7 µM (Figure 3C). The dissociation constant calculated by kinetic analysis from the k_on and k_off values was 21.7 ± 1.0 µM, consistent with that obtained by equilibrium analysis (Figure 3C). Sorafenib binding to RUVBL1 was also investigated using SPR; however, the calculated K_d value was 84.8 ± 3.7 µM by both equilibrium and kinetic analysis (data not shown). This is consistent with our finding that sorafenib does not significantly inhibit RUVBL1 (Figure 1C).
Effect of Sorafenib on the ATPase Activity of RUVBL Mutants
Despite the fact that RUVBL proteins are classified as AAA+ ATPases, their ATPase activity is quite low in comparison to other AAA+ proteins and even to the yeast Rvbs [38,39]. Interestingly, two aspartic acid residues present in a highly conserved motif (DLLDR, where R is the arginine finger) were found to play an important role in the ATP hydrolysis step for Rvb from the archaeon Methanopyrus kandleri [40]. The two aspartic acid residues and the arginine finger protrude into the ATP-binding pocket that encompasses the Walker A and Walker B motifs of the neighboring subunit. Upon mutating D to N in the DLLDR motif of archaeal Rvb, an enhancement in the ATP hydrolysis rate was observed, due to a proposed decrease in the activation barrier for proton transfer from the lytic water to the closest negatively charged proton-accepting residue in WB [40].
Subsequently, the effect of sorafenib on the ATPase activity of the DLLDR mutants of RUVBL2 was tested using a 15 µM compound concentration and a 10 µM protomer concentration. Interestingly, sorafenib inhibited the ATPase activity of all the RUVBL2 DLLDR mutants (Figure 4A,C). More interestingly, the ATPase activity of the RUVBL2 mutant lacking the insertion domain (residues E133-V238 deleted and replaced with AGA; Figure 4B), RUVBL2∆DII, which was measured to be similar to that of RUVBL2 WT, was not inhibited by sorafenib (Figure 4). This suggests that the RUVBL2 insertion domain might influence the sorafenib binding site or that sorafenib binds directly to it, which would be consistent with the mixed non-competitive mode of sorafenib inhibition of RUVBL2 ATPase activity (Figure 3B).
In red is DI, the N-terminal αβα subdomain of the AAA+ domain; in blue is DIII, the C-terminal all-α subdomain of the AAA+ domain; and in yellow is DII, the insertion domain. Boxed is the part of DII that was deleted in RUVBL2∆DII.
(C) Shown is the ATP-binding pocket at the interface between two RUVBL2 subunits. One subunit is colored in light pink and the adjacent monomer is colored in green (PDB: 3UK6); the ADP molecule is colored in black. Conserved motifs and specific amino acids with significant role in ATP-binding and hydrolysis are colored as follows: Walker A (WA) in red with K83 in stick representation, Walker B (WB) in dark blue with D299 in stick representation, Sensor I in hot pink, Sensor II in yellow, and the DLLDR motif in cyan. The first aspartic acid (D349) in DLLDR is colored in brown and the second aspartic acid (D352) is colored in purple and both are shown in stick representation.
Discussion
As mentioned earlier, different groups showed that human RUVBL proteins are critical players in tumor development and metastasis. The ATPase activity of these proteins is essential for most of their roles in cancer progression. Therefore, we were interested in finding inhibitors of the RUVBL proteins and more specifically for RUVBL2 since it exhibits higher ATPase activity than RUVBL1. Our screen led us to discover sorafenib as an inhibitor of human RUVBL2.
Enzyme inhibitors can be classified as irreversible, when they bind tightly or covalently to a target protein, and as reversible, when an inhibitor can be displaced from the enzyme-inhibitor (EI) complex, for instance by competing with the natural enzyme substrate (S) or upon dilution. Among reversible inhibitors, there are competitive inhibitors (which bind to the same site as the substrate, forming an EI complex), non-competitive inhibitors (which bind to a site other than the substrate binding site, forming either ESI or EI complexes), and uncompetitive inhibitors (which bind only to the enzyme-substrate complex, forming an ESI complex) [41]. Sorafenib inhibition of RUVBL2 ATPase was determined to be mixed non-competitive (Figure 3B), suggesting that sorafenib might not bind directly to the ATPase pocket but possibly interacts with different motifs, leading to conformational changes that could affect, for example, the binding of ATP to RUVBL2 or the release of ADP. Further experiments performed on RUVBL2∆DII confirmed that the ATPase activity of RUVBL2 with a truncation of its insertion domain was not inhibited by sorafenib, leading us to propose that the mechanism of action of sorafenib is mediated through its interaction with DII, or that DII flexibility causes RUVBL2 to populate certain conformations to which sorafenib can bind. Recently, it was proposed that the RUVBL2 N-terminal segment may function as a lid for the nucleotide-binding site and that the binding of ATP could induce the recruitment of DII by the N-terminal segment [14,16]. In this sense, binding of sorafenib to DII would influence the dynamics of DII, consequently affecting the RUVBL2 nucleotide exchange rate.
Our work suggests a regulatory role of the insertion domain (DII) in the ATPase activity of human RUVBL proteins. Such a role of DII was initially highlighted in a study published in 2011, which showed that DII has an autoinhibitory function due to its flexibility, since its truncation caused an enhancement of the ATPase activity of the RUVBLs [8]. However, it should be noted that in our study we do not see such an enhancement of RUVBL2∆DII ATPase relative to that of RUVBL2 WT (Figure 4A). The difference could possibly be attributed to the methods used for protein purification, the type of purification tag, and the presence or absence of the tag.
Sorafenib (Nexavar, BAY-43006, Bayer Pharma) is an oral drug approved by the FDA for the treatment of advanced renal cell carcinoma and hepatocellular carcinoma, and its effect on other tumor types such as breast, lung and colon was reported [42,43]. It is a multikinase inhibitor that targets the Raf serine/threonine kinase (Raf-1, WT BRAF, and oncogenic BRAF V600E) and receptor tyrosine kinases (VEGFR 1-3 and PDGFR), which explains its broad activity across different tumor types via various mechanism of actions such as antiproliferative, antiangiogenic, and proapoptotic [43]. X-ray crystal structures of sorafenib with Raf-1, WT BRAF and BRAF V600E were published and showed that sorafenib is an allosteric inhibitor that binds to the activation segment Asp-Phe-Gly (DFG) of the Raf kinase [44].
The Raf kinase exists in one of two conformations; one is the inactive state called 'DFG Asp-out' conformation in which the phenylalanine side chain occupies the ATP-binding pocket and aspartic acid side chain faces away from the active site [44,45]. The other one is the active conformation, which is called 'DFG Asp-in' conformation, in which the phenylalanine residue is rotated out of the ATP-binding pocket and the aspartic acid residue is facing into the ATP-binding pocket [44,45]. Sorafenib binds to the DFG motif and locks it in the DFG Asp-out state, thus rendering the kinase inactive [42,44]. Sorafenib has an IC 50 value of 6 nM for Raf1 kinase and 57 nM for p38α kinase [42,46]. Our studies showed that sorafenib has an IC 50 of 3.1 µM against RUVBL2; therefore, sorafenib is a much weaker inhibitor of RUVBL2 compared to its effect on the kinases. Hence, chemical modifications of sorafenib and further screening are needed to make sorafenib analogs that are better inhibitors of RUVBL2.
Moreover, given the non-competitive mode of action, sorafenib may offer opportunities to support drug combination strategies, especially since sorafenib is already an FDA-approved drug, and thus preclinical and clinical trials would be much easier to pursue.
Supplementary Materials: The following are available online at http://www.mdpi.com/2218-273X/10/4/605/s1. Figure S1: Sorafenib analogs tested against RUVBL2 ATPase activity using the ATP/NADH assay; the analogs were also tested for binding to RUVBL2 using surface plasmon resonance, and in both assays none of the analogs was found to inhibit or interact with RUVBL2. Figure S2: Sorafenib has no significant effect on RUVBL2 oligomerization or aggregation. Table S1: List of proteins, plasmids, and expression strains used in this study. Table S2: List of mutations and primers used for subcloning and mutagenesis in this study.
Author Contributions: N.N., with the help of T.L. and G.A., carried out all the experiments described in this manuscript except for the ones corresponding to Figures 1A and 2D. F.U. carried out the drug screen described in Figure 1A under the supervision of A.D., with the assistance of M.I. and M.P. T.V.S. carried out the SAXS experiments of Figure 2D.
E-Service Quality from Attributes to Outcomes: The Similarity and Difference between Digital and Hybrid Services
Our research goal is to offer an e-service quality model based on experience and multidimensional quality and to compare its applicability across e-services to find differences and similarities in consumer perceptions and behavioral intentions. Additionally, we seek to compare the attributes that compose quality dimensions for hybrid and digital e-services. The study was based on an online survey conducted in July-September 2019 among citizens and foreign residents of the Russian Federation. Respondents had to answer questions concerning a specific e-service brand in order to capture real consumer behavior. The data of 365 questionnaires were analyzed using the Spearman correlation to determine the relationship between the model components. Customer experience is a valid outcome variable in the e-service model that strongly influences customer satisfaction and repurchase intentions. The model proved to be equally valid for hybrid and digital e-services. The key differences between digital and hybrid e-services lie in the distribution of e-service attributes between quality dimensions. Ease of use and perceived usefulness are the most essential attributes with a direct influence on customer satisfaction. The findings show the need for best practices to diffuse between different types of e-services and present an opportunity to spread research findings widely between different e-service sectors.
Introduction
The service sector drives the key macroeconomic indicators of the world economy: it produces the largest share of global GDP, leads in total employment, and creates sustainable opportunities for equality and social well-being. Currently, the growth of the service sector is driven by digital transformation, the growing penetration rate of Internet and mobile technologies, the emergence of new business models, and the increasing attractiveness of the sharing economy [1][2][3][4]. This has led to dramatic changes in service production systems [5] and consumer behavior [6] and to the emergence and fast development of electronic services. Electronic service is a general term that refers to services rendered through information technologies via the Internet [7][8][9][10]. E-services involve a broad range of activities that use the Internet as a distribution channel (e.g., e-tailing, e-banking, e-travel) as well as newly emerged digital services. There is no commonly agreed definition of digital service: authors refer to digital interaction through Internet Protocol [11], digital technologies [12], and digital data, a combination of digital technologies and physical products [13]. In general, digital services include a set of actions to create, search, collect, store, process, provide, and distribute information and products in digital form, performed through the use of information technologies via the Internet upon the request of consumers. To distinguish between different types of services, we propose the term "hybrid services" for e-services based on traditional activities, where only a limited number of processes is offered online and the service result is delivered offline. Thus, e-services can be divided into two groups: hybrid services and digital services.
The difference in development dynamics between digital and hybrid services can be illustrated with data from the e-services market of the Russian Federation, which, in 2019, accounted for almost 5% of the national GDP, with a growth rate of over 280% in comparison to 2015. Hybrid services (the e-finance and e-commerce sectors) comprise 88.7% of the total e-service market. The fastest growing sectors are e-tailing, e-banking, and e-travel. Three digital services sectors (marketing and advertising, infrastructure and communication, and media and entertainment) make up only 4.5% of the total e-services market. The e-services market is mostly consumer-oriented, and the B2B e-services share is 2.7% of the market volume [14], although at least 27% of Russian enterprises use cloud services. For some types of e-services, however, only the penetration rate and audience size allow their development to be evaluated. For example, e-government services in the Russian Federation have the highest penetration rate, reaching 74.8% of the total population aged between 15 and 72 years, which is very close to the penetration rate of the Internet (87.3%) in 2019 [15,16].
The high growth rate and ease of access make the market attractive for companies, fuel competition, and raise the importance of research into e-service quality, which is the source of open innovation practices in the e-service market, as it generates the information necessary for corporate and user innovation, customer involvement, and knowledge exchange between internal and external innovations [17].
Since information quality and digital technologies create the customer value of e-services, it is necessary to integrate information management and quality management concepts and tools. Digitalization changes the nature of e-service quality, as a complex configuration of traditional and new "digital" service properties is formed. It stimulates multiple research efforts to build an e-service quality model that explains the relationships between e-service attributes and quality, customer satisfaction and consumer behavior, acceptance, and intentions for use. As stated in the World Economic Forum Report, the phenomenon of "digital consumption", the cross-sectoral diffusion of customer expectations, and the concepts of the "solution economy" and "experience economy" shift the focus from the consumer properties of services to their ability to generate benefits for the consumer, solve the consumer's problems, and offer cognitive and emotional experience, not only in the consumer market but also in B2B interaction [6]. Research into e-service quality focuses either on e-services in general or on a specific type of e-service, such as e-travel, e-tailing, or the digital platform. No comparison study of e-service quality models for different types of services has been conducted, so the following question remains unanswered: does the general phenomenon of e-service exist, when applied to quality, experience and satisfaction, or are there significant differences between hybrid and digital services? Hybrid services are supported by offline service delivery, clear regulation rules, and robust business models. They appeal to well-established consumer needs and offer both online and offline expertise. Digital services deliver value and experience online, offer inadequate consumer rights protection, and satisfy intangible needs with intangible quality properties. This means that experience perceptions of quality attributes may vary significantly between hybrid and digital services. A comparison of e-service quality models applied to hybrid and digital e-services could prove or disprove the flow of knowledge and best practices between providers of different e-services, allowing us to understand whether common quality regulations are applicable to all types of e-services.
Our research is targeted at comparing the performance of the general e-service quality model, based on the concept of experience-based multidimensional quality, for hybrid and digital e-services in order to find differences and similarities in consumer perceptions and behavioral intentions. The main tasks of this research are to review approaches to e-service quality, adoption, and continuation models through a literature review and choose a model for the study; to choose the dimensions of e-service quality and assign quality attributes to each dimension; and to test the chosen e-service quality model for digital and hybrid e-service quality through a survey among e-service customers located in different regions of the Russian Federation, including Russian citizens and foreigners residing in Russia. The novelty of the survey design is that it allows for assessing real consumer behavior with a specific e-service brand rather than measuring consumer perceptions of abstract e-services in general.
The paper is structured as follows: Section 2 (Literature Review) provides a brief description of the recent research into technology acceptance models and e-service quality models and substantiates the integrated e-service quality model based on customer experience and multidimensional e-service quality. The section contains the description of e-service quality dimensions and e-service attributes related to each dimension. Finally, the section provides the research hypotheses.
Section 3 (Methodology and Hypothesis Development) provides details about the design and implementation of the survey. Section 4 (Results) presents the results of the study. It starts with the short statistical test of differences between hybrid and digital e-services based on the Student t-test and Fisher test. Further, it contains a detailed analysis of the correlation between the components of the model for e-services in general and specifically for hybrid and digital e-services.
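The group comparison described for Section 4 can be sketched in code: an F-test for equality of variances is used to choose the appropriate variant of Student's t-test. The score arrays below are illustrative placeholders, not the survey data.

```python
# Sketch: compare hybrid and digital e-service groups on a model component
# (e.g., satisfaction scores on the 1-5 Likert scale) with an F-test for equal
# variances followed by Student's t-test (scipy). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hybrid = rng.integers(3, 6, 200).astype(float)     # assumed satisfaction scores
digital = rng.integers(2, 6, 165).astype(float)

f_stat = np.var(hybrid, ddof=1) / np.var(digital, ddof=1)
p_f = 2 * min(stats.f.cdf(f_stat, len(hybrid) - 1, len(digital) - 1),
              stats.f.sf(f_stat, len(hybrid) - 1, len(digital) - 1))
equal_var = p_f > 0.05                             # pick the t-test variant
t_stat, p_t = stats.ttest_ind(hybrid, digital, equal_var=equal_var)
print(f"F = {f_stat:.2f} (p = {p_f:.3f}), t = {t_stat:.2f} (p = {p_t:.3f})")
```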
Section 5 (Discussion) focuses on the explanation of the role of customer experience in the integrated e-service model and its relationship with customer satisfaction and e-service quality dimensions. The section contains a discussion of the relationship between e-service attributes and quality dimensions, which brings unexpected findings and highlights the significant differences between hybrid and digital e-services. The section ends with the revisited e-service model that was proposed in the literature review section and improved distribution of attributes between quality dimensions for e-services in general and digital and hybrid e-services in particular.
Section 6 (Conclusions) highlights the key findings of the study and presents some limitations and recommendations for future research as well.
Sections 7 (Managerial Implications) and 8 (Practical/Social Implications) show the possible usefulness of the study findings for e-service providers when managing e-service quality and general benefits of the study for open innovation practices and quality of life.
Finally, Section 9 describes limitations and future research opportunities. The major originality of the study is in the attempt to compare the performance of the e-service quality model regarding e-services in general and hybrid and digital e-services based on the design of the conducted survey.
Literature Review
Two traditional areas of research (technology acceptance models and service quality models and theories) influence recent advancements in e-service quality modeling. Service quality models conceptualize quality attributes and outcome variables-customer expectations, satisfaction, repurchase intentions, and word of mouth-while technology acceptance models search for quality attributes and other factors that influence customer behavior-decisions to adopt an e-service and to continue using it.
Based on the ideas of the diffusion of innovation (DOI) theory [18], the theory of reasoned action (TRA), and the theory of planned behavior (TPB) [19], technology acceptance models focus on technology attributes and other factors that affect the user's decision to adopt a technology. Initially, DOI introduced six technology attributes that influence the technology adoption decision: relative advantage, compatibility with the pre-existing system, complexity or difficulty to learn, testability, potential for reinvention, and observed effects [18]. The Task-Technology Fit (TTF) model stressed the importance of the technology's compliance with the user's tasks for increasing the likelihood of its use [20]. Subsequent technology acceptance models specified this technology attribute as "perceived usefulness" and complemented it with "perceived ease of use" derived from the DOI model. In [21], ease of use is defined as the ability of a customer to find information or enact a transaction with the least amount of effort. The TAM [22], TAM2 [23], TAM3 [24], UTAUT [25], and UTAUT2 [26] models tried to distinguish between technology attributes and the hierarchy of social, personal, technical, environmental, and organizational factors that influence the decision to use the technology. The national cultural characteristics of consumers are the factors that have most recently gained attention [27]. The UTAUT2 theory confirmed that the same technology attributes explain the adoption of e-services [26].
Thus, we can conclude that technology acceptance models are able to answer a question that is not traditionally considered in quality management-which e-service attributes are important for the consumer when making a decision about using a service? Such attributes can be called "starting quality", when the consumer has no experience of using the service and decisions are based only on expectations.
To explain how a consumer decides to continue using an information technology or information system, the following models were offered: the Information Systems Continuance Intention (ISCI) model and the Information System Success (ISS) theory, both rooted in the Expectation-Confirmation Theory (ECT). These theories assume that satisfied users will continue to use the product or service, while dissatisfied users will stop using it [28]. The ISCI model assumes that the user's intention to continue using the information system depends on three factors: satisfaction, meeting expectations, and perceived usefulness derived from technology acceptance models. The ISS theory goes further and incorporates ideas of service quality, in which system quality, information quality, and quality of services together influence the user's satisfaction and intentions for use, which brings net benefits to a customer [29].
Along with the ISCI and ISS models, numerous e-service acceptance and continuation models have appeared in the last ten years that investigate the relationship between e-service attributes, e-service quality, customer satisfaction, acceptance, and repurchase intentions, although the correlation between them varies across models. The weak point of such models is that e-service attributes are usually disintegrated and may affect every outcome variable or even be influenced by them. For example, the E-Service Acceptance Model (ETAM) demonstrates a three-step sequence of e-service attribute influence on customer satisfaction and quality, while both of them influence customer intention to use e-services [30]. The ETAM's significant omission is that quality and satisfaction are concepts of the same level affected by different e-service attributes, like ease of use, learning, content, support, trust, or design. As suggested in [31], perceived usefulness has a statistically significant effect on the intention to use online platform services, and satisfaction has been found to have a positive effect on ease of use, which breaks the causal relations between service quality and customer satisfaction. New interpretations of technology acceptance models are offered in [32,33], where the adoption decision is made for a specific e-service function, such as volunteer recruitment for NGOs on Twitter [32] or the communication of electronic word of mouth on Tripadvisor [33].
Technology acceptance and continuation models overlook the customer's active role in e-service creation, although several models include mediating factors like attitude toward internet purchase, which bridges customer satisfaction and internet purchase intention [34].
In contrast, e-service quality models are based on a shared understanding of the relationship between e-service quality and outcome variables such as customer satisfaction, repurchase intentions, and word of mouth [10,35,36] (Figure 1).
The means-end chain theory is an important theoretical background for e-service models [37][38][39][40], explaining how customers evaluate experiences, from quality attributes to quality dimensions. It means that, in order to explain how a customer decides to continue using an e-service (repurchase intention), we should follow the linear relationship presented in Figure 1.
Limitations of technology acceptance and e-service models are rooted in their technological nature, and we should apply service-dominant logic [41] as a general concept when explaining e-service customer behavior. E-service is a result of value co-creation by the provider and the consumer, and thus an e-service should be seen as a specific customer experience that creates e-service quality and generates customer satisfaction and the intention to continue using the service. Customer experience in e-services has been studied as a factor of repurchase intentions [42], a firm's competitiveness [43], and word of mouth [44], but not in correlation with customer satisfaction and e-service quality. At the same time, the emergence of customer experience in using e-services could explain the transition between the customer's decision to accept an e-service and the decision to continue using it. This leads to the concept of "experienced" quality, whereby customer perceptions of quality are based on real experience, and thus experience influences repurchase intentions through satisfaction. In our view, the combination of e-service quality and technology acceptance models with the concept of customer experience may offer a better understanding of customer behavior, from the decision to adopt an e-service to the decision to continue using this service, with a mediating role of customer experience, e-service quality, and customer satisfaction. Such a combination is also based on the idea of the service journey [45].
A relevant model was offered by Vatolkina in [46], but we refined it based on the literature review. Firstly, we deleted expected security from the e-service attributes that influence the decision to adopt an e-service, because it has not gained sufficient theoretical substantiation. For example, the study by Himanshu Raval and Viral Bhatt (2020) [47] showed that security and online shopping platform satisfaction have a weak correlation. In [27], we also find that a survey held among Chinese customers showed that perceived privacy surprisingly did not impact the "likelihood to purchase online". Secondly, based on the literature review [10,21,34-36,48-57], we added the dimension of quality of support (Figure 2) to complement the dimensions of quality of e-service result, quality of e-service process, quality of e-service system, and quality of e-service information. The dimension "quality of support" is aligned with the E-RecS-Qual model [57] and reflects the system of e-service recovery, which is not part of the value created by the e-service but influences both customer perceptions of e-service quality and customer experience. The integrated e-service adoption-continuance quality model shows that e-service quality influences the consumer experience, which affects consumer satisfaction, leading in turn to the consumer's intention to continue using the service. Low satisfaction results in a refusal to use the service in the future. Considering the diversity of e-services, the following question arises: is the model applicable to all types of e-services?
Methodology and Hypothesis Development
The literature review revealed that the majority of e-service quality, e-service acceptance, and continuation models are constructed either for general e-services (like E-S-QUAL), for specific hybrid e-services (e-tailing, e-library, e-travel), or even for websites (like W-S-QUAL). Just a few studies were conducted for digital e-services like platforms and social media [31,50,51,54]. No comparison between the two types of e-services has been conducted to prove that the relationships between customer experience, quality, satisfaction, and intention for use are similar for hybrid and digital e-services, or that e-service quality dimensions are similar for digital and hybrid e-services.
Therefore, we devised the following research hypotheses.
Hypothesis 1 (H1). The relationship between customer experience, quality, satisfaction, and intention for use is similar for both major types of e-services, hybrid and digital services, and can be described with an integrated e-service adoption-continuance quality model.
Hypothesis 2 (H2). The e-service quality dimensions and attributes are similar for the two major types of e-services, hybrid services and digital services.
To design the study, we started with the selection of e-service quality attributes corresponding to the five e-service quality dimensions of the model. Based on the study of a systematic review and specific research papers on general e-service quality and website quality models, as well as specific research on e-tailing, social platforms, and e-travel quality models [10,21,[34][35][36][48][49][50][51][52][53][54][55][56][57][58], we concluded that every dimension is composed of several e-service attributes (Table 1). Table 1. E-service quality dimensions and attributes.
Quality of e-service result: functionality; personalization; reliability; ability to save time.
Quality of e-service process: ease of use; security; accessibility.
Quality of e-service system: website or app structure and navigation; website or app design.
Quality of e-service information: quality of website or app content; usefulness of information.
Quality of e-service customer support: timeliness of e-service customer support.

To test the relationships between the components of the integrated e-service adoption-continuance quality model, a structured questionnaire for an online survey was developed, the questionnaire being a very flexible data collection tool [59]. The survey was designed using a Likert scale from 1 to 5, where 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. Each question aimed to assess the perception of one of the components of the model. The questionnaire items, their correlation with model components, and descriptions of the quality components are presented in Table 2.

Quality of e-service result: The e-service can be adapted to meet my needs; RES4 - The e-service helps me to save time; RES5 - I'm satisfied with the quality of the e-service result.
Quality of e-service process (degree to which the e-service co-creation process meets customer needs and expectations): PROC1 - The e-service is easy to use; PROC2 - I feel that the personal and financial information I use for the e-service is safe; PROC3 - I can always get access to the e-service when I need it; PROC4 - I'm satisfied with the process of using the e-service.
Quality of e-service information (degree to which the information provided by the e-service provider meets customer needs to achieve specific goals when using the e-service): INF1 - The e-service website content is of high quality; INF2 - The e-service allows me to find the necessary information; INF3 - I'm satisfied with the quality of the information the e-service provides.
Quality of e-service system (degree to which the e-service website or app design and structure meets customer needs to achieve specific goals when using the e-service): SYS1 - The e-service website has a clear structure; SYS2 - I like how the e-service website or app looks; SYS3 - I'm satisfied with the quality of the e-service technical level.
Quality of e-service customer support (degree to which e-service customer support meets customer needs to use the e-service effectively): SUP1 - The quality of customer support is high; SUP2 - The e-service is quick to answer questions about the support; SUP3 - I'm satisfied with the quality of customer support.
Intention to refuse using the service (customer readiness to refuse to use the e-service): REF1 - I'm going to refuse to use the e-service in the next several months.

A specific feature of the questionnaire design is that respondents had to answer questions about a specific e-service brand. Previous studies used questions about any abstract e-service [30,34,49,56] or an abstract e-service of a specific type [27,31,41,43,47,61], so the respondents had to imagine what their decisions or perceptions could be in general. Several studies investigated consumer behavior in relation to specific e-service platforms, like volunteer acceptance of the Twitter platform based on the TAM model [32] or developing and measuring the importance of e-service quality dimensions and attributes for Facebook users [51,54].
We expected that our survey design would capture real consumer behavior. In the survey, we offered 20 different options, including international brands popular in Russia, like Booking, AliExpress, Instagram, YouTube, Facebook, Badoo, Qiwi, WhatsApp, and Google, as well as strong Russian brands like Wildberries, Yandex, YandexTaxi, YandexDrive, Ivi, Ozon, Avito, and SberbankOnline. Respondents could also choose any other e-service brand, so Discord, Afisha, Apteka, Gosuslugi, DeliveryClub, and Steam appeared in the results of the survey.
The questionnaire was created in Google form and invitations to participate in the survey were distributed via the largest Russian social digital platform, "Vkontakte", at random. The preface to the questionnaire included the purpose of the study, rules of using the Likert scale, and a disclaimer stating that the survey was anonymous.
We collected 365 completed questionnaires between July and September 2019. An analysis showed that 350 respondents were residents of 38 Russian cities and 15 were international students from Ukraine, Turkmenistan, Thailand, Iraq, Germany, Georgia, and Northern Cyprus who currently lived in Russia. Table 3 gives the respondents' profile. The majority of respondents (68.8%) represented two age groups, from 18 to 24 and from 25 to 34 years old, the latter having the highest e-service penetration rate. Among the respondents, 66.3% used e-services daily, including 26% who used e-services several times a day. Only 7.1% of respondents rarely used e-services (several times or once a year). The survey demonstrated that 43.7% of the respondents chose digital services, and 55.3% preferred hybrid services. YouTube was the digital e-service with the largest audience in the study (18.1% of respondents chose it in the survey). Yandex Taxi (the largest online taxi aggregator in Russia) and Wildberries (the largest online retailer in Russia) ranked second and third, chosen by 12.1% and 9.6% of the respondents, respectively. In general, the selected e-services covered most e-service types (entertainment and media, online retail, online travel, electronic payment services, transport services, food delivery, event ticket booking, online video, online music and books, social networks, financial services, etc.). This is why the survey results can be applied to the B2C e-services market in Russia in general.
Since the size of the general population was not set, and it was also impossible to control the chance of respondents retaking the survey because it was held online, a simple random sampling formula was used for the calculation:
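A standard sample-size expression for simple random sampling from an effectively unbounded population, presumably the one intended here, is

$$ n = \frac{z^{2}\, p\,(1 - p)}{\Delta^{2}}, $$

where z is the standard normal quantile for the chosen confidence level (1.96 at 95%), p is the expected proportion (0.5 gives the most conservative estimate), and Δ is the admissible margin of error. With z = 1.96, p = 0.5, and Δ = 0.05, this yields n ≈ 385, of the same order as the 365 completed questionnaires.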
Analysis of the Survey Results
To compare the average values of the chosen response options for hybrid and digital e-services, the Student t-test was used with p = 0.05. To compare the variances of the response values for hybrid and digital e-services, the Fisher test was applied with p = 0.95. Table 4 gives the general survey results, indicating the average values, standard deviations, and Student t-test and Fisher test values. We identified eight key variables that confirmed at least one hypothesis and revealed statistically significant differences between hybrid and digital e-services.
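As an illustrative sketch of this testing scheme (not the authors' code), the following Python fragment compares the mean and the variance of hypothetical 1-5 Likert ratings for one item between hybrid and digital services; the arrays are placeholders, not study data:

```python
# Illustrative sketch: compare means (t-test) and variances (F-test)
# of one item's ratings between hybrid and digital e-services.
import numpy as np
from scipy import stats

hybrid = np.array([5, 4, 4, 5, 3, 4, 5, 4])    # hypothetical responses
digital = np.array([4, 4, 3, 5, 4, 3, 4, 4])   # hypothetical responses

# Student t-test for the difference of means (alpha = 0.05).
t_stat, t_p = stats.ttest_ind(hybrid, digital)

# Fisher (F) test for the ratio of variances; scipy has no one-liner,
# so the statistic and two-sided p-value come from the F distribution.
f_stat = np.var(hybrid, ddof=1) / np.var(digital, ddof=1)
dfn, dfd = len(hybrid) - 1, len(digital) - 1
f_p = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))

print(f"t = {t_stat:.3f} (p = {t_p:.3f}); F = {f_stat:.3f} (p = {f_p:.3f})")
```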
Verification of Hypothesis 1. The Relationship between Customer Experience, Quality, Satisfaction, and Intention for Use Is Similar for Both Major Types of E-Services-Hybrid and Digital Services-and Could Be Described with Integrated E-Service Adoption-Continuance Quality Model
The results of the study show that consumers' perceptions of e-service quality, consumer experience, and customer satisfaction were at a high level and were rated above 4 points by the respondents. The EXC1 and PROC1 questions (with average values of 4.515 and 4.636, respectively, and a standard deviation of 0.76 for both) had the highest ratings.
Linear correlation coefficients (r) were calculated to test the relationships between the model components. As noted for sociological studies [62], correlation coefficient values higher than 0.5 are not very common; therefore, it is possible to take into account values equal to or greater than 0.3, which characterize a moderate correlation of features. Correlations with coefficients ranging between 0.5 and 0.8 can be regarded as strong, and when coefficients range from 0.81 to 1.0, the correlation is very strong.
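A small, illustrative Python helper (not the authors' code) that applies these thresholds to pairwise Pearson coefficients between component scores might look as follows; the `scores` DataFrame and the output format are assumptions:

```python
# Illustrative sketch: label pairwise Pearson correlations between model
# components using the thresholds given in the text.
import pandas as pd

def strength(r: float) -> str:
    """Bucket |r|: >=0.81 very strong, >=0.5 strong, >=0.3 moderate."""
    r = abs(r)
    if r >= 0.81:
        return "very strong"
    if r >= 0.5:
        return "strong"
    if r >= 0.3:
        return "moderate"
    return "negligible"

def correlation_report(scores: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Pearson r between component scores, with strength labels."""
    r = scores.corr(method="pearson")
    labeled = {c1: {c2: f"{r.loc[c1, c2]:.2f} ({strength(r.loc[c1, c2])})"
                    for c2 in r.columns}
               for c1 in r.index}
    return pd.DataFrame(labeled)

# Usage: `scores` would hold one column per model component
# (e.g., experience, satisfaction, each quality dimension).
```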
We calculated the correlation coefficients for three clusters: e-services in general, digital services, and hybrid services (Table 5). The data presented in Table 5 show a significant correlation between the crucial components of the e-service quality model. The correlations between the model components were also tested for digital and hybrid services specifically (Table 6). Although our conclusions about e-services in general apply to both digital and hybrid services, some differences can be observed. Thus, positive experience has a stronger influence on customer satisfaction with hybrid services than with general e-services and digital services. The quality of results is the most important factor of hybrid service quality, and the quality of information is the least important factor. For digital services, the influence of the quality constructs on customer experience and customer satisfaction is higher than for hybrid services and for e-services in general. The most important quality factor is the quality of information, due to the specific function of digital services. Customer support is the least important factor.
The results of the study show that the relationships between the model components are confirmed for both hybrid and digital e-services; this means that service quality influences the consumer experience, which affects consumer satisfaction, leading in turn to the consumer's intention to continue using the service.
According to the research results, the strongest relationship is observed between positive consumer experience and customer satisfaction. Consumer experience and satisfaction have a significant impact on the consumer's intention to continue using the service and demonstrate a weak negative relationship with the intention to refuse the service. This means that both experience and satisfaction play a mediating role in customer behavior. Both experience and satisfaction are outcomes of e-service quality which lead to repurchase intentions. The difference between customer experience and satisfaction depends on the influence of a specific quality dimension.
Thus, customer experience was proven to be an outcome variable of the e-service quality model, as it shows a significant correlation with customer satisfaction and the e-service quality dimensions for both hybrid and digital e-services. This complements previous studies where the outcome variables involved satisfaction, repurchase intentions, and word of mouth [36]. This is an important contribution of our study, since it enables us to shift the focus from the technological to the interactive nature of e-services. It is also supported by the differences we can observe: positive experience has a stronger influence on customer satisfaction with hybrid services than with e-services in general and digital services. In our opinion, this is a consequence of the more interactive nature of hybrid e-services, as they involve delivering customer value offline with interpersonal interactions.
Thus, the quality of e-services has a greater influence on customer satisfaction, while consumer experience is influenced by the quality of the e-service technical level, the quality of the e-service process, and the quality of customer support. This shows that customer satisfaction is a function-driven concept that emerges through a comparison of customer needs and e-service results. This correlates with previous studies where fulfillment/reliability was the strongest factor affecting satisfaction [10]. The added value of our findings is that the other quality dimensions assessed in our study also showed that the e-service consumer experience concept reflects the customer's active participation in the e-service value co-creation process and thus depends on the quality of the service delivery process, customer support, and technical level.
Verification of Hypothesis 2. The E-Service Quality Dimensions and Attributes Are Similar for Two Major Types of E-Services-Hybrid Services and Digital Services
The survey tested the relationship between e-service attributes and e-service quality dimensions (Table 7). The results of the study show that only two e-service attributes have a strong direct influence on customer satisfaction: usefulness and ease of use. The other e-service attributes show a strong influence on the quality dimensions, proving the multistage nature of the e-service model. The perception of the e-service attributes emerges in the process of e-service value co-creation and the emergence of customer experience (Table 8). Our study verified the multidimensional structure of e-service quality according to means-end chain theory and multiple previous studies [1,7,10,35,36,39] and allowed us to verify the validity of the following quality dimensions for both hybrid and digital e-services: quality of e-service results, quality of e-service process, quality of e-service information, quality of e-service system, and quality of e-service customer support. This approach to quality dimensions is based on the ideas of Edvardsson [48], the ISCI model [29], and the E-RecS-Qual model [57] and assumes that each e-service quality dimension should be conceptualized as a specific component of the service, with quality attributes specifying each of these components. This differs from the multiple studies [1,7,10,35,36,39] in which quality dimensions are represented by quality attributes, which confuses both customers and managers when conceptualizing e-service quality.
Integrated E-Service Adoption-Continuance Quality Model
According to our findings, e-service quality dimensions show a moderate impact on the consumer's intention to continue using a service, which confirms means-end chain theory, whereby a customer starts with judgments of specific attributes and progresses to perceptions of more abstract concepts like quality, experience, and satisfaction.
An interesting finding that nevertheless calls means-end chain theory into question is that e-service usefulness and ease of use have a strong direct impact on customer satisfaction. This recalls technology acceptance models and shows that e-service usefulness and ease of use are the most significant attributes not only at the stage of e-service acceptance but also at the stage of using the service. Other consumer attributes require aggregation into quality dimensions in order to have a cumulative impact on customer satisfaction and the decision to continue using the service. Our findings allowed us to revise the model (Figure 3). An important added value of the study is that the relationship between e-service quality dimensions and e-service attributes shows significant differences between hybrid and digital e-services. As we stated above, perceived usefulness (RES1) has the most decisive impact on customer satisfaction for both digital and hybrid services. For digital e-services, it has a stronger correlation with process quality than with result quality. This may be because most digital services are process-oriented, whereby the customer receives benefits during the process of e-service delivery. For digital e-services, process quality and system quality are the most consistent quality dimensions; thus, the accessibility and reliability of the e-service are perceived as part of the system quality dimension. Ease of use has a strong correlation with two dimensions: information quality and process quality. For hybrid services, the most crucial quality dimension is the quality of the results, while the quality of information is the least important factor. This is determined by the differences in the function of information. For digital services, information is the primary service outcome determining the usefulness of the service. At the same time, for hybrid services, system quality is also important because it has a strong correlation with six e-service attributes. Information quality and process quality are less important for hybrid e-services because they are result-oriented, and the service delivery process is entirely associated with the use of websites or mobile applications. Interestingly, for all types of services, security shows a moderate correlation with the quality of customer support, which means that security is perceived as a function of the support or help from the service provider.
The analysis shows that perceived security and the ability of the service to save the consumer's time have the lowest impact on perceived quality, which, in our opinion, requires further research. Similar results can be observed in some other studies. For example, as shown in [47], security and online shopping platform satisfaction have a weak correlation, while ease of use, reliability and responsiveness, assurance, and attractiveness have a significant impact on online shopping customer satisfaction. It is also confirmed in [27] that perceived privacy surprisingly did not impact the "likelihood to purchase online". As a contrast, the results of a study on the adoption of e-government services conducted in the United Arab Emirates underline a strong positive relationship between consumer perceptions of confidentiality and trust and e-government service adoption [61].
E-Service Quality Dimensions and Consumer Attributes
We suppose that security is an independent attribute that influences the decision to adopt an e-service and the intention to continue using the e-service. However, it does not influence the perception of e-service quality or the level of customer satisfaction. In our opinion, according to the Kano Model [63], security should be considered a basic attribute ("must be") that does not affect customer satisfaction but leads to customer dissatisfaction if not present. This means that even if customers perceive that the security of an e-service is high, this has no influence on their intentions to adopt the e-service or continue using it. On the contrary, if the perceived security is low, it will negatively influence the decision and decrease the value of the e-service quality. Hence, the relationship between perceived e-service security and consumer behavior requires further study. As for the perception of time in the context of using e-services, we can assume that consumers take this benefit for granted (also as a basic property, according to the Kano Model), which means that there is no impact on quality perception and satisfaction level.
We present a new relationship between quality dimensions and consumer attributes according to our findings (Table 9).
Table 9. Revised e-service quality dimensions and consumer attributes, given separately for e-services in general, digital services, and hybrid services.
E-Service Quality and Open Innovation in Digital and Hybrid Service Industry
Customer open innovation is an inherent element of services, as the customer plays the role of value co-creator and actively participates in service delivery and the constant modification process. Open innovation contributes to the constant improvement of service quality [64] only when organizational quality management practices allow the provider to listen to the voice of the customer and adapt the service in order to meet customer needs [65], which requires a market, responsive, and innovation orientation of the organization [66]. The study results show that the customer's voice includes the perception of customer experience, e-service quality, and satisfaction based on both customer requirements and expectations. The multidimensional nature of e-service quality helps to identify specific customer requirements and expectations regarding e-service quality dimensions and attributes. This means that every element of e-service quality is subject to open innovation practices, and our study reveals how to prioritize innovations according to the customer's voice. The most important attributes are usefulness and ease of use for all types of e-services, both for the decision to adopt and the decision to continue to use e-services. This means that the innovations that help to deliver and improve them will have the greatest effect on customer satisfaction and repurchase intentions. Thus, both the service design process [67] and continuous improvement efforts should be focused primarily on usefulness and ease of use. On the other hand, innovations in e-service security and time-saving attributes are also crucial, as customers perceive them as "must be" attributes.
The study reveals that the technical level and quality of information are more important for digital e-services because they are based on self-service, and customers are more vulnerable to imperfections in website design and in the quality of the information provided. At the same time, self-service decreases the opportunity to listen to and understand the customer's voice, so customer open innovation depends highly on the customer feedback and customer support tools employed by the service provider, because it is not enough to find external knowledge; there should be salient innovation [68] and quality management practices [69], as well as a distinctive shift from closed innovation to a proactive open innovation organizational culture [70], that help to transform the customer's voice into e-service innovation.
Conclusions
The research results imply that the e-service quality model includes customer experience as an essential variable that has a significant influence both on customer satisfaction and on the intention to repurchase e-services. When customers decide to continue using an e-service, they need to have had a positive experience that influences customer satisfaction. This research resulted in a better understanding of the differences between customer satisfaction and experience. Consumer satisfaction is strongly influenced by usefulness and ease of use, while consumer experience is influenced by the quality of the e-service technical level, the quality of the e-service process, and the quality of customer support. This confirms that both customer experience and satisfaction should be embedded in e-service quality models to illustrate different angles of customer perceptions and behavior. It bridges the gap between customer loyalty management, which is seen mostly as a marketing function, and quality management, which is seen mostly as an operational function.
The hypothesis about the similar relationship between customer experience, quality, satisfaction, and intention was confirmed for hybrid and digital e-services, as was the multidimensional nature of e-service quality, including customer support quality, system quality, information quality, e-service process quality, and the quality of e-service results. This supports the idea of common theoretical approaches to quality management for all types of e-services, regardless of the combination of online and offline strategies and experiences. Future research should stimulate the diffusion of best practices between different types of e-services and provide the opportunity to spread research findings widely between different e-service sectors.
The major difference between hybrid and digital e-services was found in the relationship between attributes and quality dimensions, because of the different focus in value generation: process-oriented for digital services and result-oriented for hybrid services.
An unexpected finding is that two e-service attributes (perceived usefulness and perceived ease of use) have a significant direct influence on customer satisfaction. Other attributes show an indirect relationship with satisfaction through the quality components. Therefore, the research results develop the ideas of technology acceptance models and prove that perceived usefulness and perceived ease of use should be the focus of managers at all stages of the consumer lifecycle, from the decision to adopt an e-service to the cyclical decision to continue using it.
Another unexpected finding is that security and the ability to save time show a weak correlation with e-service quality and customer satisfaction. We should treat them as essential e-service attributes according to the Kano Model: they cause dissatisfaction if not present but do not increase satisfaction if present.
The combination of technology acceptance models, e-service models, and the customer experience concept enables us to explain customer behavior. Initial customer expectations are focused on two e-service attributes, functionality and ease of use; after the consumer gains experience of using the e-service, his or her expectations undergo a transformation, and he or she perceives e-service quality through a wider number of e-service attributes combined into five e-service quality dimensions: e-service result quality, e-service system quality, e-service process quality, information quality, and customer support quality. The adoption decision is based only on expectations, while the intention to continue using the e-service is based on the transformation of customer experience into customer satisfaction, mediated by e-service quality and customer experience.
Managerial Implication
Our findings are useful for e-service providers as they allow them to model e-service quality, design customer behavior studies, and select quality management and loyalty management tools for e-services, focusing on five quality dimensions and taking into account the differences between hybrid and digital e-services. The findings show that to understand customer intentions, it is not enough to measure customer satisfaction and quality perceptions; customer experience should also be the subject of study. As was proven by the study, satisfaction is function-driven and reflects a comparison between customer needs and service results, while experience is process-driven and reflects a comparison between customer expectations and perceptions of real events during e-service delivery.
Another managerial implication concerns the importance of quality attributes for the customer. Our findings show that perceived usefulness and ease of use should be the primary attributes delivered and advertised by providers, as they have the most significant influence on customer behavior for both adoption and repurchase decisions.
The study shows how to use quality management tools for hybrid and digital e-services. Hybrid services should focus on the quality of service results delivered offline, while digital services' functionality should be embedded in the service delivery process. Customer support should focus on two quality attributes: security and the ability to save time. The role of information quality also differs significantly: for hybrid services, it should be designed to help customers save time, and for digital services, it should help customers use the service easily and safely and deliver value through quality content. System quality also needs adjustment; thus, personalization and accessibility are more important for digital services and less important for hybrid services.
An important managerial implication is that the general integrated e-service adoption-continuance quality model is similar for hybrid and digital e-services, and best practices can be diffused between different types of e-services.
Practical/Social Implications
Practical and social implications can be positive or negative, depending on the level of satisfaction and the type of use experienced by the user. The above discussion makes it clear that any newly introduced services are meant for users: they should offer solutions for customer needs and bring positive experiences that improve the quality of everyday life. A positive impact enhances the use of e-services and allows best practices in e-service development to diffuse. Thus, understanding and meeting individual needs and expectations helps to improve the quality of all e-services through growing customer expectations and e-service providers' ability to meet these expectations, which erases the boundaries between innovations and open innovations.
Limitations and Future Research
Although this research has offered some valuable insight into the study of e-service quality, there are several limitations that need to be acknowledged.
First, the data for this research were collected using only one method, the online questionnaire survey, as this is a common data collection technique, though it is not free from the subjectivity of the respondents. The survey was conducted at one point in time, whereas, according to the service journey concept, consumer expectations and perceptions evolve over time. The study does not cover social, national, personal, technical, and organizational factors that influence customer behavior. However, the results seem to suggest that the sampling method used has excellent exploratory power.
Second, our study does not consider such outcome variables as customer loyalty or word of mouth, which may bring additional insights into customer behavior. Further research is needed to embed them in the e-service quality model and to explore in detail the multidimensional nature of the e-service customer experience.
Third, future research is needed to understand the influence of perceived security on the adoption of e-services and the further intention to continue using the e-service, because the existing studies show contradictory results in terms of the relationship between security, e-service quality perception, and customer satisfaction.

Funding: This research was funded by RFBR, project number 20-010-00571, "The Impact of Digital Transformation on Improving the Quality and Innovation of Services".
Conflicts of Interest: The authors declare no conflict of interest.
Acute Genetic Damage Induced by Ethanol and Corticosterone Seems to Modulate Hippocampal Astrocyte Signaling
Astrocytes maintain CNS homeostasis but also critically contribute to neurological and psychiatric disorders. Such functional diversity implies an extensive signaling repertoire, including extracellular vesicles (EVs) and nanotubes (NTs), that could be involved in protection or damage, as widely shown in various experimental paradigms. However, there is no information associating primary damage to the astrocyte genome, the DNA damage response (DDR), and the EV and NT repertoire. Furthermore, similar studies were not performed on hippocampal astrocytes despite their involvement in memory and learning processes, as well as in the development and maintenance of alcohol addiction. By exposing murine hippocampal astrocytes to 400 mM ethanol (EtOH) and/or 1 μM corticosterone (CTS) for 1 h, we tested whether the induced DNA damage and DDR could elicit significant changes in NTs and surface-attached EVs. Genetic damage and the initial DDR were assessed by immunolabeling against the phosphorylated histone variant H2AX (γH2AX), DDR-dependent apoptosis by BAX immunoreactivity, and astrocyte activation by glial fibrillary acidic protein (GFAP) and phalloidin staining. Surface-attached EVs and NTs were examined via scanning electron microscopy, and labeled proteins were analyzed via confocal microscopy. Relative to controls, astrocytes exposed to EtOH, CTS, or EtOH+CTS showed significant increases in nuclear γH2AX foci, nuclear and cytoplasmic BAX signals, and EV frequency at the expense of the NT amount, mainly upon EtOH, without detectable signs of morphological reactivity. Furthermore, the largest and most complex EVs originated only in DNA-damaged astrocytes. The obtained results revealed that astrocytes exposed to acute EtOH and/or CTS preserved their typical morphology but presented severe DNA damage, triggered canonical DDR pathways, and showed early changes in the cell signaling mediated by EVs and NTs. Further deepening of this initial morphological and quantitative analysis is necessary to identify the mechanistic links between genetic damage, DDR, cell-cell communication, and their possible impact on hippocampal neural cells.
Introduction
Astrocytes are nonneuronal cells of ectodermal origin that sustain CNS homeostasis at all levels and provide for its defense against injury but also have a critical contribution to neurological and psychiatric disorders. As pivotal responders to all forms of CNS insults, the response of astrocytes to each specific damage condition may involve the loss of protective functions or the gaining of neurotoxic properties [1][2][3][4][5][6][7]. Thus, neuroprotective or deleterious actions of astrocytes in each specific context will depend not only on the time and type of injury but mainly on the changes elicited in gene expression, morphology, proliferation, functions, and/or signaling [2,6]. Such diversity of astrocyte responses implies an extensive signaling repertoire that includes gap junctions, nanotubes (NTs) [8][9][10], soluble factors, and extracellular vesicles (EVs) [1,11,12]. All of these allow astrocytes to be proposed as secretory cells with significant action on themselves (autocrine communication) or on other neural cells (paracrine communication) [11,13,14].
Cocucci and Meldolesi [15] reported that astrocytes release EVs that include exosomes (50-100 nm diameter) and ectosomes or microvesicles (>1,000 nm diameter), generated from early, late, and multivesicular endosomes that fuse with the plasma membrane or by direct outward budding of the plasma membrane, respectively, to shed into the extracellular space. In addition, upon repetitive ATP stimulation, cultured astrocytes could release larger vesicles (1-8 μm diameter) [16]. EVs may contain membrane proteins, lipids, signaling molecules, mRNAs, microRNAs, long noncoding RNAs, mtDNAs, growth factors, and cytokines [9,16,17]. These molecules could be involved in neural protection [17,18] or in promoting damage, as occurs either in glioblastomas [19,20] or in some neurodegenerative diseases [13,17,21,22]. In turn, astrocytes are also targets of EVs from sources other than neural cells, as demonstrated by the inflammatory response resulting from exposing them to EVs from human T-cell lymphotropic virus type 1, the blood-borne pathogen that is the etiological agent of T-cell leukemia/lymphoma in adults [23]. Since EVs are involved in various physiological and pathophysiological brain processes, they have begun to be used as biomarkers of normal and pathological situations [24].
Astrocyte NTs range from 50 to 200 nm, but they can reach ~700 nm [8] and enable cell-to-cell communication up to ~500 μm [9,25]. Although their formation in significant amounts under healthy conditions is debatable, astrocyte NTs are induced by oxidative stress [26], serum depletion, p53 activation, an acidic microenvironment, or hypoxia (reviewed by [9]). In addition, microtubes are thicker cell-connecting tubes (wider than 0.7 μm) that share many features with NTs and were described in gliomas of astrocytic origin [9,27]. Microtubes contain actin microfilaments and microtubules, which support intercellular cargo transport and contribute to their apparently longer lifespan compared to NTs [9]. It has been proposed that microtubes could connect pathological cells, such as those from glioblastoma, with normal astrocytes [9] to spread disease-associated molecules larger than those transported by NTs. Both nano- and microtubes seem to play important roles in many physiological and pathological cellular processes, through the establishment of "open conduits" that seem able to transport ions, organelles, or molecules, helping to synchronize cells, induce cell differentiation, or spread CNS cancers or neurodegenerative diseases [8][9][10].
On the other hand, it is widely accepted that injuring conditions may alter astrocyte EVs and NTs. It has been reported that in cultured astrocytes, besides altering GFAP levels and the cytoskeleton [28], proliferation, trafficking, oxidative stress, and survival [19,[28][29][30]], ethanol (EtOH) increased EV secretion and consequently the EV content of inflammation-related proteins [31]. The same authors demonstrated that EVs primed from EtOH-treated astrocytes could alter the physiological state of neurons, likely contributing to the spread of neuroinflammation and the development of apoptosis. On the other hand, astrocyte EV levels increased in patients subjected to stress [12]. In this regard, exposure of hippocampal astrocytes to high corticosterone (CTS) concentrations employed to mimic stress-like conditions (100 nM and 1 μM for 3 h) increased EV release [32]. This EV astrocyte response will impact brain homeostasis and the overall stress response, in view of the critical role that the hippocampus plays in this process. The same study also reported that the modulatory effects of CTS on astrocytic vesicular release imply significant changes in the actin cytoskeleton and microfilament rearrangements [32]. In summary, the existing literature clearly evidences that EVs and NTs participate in the responses of astrocytes to injuries. Moreover, both modalities are considered key factors in propagating astrocyte signals. However, there is little evidence on the possible association between astrocyte genetic damage and early changes in the EV and NT repertoire, and on whether one or both specializations could be modulated to limit CNS damage.
In addition, although EtOH [33,34] and CTS [35,36] can be genotoxic, no previous reports have investigated whether the DNA damage induced by very short exposures to EtOH and/or CTS could change the communication repertoire of astrocytes in terms of NT emergence and EV formation and release. Moreover, no studies with similar aims have been performed in major brain regions related to memory and learning, such as the hippocampus [37,38], which is also very important in the development and maintenance of addiction to widely abused drugs, including alcohol [39], and in mood-associated disorders [40].
Using an experimental paradigm consisting of exposing hippocampal astrocytes to 400 mM EtOH and/or 1 μM CTS for 1 h [41], we analyzed whether the induced DNA damage and the initial stage of the DNA damage response (DDR) could elicit significant changes in the surface-attached EV and NT astrocytic repertoire in terms of morphology and quantity. We decided to study hippocampal astrocytes since it is known that alcohol affects hippocampal functions such as memory and learning through mechanisms that involve astrocytes [42], and because CTS can affect hippocampal astrocytes, as was reported in models of major depressive disorders [43]. The short-term exposure was selected to determine whether a very short exposure elicits not only DNA damage but also a fast cell response (DDR), which has been reported to restore genome integrity and preserve its stability [44][45][46][47]. The short exposure also helped us understand whether changes in the signaling repertoire of astrocytes represent a rapid cellular response. The EtOH and CTS working concentrations were selected because both were the lowest ones that elicited reliable DNA damage and DDR [41].
Genetic damage and DDR were assessed by analyzing the rapid phosphorylation of the histone variant H2AX (termed γH2AX foci) around sites of DNA damage. γH2AX recruits a series of proteins involved in the downstream DDR pathway [48][49][50][51][52][53], including connections with DNA repair [44,54,55]. To detect the evolution of the DDR and early signs of apoptosis, DDR-related apoptosis, which operates via the regulation of the proapoptotic bax gene [56][57][58], was assessed by recognizing BAX (the proapoptotic effector BCL-2-associated X protein, or BCL-2-like protein 4) immunoreactivity. Astrocyte morphology was evaluated by DIC, immunostaining against glial fibrillary acidic protein (GFAP), and phalloidin labeling, and NTs and EVs on astrocyte surfaces were analyzed via scanning electron microscopy.
Our results revealed that the immunoreactivity against γH2AX and BAX indicates that DNA damage and possibly the DDR cascade were induced. Besides, no morphological modification such as astrocytic reactivity was detected. Interestingly, significant modifications of the EV and NT repertoires, and different sizes, morphologies, and complexities of the EVs, were observed depending on the experimental condition.

2.2. Animals. Forty male Wistar rats (1 day old) from Facultad de Ciencias-Universidad de la República were employed. Pregnant rats were kept in individual cages with food and water ad libitum at 23 ± 1 °C and a 12 h light/dark cycle (07:00-19:00 h).
Primary Cultures of Hippocampal Astrocytes. Twelve independent cultures were performed using 3 rat pups per culture. Procedures were carried out according to Olivera-Bravo et al. [59] with minor modifications. The rats were quickly decapitated under a laminar flow hood, the brains dissected and placed in sterile PBS buffer, and the meninges removed. Clean brains were transferred to another plate with sterile PBS, and the hippocampus was dissected and cleaned under a stereomicroscope. Then, pieces of clean hippocampi were placed in a sterile 15 ml Falcon tube with a 1:10 volume of 0.05% trypsin-EDTA buffer and incubated in a water bath at 37 °C. After 25 min, trypsin was blocked by adding 3 ml of complete culture medium composed of DMEM (Gibco, 12800082) + 10% fetal bovine serum (FBS; Gibco, 12657011) and penicillin/streptomycin (Gibco, 15140122), and the suspension was pipetted 7 times without bubbling. The cell homogenate was passed through a sterile 80 μm sieve and centrifuged at 400 g for 10 min. The supernatant was discarded, and the pellet was resuspended in 1 ml of complete culture medium. Then, the cells were counted, diluted to 400,000 cells/ml, seeded in 35 mm Petri dishes or 24-well plates, and incubated at 37 °C and 5% CO2. The complete culture medium was changed every day until confluence. Then, monolayers were gently agitated at room temperature (RT) in darkness for 48 h. A week later, cells were trypsinized and reseeded on slides with a standard-size 8 × 6 mm diameter Teflon reaction well with black background (Tef-Tek Micro Slides premium, PorLab) and 12 × 4 mm diameter glass coverslips (Citoglas®) for analysis by fluorescence microscopy, or on Aclar film (Electron Microscopy Sciences) for scanning electron microscopy (SEM) analysis. Twenty-four hours before each experiment, the percentage of FBS was decreased by 2% to favor quiescence of the culture.
Treatments and Experimental Conditions. Quiescent astrocyte cultures were treated for 1 h with 400 mM EtOH, 1 μM CTS, or EtOH+CTS (400 mM and 1 μM, respectively). For the controls, astrocytes were incubated in culture medium (CM) or exposed to the CTS vehicle, dimethyl sulfoxide (DMSO), at 0.03% to prevent genotoxicity [60]. Each experimental condition (CM, DMSO, EtOH, CTS, or EtOH+CTS) was fulfilled in triplicate. The cultures were kept at 37 °C with 5% CO2 during the exposure time, then washed in 10 mM, pH 7.4 PBS (3 times) and fixed according to the procedure to be applied later.
To evaluate astrocyte morphology in the different experimental conditions, in one set of experiments a 1:250 dilution of Alexa Fluor™ 633 Phalloidin (A22284, Invitrogen) was added together with 1.5 μg/mL of DAPI for 20 min at RT. After 2 washes with PBS, cells were mounted and sealed as indicated above.
All preparations were preserved at 4 °C, protected from light, and then imaged. Images were acquired under a Zeiss LSM 800 confocal microscope using a plan apochromatic oil immersion lens (63x, 1.4 NA) with 2x magnification and in sequential scan mode, employing 405, 488, and 546 nm laser lines. Images (voxel size: Δx/Δy/Δz = 0.379/0.379/1.00 μm) were saved at a resolution of 2048 × 2048 pixels in .czi format and then in noncompressed .tif format. Acquisition parameters were maintained across all experimental conditions.
2.6. Sample Preparation for Scanning Electron Microscopy (SEM) Analysis and Imaging. Astrocyte suspensions were seeded on Aclar film as in Jiménez-Riani et al. [22] and Reyes-Ábalos et al. [41]. After a brief wash with warm PBS, cells from each experimental condition were fixed with 2.5% glutaraldehyde (4 °C, 18 h), washed 3 times with PBS, postfixed with osmium tetroxide, and dehydrated with increasing EtOH concentrations (50%, 70%, 80%, 90%, and 100%, 5-10 min each). Solvent elimination was performed with a CO2 critical-point dryer to preserve the intact internal structure, and pure gold metallization was carried out through a sputtering technique (gold plasma). Finally, samples were mounted on individual bronze dowels and submitted to SEM analysis. The astrocyte surface from each experimental condition was analyzed at the ultrastructural level employing a JEOL-5900-LV SEM microscope. Images were obtained using secondary electrons at 20 mA with 300x, 1,000x, 2,000x, 3,000x, 10,000x, and 30,000x magnifications and saved in noncompressed .tif format. Image resolution in the x, y plane was 0.3 nm/pixel. Image sizes were 640 × 480 pixels, 8 bits, and 300k scan 3.
Image Processing and Data Collection. Digital confocal or SEM images were analyzed using FIJI (NIH) software. Different analyses were performed as described below.
γH2AX Focus Quantification on Confocal Images.
To analyze the nuclear γH2AX mark on confocal images, a digital command code was designed to work in batch format. This tool allowed executing blocks of actions in an automatic and agile way, working by image folder and optimizing processing and analysis time. The executing code includes the following steps: (i) opening of .tif image files; (ii) channel splitting (green for GFAP labeling, red for γH2AX foci, and blue for DAPI); (iii) 8-bit conversion with a pixel depth of 0-255; (iv) segmentation and generation of binary masks for the red channel; (v) definition of regions of interest (ROIs); (vi) storage of their coordinates in .roi zip files, to quantify γH2AX foci using the 3D object counter plugin; (vii) segmentation, generation of binary masks for the DAPI channel, and delimitation of nuclear ROIs; and (viii) counting nuclei (n = 100) per treatment using the 3D object counter plugin.

2.7.2. Frequency, Diameter, and Length of NTs. On SEM micrographs obtained at 3,000x magnification, binary (Huang) masks of astrocytes (n = 25 per experimental point) were generated, from which NTs were counted using the FIJI cell counter plugin. The length and diameter of NTs were measured employing the free-hand line tool of the FIJI program as follows: (i) a vector drawn from the edge of the cell soma to the end of each NT was used to measure the length, and (ii) to measure the diameter, a second vector perpendicular to the first one was drawn on each NT. The data were recorded in digital spreadsheets, associating each astrocyte with the number, diameter, and length of its NTs.
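As a minimal illustration of the foci-per-nucleus counting logic in the γH2AX batch pipeline described above (not the authors' FIJI code), the following Python sketch reimplements the segment-and-count idea with scikit-image in 2D; the file name and channel layout are assumptions:

```python
# Minimal 2D sketch of a gammaH2AX foci-per-nucleus count (illustrative,
# not the authors' FIJI macro; file name and channel layout are assumed).
from skimage import io, filters, measure

img = io.imread("field.tif")                  # hypothetical RGB field, shape (y, x, 3)
foci_ch, dapi_ch = img[..., 0], img[..., 2]   # red = gammaH2AX foci, blue = DAPI

# Binary masks for the foci and DAPI channels (Otsu thresholding stands in
# for the segmentation step of the batch pipeline).
foci_mask = foci_ch > filters.threshold_otsu(foci_ch)
nuclei_mask = dapi_ch > filters.threshold_otsu(dapi_ch)

# Label nuclei as ROIs, then count foci objects inside each nuclear ROI.
nuclei = measure.label(nuclei_mask)
for region in measure.regionprops(nuclei):
    r0, c0, r1, c1 = region.bbox
    inside = foci_mask[r0:r1, c0:c1] & (nuclei[r0:r1, c0:c1] == region.label)
    print(f"nucleus {region.label}: {measure.label(inside).max()} foci")
```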
Frequency and the Major Axis of EVs on Somas and NTs. On SEM images of astrocytes (n = 25 per experimental condition) taken at 10,000x or 30,000x magnification, the characteristics of the EV surfaces were analyzed, and their numbers were quantified using the FIJI cell counter plugin. On previously obtained binary (Huang) masks, the major axis of EVs located on somas or NTs was measured by drawing a vector along it, using the FIJI free-hand line tool.

2.7.4. Skeletonization. Skeletonization of SEM images showing NTs was performed using the corresponding FIJI plugin, as follows: (i) convert the image to 8 bits; (ii) apply despeckle, the close function, and remove outliers; (iii) save the image as a separate file; (iv) skeletonize; and (v) compare with the original figure.
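A minimal Python approximation of these skeletonization steps (an illustrative sketch, not the FIJI plugin itself; the input path is an assumption) could look as follows:

```python
# Illustrative sketch: despeckle, clean, and skeletonize an SEM image
# with scikit-image, approximating the FIJI steps listed above.
from skimage import io, filters, morphology
from skimage.util import img_as_ubyte

sem = io.imread("astrocyte_sem.tif", as_gray=True)   # hypothetical file
sem = filters.median(sem)                            # despeckle-like cleanup
mask = sem > filters.threshold_otsu(sem)             # binarize cell/NT signal
mask = morphology.remove_small_objects(mask, 64)     # remove outliers
skeleton = morphology.skeletonize(mask)              # one-pixel-wide skeleton
io.imsave("astrocyte_skeleton.tif", img_as_ubyte(skeleton))
```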
Statistical Analysis and Illustrations.
Using the GraphPad Prism 8® software (GraphPad Prism, RRID: SCR_002798), the Shapiro-Wilk test (α ≤ 0.05) was applied to check for normal distributions of the following variables: (i) γH2AX foci number, (ii) astrocyte areas, (iii) NT and EV frequencies, (iv) frequency of EVs on somas or NTs, (v) major axis of EVs, and (vi) length and diameter of NTs, all considered per astrocyte. Since none of them fit normal distributions, they were described employing medians and 95% confidence intervals as summary measures. Accordingly, differences between the distinct experimental conditions (CM vs. DMSO, CM vs. EtOH, DMSO vs. CTS, EtOH vs. CTS, EtOH vs. EtOH+CTS, and CTS vs. EtOH+CTS) were analyzed using the Kruskal-Wallis test with Dunn's test for multiple comparisons with α ≤ 0.05. 350-500 astrocytes from each experiment were analyzed. Since each condition was implemented in triplicate, medians (each corresponding to one outcome) were compared with each other, and data were pooled when p values were ≤0.05. Graphs were produced employing the GraphPad Prism 8® software and the figures using Adobe Photoshop CC version 2017.

The highest foci frequency corresponded to the combined EtOH+CTS group. In addition, EtOH and/or CTS exposure did not elicit significant changes in astrocyte shape and reactivity when assessed by GFAP immunostaining and phalloidin-rhodamine labeling (Supplementary Figure 1). In all experimental conditions, GFAP immunostaining reflected fibrillary signals (green) that covered all astrocyte bodies and surrounded DAPI-positive nuclei (Supplementary Figure 1A). Regarding phalloidin labeling, it evidenced the strong F-actin astrocyte cytoskeleton and the geometric cell shape without the emission of significant cell processes (Supplementary Figure 1B).
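A minimal sketch of the normality check and nonparametric comparison workflow described in the statistical analysis above, using scipy and the scikit-posthocs package for Dunn's test (illustrative only; the synthetic foci counts below are placeholders, not study data):

```python
# Illustrative sketch: Shapiro-Wilk per group, then Kruskal-Wallis with
# Dunn's post hoc test, mirroring the described analysis.
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp

rng = np.random.default_rng(0)
groups = ["CM", "DMSO", "EtOH", "CTS", "EtOH+CTS"]
df = pd.DataFrame({
    "group": np.repeat(groups, 30),
    "foci": rng.poisson(lam=np.repeat([2, 2, 6, 5, 8], 30)),  # placeholder counts
})

by_group = [g["foci"].to_numpy() for _, g in df.groupby("group")]

# Shapiro-Wilk normality check per group (alpha = 0.05).
all_normal = all(stats.shapiro(v).pvalue > 0.05 for v in by_group)

# Non-normal data: Kruskal-Wallis across the five conditions,
# followed by Dunn's post hoc test for pairwise comparisons.
h_stat, p_value = stats.kruskal(*by_group)
pairwise_p = sp.posthoc_dunn(df, val_col="foci", group_col="group")

print(f"all normal: {all_normal}, Kruskal-Wallis p = {p_value:.4g}")
print(pairwise_p.round(3))
```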
EtOH and CTS Elicited Changes in the Intercellular Connections Associated with DNA Damage. SEM images from all experimental conditions evidenced that astrocyte somas present many protrusions that extend from the cell margins to the substrate in arrangements where the length predominates over the cross-section; these appeared very similar to the NTs previously described by Rustom et al. [61]. In addition, numerous variably shaped and sized formations derived from astrocyte membranes were identified as EVs, appearing loosely associated with cell margins and arranged on astrocyte surfaces in all experimental groups (Figure 3(a)). Remarkably, the frequency of NTs per astrocyte changed dramatically among the different experimental conditions, showing similar values in the control and CTS groups but strongly decreasing with EtOH and, to a lesser extent, with EtOH+CTS (Figure 3(b)). Furthermore, the number of EVs was similar in astrocytes exposed to EtOH and/or CTS, but significantly higher than in controls (Figure 3(c)).
NT Main Morphological Parameters Seemed to Depend on the Injuring Challenge. Two main kinds of protrusions from the cell body could be distinguished (Figure 4(a)).
One type is composed of short irregular protrusions of similar diameters at the point of emergence that gently taper to end in cell-free space (green asterisk in Figure 4(a)). The other formations are NTs of different lengths and diameters.
In most cases, NTs seem to act as intercellular bridges between different astrocytes or cross over the cell surface to connect different cells (blue asterisks in Figure 4(a)).
The respective skeletonized schemes evidence the two types of cell processes that generally appear in all experimental conditions (Figure 4).
In addition, SEM images revealed a significant number of EVs of rounded shape associated with the cellular processes that seemed to be NTs (Figures 5(a)-5(c)). Many EVs appeared to be transferred between different cells via NTs ("a" labels in Figures 5(a)-5(c)). Some of them were disorderly arranged on the surface of the NTs, probably using them as a scaffold to move ("a" labels in Figures 5(a)-5(c)). Other EVs appeared to be inside the NTs ("b" labels in Figure 5(a)), probably traveling through the lumen of the NTs.
Although most of the NTs showed the shapes and dimensions previously described, some NTs had double or triple the length of the typical ones (Figures 5(a)-5(c)). Interestingly, the distribution of EVs on NTs increased similarly in the EtOH and CTS groups, with the highest increase in the coexposed astrocytes compared with the rest of the experimental groups (Figure 5(d)).
3.5. The Morphology and Size of EVs Depend on the Injuring Condition. As observed in Figures 6(a) and 6(b), barely attached EVs of different shapes and dispositions were seen on the surface of cultured astrocytes. Using the EV classification reported by Malenica et al. [24] with minor modifications, we identified the following shapes: (i) round EVs that share a common spherical shape and similar size (red asterisks); (ii) less abundant elongated EVs of different lengths but with a clear main axis (blue asterisks); and (iii) donut-shaped EVs (yellow asterisks), in which both extremes appear very close or fused. Interestingly, there are donut-shaped EVs with central holes of different diameters, having the smallest and most compact appearance. The other types were (iv) rosette-shaped EVs (yellow arrowheads) that appeared as elongated vesicles adopting a "flower-like" arrangement with fused bases; (v) drumstick-shaped EVs (green asterisks) that seemed to be formed by an elongated vesicle with an enlarged round tip; and (vi) cup-shaped EVs (magenta asterisks) that appear as a rounded protrusion with a central depression giving them a cup-like appearance. Interestingly, a minor percentage of EVs showed intermediate morphologies that did not allow clear classification into one group. When analyzing the distribution of morphologies, controls showed EVs with predominantly round and elongated shapes, although in DMSO some rosette-shaped EVs were found. Instead, under injuring conditions, all morphologies were observed (Figure 6(c)). EV arrangement was also variable, with most EVs appearing isolated, but others orderly disposed in straight (red arrowhead) or random (green arrowhead) arrangements (Figure 6(d)).
The previous reports of Malenica et al. [24] and Di Daniele et al. [62] were used to attribute a possible significance to the EVs observed on astrocyte surfaces. Different EV subpopulations emerged when EVs were quantified based on major axis length and morphological features (Figure 7). In all experimental conditions, EVs with a major axis compatible with small ectosomes were the most prevalent population, followed by those comparable to large exosomes. The frequencies of small ectosomes and large exosomes were higher in treated conditions than in controls, especially when compared to the CM condition. Controls showed higher frequencies of EVs compatible with small exosomes and exomeres than treated conditions, with small exosomes prevailing in the CM control.
Regarding size, EVs ranged from ~20 nm up to ~8 μm, with more than 90% of them having a major axis of less than 500 nm and ~95% being smaller than 1 μm (compatible with exomeres, exosomes, and ectosomes). Less than 4% and 1% of total EVs were larger than 1 and 3 μm, respectively (compatible with migrasomes and apoptotic bodies, respectively). Remarkably, EVs larger than 1 μm, and those larger than 5 μm (termed giant vesicles), were exclusively seen in DNA-damaged astrocytes. Moreover, giant EVs showed particular features (Figure 8). These complex formations exhibited considerable heterogeneity, including differences in the external surface, which could appear smooth or intricate with different degrees of compaction (Figures 8(a) and 8(b)). The number of giant EVs (>5 μm) and the lengths of their main axes were highest in the EtOH+CTS condition (Figures 8(c) and 8(d), respectively).
Discussion
Our present data provide evidence of the sensitivity of the cell-signaling repertoire of murine hippocampal astrocytes to the genetic damage induced by acute exposure to EtOH, a drug of widespread human consumption, and/or the stress-response hormone CTS (Figure 1). Genomic DNA damage was assessed by γH2AX foci, which are produced within a few minutes of damage induction by the early phosphorylation of the H2A histone variant H2AX [48-53]. The γH2AX immunoreactivity detected in astrocyte nuclei also indicated early activation of the DDR cell pathways [41, 44-47]. The exclusively nuclear γH2AX signal ruled out significant damage to mitochondrial DNA, as evidenced by the absence of a cytoplasmic signal in all experimental conditions (Figure 1). γH2AX foci signal double-strand breaks [52, 53] and also single-strand breaks caused by replication fork blockage or by single-strand DNA intermediates of repair systems [63, 64]. Since double-strand breaks involve disruption of DNA continuity, they represent the most serious type of DNA lesion [65]. Therefore, our results indicate that significant astrocyte DNA damage was induced and that cell cycle control was immediately activated upon acute EtOH and/or CTS exposure. Moreover, under identical experimental conditions, we have recently detected changes in the immunoreactivity of the DDR effector cyclin-D1 and the excision repair endonuclease APE1 [41]. This finding suggests that, in the present experimental paradigm, progression of the DDR cascade [66] and activation of DNA repair [67] could also take place in DNA-damaged astrocytes.
However, the absence of modifications in nuclear shape or chromatin compaction in treated astrocytes rules out significant late apoptotic events, likely due to the short 1 h exposure. In this regard, DNA damage and the DDR under both injuring conditions occurred without significant morphological changes in astrocytes, as assessed in DIC images (Figure 1) and by GFAP immunoreactivity (Supplementary Figure 1A). This result rules out significant astrocyte reactivity, which is morphologically characterized by body shrinkage and the protrusion of prominent cellular processes [5]. In addition, as the actin cytoskeleton is the major determinant of cell morphology, the preservation of the astrocyte F-actin cytoskeleton observed upon phalloidin labeling (Supplementary Figure 1B) indicates an intact cytoskeleton, in agreement with the DIC and GFAP images. These results suggest that the duration of the EtOH and/or CTS injury was not long enough to produce the previously reported actin disorganization [28, 68]. Therefore, the many functions dependent on the actin cytoskeleton would not be impaired despite the significant DNA damage and DDR induced.
Regarding Figure 2, it is interesting to note that BAX immunoreactivity increased in the cytoplasm and nucleus of astrocytes immediately after 1 h of EtOH and/or CTS exposure and that, in both subcompartments, BAX signals paralleled the frequencies of γH2AX foci. These findings suggest a relationship between the rapidly increasing BAX signal and the injuring circumstances. However, as no morphological changes were observed in exposed astrocytes, changes in BAX immunoreactivity may precede any detectable damaging effects.
Changes in BAX immunoreactivity could be linked to the activated DDR, since BAX is an effector of p53 DDR-dependent apoptosis [56-58]. The apoptotic functions of BAX are compatible with its increase in the cytoplasmic and mitochondrial subcompartments [67-71]. However, BAX can shuttle from the cytosol to the nucleus during apoptosis in response to various stress stimuli. This occurs along with the translocation of some nuclear proteins, such as p53, toward the cytoplasm, where they could accomplish apoptotic roles or other functions [72, 73]. However, the consequences of BAX shuttling into the nucleus remain poorly elucidated and are still debated [74, 75]. Our observation that BAX occupies euchromatic nuclear territories and tends to exclude silent heterochromatin regions suggests that it fulfills putative nuclear functions. In this regard, Brayer et al. [76] showed that BAX is associated with chromatin in vitro in nonapoptotic cells and that it is linked to the modulation of the cell cycle and proliferation through CDKN1A (cyclin-dependent kinase inhibitor 1A), which mediates the p53-dependent G1-phase cell cycle arrest in response to a variety of stress stimuli. Other functions attributed to nuclear BAX include modulation of basal differentiation and migration [76], which are critical aspects during tumorigenesis and CNS damage. Further studies colocalizing BAX with mitochondria will be necessary to understand whether BAX translocation to the mitochondria occurred in our experimental paradigm and to unravel whether the detected early BAX immunoreactivity precedes apoptotic cell death or is related to non-proapoptotic BAX functions.
Another noncanonical BAX function is its association with oxidative stress [73]. It has been described that EtOH intoxication elicits oxidative stress, proinflammatory mediators, and cytokine production that contribute to neuronal damage [77]. In addition, exposure to stress hormones such as CTS also facilitates the production of reactive oxygen species (ROS) [78-82], which can originate multiple DNA-oxidized products with altered bases and sugar moieties or broken strands [83]. In turn, cortical astrocytes treated with EtOH activate the inflammasome complex, facilitating ROS generation [84]. Therefore, a significant imbalance between ROS production and removal upon EtOH or CTS exposure could challenge not only the neuroprotective roles of astrocytes [85] but also their own proliferation and survival, as previously shown in hippocampal astrocytes [34, 86-88]. Our results suggest that ROS-mediated DNA damage might contribute to deleterious astrocyte actions, either on themselves or on other neural cells.
However, the most remarkable findings of this work were the rapid changes on the astrocyte surface detected immediately after 1 h of EtOH and/or CTS exposure, evidenced by clear modifications in NT and EV types and distribution, as well as in the quantity, morphology, and sizes of these membrane specializations. The results strongly suggest that astrocyte signaling can be quickly modulated, as expected for cells that are in charge of CNS homeostasis [5] and exhibit a plethora of cell communication strategies. It is known that, under physiological or pathological circumstances, astrocytes communicate with other brain cells through NTs [9, 26, 61, 89, 90]. In our astrocyte cultures, we observed some NTs connecting with more or less distant cells, allowing direct communication between cells or, in some cases, facilitating the transfer of EVs on their surface and/or inside them. The presence of exosomes and organelles, such as mitochondria, within NTs has been reported previously [91, 92]. NT development in cultured hippocampal astrocytes may depend on the activation of the p53 tumor suppressor gene [18], which is central to the DDR. As p53, when activated following DNA damage, regulates cell cycle control, DNA repair, senescence, and apoptosis pathways [93, 94], this indicates a connection between the DDR and NT modifications. Remarkably, NT frequency decreased in astrocytes exposed to EtOH and/or CTS at the expense of increased EVs, suggesting a kind of interplay related to damage induction that will need to be studied further. Interestingly, we also found an inverse relation between NT and EV frequencies, with EtOH showing the lowest NT and the highest EV frequencies and controls showing the opposite behavior. We can only speculate that different injury circumstances differentially influence cell-signaling modalities, favoring long-distance signaling at the expense of cell-to-cell communication through NTs. Additionally, the production of EVs could be facilitated in some circumstances; in this sense, it has been reported that, as an initial step, EtOH increases the fluidity of biological membranes [95].
Nevertheless, we observed EVs on the astrocyte surfaces of distinct sizes and features that agreed with previously published descriptions, especially the morphologies and classification reported by Malenica et al. [24] and Di Daniele et al. [62] (Figures 6-8). Even though the approach used (SEM) does not allow confirmation of the EV type as transmission electron microscopy does, and our observations need to be confirmed with complementary approaches, size and morphology suggest that most of the EVs ranged from exomeres (previously reported as nonmembranous nanoparticles) to small ectosomes in all conditions. The smallest EVs (exomeres and small exosomes) predominated in controls, probably because they are more associated with normal cell functions, while the largest ones, compatible with large migrasomes (previously observed at the tips of retraction fibers of migrating cells), apoptotic bodies (formed during the late phase of apoptosis), and complex giant EVs, seem to be associated with the cellular response to strong injury, since they were identified only, and at low frequency, in damaged conditions. We could also speculate that, given the role of astrocytes in maintaining CNS homeostasis, they activate mechanisms that reverse DNA damage and its cellular consequences, allowing them to preserve their functions.
Different types of EVs are generated by distinct cellular processes, and EV composition reflects the physiological or pathophysiological state of the cells of origin [24]. The complexity and heterogeneity of the EVs observed on astrocyte surfaces could reflect the coexistence of different cellular events related to their production [97-99, 101, 103-107, 109]. These may be the same mechanisms seen in controls but at a higher degree of activation, or additional mechanisms associated with damage [100, 102, 108], since the largest EVs were observed only in DNA-damaged astrocytes. In this regard, it is interesting to note that damaged nuclear and mitochondrial DNA have been detected in EVs, suggesting that damaged DNA could be eliminated by integration into EVs [110].
Astrocytes play a prominent role in the protection of neurons against stressors [111-115]. In this respect, it has been reported that astrocyte-conditioned medium protected hippocampal neurons against CTS-induced damage [85], suggesting that this protection depends on cell-cell communication. However, according to the stimuli received and the cellular microenvironment, astrocyte communication can have beneficial or detrimental consequences [4, 7, 43, 116]. There is evidence that exposure of cultured hippocampal astrocytes to EtOH results in an intracellular redox imbalance and the expression of miRNAs and proinflammatory molecules, with augmented EV secretion [31, 117]. Increases in EV production and calcium waves, likely associated with damage, have been described in CTS-exposed astrocytes [32].
In response to different stimuli, such as those dependent on toxic protein aggregates or neuroinflammation, EVs can change their cargo, causing oxidative stress, neuroinflammation, or synaptic dysfunction [17, 40, 43]. However, beneficial EV roles are also evident, because they are involved in neuroprotection: modulating apoptosis, preserving neuronal function, and repairing neural damage [43]. Therefore, changes in astrocyte signaling mechanisms can have a significant protective or harmful impact on neurons and other neural cells [4, 7, 116], and their potential clinical impact is increasingly considered [9, 27, 118].
Finally, the detected changes in the cellular communication pattern of astrocytes could be a not-yet-described part of the mechanisms involved in their response to genetic damage. Classically, the DDR is considered a sequence of pathways that operate essentially at the nuclear and mitochondrial levels; however, cells respond as a single system. In this sense, it was reported that local exposure of cells to low doses of ionizing radiation caused similar biological changes in cells not directly irradiated, giving rise to the concept of a bystander response [119-122]. Later, the same effect was detected with other stress-inducing agents, leading to the proposal that the bystander effect depends on the release of chemical mediators, some transported as EV cargoes [123]. We hypothesize that the observed modulation of the NT and EV repertoires in injured astrocytes could indicate that the cellular response to DNA-damaging agents triggers cellular mechanisms in addition to the classical DDR pathways.
Conclusions
Our results demonstrated that hippocampal astrocytes responded to acute EtOH and/or CTS injury by eliciting DNA damage and, possibly, all canonical DDR pathways. Interestingly, an early modification in cell-cell communication processes was revealed, evidenced by changes in the pattern of EVs and NTs. We hypothesize that this rapid change in cell signaling could be a novel mechanism related to the DDR. The reported quantitative and morphological analyses of EVs and NTs will be further confirmed and deepened to identify the connections between astrocyte genetic damage, the evolution of the DDR and cell-cell communication patterns, and their impact on other neural cells and the whole CNS.
3.1. EtOH and/or CTS Exposures Induced DNA Damage and DDR. Analysis of the DNA damage induced by 1 h exposure to 400 mM EtOH and/or 1 μM CTS, assessed in DIC and confocal images of γH2AX immunoreactivity (Figure 1), showed positive γH2AX signals as bright (red) spots (termed foci) inside DAPI-stained nuclei (blue). γH2AX foci were more abundant in the treated conditions relative to controls (Figure 1(a)), indicating rapid DNA damage and induction of the DDR. As seen in DIC images, no γH2AX signals were detected in the cytoplasm. In addition, quantitation of the number of γH2AX foci per astrocyte (Figure 1(b)) confirmed significant frequency increases in EtOH or CTS compared to the respective controls, with no differences between the treated conditions.
Figure 1: Primary DNA damage induced by EtOH and/or CTS in cultured astrocytes. (a) DIC and confocal images evidencing the DNA-damaged sites recognized as γH2AX (red) foci in DAPI-stained nuclei (blue), as seen in the confocal images on the right side of the panel. Left and central images depict the γH2AX and DAPI signals on DIC images, respectively. Calibration bar: 5 μm. (b) Box plots show the number of γH2AX foci per astrocyte in each experimental condition. Boxes enclose the data between the 25th and 75th percentiles. The median (50th percentile) is indicated by the cross line within each box. Differences between 100 astrocytes per analyzed variable and group were examined using the Kruskal-Wallis test with Dunn test for multiple comparisons. ****p < 0.0001; ns: p > 0.05 (not significant). DIC: differential interference contrast; DAPI: 2-(4-amidinophenyl)-1H-indole-6-carboxamidine; CM: culture media; DMSO: dimethyl sulfoxide; CTS: corticosterone; EtOH: ethanol; EtOH+CTS: simultaneous EtOH and CTS treatment.
Figure 3: Astrocyte NTs and EVs related to EtOH and/or CTS exposures. (a) SEM images at three increasing magnifications showing NTs and EVs on the surface of astrocytes from control (CM and DMSO) or treated (EtOH, CTS, and EtOH+CTS) conditions. (b, c) Box plots depict the distribution of the number of NTs or EVs per astrocyte in each experimental condition. Differences between 50 astrocytes per variable and experimental group were analyzed employing the Kruskal-Wallis test with Dunn test for multiple comparisons. ****p < 0.0001; ns: p > 0.05 (not significant). SEM: scanning electron microscopy; CM: culture media; DMSO: dimethyl sulfoxide; EtOH: ethanol; CTS: corticosterone; EtOH+CTS: EtOH and CTS coexposures; NTs: nanotubes; EVs: extracellular vesicles.
Figure 4: NT appearance, diameters, and lengths in control and EtOH- and/or CTS-exposed astrocytes. (a, b) Panoramic SEM images and the respective skeletonized masks illustrate examples of astrocyte NTs from controls (CM and DMSO), EtOH, and/or CTS. Green asterisks indicate NTs that taper gently and end in a cell-free space; blue asterisks indicate NTs that cross over the cell surfaces, connecting different astrocytes. (c, d) Box plots represent the distribution of NT length and diameter in each experimental group. (e) Distribution of the ratio between NT length and diameter in each condition, using medians and 95% confidence intervals. Differences between 150 astrocytes per variable and group were examined via the Kruskal-Wallis test with Dunn test for multiple comparisons. ****p < 0.0001; **p < 0.01; *p < 0.05; ns: p > 0.05 (not significant). SEM: scanning electron microscopy; CM: culture media; DMSO: dimethyl sulfoxide; EtOH: ethanol; CTS: corticosterone; EtOH+CTS: simultaneous EtOH and CTS exposure; NTs: nanotubes.
Figure 5: EVs on NT surfaces observed in control and exposed astrocytes. (a) Representative SEM images showing NTs of different thicknesses with EVs located on the NT surfaces (indicated with "a" letters) or seemingly inside the NTs (pointed to with "b" letters). (b) Two demonstrative SEM images obtained at different magnifications, displaying NTs of different diameters with EVs attached to their surface. The dashed box in the top image encloses the magnified area presented in the bottom image, demonstrating the presence of EVs on the NT surface. (c) A composition of three SEM images showing a large NT connecting two hippocampal astrocytes. EVs can be visualized on the NT at different distances from both cells. (d) Box plot representing the distribution of the number of EVs on NTs per astrocyte in each experimental condition. Differences between 150 astrocytes per variable and group were analyzed via the Kruskal-Wallis test and Dunn test for multiple comparisons. ****p < 0.0001; ***p < 0.001; **p < 0.01; ns: p > 0.05 (not significant). SEM: scanning electron microscopy; CM: culture media; DMSO: dimethyl sulfoxide; EtOH: ethanol; CTS: corticosterone; EtOH+CTS: EtOH and CTS cotreatment; NTs: nanotubes; EVs: extracellular vesicles.
Figure 7: Frequencies of distinct EV morphologies in the different experimental conditions. The bar graphs represent the frequency of the different EV sizes and shapes observed on astrocyte surfaces according to Malenica et al. [24] and Di Daniele et al. [62], with minimal modification. The frequencies are expressed as approximate percentages of the total EVs per condition. Sizes are based on the length of the main axis. CM: culture media; DMSO: dimethyl sulfoxide; EtOH: ethanol; CTS: corticosterone; EtOH+CTS: EtOH and CTS cotreatment; EVs: extracellular vesicles.
Figure 8: The largest EVs detected on astrocyte surfaces. (a) SEM images of EtOH and/or CTS conditions showing large EVs with distinct diameters and appearances that varied from smooth surfaces (left image) to heterogeneous and intricate morphology (right image). (b) SEM images obtained at three different magnifications exhibiting large EVs with surfaces showing different degrees of compaction and complexity. In the bottom image, small EVs appear attached to the main formation; in the upper one, no individual EVs were observed on the surface. (c, d) Distribution of the number (c) and the main axis (d) (diameter in μm) of the largest (giant) EVs per experimental condition, using medians and 95% confidence intervals. Giant EV formations were observed only in EtOH- and/or CTS-exposed astrocytes. ****p < 0.0001; ***p < 0.001; **p < 0.01; *p < 0.05; ns: p > 0.05 (not significant). SEM: scanning electron microscopy; CM: culture media; DMSO: dimethyl sulfoxide; EtOH: ethanol; CTS: corticosterone; EtOH+CTS: simultaneous EtOH and CTS exposure; EVs: extracellular vesicles.
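Because every quantitative comparison in Figures 1 and 3-5 uses the same nonparametric scheme (Kruskal-Wallis followed by Dunn's multiple-comparison test), a compact sketch may help readers reproduce it. The paper does not name the software used, so SciPy and the third-party scikit-posthocs package are assumptions on our part, and the per-astrocyte counts below are placeholders, not the study's data.

```python
# Hedged sketch of the Kruskal-Wallis + Dunn workflow used in the figures.
# Requires: scipy, scikit-posthocs (pip install scikit-posthocs).
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp

rng = np.random.default_rng(0)
groups = {                       # placeholder per-astrocyte counts
    "CM": rng.poisson(2, 100),
    "DMSO": rng.poisson(2, 100),
    "EtOH": rng.poisson(8, 100),
    "CTS": rng.poisson(7, 100),
    "EtOH+CTS": rng.poisson(9, 100),
}

# Global test across the five conditions.
h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.2e}")

# Dunn's post hoc pairwise comparisons with a multiple-comparison correction.
pairwise = sp.posthoc_dunn(list(groups.values()), p_adjust="bonferroni")
print(pairwise)  # p-value matrix; rows/columns follow the input order
```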
2.1. Ethical Statement. This study was performed following the Principles of Laboratory Animal Care, National Institutes of Health of the United States of America (NIH publication No. 85-23, 2011 revision), and Uruguayan Law No. 18611, which dictates the procedures for the use of animals in experimentation, teaching, and scientific research activities. All procedures were approved by the Ethical Committees for the Care and Use of Laboratory Animals (CEUA) of the Facultad de Ciencias-Universidad de la República and the Instituto de Investigaciones Biológicas.

Based on length measurements, the shortest NTs appeared in both control conditions, whereas the longest were found in the groups where EtOH was present, with the EtOH+CTS group showing values intermediate between EtOH and CTS alone (Figure 4(c)). Regarding NT diameters, the largest value was seen in the EtOH condition, the smallest in the CTS group, and an intermediate value in the coexposed condition (Figure 4(d)). Therefore, the EtOH condition exhibited the longest and thickest NTs. However, assessment of the NT length/NT diameter ratios in absolute values indicated the largest values in CTS followed by EtOH+CTS, with EtOH and controls appearing similar to each other but lower than CTS (Figure 4(e)).
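Figure 4(e) summarizes the NT length/diameter ratio as a median with a 95% confidence interval. The paper does not state how the interval was constructed; the sketch below uses a percentile bootstrap as one plausible choice, and the measurements are placeholders rather than the study's data.

```python
# Hedged sketch: median and bootstrap 95% CI for the NT length/diameter
# ratio (Figure 4(e)). The bootstrap choice and the numbers are assumptions.
import numpy as np

def median_ci(values, n_boot=10_000, alpha=0.05, seed=0):
    """Median with a percentile-bootstrap (1 - alpha) confidence interval."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    resamples = rng.choice(values, size=(n_boot, values.size), replace=True)
    boots = np.median(resamples, axis=1)
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return float(np.median(values)), (float(lo), float(hi))

lengths_um = np.array([12.0, 30.5, 8.2, 45.1, 22.3])   # placeholder NTs
diameters_um = np.array([0.4, 0.7, 0.3, 0.9, 0.5])
median, (lo, hi) = median_ci(lengths_um / diameters_um)
print(f"median ratio = {median:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```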
|
Deformed Skull Morphology Is Caused by the Combined Effects of the Maldevelopment of Calvarias, Cranial Base and Brain in FGFR2-P253R Mice Mimicking Human Apert Syndrome
Apert syndrome (AS) is a common genetic syndrome in humans characterized by craniosynostosis. Apert patients and mouse models show abnormalities in the sutures, cranial base, and brain, all of which may be involved in the pathogenesis of the skull malformation of Apert syndrome. To distinguish the differential roles of these components of the head in the pathogenesis of the abnormal skull morphology of AS, we generated mouse strains specifically expressing mutant FGFR2 in chondrocytes, osteoblasts, and progenitor cells of the central nervous system (CNS) by crossing Fgfr2+/P253R-Neo mice with Col2a1-Cre, Osteocalcin-Cre (OC-Cre), and Nestin-Cre mice, respectively. We then quantitatively analyzed the skull and brain morphology of these mutant mice by micro-CT and micro-MRI using Euclidean distance matrix analysis (EDMA). Skulls of Col2a1-Fgfr2+/P253R mice showed Apert syndrome-like dysmorphology, such as shortened skull dimensions along the rostrocaudal axis, a shortened nasal bone, and evidently advanced ossification of the cranial base synchondroses. OC-Fgfr2+/P253R mice showed facial malformation at the 8-week stage. Nestin-Fgfr2+/P253R mice exhibited increased dorsoventral height and rostrocaudal length of the caudal skull and brain at 8 weeks. Our study indicates that the abnormal skull morphology of AS is caused by the combined effects of maldevelopment of the calvarias, cranial base, and brain tissue. These findings further deepen our knowledge of the pathogenesis of the abnormal skull morphology of AS and provide new clues for further analyses of skull phenotypes and the clinical management of AS.
Introduction
The skull is composed of calvarial bones, craniofacial bones, and the cranial base, which are formed through distinct developmental modes [1]. The calvarial bones are formed through intramembranous ossification, in which mesenchymal precursor cells between the opposing osteogenic fronts of sutures directly differentiate into bone-forming osteoblasts, driving the growth of the calvarial bones. Bones of the cranial base are formed through endochondral ossification. Endochondral ossification of the synchondroses is responsible for the growth and expansion of the cranial base [2].
In early skull development, the calvarial sutures remain open, holding the calvarial bones loosely together, which allows the coordinated development of the expanding skull and the underlying developing brain [3]. Craniosynostosis is characterized by premature fusion of one or more calvarial sutures. To date, craniosynostoses have been found in over 100 distinct genetic syndromes, occurring in approximately 1 in 3000 individuals [4]. Apert syndrome (AS), a genetic craniosynostosis, is caused by one of two missense mutations in adjacent amino acids, Ser252Trp (S252W) and Pro253Arg (P253R), of fibroblast growth factor receptor 2 (FGFR2) [5,6]. AS is characterized by precocious closure of the cranial sutures, midfacial hypoplasia, and syndactyly of the hands and feet [7,8]. Additionally, patients with AS usually have raised intracranial pressure and mental retardation [9]; whether this is related to abnormal skull development is currently unclear.
We and other groups have previously generated Fgfr2+/S252W and Fgfr2+/P253R mouse models mimicking human AS using a knock-in approach [10-13]. Generally, these mouse models show global skull phenotypes similar to those in Apert patients, including dome-shaped skulls, an underdeveloped midface, premature fusion of the cranial base synchondroses, a malformed brain, and brachycephaly [10,14]. Quantitative analyses revealed that the skulls of Fgfr2+/P253R mice were shortened along the rostrocaudal (RC) direction, especially in the face [15], whereas the breadth along the mediolateral (ML) axis of the frontal bone and the neurocranium was increased [16].
Although the skull morphology of AS has been described in detail, the reasons for its characteristic skull shape are not yet fully clarified. The head, as an integrated structural unit, is mainly composed of the brain and the skull, which includes the calvarias, craniofacial bones, and cranial base. Normal development of the skull requires mechanisms that ensure proper coordination among the rates of suture closure, cranial base fusion, and cerebral development. Theoretically, maldevelopment of any one of these three parts can be involved in the pathogenesis of the abnormal skull shape in AS.
It is generally considered that premature fusion of the sutures is the original reason for the abnormal skull shape of craniosynostosis patients. Premature closure of the coronal suture is most commonly exhibited in AS patients and is also observed in the mouse models mimicking AS [10-13]. Individuals with AS are reported to have cartilaginous abnormalities in the cranial base, including premature fusion of the spheno-ethmoidal and spheno-occipital synchondroses, suggesting an important role of endochondral ossification in the skull malformation of AS [17,18]. Similarly, dysmorphology of the cranial base is one of the significant skeletal abnormalities in mice with ubiquitous or chondrocyte-specific expression of FGFR2 with the Pro253Arg mutation [19]. Therefore, it is speculated that disturbed cartilaginous development may also play an important role in the skull malformation of AS patients and AS mouse models.
Besides skull maldevelopment, AS patients also have nervous system abnormalities such as mental retardation and brain dysmorphology [20-23]. There are controversies about the causes of the abnormal brain morphology of AS patients. Brain malformation in AS is generally considered the result, at least in part, of the premature fusion of cranial sutures [24,25]. Aldridge et al., however, finding that brain dysmorphology developed before the fusion of calvarial sutures in AS mice, proposed that the brain is primarily affected, rather than secondarily responding to skull dysmorphogenesis, in the AS mouse model [14].
To date, the pathogenesis of the skull abnormalities of AS is still not fully clarified. It is important to understand the precise role of each of these components of the head in the malformation of AS skulls. Since, in previous studies [10-13], the mutant FGFR2 was ubiquitously expressed in the mutant mice, the direct role of mutant FGFR2 in the abnormal development of the calvarias, brain, and cranial base cannot be clearly dissected. For example, it is difficult to distinguish the direct and/or indirect role of mutant FGFR2 in brain maldevelopment, and to know whether the abnormal brain development is the cause or result of the skull malformation.
In this study, we used the Cre/loxP approach to obtain mutant mice expressing activated FGFR2 (Pro253Arg) in chondrocytes, osteoblasts, and CNS progenitor cells, and quantitatively analyzed the skull morphology of these mutants at postnatal 4 and 8 weeks using three-dimensional micro-CT and morphometric assays. We also analyzed the brain morphology of mice with specific activation of FGFR2 in CNS progenitors (Nestin-253) at 8 weeks using micro-MRI-based EDMA. We found that the skulls of all three mutant strains, and the brain of Nestin-253 mice, were maldeveloped. Our data thus suggest that abnormal development of the calvarias, cranial base, and brain are all involved in the pathogenesis of the deformed skulls in AS mice, and that mutant FGFR2 has a direct effect on brain development. This study provides new insight for the mechanistic understanding and clinical management of Apert syndrome.
Mice and preparation of skeletons
Fgfr2+/P253R-neo mice were kindly provided by Dr Chuxia Deng of NIDDK. Col2a1-Cre [26] and OC-Cre [27] mice were kindly provided by Dr Xiao Yang of the Academy of Military Medical Sciences of China. Nestin-Cre mice were purchased from the Jackson Laboratory. All mice were bred onto the C57BL/6J background. Col2a1-Fgfr2+/P253R (Col2a1-253), OC-Fgfr2+/P253R (OC-253), and Nestin-Fgfr2+/P253R (Nestin-253) mice were generated by crossing male offspring of Fgfr2+/P253R-neo mice with female offspring of Col2a1-Cre, OC-Cre, and Nestin-Cre mice, respectively. All procedures were approved by the Institutional Animal Care and Use Committee of Daping Hospital. Mice were genotyped by PCR. Four-week-old and 8-week-old mice with tissue-specific activation of Fgfr2+/P253R and wild-type (WT) mice from the same litters were used. After sacrifice, the mouse skulls were skinned and fixed in 70% ethanol as described previously [10].
Skeletal analysis and histology
Skulls were subjected to high-resolution X-ray examination using a Faxitron MX20. Whole-skeleton staining with Alizarin red and Alcian blue was performed as described [28]. For histological analysis, the calvarias of mutant and control mice were fixed in 4% paraformaldehyde in 0.01 M PBS (pH 7.4), decalcified in 15% EDTA (pH 7.4), and embedded in paraffin as described [29]. Six-micrometer-thick sections through the sutures were cut and stained with hematoxylin and eosin (H&E).
Micro-computed tomography, Magnetic resonance image procedures and collection of landmark data
The skull samples were scanned with micro-CT (VivaCT40, Scanco Medical AG, Switzerland). For 4- and 8-week-old skulls, image acquisition was performed at 70 kV and 114 μA; for d3 skulls, 45 kV and 177 μA were used. Two-dimensional images were used to generate three-dimensional reconstructions. At each stage, every measurement used the same filtering and segmentation values to obtain the three-dimensional images.
For EDMA analysis, following Richtsmeier's method [30], the three-dimensional coordinate locations of 27 biologically relevant landmarks on the cranium were recorded from three-dimensional CT images of the skulls of
Morphometric methods
EDMA is a method for the quantitative analysis of form and growth characteristics in geometric morphometrics [33,34]. The collected coordinate data were processed with the WinEDMA software to calculate the form difference matrix (FDM) between the tissue-specific Col2a1-Fgfr2+/P253R, OC-Fgfr2+/P253R, and Nestin-Fgfr2+/P253R mice and their littermate controls, as described in the report by Richtsmeier et al. [30].
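To make the FDM computation concrete, the sketch below implements the core of EDMA as described here: all pairwise inter-landmark distances are computed per specimen, averaged per group, and the mutant/wild-type ratio is formed for each distance. This is a conceptual stand-in for WinEDMA, not that software, and the landmark arrays are placeholders.

```python
# Hedged sketch of the EDMA form-difference matrix (FDM) described above.
import numpy as np
from itertools import combinations

def form_matrix(landmarks):
    """All pairwise distances for one specimen; landmarks is (n, 3)."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

def fdm(mutant, wildtype):
    """Per-distance ratio of group means; inputs are (specimens, n, 3)."""
    mean_mt = np.mean([form_matrix(s) for s in mutant], axis=0)
    mean_wt = np.mean([form_matrix(s) for s in wildtype], axis=0)
    return mean_mt / mean_wt

rng = np.random.default_rng(1)
wt = rng.normal(size=(8, 27, 3))   # placeholder: 8 WT skulls, 27 landmarks
mt = rng.normal(size=(6, 27, 3))   # placeholder: 6 mutant skulls
ratios = fdm(mt, wt)
# Ratios < 0.95 correspond to linear distances shortened by over 5% in the
# mutant (the blue lines of Fig. 5); ratios > 1.05 to >5% increases.
print(f"{(ratios < 0.95).mean():.0%} of distances shortened by >5%")
```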
Statistical analysis
Data were evaluated statistically in SPSS 18.0 using Student's t-test; P-values were considered significant at *P<0.05 and **P<0.01.
Specific expression of Fgfr2-P253R in chondrocytes leads to Apert syndrome-like dysmorphology in mice
In previous reports, Apert syndrome mouse models with ubiquitous expression of mutant FGFR2 generated using EIIa-Cre (such as EIIa-Fgfr2+/P253R, abbreviated as EIIa-253) exhibited smaller body size, shorter cranial dimensions along the rostrocaudal axis, and a broader frontal bone than wild-type littermates [10,12,16]. In this study, Col2a1-253 mice showed a normal life span and body size, with AS-like skull dysmorphology compared with wild-type mice.
Specific expression of Fgfr2-P253R in osteoblasts results in skull malformation in mice
The major abnormality of Apert mice is located in the sutures, which are mainly formed through osteoblastogenesis. Genes driven by the osteocalcin (OC) promoter are mainly expressed in mature osteoblasts; it was therefore speculated that OC-253 mice would show significant skeletal malformation, especially in the calvarias. However, OC-253 mice showed normal body size and life span compared with wild-type mice (Fig. 2A). We examined the mice at day 14, 4 weeks, and 8 weeks by radiography, Alizarin red and Alcian blue staining, and histology. By measuring skull lengths on three-dimensional images, we found that the nasal bone was mildly shortened along the rostrocaudal axis in 8-week-old OC-253 mice (Fig. 2B, F; p<0.05). Other parameters, including the cranial cavity, coronal suture, cranial base synchondroses, basioccipital bone, and sphenoid bone, were not significantly changed (Fig. 2C-E, G-H).
Nestin-253 mice exhibit abnormal brains and skulls
We used Nestin-Cre to activate the mutant FGFR2 in CNS progenitor cells [35]. Nestin-253 mice survived normally for more than 12 months with normal body size compared with WT mice (Fig. 3A). No abnormal development of the cranial base was detected by X-ray and micro-CT analyses (Fig. 3B, C). We analyzed the brain morphology of 8-week-old Nestin-253 mice using MRI scanning and found that mutant mice exhibited increased dorsoventral height of the caudal brain (Fig. 3D-F). Quantitative analysis of MRI images showed that the dorsoventral height of the caudal brain was increased by 5.3% in mutant mice (p<0.01, Fig. 3G), while other parts of the Nestin-253 brain exhibited normal morphology.
The differential effect of tissue-specific activation of FGFR2 on the gross skull morphology of mice
To further clarify the distinct roles of the misshaped calvarias, cranial base, and brain in the maldevelopment of AS skulls, we examined the gross skull morphology of Col2a1-253, OC-253, Nestin-253, and EIIa-253 mice at postnatal 4 weeks. From the superior view of the micro-CT-based 3D-reconstructed skulls, Col2a1-253 skulls showed changes along the rostrocaudal axis similar to those in EIIa-253 mice (i.e., shortened length at 4 weeks after birth), but the shortening was milder than in EIIa-253 mice (Fig. 4B). EIIa-253 mice had the typical "dome-shaped" calvarias (Fig. 4A), a phenotype that was mild in Col2a1-253 mice. The skulls of OC-253 mice showed no significant gross dysmorphology at 4 weeks (Fig. 4C). The skulls of Nestin-253 mice showed a mildly shortened nasal bone compared with their wild-type littermates, whereas no remarkable difference in breadth was observed (Fig. 4D). By measuring the calvarial bone mass of the mutant mice, we found that the bone volume (BV, p<0.01) and bone volume ratio (BV/TV, p<0.05) of EIIa-253 calvarias were decreased compared with WT littermates (Fig. 4E, I). Col2a1-253 calvarias showed a similar decrease in BV (p<0.05) owing to the rostrocaudally shortened skull, but no significant change in BV/TV (Fig. 4F, J). OC-253 and Nestin-253 mice showed no significant change in bone volume (BV) or bone volume ratio (BV/TV) (Fig. 4G, H, K, L).
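For readers unfamiliar with the bone-mass measures cited above, BV/TV is simply the bone volume fraction within the analysed region of a binarised micro-CT stack. A minimal sketch, assuming segmentation has already been done and using a placeholder voxel size rather than the study's calibrated value:

```python
# Hedged sketch: bone volume (BV) and bone volume fraction (BV/TV) from a
# binarised micro-CT volume. Segmentation is assumed done; voxel size is
# a placeholder, not the study's calibrated value.
import numpy as np

def bone_volume_fraction(bone_mask, region_mask, voxel_size_mm=0.0105):
    """Return (BV in mm^3, BV/TV) for bone voxels inside region_mask."""
    voxel_volume = voxel_size_mm ** 3
    bv = np.count_nonzero(bone_mask & region_mask) * voxel_volume
    tv = np.count_nonzero(region_mask) * voxel_volume
    return bv, bv / tv

# Example with a tiny synthetic volume:
region = np.ones((10, 10, 10), dtype=bool)
bone = np.zeros_like(region)
bone[:5] = True                            # half the voxels are "bone"
print(bone_volume_fraction(bone, region))  # (BV, 0.5)
```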
Based on the above data, we found that Col2a1-253, OC-253, and Nestin-253 mice all exhibited characteristic skull dysmorphology associated with AS, but their malformation was less severe than that of EIIa-253 mice.
EDMA analysis of the skulls of Col2a1-253 mice
EDMA was employed to further quantitatively analyze the cranial morphology of Col2a1-253, OC-253, and Nestin-253 mice. We analyzed the changes in skull shape and in the dimensions of several major skull-forming bones of 4- and 8-week-old mutant mice and their corresponding wild-type mice using the FDM analysis of the EDMA method. According to the FDM analysis, remarkable differences were revealed in the overall cranial shape of 4-week-old Col2a1-253 mice. Approximately 58% of linear distances were shortened by over 5% (marked by blue lines), 33% of distances exhibited ≤5% changes (not marked), and only 9% of linear distances showed over 5% increases (broken red lines in Fig. 5-D, E, F). Eight-week-old Col2a1-253 mice, however, showed less skull malformation than 4-week-old Col2a1-253 mice. About 51% of linear distances at the 8-week stage showed a decrease of over 5% (blue lines), 49% exhibited no significant changes (Fig. 5-G, H, I), and no linear distance showed a >5% increase.
At the 4-week stage, Col2a1-253 mice exhibited the most severe malformation in the midfacial region (Table 1). The frontal processes of the maxilla (landmarks 9 and 10) in Col2a1-253 mice were also shortened, by 13.9%, and the zygomatic bone (landmarks 11 and 14) was substantially shortened, by 21.2%, indicating that Col2a1-253 mice also had the characteristic shallow orbital regions of AS skulls. The distance between the paired anterior zygomae (landmarks 11 and 22) was increased by 11.2%. The distance between the two lacrimal bones (landmarks 10 and 21) was increased by 9.4%. The breadth of the frontal bone (landmarks 12 and 23) was mildly increased, by 5.4%. The length of the nasal bone (landmarks 1 and 2) exhibited a 13.6% decrease. The frontal bone (landmarks 2 and 3) showed a mild decrease (5.9%). The premaxilla (landmarks 7 and 8) and maxilla length (landmarks 8 and 13) were decreased by 9.3% and 7.5%, respectively. Along the mediolateral (ML) direction, the breadth of the anterior palate (landmarks 8 and 19) was decreased by 7.8%. In the neurocranium region, the length of the lateral neurocranium (landmarks 15 and 16) displayed a decrease of 10.2%, and the width of the caudal neurocranium (landmarks 16 and 27) was increased by 5.5%. The linear distance between the intersection of the interparietal and occipital bones at the midline and the posterior palate (landmarks 5 and 13) showed a decrease of 6.4% (Table 1). The linear distance between landmarks 4 and 15 displayed a 5.9% increase, and the linear distance between landmarks 4 and 26 was increased by 6.1%.
At the 8-week stage, the midfacial malformations of Col2a1-253 mice were less severe than those in 4-week-old mice (Table 1). The frontal processes of the maxilla (landmarks 9 and 10) exhibited a 9.2% decrease. Compared with 4-week-old Col2a1-253 mice, the degree of shortening of the maxilla (landmarks 8 and 13) and zygomatic bone (landmarks 11 and 14) in 8-week-old Col2a1-253 mice was substantially reduced, with decreases of 8.7% and 9.3%, respectively. The distance between the paired frontal zygomae (landmarks 11 and 22) showed a 5.4% decrease. The length of the nasal bone (landmarks 1 and 2) showed a 12.5% decrease. The premaxilla length (landmarks 7 and 8) also displayed an 11.2% reduction compared with littermates. The paired frontal processes of the maxilla (landmarks 9 and 20) were decreased by 7.2%. The distance between the two lacrimal bones (landmarks 10 and 21) and the breadth of the frontal bone (landmarks 12 and 23) no longer showed significant changes (Table 1).
In 8-week-old Col2a1-253 mice, the neurocranium was not significantly reduced along the RC axis and showed no significant change along the ML axis (Table 3). The linear distance between the intersection of the interparietal and occipital bones at the midline and the posterior palate (landmarks 5 and 13) showed a decrease of 6.6% (Table 1).
The EDMA analysis showed that, besides ocular hypertelorism, the lengths of the nasal bone, maxillae, and zygoma were reduced in Col2a1-253 mice at the 4-week stage, whereas the severity of the deformity in the RC direction had decreased by 8 weeks. Col2a1-253 mice at the 4-week stage had a decreased neurocranium length along the RC direction. Nevertheless, there was no significant change in the neurocranium region of 8-week-old Col2a1-253 mice, indicating that the malformation was relieved by the 8-week stage. These data indicate that specific expression of FGFR2-P253R in cartilage alone led to head-shortening phenotypes similar to those in Apert mice (EIIa-253), although the severity of the malformation was milder.
EDMA analysis of the skulls of OC-253 mice
At the postnatal age of 4 weeks, OC-253 mice showed grossly normal skull morphology, except that the distances from landmark 15 to 16 and from 26 to 27 were mildly decreased, by 6% and 7.6%, respectively (Fig. 5-J, K, L; Table 1). In 8-week-old OC-253 mice, EDMA analysis showed that about 30% of linear distances were changed by more than 5% (blue lines; Fig. 5-M, N, Q; Table 1). The changed distances were mainly located on the facial bones. The length of the nasal bone (landmarks 1 and 2) was shortened by 7.8%. The length of the premaxilla (landmarks 7 and 8) was decreased by 7.2%. In the orbital regions, the distances were significantly shortened along the RC axis, whereas no significant changes were found along the ML axis. The frontal process of the maxilla (landmarks 9 and 10) was shortened by 11.5%. Along the ML axis, the widths between landmarks 9 and 20 and between landmarks 11 and 22 were decreased by 6.2% and 7.4%, respectively. In the neurocranium region, the distance between landmarks 14 and 25 showed a 6.4% decrease. The distances between landmarks 5 and 13 and between landmarks 5 and 24 were decreased by 6.1% and 6.3%, respectively (Table 1). From the FDM analysis, OC-253 mice showed a mildly decreased skull length along the rostrocaudal axis at the 8-week stage.
EDMA analysis of brains and skulls of Nestin-253 mice
To clarify the direct effect of mutant FGFR2 on the brain development of Apert mice, we analyzed the brain morphology of Nestin-253 mice using 3D-reconstructed MRI images. Eleven landmarks were defined on the brain, and the 3D coordinates of 8-week postnatal brains were used for EDMA analysis (Table 2; Fig. 6A, B). The results showed that, along the RC direction, the linear distance between landmarks 1 and 8 was decreased by 9.1%. The linear distances from landmarks 7 to 8 and from landmarks 4 to 11 showed 16.5% and 7.4% decreases, respectively, whereas the length from landmarks 3 to 5 (4 to 6) showed a 10% increase. Along the mediolateral direction, however, there was no significant increase or decrease in the distance between landmarks 1 and 2, 3 and 4, or any other breadth. Along the dorsoventral direction, the distances between landmarks 3 and 7 (4 and 7) were increased by 5.5% on average. The distances between landmarks 5 and 7 (6 and 7) were increased by 7.1% (Table 3; Fig. 6C, D). Thus, the rostrocaudal length and dorsoventral height of the caudal brain were increased in Nestin-253 mice.
As to skull morphology, at the 4-week stage, about 5% of linear distances were shortened by over 5% (blue lines in Fig. 5, P-R). However, the level of decrease was slight, and no linear distance was shortened by over 10% (Table 1). For 8-week-old Nestin-253 mice, about 4% of linear distances were increased by over 5% (Fig. 5, S-U; Table 1).
At 4 weeks, along the RC axis, the length of the nasal bone of Nestin-253 mice (landmarks 1 and 2) was decreased by 10%. Similarly, the linear distance between landmarks 9 and 10 was decreased by 8.8%. The linear distances between landmarks 15 and 16 and between landmarks 26 and 27 were increased by 16.1% and 6.4%, respectively (Table 1).
At the adult stage of 8 weeks, the length of the nasal bone (landmarks 1 and 2) exhibited no significant change. The linear distance from landmarks 7 to 8 was increased by 6.9%, and that from landmarks 18 to 19 by 7.2%. For the caudal part of the skulls of Nestin-253 mice, the linear distances from landmarks 15 to 16 and from 26 to 27 were increased by 21.6% and 9.4%, respectively. Although the linear distance from landmarks 4 to 27 showed a <5% increase, the linear distance between landmarks 4 and 16 was increased by 6.1%, whereas the distance between landmarks 16 and 27 was not significantly increased (Table 1), indicating that the dorsoventral height was slightly increased while the mediolateral breadth of this region was not affected by the FGFR2-P253R mutation. In summary, quantitative EDMA analysis of the skulls and brain revealed that mutant FGFR2 in Col2a1-expressing cells mainly affected the cranial base, sutures, and midface (Fig. 7A). Mice with specific expression of mutant FGFR2 in osteoblasts (OC) showed a mildly shortened face (Fig. 7B). Specific expression of mutant FGFR2 in Nestin-expressing cells mainly led to increased rostrocaudal length and dorsoventral height of the caudal brain (Fig. 7C). All three mutant mouse strains with tissue-specific expression of mutant FGFR2 exhibited skull malformation, indicating that mutant FGFR2 plays a direct role in the maldevelopment of these tissues. Furthermore, the phenotype of each mutant strain only partially resembled the skull malformation of EIIa-253 mice, suggesting that the malformation of specific tissues partially contributes to the dysmorphogenesis of the skull of AS mice (Fig. 7E) [16,21,36,37], and that the maldevelopment of the AS head is caused by the combined effects of abnormal development of the calvarias, cranial base, brain, and other tissues (Fig. 7D).
Discussion
The development of the calvarial bones, cranial base, and brain, three important components of the head, is coordinately interrelated. Maldevelopment of any one of these tissues can lead to a deformed skull. The interaction among the calvarial bones, cranial base, and brain tissue can be exerted through biomechanical and/or biochemical mechanisms. Through biomechanical mechanisms, hydrocephalus usually leads to an enlarged skull and cranial cavity [38], and patients with craniosynostosis are at increased risk for elevated intracranial pressure, which may restrict brain development [39]. In the case of biochemical mechanisms, brain tissue can secrete leptin to regulate bone tissue, and the osteoblast-derived hormone osteocalcin can cross the blood-brain barrier to influence fetal brain development [40].
Apert syndrome, mainly caused by the P253R and S252W mutations of FGFR2, is a common craniosynostosis [5,41] characterized by brachycephaly, presumably resulting from premature fusion of the cranial sutures. Since FGFR2 is widely expressed in the cranial base, bone, and brain tissue, it is difficult to distinguish the direct impact of FGFR2 on each of these tissues. However, clarifying the direct and indirect impacts of FGFR2 on these tissues is important for understanding the mechanism underlying skull deformation and for the clinical management of Apert syndrome. For example, if the brain maldevelopment of AS is related to a direct effect of mutant FGFR2, then currently used surgery, mainly aimed at correcting the skull malformation, may not optimally alleviate the brain maldevelopment. Combining surgery with other treatments, such as biological therapies targeting the direct effect of mutant FGFR2 on brain development, may bring better outcomes.
Although Apert mouse models exhibit maldevelopment of the calvarias, cranial base, and brain [10,12,14,42], the precise roles of these components of the head in the malformation of the skull remain unknown, because the mutant FGFR2 is activated in all of the tissues involved in the skull deformation of AS. There are few comparative genetic analyses of the differential roles of the calvarias, cranial base, and brain in the pathogenesis of the skull malformation of AS using mouse models with tissue-specific activation of mutant FGFR2 [10,12,16,19]. Nor is it known whether mutant FGFR2 has a direct effect on brain development. To clarify the distinct roles of the calvarias, cranial base, and brain in the skull deformity of AS, we generated mouse models expressing mutant FGFR2 specifically in osteoblasts, chondrocytes, and neural progenitor cells, respectively, using the Cre/loxP approach. These mice are excellent models for exploring not only the direct effect of mutant FGFR2 on the development of the calvarias, cranial base, and brain, but also the role of the malformation of these components, and their reciprocal interactions, in the pathogenesis of the skull malformation of AS.
We found that cartilage-specific expression of the FGFR2-P253R mutant resulted in a shortened head, resembling the brachycephaly found in humans and mouse models with AS [19]. However, the skull malformation in Col2a1-253 mice was milder than that in EIIa-253 mice, indicating that abnormal cartilage development is strongly, but not fully, responsible for the skull malformation of AS. Col2a1-253 mice at the 4-week stage had decreased rostrocaudal length, increased mediolateral breadth, and increased dorsoventral height of the skull. However, the skull breadth and height of Col2a1-253 mice showed no significant changes at 8 weeks, indicating that the skull abnormality was relieved during the 4-8-week interval.
In Apert syndrome, premature closure of the coronal suture is most common. It is generally believed that premature suture closure is related to accelerated osteogenic differentiation of the suture mesenchyme. Consistently, in vitro cultured Fgfr2+/P253R calvarias also exhibited coronal suture fusion [10]. However, it has also been suggested that premature closure of the coronal suture could be caused by abnormal tensile force, presumably resulting from the growth disturbance of the cranial base [18,19,43]. We observed no LacZ-positive cells in the calvarial sutures of Col2a1-Cre;ROSA26-LacZ mice (Supplemental Figure 2 A-D), while Col2a1-253 mice still showed coronal suture synostosis. We therefore consider it likely that the changed tensile force resulting from the shortening of the cranial base may be involved in the premature closure of the coronal suture. Although the transdifferentiation of hypertrophic chondrocytes into osteoblasts was long debated [44], recent papers further support this view [45,46]. It is reasonable to speculate that the bony defect in the cranial base of Col2a1-253 mice may be related to disturbed transdifferentiation of hypertrophic chondrocytes into osteoblasts. It was reported that upregulation of Runx2 and Ihh, accompanied by accelerated chondrocytic maturation and hypertrophy, was detected in the cranial base of transgenic Col2a1-Fgfr2IIIcP253R mice [19]. Runx2 is a vital transcription factor for osteoblastic differentiation [47] that also plays an important role in chondrocyte hypertrophy [48,49], and FGF/FGFR signaling stimulates the expression of Runx2 [50]. Ihh is known to be crucial for regulating chondrocyte proliferation and differentiation, as well as osteoblastic differentiation, in endochondral bones [51]. Moreover, Runx2 can promote Ihh expression by directly interacting with its promoter [49]. Thus, it is possible that the accelerated chondrocytic maturation and hypertrophy in the cranial base synchondroses of Col2a1-253 mice were related to the FGFR2/Runx2/Ihh pathway.
Interestingly, OC-253 mice, which have specific activation of mutant FGFR2 in mature osteoblasts, exhibited only mild skull malformation, indicating that mutant FGFR2 in mature osteoblasts plays a minor role in the malformation of AS skulls. It would be interesting to study whether mutant FGFR2 in earlier-stage osteoblasts has more profound effects on AS skull malformation, for example by using Col1- or Osterix-driven Cre mice.
Aldridge and colleagues investigated the development of the brain and sutures in Fgfr2+/P253R mice and proposed that the abnormal brain morphology is directly related to mutant FGFR2, not to the premature fusion of sutures [14]. Consistently, in the present study, using a tissue-specific activation approach, we found that Nestin-253 mice showed an enlarged caudal brain in the rostrocaudal and dorsoventral directions at the age of 8 weeks, indicating that mutant FGFR2 may directly disturb brain development, which may in turn be involved in the skull malformation of Fgfr2+/P253R mice. FGFR2 is broadly expressed in the brain [52,53], whereas Nestin expression in the brain is more focused in progenitor cells of the cerebral cortex, cerebellum, hippocampus, thalamus, midbrain, and hypothalamus [35,54]. Thus, since the expression patterns of FGFR2 and Nestin only partially overlap in brain tissue, the actual effect of mutant FGFR2 on brain development in AS (Fgfr2+/P253R mice) may be even more profound. We therefore propose that the abnormal brain morphology and mental retardation of AS are also associated with a direct effect of mutant FGFR2 on brain development. In addition to its expression in the brain, Nestin is also expressed in mesenchymal stem cells (MSCs); we speculate that the abnormal facial bones may also be related to the effect of FGFR2-P253R on MSCs. Additionally, the anomalies of the facial skeleton in Nestin-253 mice may be related to indirect effects of the maldeveloped brain (such as molecules secreted by the brain). Commonly used surgical correction of the misshaped skull alone is not enough to effectively alleviate the maldeveloped brain; adjuvant biological therapies against FGFR2 signaling are needed to relieve the brain malformation and/or mental retardation of AS.
In conclusion, results from our genetic and quantitative EDMA studies indicate that the maldevelopment of AS skulls is caused by the combined effects of abnormal development of the calvarias, cranial base, and brain, three important components of the head. Among these, dysregulated endochondral ossification appears to play an essential role in the pathogenesis of the skull malformation of AS. Our results also revealed a direct effect of mutant FGFR2 on brain development. Our study provides important insights into the mechanisms underlying the skull malformation of AS, and new clues for designing therapies.
Figure 2. The cranial morphologies of OC-253 mice and their littermates. Alizarin red and Alcian blue staining showed that OC-253 mice (4-week-old) had no significant changes in overall skeleton size (A). Measurement of skull lengths based on three-dimensional images showed that 8-week-old OC-253 mice had a shorter nasal bone along the rostrocaudal axis compared with WT mice (B, F). Radiographic images and Alizarin red and Alcian blue staining revealed normal growth of the synchondroses in d14 OC-253 mice (C, D). The cranial cavity of mutant mice was normal at 4 weeks after birth (E). The lengths of the basioccipital bone and sphenoid bone were not significantly changed (G, H). p: presphenoid; s: basisphenoid; bo: basioccipital bone; eo: exo-occipital bone. (Student's t-test, *P<0.05).
Figure 3. Nestin-253 mice had abnormal brains compared with their littermates. The body size of Nestin-253 mice was unchanged in comparison with sex-matched WT mice (4 weeks after birth) (A). Normal development of the cranial base was indicated by X-ray and micro-CT images at the ages of d16 (B) and d14 (C). MRI images of the brain in WT (D) and Nestin-253 (F) mice, and measurements on MRI (G), indicated a greater caudal brain height in Nestin-253 mice than in controls (E). p: presphenoid; s: basisphenoid; bo: basioccipital bone; eo: exo-occipital bone. (Student's t-test, **P<0.01).
Figure 6. EDMA analysis of the 3D-reconstructed brains of Nestin-253 mice. (A and B) Schematic diagram of the 11 landmarks on the mouse brain. (C and D) FDM analysis of 8-week-old Nestin-253 brains (n=6) compared with WT littermates (n=8). Blue lines show linear distances that showed a decrease of over 5%. Broken red lines show linear distances with a >5% increase.
Figure 7. Schematic of the pathogenesis of the skull deformation in Apert mice. Mutant FGFR2 in Col2a1-expressing cells mainly affects the coronal sutures, midface, and cranial base (A). Mice with specific expression of mutant FGFR2 in osteoblasts (OC) show a mildly shortened rostrocaudal length of the face (B). Specific expression of mutant FGFR2 in nestin-expressing cells mainly leads to increased rostrocaudal length and dorsoventral height of the caudal brain (C). All three mutant mouse strains expressing mutant FGFR2 in a specific tissue exhibit skull malformation, indicating that mutant FGFR2 plays a direct role in the maldevelopment of these tissues. Furthermore, the phenotypes of each mutant strain only partially resemble the skull malformation of EIIa-253 mice, suggesting that the malformation of specific tissues partially contributes to the dysmorphogenesis of the skulls of AS mice (E), and that the maldevelopment of the AS head is caused by the combined effects of abnormal development of the calvaria, cranial base, brain, and other tissues (D).
Table 1. Selected linear distance ratios of the form difference matrix (FDM) for the comparison of the face and neurocranium of mutant (MT) and wild-type (WT) skulls at 4 and 8 weeks.
Table 2. The definition of the brain landmarks.
Table 3. Selected linear distance ratios of the form difference matrices (FDM) for the comparison of Nestin-253 and wild-type brains at 8 weeks.
Relationship between Digit Ratio and Idiopathic Pulmonary Arterial Hypertension in Japanese Women
Aim: Endothelin-1 (ET-1) is the key vasoactive mediator in patients with pulmonary arterial hypertension (PAH), and sex steroids are known to influence ET-1 levels. Additionally, the second to fourth digit (2D:4D) ratio is a biometric marker influenced by testosterone concentrations and androgen receptor sensitivity in the uterus, and some reports have linked the 2D:4D ratio to disease predisposition among patients with gender-dependent conditions. Since idiopathic PAH (IPAH) is more prevalent in women, we hypothesized that the 2D:4D ratio could predict a female’s predisposition to developing PAH, reflecting an interaction between ET-1 and sex hormones. Method: This study analyzed 13 female patients with IPAH at Keio University Hospital and 41 unrelated age-matched controls. The right hand of patients and controls was photographed using a digital camera and two experienced scorers measured finger lengths and 2D:4D ratios. Key findings: The IPAH and control groups had a mean age of 43.2 ± 3.5 and 40.9 ± 1.7 years, respectively. The 2D:4D digit ratio was significantly higher for patients with IPAH than for the control women; 0.975 ± 0.041 vs. 0.940 ± 0.038, P<0.05. The age at onset of PAH did not correlate with the ratio. Significance: Female patients with IPAH in this study had a higher 2D:4D digit ratio than age-matched healthy controls, suggesting lower prenatal circulating testosterone levels. In conclusion, the 2D:4D digit ratio is a useful biomarker for IPAH, and prenatal testosterone level could be an important factor for the protection against developing IPAH.
Introduction
Various digit ratios, and in particular the second-to-fourth digit (2D:4D) ratio, are sexually dimorphic characteristics in humans [1], and evidence accumulated over the past decade indicates that the 2D:4D ratio is determined by prenatal estrogen and testosterone concentrations [2].
Some studies have already investigated links between the 2D:4D ratio and the etiology of sex-dependent conditions, including immune system disorders, cardiovascular diseases such as myocardial infarction [3], some cancers [4], and a number of adult-onset diseases prevalent among men, such as amyotrophic lateral sclerosis (ALS) [5]. The 2D:4D ratio is therefore a potential predictor not only of fertility, but also of sex-dependent disease.
While there are numerous reports on the relationship between 2D:4D ratios and sex predispositions of various diseases, no such study has included patients with idiopathic pulmonary arterial hypertension (IPAH), which also shows a sex predisposition [6]. In addition, endothelin-1 (ET-1) is the key vasoactive mediator and therapeutic target in patients with PAH, and an association has been suggested between sex hormones and endothelin [7].
This study thus sought to investigate whether digit ratios have clinical importance as a marker of sexual predisposition to PAH. Since patients with PAH are predominantly women, we hypothesized a link between 2D:4D ratio and disease predisposition in IPAH, reflecting the association between sex steroids and ET-1.
Clinical methods
This was a case-control study involving 13 consecutive female patients with IPAH cared for at Keio University Hospital (Tokyo, Japan) from April 2011 to September 2011. We also invited 41 unrelated age-matched healthy women to participate in the study as controls. The diagnosis of PAH was confirmed for each patient by right heart catheterization using diagnostic criteria based on the American College of Cardiology (ACC)/American Heart Association (AHA) guidelines [8].
Finger length measurements
The fingers of patients and controls were designated as 2D and 4D. Photographs of the right hand were taken with the hand supinated and the fingers flattened to full extension on a sheet of white paper, with a digital camera placed over the center of the white sheet. Digit length was measured from the basal crease of the digit to the tip using the measurement tool in Adobe Photoshop®. Hands with faint creases or contractures could not be measured reliably in this study. Rather than selecting patients for these features before entry, which might be biased and arbitrary, we invited consecutive patients to participate and then used an objective rule to exclude hands with poor measurability. Two independent and experienced scorers who were blinded to the case-control status took the measurements from all images.
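As a rough illustration of this step, the sketch below computes a subject's 2D:4D ratio as the mean of the two scorers' independent measurements. The pixel values, function, and variable names are entirely hypothetical and not part of the study protocol.

```python
import numpy as np

# Hypothetical pixel lengths (basal crease to fingertip) for one subject,
# measured independently by two scorers on the same photograph.
scorer_lengths = {
    "scorer_1": {"2D": 612.4, "4D": 628.9},
    "scorer_2": {"2D": 610.1, "4D": 630.2},
}

def digit_ratio(lengths):
    """2D:4D ratio from one scorer's measurements (unit-free)."""
    return lengths["2D"] / lengths["4D"]

ratios = [digit_ratio(v) for v in scorer_lengths.values()]
print(f"Per-scorer ratios: {[round(r, 3) for r in ratios]}")
print(f"Subject 2D:4D (mean of scorers): {np.mean(ratios):.3f}")
```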
Statistical methods
Analyses were performed using the statistical package SPSS 19.0. Digit ratios were calculated and their mean values were compared with a t-test, because the Shapiro-Wilk test indicated that the values were normally distributed.
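The same pipeline can be sketched as follows; the data below are simulated around the reported group means and SDs purely for illustration, not the study's actual measurements, and SciPy stands in for SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated 2D:4D ratios drawn around the reported group means and SDs.
ipah = rng.normal(0.975, 0.041, size=13)
controls = rng.normal(0.940, 0.038, size=41)

# Shapiro-Wilk: p > 0.05 means no evidence against normality.
for name, x in (("IPAH", ipah), ("control", controls)):
    w, p = stats.shapiro(x)
    print(f"Shapiro-Wilk {name}: W = {w:.3f}, p = {p:.3f}")

# Independent-samples t-test comparing the group means.
t, p = stats.ttest_ind(ipah, controls)
print(f"t = {t:.2f}, p = {p:.4f}")
```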
Results
A total of 13 IPAH women and 41 control women were studied. Mean ages did not differ significantly between the groups (43.2 ± 3.5 years for the IPAH group and 40.9 ± 1.7 years for controls). The IPAH patient group had significantly higher 2D:4D ratios than the healthy control group (mean ratios 0.975 ± 0.041 and 0.940 ± 0.038, respectively; P<0.05, Figure 1).
Discussion
The present study revealed that female patients with IPAH had a higher 2D:4D ratio than control women, suggesting that low prenatal testosterone levels and high prenatal estrogen levels in utero could predispose females to developing PAH.
In general, males have longer fourth digits relative to their second digits than females, and consequently have lower 2D:4D ratios. Developmental and prenatal concentrations of testosterone are linked genetically through the action of homeodomain-containing, or homeobox (Hox), genes [9], and differences in androgen exposure in utero, with high concentrations of fetal testosterone, lead to low 2D:4D ratios. Different digits also show differential distributions of androgen and estrogen receptors [2], indicating that the 2D:4D ratio is determined by the balance between prenatal testosterone and estrogen signaling during fetal digit development. The high 2D:4D ratios observed in our patients therefore suggest that the onset of IPAH is influenced by low prenatal testosterone and high prenatal estrogen. As described above, several studies have investigated links between the 2D:4D ratio and sex-dependent conditions; the ratio is thus a potential predictor of sex-dependent disease as well.
IPAH is a rare but devastating disease and, if untreated, leads to right heart failure and premature death. To date, sex remains the most powerful modifier of disease development, as demonstrated by the high prevalence of IPAH in females between the ages of 35 and 50 years. Some clinical studies of PAH indicated that abnormalities in estrogen metabolism may play a pathogenic role in PAH [10]. In contrast to these clinical studies, most animal studies have shown that female sex and estrogen supplementation can have a protective effect against PAH. This apparent contradiction between clinical studies and animal data gave rise to the "estrogen paradox" in PAH [11]. In addition, there seems to be a clear relationship between sex hormones and the vasoactive mediator endothelin, which is also important in the pathology of PAH. One study showed that estrogen attenuates hypoxia-induced pulmonary ET-1 gene expression in the lung tissue of adult female rats [12], while another found that basal plasma levels of ET-1 were increased in human males with low testosterone levels [7].
In this context, our study suggests a prenatal sex-steroid predisposition for PAH, using the 2D:4D ratio as a marker. This study was limited by its observational, single-center design and by the absence of prenatal data for the IPAH patients. However, in the context of the proposed balance between prenatal sex steroids and adult diseases, Chinnathambi et al. [13] showed that high prenatal testosterone exposure leads to gonad-dependent hypertension in adult life. Little is known about the direct molecular relationship between prenatal sex steroid levels and the development of PAH, and thus our findings raise the novel possibility that testosterone and estrogen levels in utero could provide insight into the "estrogen paradox" of PAH.
Conclusion
Female patients with IPAH showed a higher 2D:4D digit ratio than healthy subjects, suggesting lower prenatal circulating testosterone levels. In conclusion, the 2D:4D digit ratio is a potentially useful biomarker for IPAH, and prenatal testosterone levels could be an important factor in protection against developing IPAH.
Potent and Broad-Spectrum Bactericidal Activity of a Nanotechnologically Manipulated Novel Pyrazole
The antimicrobial potency of the pyrazole nucleus is widely reported these days, and pyrazole derivatives represent excellent candidates for meeting the worldwide need for new antimicrobial compounds against multidrug-resistant (MDR) bacteria. Consequently, 3-(4-chlorophenyl)-5-(4-nitrophenylamino)-1H-pyrazole-4-carbonitrile (CR232), recently reported as a weak antiproliferative agent, was considered to this end. To overcome the CR232 water solubility issue and allow for the determination of reliable minimum inhibitory concentration values (MICs), we initially prepared water-soluble and clinically applicable CR232-loaded nanoparticles (CR232-G5K NPs), as previously reported. Here, CR232-G5K NPs have been tested on several clinical isolates of Gram-positive and Gram-negative species, including MDR strains. While for CR232 MICs ≥ 128 µg/mL (376.8 µM) were obtained, very low MICs (0.36–2.89 µM) were observed for CR232-G5K NPs against all of the considered isolates, including colistin-resistant isolates of MDR Pseudomonas aeruginosa and Klebsiella pneumoniae carbapenemases (KPCs)-producing K. pneumoniae (0.72 µM). Additionally, in time–kill experiments, CR232-G5K NPs displayed a rapid bactericidal activity with no significant regrowth after 24 h on all isolates tested, regardless of their difficult-to-treat resistance. Conjecturing a clinical use of CR232-G5K NPs, cytotoxicity experiments on human keratinocytes were performed, determining very favorable selectivity indices. Collectively, due to its physicochemical and biological properties, CR232-G5K NPs could represent a new potent weapon to treat infections sustained by broad spectrum MDR bacteria.
Introduction
Over the past two decades, the number of MDR bacteria has grown dramatically worldwide [1,2]. MDR pathogens are defined as bacteria that are resistant to at least three classes of antibiotics and represent one of the biggest threats to global health and food security [3,4]. Bacterial resistance to antibiotics arises when pathogens, through different mechanisms, change their response to such drugs [5]. MDR bacteria are found among Gram-positive and Gram-negative species as well as Mycobacteria [6]. They are responsible for a wide range of infections that are becoming progressively more difficult to treat, as the antibiotics used to cure them become less and less, or even no longer, effective [7].
Antibiotic-resistant bacteria can affect anyone, of any age, in any country in the world, causing an increasing number of deaths in hospitals, long-term care facilities, and community settings [8]. To help counteract infections sustained by MDR bacteria, we recently developed a novel one-pot, low-cost synthetic strategy for the preparation of highly functionalized pyrazole derivatives [19]. As such, the pharmaceutical relevance of pyrazoles, the in-house availability of a reliable synthetic protocol for their preparation, and the global need for new therapeutic options against infections by MDR bacteria prompted us to study the antibacterial properties of CR232 (Figure 1). This compound was selected based mainly on its structural similarity with BBB4, a 3-phenyl pyrazole recently reported as having antibacterial properties, which were further improved by its formulation in water-soluble dendrimer NPs [6].

Preliminary microbiologic investigations were carried out to determine the MICs of CR232; however, due to its total water-insolubility and its tendency to precipitate in the aqueous medium of the experiments, the results obtained were not completely reliable and the resulting MICs were difficult to interpret, with only MICs ≥ 128 µg/mL being assumed. Therefore, to perform further biological evaluations and possibly hypothesize a future clinical application of CR232, we solved the solubility issues of CR232 by two nanotechnological approaches, using both a dendrimer and liposomes as encapsulating and solubilizing agents [20]. In both cases, CR232-loaded NPs with enhanced water-solubility and properties suitable for in vivo administration were achieved [20].
Here, the NPs obtained using G5K, a lysine-containing dendrimer, as the solubilizing agent (CR232-G5K NPs) were selected for evaluation of their effects on both bacterial and normal human cells. A preliminary screening showed that CR232-G5K NPs possessed remarkable antibacterial activity against MDR bacteria representative of both Gram-positive and Gram-negative species. Therefore, we studied the antibacterial and bactericidal effects of CR232-G5K NPs on several clinical isolates of different species of both families, obtaining excellent results. Interestingly, CR232-G5K NPs were also effective against isolates of P. aeruginosa and K. pneumoniae that are resistant to colistin, against which all other clinically approved antibiotics, and even recently developed cationic macromolecules acting like cationic antimicrobial peptides, have failed [21].
Next, once the strong and broad-spectrum antibacterial and bactericidal activity of CR232-G5K NPs had been established, in order to evaluate the feasibility of their clinical application, their cytotoxicity on human keratinocytes was evaluated. In parallel, G5K and CR232 were also tested under the same conditions for comparative purposes.
Why CR232-G5K NPs and Not a Liposome-Based Formulation?
CR232-G5K NPs were chosen due to their higher water-solubility, drug loading capacity (DL%), encapsulation efficiency (EE%), dendrimer structure, and cationic character, features reported to support antibacterial effects [22]. In particular, the cationic nature of CR232-G5K NPs would promote their interaction with the negatively charged bacterial surface, thus favouring the localization and accumulation of NPs on prokaryotic cells. Additionally, the high DL% of CR232-G5K NPs would allow the release of large amounts of the transported pyrazole at the target bacteria upon administration of a low micromolar dose of the formulation. Moreover, it is now generally accepted that cationic macromolecules, including dendrimers, thanks to electrostatic interactions with the bacterial surface, cause depolarization of the membrane and its progressive permeabilization through the formation of increasingly large pores [22]. In our case, pore formation would favour the entry of CR232 into the bacterial cell.
Chemical Substances and Instruments
The synthetic procedures for preparing CR232 and the polyester-based cationic NPs loaded with CR232 (CR232-G5K NPs) used in this study were recently reported [19,20]. In addition, experimental details and characterization data concerning CR232-G5K NPs are available in the Supplementary Materials (SM) (Section S1, including Sections S1.1-S1.8, Figures S1-S5, and Tables S1-S4). Further, the dose-dependent cytotoxicity experiments performed with G5K on eukaryotic cervical cancer cells (HeLa), using a PAMAM-NH2 dendrimer as a control, and the related results are available in Section S2, including Figures S6 and S7 and Table S5 (SM).
Bacterial Species Considered in This Study
Several clinical isolates of Gram-positive and Gram-negative species, for a total of 36 strains, were employed in this study. All bacteria belonged to a compendium obtained from the School of Medicine and Pharmacy of the University of Genoa (Italy). Their identification was carried out by VITEK® 2 (Biomerieux, Firenze, Italy) or by the matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometric technique (Biomerieux, Firenze, Italy). In particular, 22 strains were of Gram-negative species, including four isolates of MDR P. aeruginosa; among these, one strain was resistant to colistin, one to the combination avibactam-ceftazidime, and one was an MDR strain isolated from a patient with cystic fibrosis. The other Gram-negative strains comprised three isolates of Escherichia coli, of which one produced β-lactamase enzymes of the KPC family and one produced New Delhi metallo-β-lactamases (NDMs), four isolates of MDR Stenotrophomonas maltophilia, four strains of Klebsiella pneumoniae carbapenemases (KPCs)-producing K. pneumoniae (one of which was also resistant to colistin), and three isolates of MDR Acinetobacter baumannii. Finally, one Morganella morganii, one Providencia stuartii, one Proteus mirabilis, and one Salmonella group B were also considered among the Gram-negative bacteria of this study. Fourteen strains were Gram-positive, including six isolates of the genus Enterococcus (three vancomycin-resistant (VRE) E. faecalis and three E. faecium, of which one was vancomycin-susceptible (VSE) and two were VRE), one sporogenic B. subtilis, and seven clinical isolates of the genus Staphylococcus; in particular, four isolates were methicillin-resistant S. aureus (MRSA) and three were methicillin-resistant S. epidermidis (MRSE), one of which was also resistant to linezolid.
Determination of the Minimal Inhibitory Concentrations (MICs)
The antimicrobial activity of CR232-G5K NPs was investigated by determining their MICs following the microdilution procedures detailed by the European Committee on Antimicrobial Susceptibility Testing (EUCAST) [23], as also reported in our previous studies [17,21]. Here, serial two-fold dilutions of a solution of CR232-G5K NPs in DMSO, ranging from 1 to 128 µg/mL, were used. All MICs were obtained in triplicate; the degree of concordance in all experiments was 3/3, and the standard deviation (±SD) was zero.
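A minimal sketch of the broth-microdilution logic, assuming the 1-128 µg/mL two-fold series described above; the helper names are illustrative, and real EUCAST testing involves inoculum standardization and controls not shown here.

```python
def dilution_series(top=128.0, steps=8):
    """Two-fold serial dilutions from `top` downwards: 128, 64, ..., 1 µg/mL."""
    return [top / 2 ** i for i in range(steps)]

def read_mic(growth):
    """MIC = lowest concentration showing no visible growth.
    `growth` maps concentration (µg/mL) -> True if growth was observed."""
    inhibitory = [c for c, grew in growth.items() if not grew]
    return min(inhibitory) if inhibitory else None  # None -> MIC above range

# Hypothetical plate read-out for one isolate: growth only below 4 µg/mL.
concs = dilution_series()
growth = {c: c < 4.0 for c in concs}
print(f"Series (µg/mL): {concs}")
print(f"MIC = {read_mic(growth)} µg/mL")
```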
Time-Kill Experiments
Killing curve assays for CR232-G5K NPs were performed on various isolates of S. aureus, E. coli, and P. aeruginosa as previously reported [17,24]. Experiments were performed over 24 h at concentrations four times the MICs.
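On the analysis side, time-kill data are conventionally summarized as log10 CFU/mL over time, with a reduction of at least 3 log10 from the starting inoculum usually taken as bactericidal. The sketch below uses invented counts; the 1 CFU/mL detection-limit floor is our assumption, not a value from this study.

```python
import numpy as np

def log10_cfu(counts, detection_limit=1.0):
    """log10(CFU/mL), flooring zero counts at the assay's detection limit."""
    counts = np.asarray(counts, dtype=float)
    return np.log10(np.clip(counts, detection_limit, None))

# Invented time-kill read-out at 4x MIC (CFU/mL at 0, 2, 4, 8, and 24 h).
hours = np.array([0, 2, 4, 8, 24])
treated = np.array([1e6, 0, 0, 0, 0])         # extinction by 2 h, no regrowth
control = np.array([1e6, 5e6, 2e7, 8e7, 2e8])  # untreated culture keeps growing

drop = log10_cfu(treated)[0] - log10_cfu(treated).min()
print(f"Max log10 reduction: {drop:.1f} (>= 3 is conventionally bactericidal)")
print(f"Control grew to {log10_cfu(control)[-1]:.1f} log10 CFU/mL at 24 h")
```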
Viability Assay
Human skin keratinocyte cells (HaCaT), a generous gift of the Laboratory of Experimental Therapies in Oncology, IRCCS Istituto Giannina Gaslini (Genoa, Italy), were grown as a monolayer in RPMI 1640 medium supplemented with 10% (v/v) fetal bovine serum, 1% penicillin-streptomycin, and 1% glutamine (Euroclone S.p.A., Milan, Italy), cultured in T-25 cm² plastic flasks (Corning, NY, USA), and maintained at 37 °C in a 5% CO2 humidified atmosphere. Cells were tested and characterized at the time of experimentation as previously described [25].

HaCaT cells were seeded in 96-well plates (at 4 × 10³ cells/well) in complete medium and cultured for 24 h. The seeding medium was removed and replaced with fresh complete medium supplemented with increasing concentrations of empty dendrimer (G5K), CR232, or CR232-G5K NPs (0, 1, 5, 10, 15, 20, 25, 50, 75, or 100 µM). Cells (quadruplicate samples for each condition) were then incubated for an additional 4, 12, or 24 h. The effect on cell growth was evaluated with the fluorescence-based proliferation and cytotoxicity assay CyQUANT® Direct Cell Proliferation Assay (Thermo Fisher Scientific, Life Technologies, MB, Italy) according to the manufacturer's instructions. Briefly, at the selected times, an equal volume of detection reagent was added to the cells in culture and incubated for 60 min at 37 °C. The fluorescence of the samples was measured using a monochromator-based M200 plate reader (Tecan, Männedorf, Switzerland) set at 480/535 nm. The experiments were carried out at least three times and samples were run in quadruplicate.
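Converting raw fluorescence readings into viability percentages typically normalizes the treated wells to the untreated ones. A minimal sketch with hypothetical readings; the zero blank is our assumption.

```python
import numpy as np

def viability_percent(sample_rfu, untreated_rfu, blank_rfu=0.0):
    """Cell viability (%) from fluorescence, normalized to untreated wells."""
    s = np.mean(sample_rfu) - blank_rfu
    u = np.mean(untreated_rfu) - blank_rfu
    return 100.0 * s / u

# Invented quadruplicate readings (480/535 nm, arbitrary fluorescence units).
untreated = [10250, 10480, 10110, 10390]
treated_25uM = [4310, 4490, 4180, 4420]
print(f"Viability at 25 µM: {viability_percent(treated_25uM, untreated):.1f}%")
```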
Statistical Analyses
The statistical significance of differences between the experimental and control groups in the cytotoxicity studies was determined via a two-way analysis of variance (ANOVA) with the Bonferroni correction. The analyses were performed with Prism 5 software (GraphPad, La Jolla, CA, USA). Asterisks indicate the following p-value ranges: * = p < 0.05, ** = p < 0.01, and *** = p < 0.001. The results are reported in Section S2 of the SM (Table S6). Concerning MIC values, experiments were performed in triplicate; the degree of concordance was 3/3 and the ±SD was zero.

Table 1 collects the main characteristics of CR232-G5K NPs that were determined, reported, and discussed in our previous work [20].
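As a hedged illustration of the two-way ANOVA described above (a sketch only; the authors' actual analysis was run in Prism 5), the same design could be set up in Python with statsmodels on simulated viability data. All column names and values below are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = []
for treatment in ("G5K", "CR232", "CR232_G5K"):
    for hours in (4, 12, 24):
        for _ in range(4):  # quadruplicate wells
            rows.append({"treatment": treatment, "hours": hours,
                         "viability": rng.normal(85 - 1.5 * hours, 5.0)})
df = pd.DataFrame(rows)

# Two-way ANOVA: treatment, exposure time, and their interaction.
model = ols("viability ~ C(treatment) * C(hours)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Follow-up pairwise p-values would then be Bonferroni-adjusted, e.g.:
# from statsmodels.stats.multitest import multipletests
# reject, p_adj, _, _ = multipletests(pvals, method="bonferroni")
```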
Antibacterial Effects of CR232-G5K NPs
Most of the pharmacological activities of the pyrazole nucleus have been studied since 1944, but the first studies on the antimicrobial effects of pyrazole derivatives began only after 2000 [6].
Today, their antimicrobial properties are extensively documented [6,18]. Among the developed molecules containing the pyrazole ring, some have been demonstrated to possess broad-spectrum antibacterial activity and significant antibacterial effects against MDR isolates of A. baumannii, MRSA, and VRE. As reported, the antimicrobial properties of molecules containing the pyrazole ring are thought to be due to the presence of amino groups in the structure [26]. Assuming that the pyrazole derivative recently synthetized by our group (CR232) could also be a good candidate as a new antibacterial agent, and after having formulated it as dendrimer NPs for the reasons explained in the introduction, the obtained CR232-loaded water-soluble NPs were tested here against several isolates of different species of Gram-positive and Gram-negative bacteria.

Table 1 footnotes: FTIR, spectra of the empty delivery system G5K, of CR232, and of CR232-G5K NPs; 1, dynamic light scattering; 2, hydrodynamic diameters of particles; 3, polydispersity indices; 4, electrical charge of particles suspended in the acquisition liquid (water); 5, corresponding values for G5K = 175.7 ± 1.8, 0.129 ± 0.035, +48.0 ± 6.40; 6, refers to CR232-G5K NPs; 7, refers to CR232 contained in solubilized CR232-G5K; §, water-solubility improvements: 8, 2311.1-fold; 9, 733.3-fold; *, LD50 of paclitaxel = 22.4 µM and of G4-PAMAM-NH2 = 4.7 µM.
Determination of MIC Values
The values of the MICs and of the minimum bactericidal concentrations (MBCs) were determined on several clinical isolates of different species of Gram-negative and Gram-positive bacteria, for a total of 36 isolates, and the results obtained are given in Table 2 (Gram-positive) and Table 3 (Gram-negative).

Table 2. MICs and MBCs of CR232-G5K NPs against bacteria of Gram-positive species, obtained from experiments carried out in triplicate, expressed as µM and µg/mL, and those of CR232 released according to the DL% and the release profile of CR232-G5K NPs [20], expressed as µg/mL.

According to the results reported in Tables 2 and 3, with an MIC of 128 µg/mL assumed as the cut-off above which the compound is considered inactive, CR232-G5K NPs showed wide antibacterial profiles and were found to be inactive (MICs > 128 µg/mL) against only three of the 36 strains, i.e., P. mirabilis, M. morganii, and P. stuartii, which represent particularly difficult-to-treat isolates of Gram-negative species. CR232-G5K NPs were clearly active against three MRSA, two out of four KPCs-producing strains of K. pneumoniae, and one out of four MDR P. aeruginosa that is also resistant to the combination avibactam-ceftazidime (MICs = 2.89 µM), while they proved to be very potent against all other Gram-negative (MICs = 0.72-1.44 µM) and Gram-positive bacteria tested, displaying the best antibacterial effects on Enterococci (MICs = 0.36-1.44 µM), S. epidermidis (MICs = 0.36 µM), and B. subtilis (MIC = 0.36 µM). In all cases, MBC values equal to or overlapping the MIC values were observed. The macromolecular complex containing CR232 displayed micromolar MIC values even lower than those observed for a potent cationic copolymer reported to have remarkable broad-spectrum activity [21].
Furthermore, CR232-G5K NPs showed very low MIC values against an MDR strain of P. aeruginosa isolated from a cystic fibrosis patient (MIC = 0.72 µM), against both an MDR strain of P. aeruginosa and a strain of KPCs-producing K. pneumoniae that are also resistant to colistin (MICs = 0.72 µM), and against a strain of MDR P. aeruginosa that is also resistant to the avibactam-ceftazidime combination (MIC = 2.89 µM) clinically used to counteract bacterial resistance to carbapenems [27]. The activity of CR232-G5K NPs against these particularly drug-resistant strains of P. aeruginosa and K. pneumoniae is especially relevant considering that these pathogens represent frightening superbugs responsible for severe nosocomial infections associated with dramatic outcomes [17].

Table 3. MICs and MBCs of CR232-G5K NPs against bacteria of Gram-negative species, obtained from experiments carried out in triplicate, expressed as µM and µg/mL, and those of CR232 released according to the drug loading and the release profile of CR232-G5K NPs [20], expressed as µg/mL.

To our knowledge, except for the BBB4-G4K NPs recently developed and assayed by us as antibacterial agents for future clinical applications [6,28], there are only two other studies in the literature on polymer formulations of pyrazole derivatives [26,29], and only one of those reports the development of pyrazole-based polymers for antibacterial purposes, although for textile finishing rather than therapeutic use [26]. In this regard, the antibacterial activity of CR232-G5K NPs showed a significantly extended spectrum of action in comparison with that of BBB4-G4K NPs [6], being active against most of both the Gram-positive and Gram-negative isolates considered herein. Additionally, compared to BBB4-G4K NPs, CR232-G5K NPs showed good potency against MRSA, while against MRSE they were 10-fold more potent. Therefore, with this new study, we have obtained a significant improvement in the potential of our pyrazole-based dendrimers as antimicrobial agents.
As previously determined [20], CR232-G5K NPs contain 31.7% (w/w) CR232 and, after 24 h, release 99.3% of the entrapped CR232 into the physiological medium. Based on these data, the MIC and MBC values of the free CR232 released into the medium by the amounts of NPs that inhibited bacterial growth were estimated and reported in Tables 2 and 3 (columns 4 and 5). These data were essential to compare the antibacterial effects of CR232 delivered by CR232-loaded NPs with those of pristine CR232 (MICs ≥ 128 µg/mL) and with those of other small molecules containing the pyrazole nucleus previously assayed as antibacterial agents. As summarized in Tables 2 and 3, the MIC values estimated for the released CR232 were in the range of 5.05-20.2 µg/mL for Gram-positive bacteria (except for three MRSA isolates (MICs = 40.4 µg/mL)) and in the range of 10.1-20.2 µg/mL for Gram-negative bacteria (except for P. mirabilis, M. morganii, and P. stuartii (MICs > 40.4 µg/mL), for two out of four KPCs-producing K. pneumoniae, and for one out of four MDR P. aeruginosa that is also resistant to the combination avibactam-ceftazidime (MICs = 40.4 µg/mL)). In all cases, the estimated MBC values doubled or overlapped the observed MICs. According to these values, the insignificant antibacterial effects of pristine CR232 (MICs ≥ 128 µg/mL) were improved by more than 3.2-25.3 times against bacteria of Gram-positive species and by more than 3.2-12.7 times against those of Gram-negative ones. Interestingly, the antibacterial effects of a series of 5-amido-1-(2,4-dinitrophenyl)-1H-4-pyrazolecarbonitriles, structurally similar to CR232 owing to the presence of a nitrile group on C4, were previously evaluated against bacterial collections. Notably, they were tested on American Type Culture Collection (ATCC) representatives of MRSA and MSSA, as well as against Persian Type Culture Collection (PTCC) representatives of P. aeruginosa and B. subtilis, and on an isolate of E. coli [30]. Curiously, the MICs observed on P. aeruginosa and B. subtilis were not reported. In any case, unlike CR232 released by NPs, which proved to have remarkable antibacterial effects, all the reported compounds were inactive against E. coli (MIC > 400 µg/mL vs. MICs = 20.2 µg/mL for CR232), while most compounds were less active than the CR232 released by CR232-G5K NPs against MRSA [30].
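The released-CR232 estimates described at the start of this paragraph are simple arithmetic on the NP dose. A sketch, assuming the 31.7% DL and 99.3% 24-h release reported in [20]; small rounding differences from the tabulated values are expected.

```python
DL_PERCENT = 31.7    # drug loading of CR232-G5K NPs (w/w), from [20]
RELEASE_24H = 99.3   # % of the loaded CR232 released after 24 h, from [20]

def released_cr232(np_mic_ug_ml):
    """Free CR232 (µg/mL) delivered by an NP dose equal to its MIC."""
    return np_mic_ug_ml * (DL_PERCENT / 100.0) * (RELEASE_24H / 100.0)

for np_mic in (16, 32, 64, 128):  # NP MICs on the µg/mL scale
    print(f"NP MIC {np_mic:>3} µg/mL -> ~{released_cr232(np_mic):.1f} µg/mL CR232")
```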
In another study, conducted by Bekhit and Abdel-Aziem, a library of 12 pyrazole derivatives was assayed on bacterial collections, including ATCC isolates of E. coli and S. aureus, observing MICs from 50 to >200 µg/mL and in the range of 12.5->200 µg/mL, respectively. Considering only the most active pyrazole derivatives developed by Bekhit and colleagues (compounds 7 and 12a), 12a displayed MICs of 50 µg/mL against E. coli and 25 µg/mL against S. aureus, while 7 displayed MICs of 12.5 µg/mL against S. aureus and 100 µg/mL against E. coli. According to these results, even if less active than 7 against S. aureus, CR232 was 2.5-fold more active than 12a and 5-fold more potent than 7 on E. coli [31], thus establishing the high potency of the nanoengineered CR232 against bacteria of Gram-negative species, which, more than Gram-positive isolates, represent a global health concern [17].
Moreover, especially against E. coli and P. aeruginosa, the CR232 released by the delivery system developed in this study displayed MBC values similar to or lower than those obtained by the majority of a series of pyrazole-thiosemicarbazones (3a-g) and their pyrazole-thiazolidinone conjugates (4a-g) recently reported by Ebenezer and colleagues [33]. In particular, against E. coli, the MBCs estimated for CR232 were lower than those previously observed for compounds 3c, 3d, 3g, and 4c (MBCs = 20.2 µg/mL vs. MBCs = 137.9, 66.8, 69.6, and 150.5 µg/mL), while against P. aeruginosa, the MBCs of CR232 were lower than those observed for 3b, 3e, 3f, 4a, and 4c (MBCs = 20.2 µg/mL vs. MBCs = 139.3, 71.5, 134.2, 162.6, and 37.6 µg/mL), where compounds 3b, 3e, 3f, and 4a were completely ineffective against E. coli. Additionally, CR232 showed MBC values about 4.5-fold lower than those obtained for compounds 3e and 3g against K. pneumoniae, and MBCs 1.7- and 1.9-fold lower than those determined for compounds 3b and 4c against MRSA. In any case, we note that in the study by Ebenezer, as well as in all the other previous studies reviewed herein, the bacterial cells used to assay the pyrazole derivatives developed by the authors were always derived from ATCC or other bacterial collections. In this regard, we think that an overall merit of our study consists in having tested our compound on clinically relevant MDR bacterial isolates from infected patients. Interestingly, against P. aeruginosa, which represents one of the most frightening superbugs responsible for severe nosocomial infections associated with dramatic outcomes [17], the nanoengineered CR232 released by NPs displayed very low MIC values (MICs = 10.1, 10.1, and 40.4 µg/mL, respectively) against one strain isolated from a patient affected by cystic fibrosis, one resistant to colistin, and one resistant to the recently developed association of avibactam-ceftazidime. Similarly, very low MICs (10.1 µg/mL) were also determined against a clinical isolate of KPCs-producing K. pneumoniae resistant to colistin, which is responsible for untreatable severe neonatal bacteremia.
The MIC values observed for CR232-G5K NPs and those estimated for the nanoengineered CR232 (free compound) delivered by the dendrimer formulation were compared to the MICs of commonly used antibiotics against the specific Gram-positive (Table 4) and Gram-negative (Table 5) pathogens. As the molecular weights (MW) of CR232-G5K NPs and of the antibiotics are very different, we compared the MIC values expressed as micromolar concentrations (µM), which indicate how many molar equivalents of the substance under investigation were administered to the bacteria to obtain inhibition. Conversely, since CR232 (a small molecule) released by the NPs has an MW similar to those of the antibiotics, that comparison was carried out on the usual µg/mL scale.

Table 4. MIC values of CR232-G5K NPs and those estimated for the CR232 (free compound) delivered by the dendrimer formulation against bacteria of Gram-positive species, obtained from experiments carried out in triplicate, and those of reference antibiotics, expressed as µM and µg/mL.

Table 5. MIC values of CR232-G5K NPs and those estimated for the CR232 (free compound) delivered by the dendrimer formulation against bacteria of Gram-negative species (the isolates against which CR232-G5K NPs were inactive have been omitted), obtained from experiments carried out in triplicate, and those of reference antibiotics, expressed as µM and µg/mL.

Accordingly, on all Gram-positive bacteria, the MICs (µM) of CR232-G5K NPs were exceptionally lower than those of the reference antibiotics, while those of CR232 released by the NPs, expressed as µg/mL, were lower by 3.2-50.7 times.
Similarly, on all isolates of Gram-negative species, CR232-G5K NPs proved to be extraordinarily more potent than the reference antibiotics, while CR232 released from the NPs displayed MICs lower by 1.6-6.3 times for all but two out of four isolates of KPCs-producing K. pneumoniae. Notably, both CR232-G5K NPs and the nanoengineered CR232 released by the NPs emerged as active against colistin-resistant P. aeruginosa and K. pneumoniae strains, with MIC values of 0.72 µM vs. 18.5 µM for colistin (CR232-G5K NPs) and of 10.1 µg/mL vs. 16 µg/mL for colistin (CR232), respectively. Considering that colistin is currently the last therapeutic option against P. aeruginosa isolates resistant to all other antibiotics, including carbapenems, and against carbapenem-resistant hypervirulent K. pneumoniae (CR-hvKP), and that, due to the emergence of colistin-resistant strains, it may soon no longer be usable, the identification of a new antibacterial agent that is also active against colistin-resistant P. aeruginosa and K. pneumoniae represents an exceptional achievement of this study.
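The µg/mL-to-µM comparisons above rest on a standard conversion. A one-line sketch, using the CR232 molar mass implied by the abstract's figures (128 µg/mL ≈ 376.8 µM, hence MW ≈ 339.7 g/mol); for other compounds the appropriate MW must be substituted.

```python
def ug_ml_to_um(conc_ug_ml, mw_g_mol):
    """Convert µg/mL to µM: divide by molar mass (g/mol), multiply by 1000."""
    return conc_ug_ml / mw_g_mol * 1000.0

MW_CR232 = 339.7  # g/mol, implied by 128 µg/mL ~= 376.8 µM in the abstract
print(f"128 µg/mL CR232 = {ug_ml_to_um(128.0, MW_CR232):.1f} µM")
```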
Time-Killing Curves
Time-kill experiments were performed with CR232-G5K NPs at concentrations equal to 4 × MIC on at least four strains per species of P. aeruginosa, S. aureus, and E. coli, including one colistin-resistant isolate of P. aeruginosa (strain 265), one P. aeruginosa strain also resistant to the combination of avibactam-ceftazidime (strain 259), one isolate of E. coli producing NDMs (strain 462), and one MRSA (strain 187). As depicted in Figure 2, showing the curves obtained for the strains specified above, CR232-G5K NPs displayed an extremely strong bactericidal effect against all the tested pathogens, causing an immediate and rapid decrease in the original cell number, which led to total extinction of bacteria after only 2 h of exposure to CR232-G5K NPs. No significant difference among the various strains tested was observed, regardless of their specific resistances to different antibiotics. During the next two hours, and up to 24 h, no significant regrowth was observed for any of the isolates, including the colistin-resistant P. aeruginosa isolate. Considering the difficulty in the treatment of P. aeruginosa strains that have developed resistance to colistin, the rapid bactericidal profile demonstrated by the CR232 formulation developed here must certainly be considered of considerable interest for desirable clinical use.

It is noteworthy that, although the herbicidal and fungicidal effects of molecules containing the pyrazole ring have been reported [34-36], to our knowledge the only pyrazole-containing molecule known to possess bactericidal activity against certain Gram-negative and Gram-positive bacteria is ceftolozane, a semi-synthetic, broad-spectrum, fifth-generation cephalosporin antibiotic. Unfortunately, since it is inactivated by β-lactamase enzymes produced by several MDR bacteria, this drug is administered in combination with tazobactam, a β-lactamase inhibitor [27]. While tazobactam is capable of preventing ceftolozane inactivation by serine-type β-lactamases, it is unable to protect it from metallo-β-lactamases, thus establishing the inactivity of the ceftolozane/tazobactam combination against NDMs-producing bacteria [27]. Therefore, our work is the first study in which the bactericidal properties of a pyrazole-containing macromolecule have been successfully investigated by first measuring MBC values and then running time-kill experiments, which confirmed that CR232-G5K NPs possess a very potent and very rapid bactericidal effect on clinically relevant Gram-positive and Gram-negative strains, including MDR isolates and even isolates producing NDMs, against which, currently, no combination antibiotic/inhibitor is clinically approved [27].
Cytotoxicity of G5K, CR232, and CR232-G5K NPs on HaCaT Human Keratinocyte Cells
In addition to a proper water solubility, a new antibacterial agent should ideally inhibit the bacterial cell selectively, without damaging the eukaryotic one. This capability can be assessed by determining the selectivity index (SI), given by the ratio between the concentration of the antibacterial agent capable of killing 50% of eukaryotic cells (LD50) and the MIC values. Thus, envisaging a possible cutaneous use of CR232-G5K NPs, a dose- and time-dependent cytotoxicity study was performed on human keratinocytes (HaCaT) to evaluate the effects on cell viability of pristine CR232, of CR232-G5K NPs, and of the nano-manipulated CR232 provided by the quantity of NPs administered. HaCaT cells were selected as they are the principal cell type found in the epidermis and are more susceptible to colonization by bacteria, fungi, and parasites. Since in our previous work we reported the cytotoxicity of the empty dendrimer G5K on HeLa cells [20], for comparison purposes we herein evaluated the cytotoxicity of G5K on HaCaT cells. The cytotoxic activity of CR232, CR232-G5K NPs, and G5K at concentrations of 1, 5, 10, 15, 20, 25, 50, 75, and 100 µM was determined after 4, 12, and 24 h of exposure to the cells. The results are shown in Figure S8, Section S2 (SM).
The cytotoxic effects of all compounds tested were strongly influenced by the exposure time. Thus, after 4 h, G5K was not cytotoxic even at the highest concentration tested (100 µM), leaving 88.5% of the cells alive and showing proliferation relative to the control (viability > 100%) at concentrations in the range of 10-25 µM. At this exposure time, the cytotoxicity of CR232 and that of CR232-G5K NPs were very similar. A slight proliferation was observed at 5-10 µM concentrations, while at higher concentrations a slow decrease in cell viability was observed, down to 41% viability at 100 µM. After 12 h of exposure, the toxic effects were higher for all samples, with G5K being the most toxic compound and CR232 the least toxic one at concentrations in the range of 1-15 µM. At the highest concentration of 100 µM, the cell viability decreased below 50% for all compounds, with very similar results for G5K and CR232 (40.4 and 40.8%, respectively), while for CR232-G5K NPs the cell viability was 22.2%. After 24 h of exposure, the cytotoxicity of all compounds increased further, with CR232 being the least toxic compound at low concentrations (1-5 µM). At higher concentrations, the cytotoxicity of G5K and of CR232 was very similar, and for both compounds the cell viability decreased below 50% at a concentration of 25 µM, reaching very low values at 100 µM. Curiously, for CR232-G5K NPs, the cell viability decreased below 50% at a concentration of 5 µM, but at 100 µM the residual viability indicated a cytotoxicity lower than that of the empty dendrimer and of the pyrazole derivative.
With the aim of understanding whether the nanotechnological manipulation of CR232 reduced or increased its cytotoxicity, and to calculate the SI values (LD50/MIC) of CR232, of CR232-G5K NPs, and of the nanoengineered CR232 provided by the dose of NPs administered, we plotted the cell viability (%) obtained at 24 h of exposure against the concentrations of G5K, CR232, and CR232-G5K NPs. Next, based on the DL% value previously determined for CR232-G5K NPs (31.7%) [20], we estimated the concentrations of the nanoengineered CR232 provided by the formulation (41.2-4120.0 µM), which contributed to the cytotoxic effects observed upon the administration of CR232-G5K NPs at 1-100 µM. The obtained curves are shown in Figure S9, Section S2 (SM). Using appropriate parts of the curves in Figure S9 and the equations of the regression models that best fit the related dispersion graphs, we determined the desired LD50 values. Figure 3a-c shows the dispersion graphs used, the best fitting regression models, and the related equations of G5K, CR232 (concentrations 1-100 µM), CR232-G5K NPs (concentrations 1-15 µM), and the nanoengineered CR232 (concentrations 41.2-824.2 µM), used to compute their LD50.
The best fitting regression models, which were polynomial for G5K and CR232 and exponential for CR232-G5K NPs and for the nanoengineered CR232 provided by the NPs, were chosen based on the value of the related coefficient of determination, R². Moreover, since at concentrations of CR232-G5K NPs > 15 µM and of nano-manipulated CR232 > 824.2 µM the cell viability remained constant, data above these concentrations were not considered when obtaining the related dispersion graphs, their tendency lines, or their equations. The obtained equations, their R² values, the computed LD50 values, and the ranges of SI obtained for untreated CR232, for CR232-G5K NPs, and for the nanoengineered CR232 provided by the NPs using Equation (1) are reported in Table 6; the SI values of CR232-G5K NPs computed for the individual isolates are given in Tables 2 and 3.

SI = LD50/MIC (1)

where LD50 is the lethal dose (µg/mL or µM) of the antibacterial agent against HaCaT cells and MIC is the minimum inhibitory concentration (µg/mL or µM) displayed by the same molecule against bacteria.

According to the LD50 data reported in Table 6, although the nano-formulation of CR232 developed here seems to be considerably cytotoxic, having an LD50 value of 5.6 µM (3.4-fold lower than that of G5K and 4.0-fold lower than that of untreated CR232), according to the MICs observed on the isolates used in this study its SI values (2-16) were remarkably higher than those determinable for pristine CR232, assuming MICs ≥ 128 µg/mL (SI ≤ 0.05789). In particular, the SI values of CR232-G5K NPs were 34.5-276.4-fold higher than those of CR232, thus establishing that, by formulating CR232 in NPs using G5K, a potent antibacterial agent was obtained that is more promising than the pristine pyrazole for application in therapy. Additionally, considering the LD50 values determined for the nanoengineered CR232 provided by the concentrations of NPs administered to the HaCaT cells (236.1 µM), it can be established that our nanotechnological strategy, in addition to improving the solubility of CR232 in water and its antibacterial effects, also reduced its cytotoxicity by 10.8 times. To date, the scientific community does not agree on the criterion for assessing the minimum acceptable value of SI. It has been reported that SI values ≤ 5.2 were acceptable for South African plant leaf extracts with antibacterial properties, that antibacterial plant extracts were considered bioactive and non-toxic if SI > 1, and that SI should not be less than 2 [37-40]. Theoretically, the higher the SI ratio, the more effective and safer a compound would be during in vivo treatment of a given bacterial infection. Collectively, the SI values determined in this study for CR232-G5K NPs (in the range 2-16) could be high enough to suggest a promising role as an antibacterial agent suitable for future clinical development.
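A sketch of the LD50-and-SI computation described above: the dose-response points are invented, the quadratic fit stands in for the paper's model selection by R², and the two MICs are taken from the values quoted for Tables 2 and 3.

```python
import numpy as np

# Invented 24 h dose-response points (concentration in µM, viability in %).
conc = np.array([1, 5, 10, 15, 20, 25, 50, 75, 100], dtype=float)
viab = np.array([95, 88, 80, 71, 63, 55, 34, 24, 18], dtype=float)

# Fit a 2nd-order polynomial and solve viability(conc) = 50 for the LD50.
coeffs = np.polyfit(conc, viab, deg=2)
roots = np.roots(coeffs - np.array([0.0, 0.0, 50.0]))
ld50 = min(r.real for r in np.atleast_1d(roots)
           if abs(r.imag) < 1e-9 and r.real > 0)
print(f"LD50 ~= {ld50:.1f} µM")

# Selectivity index per isolate, Equation (1): SI = LD50 / MIC.
mics_um = {"MRSA": 2.89, "S. epidermidis": 0.36}  # µM, from Tables 2 and 3
for strain, mic in mics_um.items():
    print(f"SI vs {strain}: {ld50 / mic:.1f}")
```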
Conclusions
A CR232-loaded dendrimer formulation (CR232-G5K NPs) was previously synthetized to solve the very poor water solubility of CR232, which had prevented the determination of reliable MIC values. In the present study, CR232-G5K NPs displayed potent, broad-spectrum antibacterial activity, with low micromolar MICs against almost all of the 36 clinical isolates of Gram-positive and Gram-negative species tested, including MDR strains. Furthermore, CR232-G5K NPs also displayed very low MIC values (MIC = 0.72 µM) against colistin-resistant P. aeruginosa and K. pneumoniae isolates, which are currently untreatable with the available antibiotics. In addition, CR232-G5K NPs were effective against an isolate of P. aeruginosa resistant to the combination avibactam-ceftazidime developed to counteract bacterial resistance to carbapenems.
Estimates of the MICs of the nanoengineered CR232 released by the NPs at 24 h were in the range of 5.05-40.4 µg/mL against Gram-positive species and 10.1-40.4 µg/mL against Gram-negative ones.
In all cases, both the MICs of CR232-G5K NPs and those estimated for the CR232 released by the NPs were lower than those assumed for pristine CR232, which are difficult to determine due to its water insolubility, thus confirming that our nanotechnological approach not only succeeded in obtaining a water-soluble CR232 formulation suitable for both in vitro investigations and future in vivo applications, but also improved the antibacterial potency of CR232.
Additionally, in time-kill experiments carried out on strains of P. aeruginosa, S. aureus, and E. coli, including one colistin-resistant P. aeruginosa, one avibactam/ceftazidime-resistant P. aeruginosa, as well as NDMs-producing E. coli and MRSA strains, CR232-G5K NPs displayed an extremely strong and rapid bactericidal effect against all the tested pathogens, regardless of their resistance to antibiotics. Finally, to evaluate the possible future clinical application of CR232-G5K NPs as a therapeutic agent, especially for skin infections, we examined their cytotoxicity on human keratinocyte cells (HaCaT) to determine the selectivity indices (SI) both of CR232-G5K NPs and of the CR232 provided by the NPs upon administration of the dendrimer formulation. Accordingly, by formulating CR232 in NPs, the SI values of pristine CR232 were improved by 34.5-276.4 times. Additionally, according to the LD50 values determined for the nanoengineered CR232 provided by the concentrations of NPs administered to the HaCaT cells, our nanotechnological strategy reduced the cytotoxicity of CR232 by 10.8 times. The results obtained in this study establish that, by formulating CR232 in NPs using G5K, a potent antibacterial agent more promising than the pristine pyrazole was obtained, which could be applied in therapy.
Supplementary Materials:
The following are available online at https://www.mdpi.com/article/ 10.3390/biomedicines10040907/s1, Section S1. Synthesis and Characterization of CR232-loaded Dendrimer Nanoparticles (CR232-G5K NPs). Figure S1. SEM images of G5K (a) and CR232-G5K (b) particles. Table S1. Data of the calibration curve: A mean , C CR232 , C CR232p , residuals, and absolute errors (%). Figure S2. CR232 linear calibration model. Table S2. Values of A obtained for the five concentrations of CR232-G5K NPs analysed and the related C CR232 obtained from Equation (1). Results concerning the concentration of CR232 in CR232-G5K NPs, DL%, EE%, molecular formula, and MW of CR232-loaded NPs, as well as the difference between the MW obtained by 1 H NMR that was computed using UV-Vis, expressed as error %. Table S3. Results obtained from solubility experiments performed on untreated CR232 and CR232-G5K NPs. Table S4. Results obtained from DLS analyses on G5K NPs and CR232-G5K NPs: particle size (Z-ave, nm), PDI, and ζ-p. Figure S3. Titration curves (error bars not reported since they are difficult to detect) (a); β values vs. pH values and values of β means of CR232-G5K NPs (presented as bars graph) and of three PAMAM of fourth generation for comparison (b). Figure S4. CR232 CR % at pH 7.4 for 24 h obtained by both weighting the CR232 amounts passed in PBS solutions at fixed time points (purple line) and by UV-Vis analyses on these samples re-dissolved in DMSO (orange line). The error bars have not been reported on the graph to avoid confusion. Figure S5. Linear regression of the Weibull kinetic mathematical model with the related equation and R 2 value. Section S2. Biological Investigations. Figure S6. Cell viability (%) of HeLa cells exposed for 24 h to G5K and G4-PAMAM-NH 2 0-100 µM and the polynomial tendency line associated to the curve obtained for G5K with Equation (6) used to determine the LD 50 of G5K. Figure S7. Cell viability (%) of HeLa cells exposed for 24 h to G4-PAMAM-NH 2 0-14 µM and the associated polynomial tendency line with Equation (7) used to determine the LD 50 . Table S5. Equations (6) and (7) and the LD 50 values of G5K and of G4-PAMAM-NH 2 on HeLa cells (24 h). Figure S8. Dose-and time-dependent cytotoxicity activity of G5K, CR232, and CR232-G5K NPs at 4 h, 12 h, and 24 h towards HaCaT cells. Table S6. Results from statistical analysis. The statistical significance of differences between experimental and control groups was determined via a two-way analysis of variance (ANOVA) with the Bonferroni correction. Figure S9. Dose-depended cytotoxicity curves obtained when reporting the cell viability % vs. the concentrations of G5K, CR232, CR232-G5K NPs at 24 h of exposition and the amount of nanoengineered CR232 provided by the quantity of CR232-G5K NPs administered in graph. References [20,28,[41][42][43][44] are cited in the supplementary materials.
A novel procedure to investigate social anxiety using videoconferencing software: A proof-of-concept study
Social anxiety disorder (SAD) is very common and can be significantly disabling. New treatments are needed as the remission rate for SAD is the lowest of all the anxiety disorders. Experimental medicine models, in which features resembling a clinical disorder are experimentally induced, are a cost-effective and timely approach to explore potential novel treatments for psychiatric disorders. Following the emergence of SARS-CoV-2, there is a need to develop experimental medicine models that can be carried out remotely. We developed a novel procedure to investigate SAD (the InterneT-based Stress test for Social Anxiety Disorder; ITSSAD) that can be carried out entirely online by a single investigator, potentially reducing costs and maximising internal reliability. The procedure involves an anticipatory period followed by a naturalistic social interaction task. In a sample of 20 non-treatment-seeking volunteers with symptoms of SAD, the ITSSAD induced significant subjective anxiety and reduced positive affect. Further, increased social anxiety symptoms at baseline predicted increased anxiety during the social interaction task. This protocol needs further validation with physiological measures. The ITSSAD is a new tool for researchers to investigate mechanisms underlying social anxiety disorder.
Introduction
Social anxiety disorder (SAD) is one of the most common mental disorders, with an estimated lifetime prevalence of more than 6% in Europe (Fehm et al., 2005). SAD can be significantly disabling due to excessive apprehension regarding social situations, leading to avoidance and an impairment in functioning (Hendriks et al., 2016). New treatments for SAD are needed, as only 64.9% of patients remit after 4 years, the lowest remission rate of all the anxiety disorders (Hendriks et al., 2016). Experimental medicine models, in which important features resembling a clinical disorder are experimentally induced, can be a cost-effective and timely approach to explore potential novel treatments for psychiatric disorders (Baldwin et al., 2017). Following the emergence of the SARS-CoV-2 pandemic, in-person research and social contacts have been restricted in many parts of the world. This has highlighted the need for tasks and experimental procedures that can be conducted virtually or online to allow research into anxiety disorders to continue (Kirschbaum, 2021).
A key element in the development of SAD is social-evaluative threat (Clark and Wells, 1995; Wong et al., 2020; Wong and Rapee, 2016). Social-evaluative stimuli are those that implicitly or explicitly communicate judgement of a person, for example facial expressions, eye contact or behaviours such as applauding or leaving a room (Wong and Rapee, 2016). It is thought that a combination of trait factors such as inherited temperament, culture, parent behaviour and previous life events lead to these social-evaluative stimuli being appraised as threatening (Wong and Rapee, 2016). Resultant changes in neurobiology, cognition and behaviours designed to detect and eliminate threatening social-evaluative situations (e.g. amygdala overactivity, anticipatory and post-event processing, avoidance behaviour) might be important in maintaining that high level of threat (Nelemans et al., 2017; Wong and Rapee, 2016).
A number of tasks have employed social-evaluative situations to investigate stress. Possibly the most widely used paradigm that involves the induction of social-evaluative stress in laboratory conditions is the Trier Social Stress Test (TSST) (Dickerson and Kemeny, 2004; Frisch et al., 2015; Kirschbaum et al., 1993). The TSST involves a short preparation period followed by a public speaking task and surprise mental arithmetic task performed in front of an observing panel of two or more experimenters (Kirschbaum et al., 1993). This task reliably induces subjective stress and anxiety, and worsens negative mood (Allen et al., 2014). Two studies in adults (Eagle et al., 2021; Harvie et al., 2021) and one in adolescents (Gunnar et al., 2021) have shown that a TSST administered via videoconferencing platforms can induce as robust a stress response as an in-person version. Variations on the TSST include giving a short speech observed by one or more judges (Kocovski et al., 2011), performing mental arithmetic while being shown feedback about 'expected performance' (Dedovic et al., 2005), and singing in front of an audience (Brouwer and Hogervorst, 2014). Challenges with carrying out such tasks include the logistics of organising an observing panel and controlling for potential confounds such as the gender composition of the panel and their behaviour (Frisch et al., 2015; Narvaez Linares et al., 2020). In addition, although these tasks induce considerable stress, it is unclear whether this stress is consistent with symptoms of SAD. For example, cortisol reactivity is usually an outcome measure of the TSST, and heightened cortisol responses to the TSST (on a population level) have been associated with the prevalence of stress-related disorders generally, not SAD specifically (Miller and Kirschbaum, 2019). Further, those with social anxiety have demonstrated both increased (Roelofs et al., 2009) and reduced (Crișan et al., 2016; Shirotsuki et al., 2009) cortisol responses to the TSST compared with healthy volunteers. Task design has been highlighted as a potential reason for these inconsistent results (Crișan et al., 2016). Furthermore, while patients with SAD do experience fear during public speaking, this is not a specific feature, as many individuals might experience this, without having SAD (Panayiotou et al., 2017).
Here we report a proof-of-concept study to highlight a novel social interaction paradigm designed to induce social anxiety, employing a naturalistic social interaction and videoconferencing software (the InterneT-based Stress test for Social Anxiety Disorder; ITSSAD). The ITSSAD includes a simple task involving 'getting to know' another person, which can induce significant anxiety in those with SAD and significant physiological arousal in healthy volunteers with high levels of social anxiety (Nordahl et al., 2016; Shalom et al., 2015). We hypothesised that this naturalistic task, likely to be encountered in daily life by those with SAD and easily reproducible online, would induce detectable anxiety in an online experimental setting. We focused on measuring subjective anxiety as experimentally-induced subjective stress is positively associated with sub-clinical and clinical social anxiety symptoms (Panayiotou et al., 2017; Taylor et al., 2020). Further, there is evidence that subjective stress reactivity in social situations is an important factor in the maintenance of SAD (Nelemans et al., 2017). If subjective stress/anxiety induced by an experimental model of SAD were associated with trait social anxiety symptoms, rather than a more generalized measure of trait anxiety, this would suggest that subjective acute anxiety is specific and relevant for SAD. We therefore hypothesised that the anxiety induced by the ITSSAD task would be associated with trait social anxiety symptoms specifically and not with a generalized trait anxiety measure.
The ITSSAD
The ITSSAD (Fig. 1) can be carried out entirely online. We designed the ITSSAD to induce anxiety through anticipation, and subsequent experience of, a naturalistic social-evaluative situation (Allen et al., 2017; Dickerson and Kemeny, 2004). The ITSSAD begins with a 5-minute anticipatory period. During this, we showed participants task instructions as follows: "In 5 minutes you will take part in a social interaction online using videoconferencing software. Your task will be to take some time to get to know the other person as you normally would. Just be yourself. You can talk about anything you want other than this experiment. You will be watched by 3 other experimenters who will be assessing your behaviour. We would like you to have your camera on during this interaction." After the anticipatory period, participants enter a videoconference. Present in the videoconference is an experimenter who introduces themselves as the person the participant is tasked with 'getting to know' and introduces a (mock) observing panel of 'experts' who are present to monitor the participants' behaviour. These appeared to be attendees to the videoconference who had turned their cameras off, but in reality were 'dummy' accounts logged into by the experimenter on other devices/browser windows and placed on mute. This allowed us to maintain a social-evaluative context whilst only having one experimenter. A previous study investigated whether a judging panel needed to be visible to induce stress during a public speaking task in healthy male volunteers: there was no significant difference in physiological stress between those who completed the task in front of a visible panel, and those who completed the task while the panel was behind a one-way mirror (Andrews et al., 2007). This indicates that the suggestion of the presence of a panel is adequate to induce a social-evaluative context and subsequent anxiety. We also named the dummy accounts to indicate the panel contained a mix of genders, as this has been shown to induce greater stress than a single-gendered panel (Narvaez Linares et al., 2020). We ensured that, along with the experimenter, two apparently male and two apparently female 'experimenters' were visible to the participant. To reinforce the social-evaluative context, the experimenter also informs the participant that the interaction will be recorded for review later.
The experimenter then begins a 5-minute timer and participants are asked to begin the social interaction task. The experimenter was briefed not to initiate conversation, instead allowing the participant to sit in silence if they did not initiate conversation. Experimenters were briefed to respond with non-elaborate verbal answers to questions posed by a participant whilst maintaining as neutral a facial expression as possible, much like the judging panel in the original TSST (Allen et al., 2017; Kirschbaum et al., 1993). If a silence lasted more than 30 seconds, the experimenter could prompt the participant with a short statement, for example, "I am a student at the university".
On completion of the 5 minutes, participants enter a recovery period. During this time, participants complete a coached mindfulness exercise. Mindfulness strategies are known to reduce post-event processing in SAD (Cassin and Rector, 2011; Shikatani et al., 2014) and so this was included as a 'mood repair'. Following the mindfulness task participants are fully debriefed.
Design of proof-of-concept study

Ethics statement
This study was reviewed and approved by the Ethics and Research Governance Office at the University of Southampton (reference: 61411) and performed in accordance with relevant local guidelines and regulations and the Declaration of Helsinki. Prior to starting the study, participants were informed that the aim was to explore social anxiety symptoms during videoconferencing. As the protocol involved some deception (described above), participants were fully debriefed at the end of the study and informed consent was sought a second time for us to retain their data. No participants withdrew consent for their data to be used.
Participants
For this proof-of-concept study, we recruited 20 participants aged 18-45 years with sub-clinical to clinical social anxiety symptoms. We felt a practical paradigm for exploring social-evaluative threat should induce anxiety with a large effect size. In a within-subjects design, a sample size of 20 participants will detect an effect of at least d = 0.66 with 80% power. Therefore, if subjective anxiety was significantly induced in this study, the effect would likely be moderate to large and suggest proof-of-concept of the paradigm. Social anxiety symptoms were assessed through the Social Phobia Inventory (SPIN): a validated 17-item self-rated questionnaire (Connor et al., 2000). Participants with a SPIN of greater than 14 were included. This cut-off can differentiate between those with SAD of varying intensity and those with no social anxiety symptoms (Connor et al., 2000). We excluded participants via a self-report questionnaire if they reported: any current psychiatric disorder other than SAD; any history of psychosis or bipolar affective disorder; any significant physical illness; any recent treatment (either psychological or any systemic medication excluding paracetamol in the preceding 8 weeks); regularly using illicit substances; consuming more than 21 units of alcohol per week; or consuming more than 8 caffeinated drinks a day.
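The power statement above can be checked with a short calculation in R (the language used for the analysis below). This is a minimal sketch assuming a two-sided paired t-test at alpha = 0.05, details the paper does not spell out explicitly:

```r
# Smallest detectable effect for n = 20, power = 0.80, within-subjects design.
# With sd = 1, the returned "delta" is directly interpretable as Cohen's d.
power.t.test(n = 20, power = 0.80, sd = 1, sig.level = 0.05,
             type = "paired", alternative = "two.sided")
# delta comes out at roughly 0.66, matching the d = 0.66 quoted above
```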
Study procedure
The study was carried out using both Qualtrics XM online survey software (https://www.qualtrics.com) (Qualtrics, 2021) and Microsoft Teams (https://teams.microsoft.com) (Microsoft, 2021). Participants initially completed a screening questionnaire that included the SPIN. Those who were eligible were then invited to attend a test session. Participants entered the test session via a private, personalised link sent to them by e-mail. The participants completed the test session from a private space of their choosing.
To fully characterize this non-treatment-seeking sample, on entry into the session, participants completed the following questionnaires to assess trait anxiety and personality characteristics: Social Interaction Anxiety Scale (SIAS; Mattick and Clarke, 1998), Brief Fear of Negative Evaluation Scale (Brief FNE; Leary, 1983), and a modified version of the Generalised Anxiety Disorder 7-item (GAD-7; Spitzer et al., 2006), where each question was represented by a visual analogue scale ranging from "not at all" to "nearly every day". After these assessments, participants completed the ITSSAD as described above.
Outcome measures
We measured subjective anxiety and mood before (at session baseline) and after the anticipatory period, and after the social interaction task. At all three timepoints, participants were asked to complete the modified GAD-7 with visual analogue scales ranging from "not at all" to "all of the time". Each item on this version of the GAD-7 was scored between 0 and 100. All items were then summed to give a total score (maximum 700). This version of the GAD-7 has been shown to be sensitive to state changes in anxiety with high resolution (Huneke et al., 2020). The GAD-7 questionnaire also captures social anxiety symptoms with good sensitivity (Kroenke et al., 2007). Subjective mood was assessed at all three timepoints through the Positive and Negative Affect Schedule (PANAS; Watson et al., 1988).
Statistical analysis
We carried out statistical analysis using the afex package in R (https://CRAN.R-project.org/package=afex) (Singmann et al., 2021). We assessed change in anxiety and mood over time through linear mixed-effects models (estimated using restricted maximum likelihood). Time was entered as a fixed effect while participant was included as a random effect. We chose to analyse the data through linear mixed-effects modelling as this allows greater retention of data when repeated measures are unbalanced, e.g. due to dropouts during the study. In this study, one participant dropped out prior to completing the anticipatory period and a further participant dropped out prior to completing the social interaction task. Linear mixed-effects modelling allowed us to retain datapoints already collected for these participants in the analysis. Where there was a significant effect of time (degrees of freedom calculated via Satterthwaite's method), we carried out post-hoc pairwise comparisons (t-tests) to assess for significant differences between timepoints. All statistical hypotheses were two-tailed and significance values for post-hoc comparisons were adjusted using the Tukey method.
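As an illustration of this modelling approach, a minimal R sketch using afex and emmeans is shown below. The data frame `dat` and its column names (`id`, `time`, `gad7`) are hypothetical stand-ins for the study's actual variables, not the authors' code:

```r
library(afex)     # mixed models with Satterthwaite F tests (wraps lme4/lmerTest)
library(emmeans)  # estimated marginal means and post-hoc contrasts

# dat: long format, one row per participant x timepoint, with columns
#   id   - participant identifier (factor)
#   time - factor: baseline / post_anticipation / post_task
#   gad7 - modified GAD-7 total (sum of seven 0-100 VAS items, max 700)
m_time <- mixed(gad7 ~ time + (1 | id), data = dat, method = "S")
m_time  # F test for the fixed effect of time, Satterthwaite dfs

# Tukey-adjusted pairwise comparisons between timepoints
pairs(emmeans(m_time, ~ time), adjust = "tukey")
```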
To assess whether anxiety during the social interaction task was related to social anxiety symptoms, as opposed to trait generalized anxiety, we created an exploratory linear mixed-effects model including interaction terms of time*SPIN and time*trait GAD-7 as fixed effects, with participant included as a random effect. Both SPIN and GAD-7 variables were centered on the mean prior to carrying out the analysis. The significance of the interactions was tested through two-tailed F tests (degrees of freedom calculated via Satterthwaite's method). We explored the direction of significant interactions through analysis of simple main effects.
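The exploratory interaction model can be sketched in the same framework. Again the column names (`spin`, `gad7_trait`) are assumptions; centering here uses the grand mean of the long-format data, whereas centering on one value per participant would be marginally more precise:

```r
# Mean-center the trait predictors before fitting, as described above
dat$spin_c       <- dat$spin       - mean(dat$spin, na.rm = TRUE)
dat$gad7_trait_c <- dat$gad7_trait - mean(dat$gad7_trait, na.rm = TRUE)

m_int <- mixed(gad7 ~ time * spin_c + time * gad7_trait_c + (1 | id),
               data = dat, method = "S")
m_int  # two-tailed F tests for the time:spin_c and time:gad7_trait_c terms

# Simple slopes of each trait measure at each timepoint
emtrends(m_int, ~ time, var = "spin_c")
emtrends(m_int, ~ time, var = "gad7_trait_c")
```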
Results
Baseline characteristics of the participants are summarised in Table 1. On average, the participants exhibited moderate to severe social anxiety symptoms, with a mean SPIN of 38.95 ± 11.63. The majority (85%) were female.
We also carried out an exploratory linear mixed-effects analysis examining the effect of trait generalized anxiety and social anxiety on subjective anxiety experienced during the ITSSAD. Both the time*trait GAD-7 (F(2, 33.8) = 12.06, p = 0.0001) and time*SPIN (F(2, 32.3) = 5.13, p = 0.0116) interactions were significant. Post-hoc analysis of simple effects demonstrated that SPIN score positively predicted anxiety during the speaking task, but was not significantly related to anxiety beforehand. Conversely, there was a significant positive effect of trait GAD-7 on anxiety during anticipation, but no significant effect on anxiety during the speaking task (Table 3 and Fig. 3).
Discussion
In this proof-of-concept study, we showed that it is possible to induce social anxiety symptoms through a novel procedure using videoconferencing software (ITSSAD). Subjective anxiety was increased by a pre-task anticipation period, and anxiety remained elevated following a naturalistic social interaction task. In addition, positive affect decreased during the pre-task anticipation period, and positive affect was not significantly different following the social interaction task. Finally, increased baseline SPIN scores predicted increased anxiety during the social interaction, while trait GAD-7 did not, suggesting this task specifically induces features of social anxiety disorder.
Recently, a number of online versions of social stress tests have been developed (Eagle et al., 2021; Gunnar et al., 2021; Harvie et al., 2021). However, these protocols are designed to induce stress and do not necessarily induce social anxiety symptoms specifically.
The ITSSAD utilizes a naturalistic social interaction task that is easy to administer and is likely to be ecologically valid. Subjective anxiety increased following both the anticipation period and social interaction task. In addition, anxiety during the social interaction was predicted specifically by SPIN scores at baseline. This suggests the social interaction activated cognitive factors important in SAD. Social interaction tasks are known to induce social anxiety symptoms and can activate psychological mechanisms, such as negative self-evaluation (Nordahl et al., 2016). Exploration of the factors involved in the ITSSAD is outside the scope of the current proof-of-concept study, but this warrants further investigation.
Our novel approach potentially possesses other advantages over previously developed social stress tests. A limitation of the original TSST and its offline and online variants is the logistical challenge and human resources cost involved in setting up the test sessions. The TSST and its variants require laboratory space and multiple individuals (at least 2) to be available contemporaneously for approximately 30 minutes to test a single participant (Allen et al., 2017; Kirschbaum et al., 1993). In comparison, the ITSSAD can be carried out by a single investigator with only a laptop in any private space. In addition, an important potential confounder of the original TSST and online variants is the characteristics and behaviour of confederates. Variations in acting between different confederates, or within confederates, can affect the internal reliability of the TSST (Allen et al., 2017; Frisch et al., 2015; Wallergård et al., 2011). There is also evidence that committee members can empathically mirror the stress of the participant (Buchanan et al., 2012), potentially leading to distorted interactions. The gender composition of the panel is also known to be an important determinant of stress in the participant (Narvaez Linares et al., 2020). A number of virtual reality adaptations of the TSST have been developed to attempt to mitigate against this; however, the type of virtual reality seems to be relevant. Immersive environments, defined as completely replacing audio-visual cues with virtual reality, demonstrated significantly greater cortisol reactivity than non-immersive environments (Helminen et al., 2019). These kinds of immersive environments require costly headsets and other equipment, as well as resources to build the virtual world. By comparison, in the ITSSAD the observing panel can be 'dummy' accounts controlled by a single experimenter. This allows complete control over confederate behaviour and characteristics of the panel such as gender composition. This is likely to provide high internal reliability of the protocol for minimal cost. Further studies are needed to determine internal reliability and costs of this protocol in comparison to other tests of social-evaluative threat.
There are some limitations of this proof-of-concept study. Firstly, the sample size of 20 is small and our findings should accordingly be interpreted with caution. We also did not recruit a low socially anxious sample, and so it is unclear whether anxiety induced in the ITSSAD only occurs in those with high social anxiety symptoms. Future studies should determine how low socially anxious individuals behave when completing the ITSSAD. In addition, we did not collect physiological measures. The TSST is known to increase cortisol levels as well as activate the sympathetic nervous system, which are important features for its validity in investigating stress-related disorders (Allen et al., 2017; Narvaez Linares et al., 2020). Administering the TSST via videoconference also increases both heart rate (Eagle et al., 2021; Harvie et al., 2021) and salivary cortisol concentrations (Gunnar et al., 2021). For the ITSSAD, we were interested in the subjective stress response as this is positively associated with trait social anxiety and might be a factor in the maintenance of SAD (Nelemans et al., 2017; Panayiotou et al., 2017; Taylor et al., 2020). Nevertheless, if the ITSSAD were to induce autonomic anxiety responses then this paradigm could be useful for investigating other potential psychopathophysiological mechanisms of SAD, for example anxiety sensitivity and interoception (Dixon et al., 2015). Brief social interactions are known to induce cardiovascular responses consistent with threat in those with high trait social anxiety (Shalom et al., 2015; Shimizu et al., 2011). It is therefore likely that our protocol would induce similar physiological responses, but this needs testing empirically. Additionally, our participants were also mostly female and due to the small sample size we could not assess for an effect of gender on subjective anxiety. Women tend to exhibit higher subjective stress reactivity than men (Kelly et al., 2008; Rausch et al., 2008). It is unknown whether our results would replicate in a more male-predominant sample. We also did not measure natural recovery following the task, opting instead for a mood repair after the task. We did this to ensure safety and stabilisation of volunteers who were participating in the study remotely. However, post-event processing is thought to be an important factor in the aetiology and maintenance of SAD (Wong et al., 2020; Wong and Rapee, 2016). Furthermore, we did not measure anxiety and mood following the mood repair, so we cannot be sure how quickly induced anxiety 'washes out' following aided recovery. Future studies exploring natural and aided recovery, and post-event processing, following the ITSSAD are warranted. Lastly, we did not control for activities participants undertook, or substances ingested, prior to the testing session. Activities as diverse as brushing teeth, engaging in physical exercise, and eating can affect cortisol responses in the TSST (Narvaez Linares et al., 2020). However, the impact of these behaviours on subjective anxiety is less clear. Regardless, we observed a robust anxiety response without such rigorous control. Further studies are needed to determine whether controlling activities for a period of time before the testing session can improve the signal to noise ratio.

Fig. 3. Line plots showing predicted modified GAD-7 score over time during the ITSSAD protocol. Points represent predicted means, and error bars represent 95% confidence intervals. The effect of SPIN score when trait GAD-7 is held constant is shown in (A). This shows that participants with increased SPIN scores are predicted to experience increased anxiety during the speaking task. SPIN scores are predicted to have little effect on anxiety prior to the speaking task. Conversely, the effect of trait GAD-7 when SPIN is held constant is shown in (B). Trait GAD-7 is predicted to affect anxiety experienced during anticipation, but has little effect on anxiety during the speaking task. Abbreviations: GAD-7, Generalised Anxiety Disorder screener; SPIN, Social Phobia Inventory.
Our novel InterneT-based Stress test for Social Anxiety Disorder (ITSSAD) induced significant anxiety in volunteers with subclinical to clinical social anxiety. Subjective anxiety during the social interaction task correlated with trait symptoms of social anxiety disorder. The ITSSAD possesses many advantages for investigating social anxiety including that it is low-cost, easy to carry out, has high internal validity due to complete control of confederates, and involves a naturalistic social interaction task. The ITSSAD is a new tool for researchers to investigate the mechanisms of social anxiety disorder.
Fig. 1 .
Fig. 1. Summary of the protocol for our modified Trier Social Stress Test, the ITSSAD.
Fig. 2 .
Fig. 2. Violin plots showing modified GAD-7 (A), PANAS negative affect (B) and PANAS positive affect (C) scores over time. Anxiety and negative affect increased (vs. pre-anticipation session baseline), while positive affect decreased. Points represent estimated marginal means, and error bars represent 95% confidence intervals. Significance values shown originate from post-hoc pairwise t-tests with Tukey adjustment for multiple comparisons. Abbreviations: GAD-7, Generalised Anxiety Disorder screener; PANAS, Positive and Negative Affect Schedule.
Table 1
Sample characteristics. Note: values are reported as mean ± standard deviation for continuous variables, and count (%) for categorical variables. Abbreviations: SPIN, Social Phobia Inventory; Brief FNE, Brief Fear of Negative Evaluation Scale; SIAS, Social Interaction Anxiety Scale; GAD-7, Generalised Anxiety Disorder 7-item.
Table 2
Summary of post-hoc pairwise comparisons.
Note: values are reported as estimated mean difference ± standard error. All significance values are Tukey-adjusted for multiple comparisons. Abbreviations: GAD-7, Generalised Anxiety Disorder 7-item; PANAS, Positive and Negative Affect Schedule.
Table 3
Summary of simple effects of trait generalized anxiety and social anxiety symptoms on subjective anxiety experienced during the ITSSAD protocol.
Bone marrow CD34+ cell subset under induction of moderate stiffness of extracellular matrix after myocardial infarction facilitated endothelial lineage commitment in vitro
Background The stiffness of the myocardial extracellular matrix (ECM) and the transplanted cell type are vitally important in promoting angiogenesis. However, the combined effect of the two factors remains uncertain. The purpose of this study is to investigate in vitro the combined effect of myocardial ECM stiffness postinfarction with a bone marrow-derived cell subset expressing or not expressing CD34 on endothelial lineage commitment. Methods Myocardial stiffness of the infarct zone was determined in mice at 1 h, 24 h, 7 days, 14 days, and 28 days after coronary artery ligation. Polyacrylamide (PA) gel substrates of different stiffnesses were prepared to mechanically mimic the myocardial ECM after infarction. Mouse bone marrow-derived CD34+ and CD34– cells were seeded on the flexible PA gels. The double-positive expression for DiI-acetylated low-density lipoprotein (acLDL) uptake and fluorescein isothiocyanate-Ulex europaeus agglutinin-1 (FITC-UEA-1) binding, the endothelial lineage antigens CD31, von Willebrand factor (vWF), Flk-1, and VE-cadherin, as well as cytoskeleton were measured by immunofluorescent staining on day 7. Cell apoptosis was evaluated by both immunofluorescent staining and flow cytometry at 24 h after culture. Results We found that the number of CD34+ cells adherent to the flexible substrates (4–72 kPa) was much larger than that of the CD34– subset. More double-positive cells for DiI-acLDL uptake/FITC-UEA-1 binding were seen on the 42-kPa (moderately stiff) substrate, corresponding to the stiffness of myocardial ECM at 7–14 days postinfarction, compared with those on substrates of other stiffnesses. Similarly, the moderately stiff substrate showed benefits in promoting the positive expressions of the endothelial lineage markers CD31, vWF, Flk-1, and VE-cadherin. In addition, the cytoskeleton F-actin network within CD34+ cells was organized more significantly at the leading edge of the adherent cells on the moderately stiff (42 kPa) or stiff (72 kPa) substrates as compared with those on the soft (4 kPa and 15 kPa) substrates. Moreover, the moderately stiff or stiff substrates showed a lower percentage of cell apoptosis than the soft substrates. Conclusions Infarcted myocardium-like ECM of moderate stiffness (42 kPa) more beneficially regulated the endothelial lineage commitment of a bone marrow-derived CD34+ subset. Thus, the combination of a CD34+ subset with a "suitable" ECM stiffness might be an optimized strategy for cell-based cardiac repair. Electronic supplementary material The online version of this article (doi:10.1186/s13287-017-0732-x) contains supplementary material, which is available to authorized users.
Keywords: Myocardial ECM stiffness, Myocardial infarction, CD34, Endothelial

Background
A great deal of attention has been given to optimizing the cell treatment approach for myocardial infarction (MI) [1,2]. The commitment of engrafted cells into vascular endothelial cells within damaged myocardium is regarded as one of the major methods for promoting cell-based cardiac repair [3]. The myocardial tissue microenvironment in the injured area and the type of transplanted cells are important factors in promoting stem cell specification in the infarct zone [4]. The physical properties of tissue extracellular matrix (ECM), such as stiffness, regulate stem cell adhesion, proliferation, migration, differentiation, and fate [5][6][7]. After MI, cardiac tissue stiffness changes from flexible to rigid in a time-dependent manner [8]. The consecutive changes in myocardial ECM stiffness might result in differences in the survival of engrafted cells and cell lineage commitment, which are expected to determine the cell therapeutic effect. Thus, the stiffness of the myocardial ECM might determine which time point is optimal for cell repair of the infarcted myocardium, and the matrix property might also affect stem cell specification capacity differently depending on the type of stem cell. Bone marrow-derived mononuclear cells (BMMNCs) are the most commonly used cell lineage in clinical studies. Our previous study showed that the optimal time frame for implantation of BMMNCs after MI was 1 to 2 weeks after the infarction [9]. Myocardial ECM stiffness within this time domain (elastic modulus ~42 kPa) was more suitable for BMMNCs to differentiate into endothelial lineage cells and commit to angiogenesis. These favorable effects subsequently transferred to improved left ventricular systolic function and enhanced remodeling [9]. However, BMMNCs represent an unselected and mixed cell population. It is important to further determine whether matrix stiffness could selectively affect BMMNC cell subsets, and to identify optimal conditions for endothelial lineage commitment. Bone marrow-derived CD34+ cells, a well-characterized population of stem cells, are powerful endothelial progenitor cells, and transplantation of this cell subset significantly enhances the formation of new blood vessels in injured hearts [10,11]. However, it is not clear whether the CD34+ subset also presents the same ECM stiffness-dependent differentiation principle as BMMNCs. In addition, it is not known whether cell lineage commitment differs between CD34+ and CD34– cell subsets cultured on substrates with different stiffnesses. In the present study, we simulated myocardial ECM stiffness at different time points after infarction using an in vitro system, and investigated the effect of matrix stiffness as well as the expression of the cell surface marker CD34 of BMMNCs on endothelial lineage commitment.
Isolation of mouse BMMNCs
Femurs from 6-week-old male Balb/c mice were flushed three times in phosphate-buffered saline (PBS) with a 26-G needle to collect bone marrow cells. BMMNCs were isolated by density gradient centrifugation using Ficoll-Paque separator liquid (1.083 g/ml; Sigma-Aldrich, USA) using an established protocol [12]. Briefly, 3 μl of cell suspension was carefully laid over 3 ml of Ficoll-Paque liquid in a 15-ml conical tube, and centrifuged at 2000 rpm for 30 min at 4°C. The mononuclear cell layer at the interphase was transferred to a new 15-ml conical tube, washed twice, and then resuspended in complete M199 culture medium (Gibco, USA). Flow cytometry was performed to determine the percentages of the CD34+ and CD34– subsets among BMMNCs.
Isolation of CD34+ and CD34– cell subsets

CD34+ and CD34– subsets were isolated from mouse BMMNCs using a magnetic-activated cell sorting (MACS) system. The collected BMMNCs were sensitized with FITC-labeled rat anti-mouse CD34 monoclonal antibodies (BD Biosciences, USA), and then incubated with anti-FITC microbeads (Miltenyi Biotec, Germany). The cells labeled with anti-CD34 antibodies and microbeads were enriched with a MACS separator and MS columns, and the CD34+ cells were then released from the beads. To increase the purity of the CD34+ cells, we re-separated the first collected cell fraction with a second MS column. The remaining unlabeled CD34– cell subset was also collected as the control. The purity of the collected CD34+ cell subset was detected by flow cytometry (BD Biosciences, USA), and the viability was assessed using methylene blue before cell culture.
Determination of myocardial ECM stiffness postinfarction
Fifteen mouse models of MI were prepared by ligation of the left anterior descending coronary artery in 6-week-old male Balb/c mice. The stiffness (elastic modulus, E) of the infarcted myocardial ECM at 1 h, 24 h, 7 days, 14 days, and 28 days after MI was measured by atomic force microscopy, as described in our previous study [9]. Briefly, the fresh hearts post-MI were dissected and stored in 0.9% sodium chloride solution (n = 4 per time point). Hearts were then sectioned parallel to the longitudinal axis of the left ventricle from the vascular ligation point to the cardiac apex to yield tissue samples ~0.5 mm in thickness. Heart samples, which were mounted and immobilized on coverglass with adhesive tape, were placed on an atomic force microscope (Nanoscope IIIa, USA), and indented by a pyramid-tipped cantilever with a spring constant of 60 pN/nm (Nanoprobes, USA) using contact mode in 0.9% sodium chloride solution at room temperature. Mechanical information at 10 positions per sample was obtained, and at each position five force-indentation plots were recorded. NanoScope software 5.30 (Veeco, USA) was used to acquire images. Elastic modulus was calculated based on the formula previously described by Domke [13].

In vitro simulation of myocardial ECM stiffness using the matrix gel system

Flexible polyacrylamide (PA) gel substrates were used to mimic the myocardial ECM stiffness at four time points postinfarction (1 h, 24 h, days 7-14, and days 14-28 post-MI). Briefly, 0.1 N NaOH was poured onto a ~20-mm round glass as a cleaning process (part of a 35-mm culture dish; Shengyou, China). 3-aminopropyltrimethoxysilane (Sigma-Aldrich, USA) was spread evenly onto the bottom and then washed off with distilled H2O; 0.5% glutaraldehyde (Sigma-Aldrich, USA) was then used to increase the binding ability of the PA gels to the glass substrates. Thereafter, the PA gel solution was prepared with various volume ratios of 2% bisacrylamide to 40% acrylamide. To induce bisacrylamide crosslinking, 10% ammonium persulphate and N,N,N′,N′-tetramethylethylenediamine were added to the PA gel solutions. Drops of PA gel solution were subsequently added to the glutaraldehyde-treated aminosilanized round glass, and chlorosilanized round coverslips (~18 mm) were placed on top of the PA solution. The substrates polymerized between the round coverslip and the glass bottom as thin films. When the substrates had polymerized for 30 min, the coverslips were carefully removed and the gels were washed with 50 mM HEPES three times. The gels were then activated using a heterobifunctional agent, sulfo-SANPAH (Thermo Fisher Scientific, USA), and chemically crosslinked with fibronectin (10 mg/ml; Biosource, USA). The fibronectin coated on the four flexible substrates was labeled using mouse anti-human fibronectin antibody (R&D Systems, USA) and Alexa Fluor 488-labeled goat anti-mouse secondary antibody (Invitrogen, USA), and was detected by the immunofluorescence method. No significant differences were seen in the concentrations of the coated fibronectin among the flexible substrates (Additional file 1). Four PA gel substrates with stiffnesses of 4 kPa, 15 kPa, 42 kPa, and 72 kPa mechanically mimicked the myocardial ECM at 24 h, 1 h, days 7-14, and days 14-28 after MI, respectively [9]. Before being used for cell culture, the culture substrates were washed with PBS and exposed to UV light with PBS immersion for 15 min for sterilization purposes.
Thereafter, the PBS was removed and replaced with Medium 199 (25 mM HEPES; Sigma) to allow equilibration by putting the dishes in an incubator for at least 2 h.
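For reference, force-indentation curves from a pyramid-tipped cantilever, as described above, are commonly converted to an elastic modulus with a Sneddon-type (Hertzian) conical-indenter model. The exact formula of the cited Domke protocol is not reproduced in this paper, so the following is a standard form given only as a sketch, not the authors' specific equation:

F = (2/π) · tan(α) · [E / (1 − ν²)] · δ²,

where F is the measured cantilever force, δ the indentation depth, α the effective half-opening angle of the tip, and ν the Poisson ratio (often assumed to be ~0.5 for soft, nearly incompressible tissue). E is then obtained by fitting this relation to each recorded force-indentation plot.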
Cell culture on the flexible substrates
Mouse CD34+ and CD34– cell subsets isolated from BMMNCs were suspended in complete Medium 199 (Sigma) containing 20% fetal bovine serum (FBS) and 2.5 ng/ml vascular endothelial growth factor (VEGF; Peprotech, USA). These two cell subsets were seeded at a density of 5 × 10⁵ cells/dish in the above prepared culture dishes, respectively. The dishes were maintained at 37°C in an incubator containing 5% CO2. After 48 h in culture, nonadherent cells were removed and adherent cells were cultured continuously. Culture medium was changed every 48 h. A flowchart of CD34+ and CD34– subset isolation, the flexible substrate preparation, and the cell culture conditions is shown in Fig. 1.
Identification of surface markers of endothelial lineage cells
The adherent cells were rinsed with PBS and fixed in 4% paraformaldehyde for 15 min at room temperature. The cells were incubated in normal goat serum (
Cytoskeletal staining
After being fixed in 4% paraformaldehyde, the adherent cells were stained overnight at 4°C with anti-paxillin antibody (Abcam, USA) diluted at 1:100 in PBS buffer (0.02% NaN3, 3% bovine serum albumin (BSA), and 0.2% Triton X-100). Subsequently the labeled cells were stained with goat anti-rabbit IgG (H&L) antibody (Abcam, USA) diluted at 1:200 in PBS buffer (0.02% NaN3, 3% BSA) at room temperature for 1.5 h. The cells were then incubated at room temperature with phalloidin-TRITC (Sigma-Aldrich, USA) diluted 1:1000 in PBS buffer (0.1% Triton X-100). Finally, nuclei were stained with 1 μg/ml DAPI for 15 min at room temperature. It was difficult for adherent cells to be trypsinized from the flexible substrates at day 7; due to the absence of sufficient cells, the present study did not carry out the initial study protocol on the semi-quantitative measurement of integrins and transmembrane receptors regulating cell-ECM adhesion using Western blot. Cytoskeletons were observed at 200× and 630× magnification by a laser scanning confocal microscope. In addition, to elucidate the differences in cell morphology and extension on the flexible substrates, the areas and circumferences of adherent cells were measured and calculated based on cell imaging at day 1 and day 7 using GraphPad Prism 5.0 software (GraphPad Software, USA).
Detection of cell apoptosis and cell survival in CD34+ cells
After being cultured for 24 h, CD34+ cells were harvested by trypsinization and were subsequently stained using Annexin V-PE/7-AAD to evaluate cell apoptosis with an Apoptosis Detection Kit (BD Biosciences, USA). Apoptotic cells (Annexin V+/7-AAD–) were analyzed by FACScan (BD Biosciences, USA). The percentages of apoptotic cells were analyzed using FlowJo software (TreeStar, USA). Meanwhile, the adherent CD34+ cells were also processed with live/dead staining using the LIVE/DEAD Cell Imaging Kit (488/570) (Invitrogen, USA) at 24 h after culture. In brief, cells were co-stained with 2× stock premixed from the Live Green vial and Dead Red vial for 15 min at room temperature. After being washed with PBS, the LIVE/DEAD-stained cells were visualized at 200× magnification with a laser scanning confocal microscope.
Statistical analyses
Five randomly selected microscopic fields per cell sample were collected and the number of positive cells per high-power field (HPF) was recorded. To increase the reproducibility of the measurements, three separate tests were conducted for each specific assay. Data are presented as means ± standard deviation. The statistical differences between ECM stiffnesses were tested using two-way analysis of variance (ANOVA). The differences between two cell subgroups were analyzed using Student's t test. P < 0.05 was considered as a statistically significant difference.
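A minimal R sketch of this analysis plan is given below; the data frame `counts` and its column names are hypothetical stand-ins, with `n_pos` denoting positive cells per HPF:

```r
# counts: long format, one row per counted field, with columns
#   stiffness   - substrate stiffness (factor: "4", "15", "42", "72" kPa)
#   cell_subset - factor: CD34pos / CD34neg
#   n_pos       - positive cells counted per high-power field
fit <- aov(n_pos ~ stiffness * cell_subset, data = counts)
summary(fit)  # two-way ANOVA across stiffnesses and cell subsets

# Student's t test comparing the two subsets at a single stiffness
t.test(n_pos ~ cell_subset,
       data = counts[counts$stiffness == "42", ],
       var.equal = TRUE)
```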
Results
ECM stiffness regulated the presentation of endothelial progenitor characteristics as DiI-acLDL uptake/FITC-UEA-1 binding

Among mouse BMMNCs, the percentages of the CD34+ and CD34– cell subsets were 12.4% and 87.6%, respectively (Fig. 2a). After purification with the MACS system, the purity of the CD34+ subset was ~95.0% (Fig. 2b). The numbers of CD34+ cells adherent to the different flexible substrates on day 7 after culture are presented in Fig. 3. Compared with those on compliant or soft culture substrates of 4 kPa and 15 kPa, the total number of CD34+ cells increased more when they were cultured on the moderately stiff substrate of 42 kPa (corresponding to the stiffness of myocardial ECM at days 7-14 postinfarction). In addition, the total number of CD34+ cells cultured on the 42-kPa substrate tended to increase more compared with the cells cultured on the 72-kPa stiff substrate (P = 0.058; Fig. 3a and b). Interestingly, the soft 4-kPa substrate significantly reduced the percentage of endothelial progenitor cells as evidenced by the alteration in DiI-acLDL and FITC-UEA-1 double-positive cells (P < 0.001; Fig. 3c). Meanwhile, the total number of CD34– cells increased more when they were cultured on the moderately stiff 42-kPa substrate compared with the cells cultured on the soft substrates (4 kPa and 15 kPa; Fig. 4a and b). Again, the soft 4-kPa substrate significantly reduced the percentage of endothelial progenitor cells as evidenced by the alteration in DiI-acLDL and FITC-UEA-1 double-positive cells (P < 0.001; Fig. 4c). In addition, no significant differences in the percentage of DiI-acLDL/FITC-UEA-1 double-positive cells were observed among the 15-kPa, 42-kPa, and 72-kPa cell culture systems (P > 0.05; Figs. 3c and 4c). The overall impact of the flexible substrates on the CD34+ and CD34– cell subsets was comparable, but the total number of CD34– cells was significantly lower than that of CD34+ cells cultured under the same conditions (Figs. 3 and 4). In addition, the CD34+ cell culture system on the flexible substrates with the four different degrees of stiffness consistently displayed a much greater number of cells with endothelial progenitor characteristics than did the CD34– culture system (P < 0.001).
ECM stiffness regulated the expression of surface markers for endothelial cell lineage
The expressions of the vascular endothelial cell lineage markers CD31, vWF, Flk-1, and VE-cadherin were much greater in CD34+ cells compared with the CD34– cells under all different culture conditions (Fig. 5a and c, Figs. 6 and 7, and Fig. 8a and c). Furthermore, the expression of these markers was consistently found to be the highest in CD34+ cells cultured on the moderately stiff 42-kPa substrate (P < 0.01; Fig. 5a and b, Figs. 6 and 7, and Fig. 8a and b). The expression of these markers was lowest in CD34+ cells cultured on the substrate with a stiffness of 4 kPa. Similarly, the expression of CD31, vWF, and VE-cadherin in CD34– cells also gradually increased in the culture substrates from 4 kPa to 15 kPa to 72 kPa and to 42 kPa (Fig. 5c and d, Figs. 6 and 7, and Fig. 8c and d). These findings suggest that the moderately stiff substrate has an advantage over other culture substrates in the induction of endothelial cell lineage markers in both the CD34+ and CD34– subsets.
ECM stiffness regulates cytoskeleton formation and cytoskeleton arrangement in CD34+ cells
To confirm the cytoskeletal organization and focal adhesion, fluorescence localization of paxillin and F-actin of CD34+ cells on the flexible culture substrates was detected. The cells on the 4-kPa and 15-kPa substrates, the soft substrates, displayed a suborbicular shape with focal adhesions formed along the cell edge, and developed less lamellipodia (Fig. 9). Meanwhile, F-actin was highly enriched at the leading edge of the crawling cells. However, the F-actin network was organized indistinctly. In contrast, the adherent cells on the moderately stiff or stiff (42 kPa or 72 kPa) substrates, especially the 42-kPa substrate, were mostly elongated or spindle-like shaped with distinct perisomatic lamellipodia. Moreover, the F-actin network was circumferentially organized, increasingly at the leading edge of the adherent cells with increasing ECM stiffness (Fig. 9). Additionally, the surface areas and circumferences of cells on the moderately stiff and stiff (especially 42 kPa) substrates were dramatically greater than those on the soft substrates at day 7 (Fig. 10a and b). The findings indicate that, compared with the soft substrates, the relatively rigid substrates, especially the 42-kPa substrate, prompted CD34+ cells to generate close contact with the ECM as well as to present strong cell-spreading and lamellipodial protrusive activity.
ECM stiffness regulated cell apoptosis and cell survival in CD34+ cells
To further understand the underlying mechanism of substrate stiffness on endothelial cell commitment and proliferation, we determined the percentage of apoptosis of CD34+ cells cultured on the flexible substrates using a Live Green vial/Dead Red vial co-staining assay. The percentage of dead cells on the soft substrates (4 kPa or 15 kPa) was greater than that on the moderately stiff or stiff substrates (42 kPa or 72 kPa) (Fig. 11). Furthermore, CD34+ cells were also co-stained with Annexin V and 7-AAD to detect early or late apoptosis by flow cytometry. The proportion of early cell apoptosis on the 4-kPa, 15-kPa, 42-kPa, and 72-kPa substrates was 24.5%, 24.0%, 21.6%, and 21.4%, respectively. The rate of late cell apoptosis was 13.4%, 11.1%, 8.4%, and 9.2%, respectively (Fig. 12). Thus, CD34+ cells on the soft substrates had higher percentages of apoptosis compared with the cells cultured on the moderately stiff or stiff substrates (42 kPa or 72 kPa).
Discussion
In the present study, we have provided mechanistic insights into the impacts of infarcted myocardial ECM stiffness on the endothelial cell lineage commitment of bone marrow-derived CD34+ and CD34– cell subsets. Our results demonstrate that infarcted myocardium-like ECM stiffness regulates the cytoskeletal arrangement, cell survival, cell-ECM adhesion, and cell differentiation in both CD34+ cells and CD34– cells derived from BMMNCs. Moreover, there were significant differences in the specification of the endothelial cell lineage between the two cell subsets induced on the flexible culture substrates, which were distinct from conventional rigid glass substrates. The matrix stiffness of 42 kPa, corresponding to myocardial ECM at days 7-14 after MI, was more suitable for the induction of FITC-UEA-1 and DiI-acLDL double-positive cells, as well as the expression of endothelial cell lineage markers such as CD31, vWF, Flk-1, and VE-cadherin in both CD34+ and CD34– subsets. Meanwhile, in the cell culture system with the flexible substrates, the CD34+ cell subset showed higher endothelial lineage commitment compared with the CD34– subset under various culture conditions. Thus, it is clear that an optimal ECM stiffness promotes the endothelial lineage commitment of bone marrow-derived CD34+ cells in vitro. The combination of an optimal cell subset and a suitable ECM stiffness may provide a potentially useful strategy to enhance cell-based cardiac repair after MI.
For the repair of ischemic or infarcted myocardium, the addition of the proper stem/progenitor cells to a suitable microenvironment seems promising via promoting neovascularization [14][15][16]. The definition and classification of stem/progenitor cells mainly depend on cell surface antigens. CD34 is an important cell surface marker of hematopoietic progenitor cells. Bone marrow-derived CD34+ cells have strong potential to differentiate into endothelial lineage cells, which are deeply involved in neovascularization [17]. Meanwhile, the CD34 phenotype functions to mediate the attachment of cells to the ECM [18]. In the present study, we found significant differences in cell attachment to the flexible substrates between the CD34+ cell subset and the CD34– subset. Furthermore, as compared with the CD34– subset, the CD34+ subset was more easily induced into the endothelial cell lineage, which potentially facilitates angiogenesis as well as cardiac repair. The difference in specification efficiency between the two cell subsets might result from the differences in cell attachment to the flexible substrates. Furthermore, in terms of the CD34+ cell subset, the present study shows a significant difference in the number of cells expressing endothelial phenotypes (not the percentage) on the flexible substrates. However, there exists a consistently high percentage of cells expressing endothelial phenotypes on the 15- to 72-kPa substrates. This suggests that the significant differences in endothelial phenotype expression might result from differences in cell adherence capacity. In the present study, CD34+ cells seemed to adhere more easily to the flexible substrates than the CD34– cells. Mechanistically, the characteristics of the CD34 antigen in improving cell adherence to the ECM might provide an explanation for the preference of CD34+ cell survival and specification on the flexible substrates. In contrast, CD34– cells presented a lower survival and specification ratio. In addition, CD34+ cells showed distinct focal adhesion, cytoskeletal organization, and cellular morphology on the flexible substrates with varied stiffness. Moreover, cytoskeletal architecture and cell-ECM adhesions became increasingly organized with increasing stiffness. Paxillin, as a connection between the ECM and cells, regulates cell fate and even cell specification by influencing cell attachment and cytoskeleton formation (mainly referring to the F-actin network) [19]. Cell adhesion and apoptosis are both regulated by cell-ECM interaction. Disruption of the cell-ECM interaction promotes cell apoptosis [20]. In the present study, CD34+ cells had a lower apoptotic rate on the moderately stiff or stiff substrates (42 kPa or 72 kPa) than those on the soft substrates (4 kPa and 15 kPa). Based on the analysis of the cytoskeleton and cell morphology, the differences in cell apoptosis might relate to the higher adhesive strength of CD34+ cells to the relatively rigid substrates or the stronger cell-ECM interaction. On the other hand, cell-ECM interaction is also widely believed to play an important role in cell survival and differentiation [21]. Stem/progenitor cells are able to sense and respond to the surrounding tissue physical microenvironment, which is known as the ECM stiffness [22]. Accumulating data have shown that tissue ECM stiffness plays an important role in stem cell adhesion, survival, and lineage commitment [23,24].
Following MI, myocardial ECM stiffness might be an important physical condition impacting the efficacy of cell implantation by influencing these cellular biological behaviors [8,9,25]. Thus, in vitro simulation of the myocardial physical microenvironment post-MI is thought to be essential for characterizing implanted cell biology and validating an optimal cell therapy strategy. Indeed, in the present study, infarcted myocardium-like ECM stiffness showed a significant influence on the potential pro-angiogenic ability of bone marrow-derived CD34+ cells. Furthermore, myocardial ECM at days 7-14 post-MI might offer an optimal physical microenvironment for commitment of engrafted pluripotent cells to the endothelial cell lineage, which suggests that the beneficial effect on infarcted myocardium repair might be time- or stiffness-dependent. Moreover, CD34+ cells under induction of ECM stiffness present a similar stiffness-dependent differentiation principle as BMMNCs, and might consequently exert an important role in regulating the efficacy of cell implantation for the damaged myocardium, as reported in our previous study [9]. Furthermore, these findings might partially explain the optimal timing of stem cell implantation after MI.
Notably, almost all of the previously published randomized controlled trials (RCTs) consistently performed cell therapy at days 0 to 7 after MI [26]. Moreover, our previous meta-analysis of these RCTs indicated a more favorable effect of bone marrow-derived stem cell engraftment at 4-7 days after MI on improving left ventricular ejection fraction (LVEF) and decreasing left ventricular (LV) end-systolic dimensions than a procedure performed within 24 h following MI [27]. Since days 7-14 post-MI is a "time-domain blank zone" in previous clinical studies on cell-based cardiac repair, it may be important to further investigate the biological behavior of engrafted cells and the efficacy of cell therapy within this "time-domain blank zone" due to the unavoidable therapeutic delay for acute MI. Our previous studies verified the optimal efficacy of cell therapy at 7-14 days after MI, which might relate to ECM stiffness-dependent angiogenesis [9]. Overall, performing cell therapy either too early or too late after acute MI was not productive in terms of promoting cardiac repair due to the absence of a "suitable" stiffness of the myocardial ECM. Although previous clinical trials confirmed the efficacy of bone marrow-derived cell implantation within 24 h or more than 30 days after MI [28,29], the magnitudes of the beneficial effects were significantly different. Based on our present findings, the time-dependent changes in myocardial ECM stiffness after MI may contribute to the differing efficacy of cell therapy at these different time points.
Land subsidence threats and its management in the North Coast of Java
Cities on the north coast of Java such as Jakarta, Semarang, Pekalongan, and Surabaya are vulnerable to environmental pressures such as sea level change and land subsidence. Land subsidence can be caused by natural and anthropogenic processes. Geologically, the north coastal plain of Java consists of unconsolidated Holocene alluvial deposits. The recent alluvial deposits are prone to compaction, further aggravated by anthropogenic forces such as groundwater extraction and land development. Understanding the complex interaction of natural and man-made factors is essential to establish a mitigation strategy. Although the impacts of land subsidence are widely felt, many do not realize that land subsidence is taking place. This paper presents a brief review of the land subsidence threats on the North Coast of Java and proposes a recommendation for a suitable management response.
Introduction
Cities located in low-lying coastal areas are vulnerable to environmental changes such as sea level change and land subsidence. Currently, the observed subsidence rates in coastal megacities exceed the sea level rise rate by a factor of ten [1]. More than half a billion people live in these kinds of areas and are vulnerable to the risks of coastal flooding, wetland loss, shoreline retreat, and loss of infrastructure [2]. Land subsidence is defined as the gradual vertical movement of the earth's surface due to the subsurface movement of earth materials. The land subsidence problem can arise from natural processes, anthropogenic activities, or a combination of both. Natural subsidence can result from processes such as tectonics, sediment compaction, peat oxidation, and isostatic adjustment [3][4][5][6]. Man-induced causes are due to fluid, gas, and solid extraction, and the addition of surface loads [7][8][9].
It is reported that there were over 150 areas of recent land subsidence across the world, particularly along coasts and in industrialized and densely populated areas [10]. Like other countries, Indonesia is vulnerable to land subsidence hazards. Java is the most densely populated island of Indonesia, hosting about 60% of the total Indonesian population. During the last 50 years, industrialization and urbanization have transformed the rural, agricultural Java Island into a mixed rural-urban to mega-urban area [11,12]. The large population and increasing anthropogenic activities have brought more pressure on the subsurface environment. The north coast of Java hosts main cities and urban areas, including the capital Jakarta. The north coast Java road is a vital hub connecting the western part of the island to the eastern part. Natural and anthropogenic forces are thought to have contributed to the occurrence of land subsidence in this area.
Land subsidence is a subtle process, affecting a large area and going on at a slow rate. Many do not realize that it is taking place until the impacts are felt. Increasing coastal floods, sea water intrusion, failure of well casings, differential settlements, cracks, and other damage to buildings and infrastructure are some of the impacts of land subsidence. The estimated economic cost of land subsidence is enormous; for Semarang city alone it could reach 3.5 trillion Rupiah [13]. This paper aims to present a brief review of land subsidence on the North Coast of Java Island. The paper discusses the main features of the land subsidence, including the subsurface geology, land subsidence monitoring, and analysis of possible causes and mechanisms. At the end, this paper proposes a suitable management response to tackle this ongoing land subsidence problem.
Geology and engineering geology of the Northern Java alluvial plain
The northern Java alluvial plain extends from the Serang-Jakarta-Cirebon plain in the west to the Brebes-Pekalongan plain and the Semarang-Rembang plain, and ends in the east at the Surabaya plain (Figure 1). The northern alluvial plain of Java consists of unconsolidated clay, silt, sands, and gravels of Quaternary age [14]. The hydrogeology of the unconsolidated alluvial plain occurs as a multi-aquifer and aquitard system. Aquifers in the alluvial plain occur as lenses, interbedded with thick aquitards [15][16][17]. As the municipal water supply is limited, the population and industries mostly rely on groundwater [18]. Over-exploitation of groundwater aquifers causes lowering of the piezometric head and induces dewatering of compressible aquitards. Aquitards are saturated, highly compressible materials with low hydraulic conductivity. Subsidence follows the dewatering of the aquitards; it does not occur immediately, due to the low hydraulic conductivity of the layer. Engineering boreholes along the North Java coast showed similar characteristics of aquitards, particularly in the upper 20 m depth [19,20]. Typical engineering properties related to the consolidation of North Java clay are presented in Table 1, which suggests that the clay is of soft consistency, low hydraulic conductivity, and high compressibility. Monitoring of land subsidence in these cities has not been carried out at regular intervals and not always using the same method. Below are the highlights of the monitoring and mechanisms of land subsidence occurring in the North Java cities:
Jakarta
Several land subsidence measurement techniques were employed in Jakarta over the period 1982-2010 [25][26][27][28]. In general, the subsidence rate varies spatially and temporally from about 1-10 cm/year and at some places can reach 25-28 cm/year [29]. The high rate of subsidence is associated with the high rate of groundwater extraction. Groundwater extraction in Jakarta is intense, as reflected by the piezometric head decline of 0.2-2 m/year [30]. However, a study [31] revealed that hydraulic pressures exceeding hydrostatic occurred in Tongkol, North Jakarta, where subsidence was also taking place. The existence of this overpressure indicates that, apart from anthropogenic factors, natural compaction may also contribute to the land subsidence process in Jakarta.
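Rates like those quoted above are typically derived as the trend of repeated geodetic measurements. The sketch below shows a minimal least-squares estimate of a mean subsidence rate from benchmark elevations; the campaign years and elevation values are invented for illustration and are not actual monitoring data.

```python
import numpy as np

# Hypothetical benchmark elevations (m) from repeated levelling/GPS campaigns.
years = np.array([1980, 1985, 1990, 1995, 2000, 2005, 2010])
elevation_m = np.array([2.10, 1.85, 1.55, 1.32, 1.05, 0.80, 0.52])

# Mean subsidence rate = negative slope of the least-squares trend line.
slope_m_per_yr, intercept = np.polyfit(years, elevation_m, 1)
print(f"Mean subsidence rate: {-slope_m_per_yr * 100:.1f} cm/year")  # about 5 cm/year
```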
Semarang
Land subsidence in Semarang has been extensively studied from many aspects: geodetic monitoring, groundwater extraction, and engineering geology. Geodetic measurements from 1980 until 2010 revealed that the land subsidence rate varied spatially from 1-10 cm/year, increasing towards the north coast [28,32,33]. Over-exploitation of groundwater is thought to contribute to the fast subsidence rate. Microgravity studies [34,35] revealed a deficit of groundwater due to over-exploitation, particularly in the Kaligawe and Genuksari areas. This is corroborated by groundwater level measurements showing fast lowering of the piezometric head [36]. Studies [15,37] used subsurface properties to quantify the rate of land subsidence. Comparing the calculated rate of subsidence to the geodetic monitoring results, we found that the geodetic rate exceeds the calculated rate by 1.0-2.0 cm/year. A study [21] observed excess pore pressure in CPT tests in the northeast of Semarang, indicating the possibility of natural compaction. Provided the geodetic rate is consistent, the discrepancy in land subsidence rates may arise from an unquantified natural driving factor.
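The "calculated rate" referred to above comes from consolidation theory applied to the subsurface properties. As a minimal illustration of such a calculation, the sketch below applies Terzaghi's classical one-dimensional primary consolidation formula; the parameter values are hypothetical, not the measured Semarang properties of [15,37].

```python
import math

def consolidation_settlement(cc, e0, thickness_m, sigma0_kpa, delta_sigma_kpa):
    """Ultimate 1-D primary consolidation settlement (Terzaghi):
    s = Cc / (1 + e0) * H * log10((sigma'0 + delta_sigma') / sigma'0)."""
    return cc / (1.0 + e0) * thickness_m * math.log10(
        (sigma0_kpa + delta_sigma_kpa) / sigma0_kpa
    )

# Hypothetical soft-clay parameters: a 2 m piezometric head decline adds
# roughly 20 kPa of effective stress (~9.81 kPa per metre of water).
s = consolidation_settlement(cc=0.5, e0=1.5, thickness_m=20.0,
                             sigma0_kpa=100.0, delta_sigma_kpa=20.0)
print(f"Ultimate settlement estimate: {s:.2f} m")  # about 0.32 m
```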
Pekalongan and Surabaya
Limited information is available from the published literature about land subsidence in Pekalongan and Surabaya. However, monitoring studies revealed that land subsidence occurred in these areas. Time-series analysis of InSAR images revealed that subsidence occurred in Pekalongan at a rate of 4.8-10.8 cm/year, suspected to be caused by groundwater withdrawal for agriculture [28]. GPS monitoring during 2007-2010 in Surabaya showed subsidence rates of 1.0-2.7 cm/year occurring in industrial and built-up areas [38], possibly related to groundwater withdrawal.
Analysis and proposed recommendation
Research on land subsidence in the North Java coast can be categorized into monitoring by geodetic methods, engineering geology characterization, and predictive numerical modeling. Intensive studies were mostly conducted in large cities such as Jakarta and Semarang, where the impacts of land subsidence are most visible. Analysis of the hydrogeology and engineering geology of the North Java coastal plain suggests that the thick compressible deposits are prone to compaction due to natural and man-induced causes. Although many types of research have been conducted, subsidence has not been halted, and there are still some challenging issues regarding land subsidence in the North Java coastal area. Some of these matters are:
Subsurface characterization
The vast area of North Java requires detailed subsurface lithologic and engineering characterization. Information on subsurface conditions is not restricted to the land subsidence issue but also greatly benefits regional development purposes. Previous monitoring of groundwater levels and subsidence rates has been intermittent, and many programs have been discontinued.
Information gap between scientific publications and stakeholders
Although the damages due to land subsidence are widely visible, no specific actions have been taken to mitigate the impacts. The lack of awareness among the public and stakeholders could be attributed to insufficient knowledge regarding the land subsidence process and mechanism, as well as the magnitude of the economic impacts due to subsidence.
Based on the above analysis, some recommendations are proposed:
• Continuous, long-term groundwater monitoring and land subsidence monitoring. Groundwater monitoring data are useful to assess the groundwater balance condition and to determine groundwater control measures. Monitoring of subsidence could be achieved by regular GPS campaigns, interferometry using satellite images, borehole extensometers, or simply the installation of stable benchmarks.
• Subsurface engineering and hydrogeological investigation.
• Bridging scientific information into administrative policy and control. Results of scientific research should be the basis of administrative policy and control. For example, the subsurface conditions and the results of groundwater level and subsidence monitoring could be used as the basis for groundwater withdrawal regulation and control measures. Scientific information should also be the basis of land subsidence disaster management measures, for example recharging the aquifers through artificial recharge, building flood-retaining dikes, pump stations, etc.
• Active involvement of local stakeholders. Successful mitigation of land subsidence requires the active participation of local stakeholders, particularly for monitoring and the formulation of local regulations and control measures.
Conclusions
Industrialization and urbanization have proceeded rapidly in the North Java coastal area. For the last three decades, this area has changed considerably, and this trend is expected to continue in the near future. Consequently, the demands on land use and groundwater resources will accelerate to cater to the increasing population and development. Land subsidence is likely to continue in the cities of Jakarta, Semarang, Surabaya, and Pekalongan and may occur in other cities along the North Java coast as well. Therefore, prevention of land subsidence and mitigation of land subsidence induced hazards will continue to be a demanding task for governments, engineering geologists, and geotechnical engineers. Recommendations are proposed in an attempt to halt the land subsidence rate and minimize its impacts.
Structure-Activity Relationship Study and Function-Based Peptidomimetic Design of Human Opiorphin with Improved Bioavailability Properties and Unaltered Analgesic Activity
Introduction
Opiorphin QRFSR-peptide is an endogenous human regulator that was discovered using a functional biochemical approach [1,2]. Its characterization demonstrated that it is an authentic physiological dual inhibitor of the Zn-dependent metallo-ectopeptidases neutral endopeptidase (NEP, EC 3.4.24.11) and aminopeptidase N (AP-N, EC 3.4.11.2). These enzymes are implicated in the rapid inactivation of endogenous circulating opioid agonists, namely the enkephalins. As a consequence, opiorphin improves the specific binding and affinity of enkephalin-related peptides to membrane opioid receptors [3]. The enkephalin neuropeptides play key roles in the control of nociceptive transmission and in the modulation of the activity of cerebral structures governing motivation and the adaptive balance of emotional states [4][5][6][7][8]. By increasing the half-life of circulating enkephalins, opiorphin, at systemically or centrally active doses (1-2 mg/kg I.V. or 5-10 µg/kg I.C.V.), produces analgesia in various murine models of pain [1,9,10]. At equivalent doses, opiorphin also exerts antidepressant-like effects in the standard model of depression, the forced swim test [11,12]. All opiorphin-induced effects are specifically mediated via endogenous enkephalin-related activation of µ and/or δ opioid pathways.
The discovery of opiorphin is the first demonstration of the existence of a physiological regulator of enkephalin bioavailability in humans. As an upstream modulator of opioid pathways in humans, it is thus of major interest from a therapeutic point of view. Indeed, endogenous human opiorphin appears to intervene in the process of adaptation mediated by enkephalins that is associated with nociception. As a consequence, opiorphin is a promising template for the design of a new class of drug candidates able to efficiently alleviate a number of severe and chronic pain syndromes, without morphine side effects. The actions of opiorphin could be induced at a specific, opioid receptor-restricted pathway dynamically stimulated by natural effectors, such as enkephalins, that are recruited according to the nature, duration, and intensity of the stimulus. This mechanism of action avoids excessive stimulation of ubiquitously distributed opioid receptors and prevents serious side effects such as respiratory depression, sedation, constipation, physical and psychic dependence, and tolerance that have been reported in the case of µ-opioid agonists. We previously demonstrated that subchronic intake of opiorphin does not lead to significant abuse liability or antinociceptive drug tolerance. In addition, anti-peristalsis is not observed [10].
Measurement of Ectopeptidase Activities using 96-well fluorimetric assays: Under conditions of initial velocity measurement (steady state), hydrolysis of substrates was measured by real-time monitoring of their metabolism rate by the respective recombinant and membrane-bound peptidases, in the presence and absence of tested inhibitory compound (concentrations ranging from 0.01 to 100 µM).
Measurement of NEP-endopeptidase activity using FRET specific peptide-substrate, Abz-dR-G-L-EDDnp: Using the black half-area 96 well micro-plate, the standard reaction consisted of enzyme (12 ng) in 100 mM Tris-HCl pH 7 containing 200 mM NaCl and 0.05% Brij 35 (100 µl final volume). The substrate (15 µM final concentration) was added after preincubation for 10 min at 28°C and the kinetics of appearance of the fluorescent signal (RFU) was directly analyzed for 20-40 min at 28°C (2 to 3 min interval successive measures) by using a fluorimeter micro-plate reader (monochromator Infinite 200-Tecan) at 320 nm and 420 nm excitation and emission wavelengths, respectively.
Measurement of NEP-CarboxyDiPeptidase activity using the FRET-specific peptide-substrate Abz-R-G-F-K-DnpOH: Using the black half-area 96-well microplate, the standard reaction consisted of enzyme (2.5 ng) in 100 mM Tris-HCl pH 6.5 containing 50 mM NaCl and 0.05% Brij 35 (100 µl final volume). The substrate (4 µM final concentration) was added after preincubation for 10 min and the kinetics of appearance of the fluorescent signal (RFU) was directly analyzed for 20-40 min at 28°C (2 to 3 min-interval successive measures) using the fluorimeter reader at 320 nm excitation and 420 nm emission wavelengths.
In addition, the intra-molecularly quenched fluorogenic peptide, Mca-BK2 (2.5 µM final concentration), was submitted to hydrolysis by 2 ng rhNEP under the same experimental conditions as those described above. Under these conditions the hNEP-enzyme acted upon Mca-R-P-P-G-F-S-A-F-K-(Dnp)-OH as a CarboxyDiPeptidase preferentially cleaving the A-F bond but also as an EndoPeptidase cleaving the G-F bond.
To measure ECE1-ectopeptidase activity, the same protocol described previously was applied except that Mca-BK2 substrate was used at 7.5 µM final concentration and rhECE1 at 5 ng final concentration.
Measurement of DPPIV activity using the FRET-specific peptide-substrate G-P-7-amido-4-Mca: Using the black half-area 96-well microplate, the standard reaction consisted of enzyme (7 ng) in 100 mM Tris-HCl pH 8 (100 µl final volume). The substrate (5 µM final concentration) was added after preincubation for 10 min and the kinetics of appearance of the fluorescent signal (RFU) was directly analyzed for 20-40 min at 28°C (2 to 3 min-interval successive measures) using the fluorimeter reader at 380 nm excitation and 460 nm emission wavelengths.
A major limitation of peptides as drug candidates is their rapid degradation by circulating peptidases and the limited permeation of peptides across biological barriers. In order to search for functional derivatives of opiorphin endowed with improved half-life stability and bioavailability properties, a Structure-Activity Relationship (SAR) study on opiorphin was first carried out. Then, opiorphin analogs were tested for their inhibitory potency against the two membrane-anchored human ectoenzymes, NEP, which has both endopeptidase and carboxydipeptidase activities, and AP-N, by using selective fluorescence-based enzyme in vitro assays [13][14][15]. Comparative degradation kinetics were performed using experimental in vitro systems to evaluate metabolic half-life in human plasma, which is a reliable prediction model for in vivo stability. Metabolic stability parameters in human liver microsomes were also determined.
The final aim of the research described here was to design and analyze functional analogs of opiorphin that display in vivo bioavailability properties superior to the native peptide, in particular an increase in resistance to circulating peptidases and in permeation across epithelial and endo-epithelial membrane barriers, without affecting the in vitro and in vivo biological properties, namely selective inhibition of human NEP and AP-N ectoenkephalinases and potent inhibition of pain behavioral responses in a rat model.
Chemicals
All peptides, human opiorphin and opiorphin derivatives, were synthesized by Genosphere Biotechnologies (Paris-France). Analytical RP-HPLC and electrospray MS confirmed the purity (≥ 95%) and molecular mass of the synthesized peptides.
FRET-based Enzyme In Vitro assays
Formal kinetic analysis was performed for each assay using real-time fluorescence monitoring of specific substrate hydrolysis.
Sources of the human ectopeptidases:
Human recombinant NEP and human recombinant AP-N (devoid of their respective N-terminal cytosolic and transmembrane segments) were purchased from R&D Systems (France) and used as a pure source of peptidases. Membrane-anchored NEP and AP-N expressed by a human cell line in culture (serum-free medium), namely LNCaP epithelial prostate cells, were also used as a source of native ectopeptidases. Human recombinant DPPIV (dipeptidyl aminopeptidase IV) and human recombinant ECE-1 (endothelin-converting enzyme), purchased from R&D Systems, were used to assess compound specificity.
FRET is the distance-dependent transfer of energy from a donor fluorophore (Abz = ortho-aminobenzoyl or Mca = 7-methoxycoumarin-4-acetyl) to a nearby quencher group (EDDnp or Dnp) within the same peptide substrate; enzymatic cleavage separates the donor-quencher pair and releases a fluorescent signal.

Measurement of AP-N-ectopeptidase activity using the Ala-AMC substrate: Using the black half-area 96-well microplate, the standard reaction consisted of enzyme (3.5 ng) in 100 mM Tris-HCl pH 7.0 (100 µl final volume). The Ala-AMC substrate (20 µM final concentration) was added after preincubation for 10 min at 28°C and the kinetics of appearance of the signal was monitored for 20-40 min at 28°C using the fluorimeter reader at 380 nm excitation and 460 nm emission wavelengths.
Measurement of membrane human NEP-endopeptidase activity using tritiated substance P substrate: the method used was previously described in Wisner et al. and Rougeot et al. [1,16].
The background rate of substrate autolysis, representing the fluorescent signal obtained in the absence of enzyme, was subtracted to calculate the initial velocities in RFU (Relative Fluorescent Units)/min. Data were analyzed using Magellan 6.0 software to evaluate initial velocities and with Microsoft Excel software. IC50 estimates were obtained from a sigmoidal curve fit to a plot of % inhibitory activity versus log inhibitor concentration, using Prism software. For each curve, inhibitors were tested across a range of concentrations differing in half-log-unit increments.
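For readers without access to Prism, an equivalent sigmoidal (four-parameter logistic) fit can be reproduced with open scientific-Python tools. The sketch below is a hypothetical illustration of this analysis step, with made-up dose-response data rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(log_c, bottom, top, log_ic50, hill):
    """% inhibition as a sigmoidal function of log10(inhibitor concentration)."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ic50 - log_c) * hill))

# Hypothetical data: half-log concentration steps (µM) and % inhibitory activity.
log_c = np.log10([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
inhibition = np.array([2.0, 5.0, 12.0, 28.0, 50.0, 72.0, 88.0, 95.0, 98.0])

popt, _ = curve_fit(four_param_logistic, log_c, inhibition, p0=[0.0, 100.0, 0.0, 1.0])
print(f"IC50 = {10.0 ** popt[2]:.2f} µM, Hill slope = {popt[3]:.2f}")
```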
In vitro pharmacokinetic and metabolic studies: Human blood was collected in pre-chilled tubes containing 1% sodium citrate (buffered at pH 7) and kept at 4°C. The plasma was collected after centrifugation at 400 × g for 30 min at 4°C, then aliquoted and stored at -80°C.
Peptide solutions were extemporaneously prepared in order to add the appropriate concentration of peptide in a volume of 10 µl, thus avoiding dilution of the plasma. The plasma peptide solutions were then mixed and incubated in a shaking water bath at 37°C with continuous, slight shaking for the preset kinetic time period. The reaction was stopped by cooling the tubes simultaneously in ice and by the addition of HCl at 0.1N final concentration. For opiorphin PK experiments, a mixture of 1 µg or 40 µg QRFSR-peptide, containing 100 or 500×10³ cpm QR[³H-F]SR (3.6 Ci/mmole, CEA-Saclay), was used. Controls, in which protease-free human plasma (methanol/TFA extract) was substituted for fresh plasma, were included. For certain experiments, different inhibitors of plasma peptidases were added immediately prior to the addition of opiorphin-peptide: bestatin, an inhibitor of aminopeptidases, or GEMSA, an inhibitor of carboxypeptidase B.
All samples were stored at -80°C until subjected to Sep-Pak extraction and RP-HPLC chromatography.
C18 Solid-phase extraction: Acidified (HCl 0.1N final concentration) and clarified biological samples were applied to C18-SepPak cartridges (Waters, France) preconditioned with three successive cycles of methanol (Lichrosolv, Merck) and pure water and ultimately maintained in 0.1% TFA-water. After applying the samples to the top of the cartridge and washing with 0.1% TFA-water (5 ml), the analytes were eluted with 100% methanol containing 0.1% TFA (5 ml). The fractions were collected at 4°C, frozen at -80°C, and then lyophilized at -110°C for 48 h. Under these conditions, recovery of the marker QR[³H-F]SR-opiorphin added to plasma samples was 76 ± 5% (mean ± SD for n=20).
Finally, dried extracts were re-suspended in 250 µl pyrolyzed water at 4°C then centrifuged 30 min at 4000 rpm and +4°C to quantify opiorphin-related components by radioactivity measurement (radiometer Wallac, PerkinElmer) and RP-HPLC in conjunction with PDA and radiometer analyses and/or ELISA-Opiorphin immunoassays.
Reverse phase C18-HPLC Chromatography: RP-HPLC, coupled with online PDA (224 nm) and radiometric (150-TR PerkinElmer) detection, was used to separate, identify and semi-quantify the different opiorphin-related molecular forms contained in human plasma extracts from in vitro PK experiments. Reversed Phase-High Performance Liquid Chromatography (RP-HPLC) used a C18-bonded stationary phase and an acetonitrile mobile phase in the presence of 0.1% trifluoroacetic acid (TFA, Sigma-France).
The re-suspended extracts (equivalent to 100 µl initial plasma volume), obtained during the above-described procedures, were applied to the top of the C18/RP-HPLC analytical column (150×4.5 mm Luna 5 µ, Phenomenex-France) under TFA 0.1%-water solvent equilibrium conditions. The various components were eluted and isolated according to their hydrophobic characteristics in a 25-min linear gradient from 0% to 50% acetonitrile (Lichrosolv, Merck) containing 0.1% TFA, at a 1 ml/min flow rate (Surveyor HPLC system, Thermo Scientific-France). The entire HPLC system was thermoregulated at 12°C. Eluted fractions (1 ml) were collected at 1-min intervals and lyophilized at -110°C for 48 h. Each chromatographic profile was driven, integrated, and analyzed by the ChromQuest software. The peak height values of each peak of interest, as well as those of a defined inner standard peak, were calculated. In opiorphin PK experiments, the radioactivity content of each sample, i.e., crude plasma, plasma extracts, and HPLC fractions, was determined to evaluate the recovery of each processing step.
The opiorphin-like content of samples (SepPak extracts and/or HPLC fractions) was also measured using a quantitative and specific immunoassay (competitive-ELISA) developed in the laboratory [17].
Immunoassay for opiorphin:
The recently published protocol was used to assess the opiorphin-like content of samples [17]. Optimized assay conditions are summarized as follows. For the coating, 40 ng of the Y-[(CH2)12]-QRFSR peptide per 200 µl coating buffer (100 mM potassium phosphate, pH 7.1) were added to individual wells of a 96-well microtitration plate and incubated overnight at +4°C with light shaking. In parallel, 100 µl of standard or samples, serially diluted 2-fold with incubation buffer (200 mM Tris-HCl, pH 7.5 + 150 mM NaCl + 0.1% Tween 20 + 0.1% bovine serum albumin), were preincubated in Screen Mates tubes (Matrix, Thermo Scientific-France) overnight at 10°C in the presence of 100 µl anti-opiorphin antibody diluted at 1/80,000. The following day, after washing 5 times with washing buffer (1 tablet PBS-Sigma in 200 ml pure pyrolyzed water + 0.1% Tween 20), 250 µl of saturation buffer (20 mM Tris-HCl, pH 7.5 + 150 mM NaCl + 0.1% Tween 20 + 0.5% gelatin) were added to the individual coated wells and incubated for at least 1 h at 20°C. Then, after washing, 100 µl of the preincubated immunological reaction were transferred onto the coated and saturated microtitration plates and incubated for 1 h 30 min at 10°C in a humid atmosphere. After washing, 100 µl of anti-rabbit IgG conjugated to HRP (Pierce, ThermoScientific-France), diluted at 1/3,000 in Tris buffer (20 mM Tris-HCl, pH 7.5 + 150 mM NaCl + 0.1% Tween 20 + 0.1% BSA), were added to each well and incubated for 1 h at 20°C. After an ultimate wash, 100 µl of the HRP chromogenic substrate (StepUltraTMB-ELISA, ThermoScientific-France) were added and incubated for 30-45 min at 20°C. Finally, the reaction was stopped by adding 100 µl 4N H2SO4. Plates were read at 450 nm with a microplate spectrophotometer (Infinite M200, Tecan-France) and the results were analyzed.

In vivo studies using a rat pain model

Animals: Male Wistar rats (Harlan, France) weighing 250-280 g were used in this study. After a 7-day acclimatization period, they were weighed and randomly housed according to treatment group in a room with a 12 h alternating light/dark cycle (9:00 pm/9:00 am) and controlled temperature (21 ± 1°C) and hygrometry (50 ± 5%). Food and water were available ad libitum. Each animal was tested only once.

Chemicals: The opiorphin analog (Genosphere Biotechnologies, France) was dissolved in vehicle solution (55% PBS 100 mM-45% acetic acid 0.01N) and systemically (I.V.) injected, 10 to 15 min prior to the behavioral tests, at doses ranging from 0.5 to 2 mg/kg body weight. Morphine HCl (Francopia, France) was dissolved in saline (0.9% sodium chloride in distilled water) and injected I.V. 15 min before the behavioral test, at a 2 mg/kg dose. All drugs were administered in a volume of 1 ml/kg body weight.
The Formalin Test:
The previously described protocol [1,10,16] was used to assess the analgesic potency of the opiorphin analog in a chemically induced inflammatory pain model. Groups of 8 rats were used for each experiment. Fifty µl of a 2.5% formalin solution were injected under the surface of the left hind paw 10-15 min after I.V. injection of the opiorphin analog, morphine, or vehicle. The duration of formalin-injected paw licking and the number of inflamed paw flinches and body tremors were recorded for a period of 60 min after formalin administration. The behavioral scores were expressed as means ± standard error of the mean (SEM) for n=8 rats.
Statistical Evaluation:
The significance of differences between groups was evaluated using the Kruskal-Wallis one-way analysis of variance (KWT, a non-parametric method) for comparison between several independent groups across the experimental conditions. When a significant difference among the treatments was obtained, the Mann-Whitney post hoc test (MWT) was applied to compare each treated group to the control. For all statistical evaluations, the level of significance was set at P < 0.05. All statistical analyses were carried out using the StatView®5 statistical package (SAS Institute, Inc., USA).
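The same two-step scheme, an omnibus Kruskal-Wallis test followed by Mann-Whitney post hoc comparisons against the control, can be sketched with SciPy as below; the group values are hypothetical, and StatView was the package actually used in the study.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical paw-licking durations (sec), n = 8 rats per group.
vehicle = [150, 172, 160, 145, 181, 158, 166, 155]
dose_1mg = [95, 80, 102, 88, 76, 99, 91, 84]
dose_2mg = [70, 82, 75, 68, 79, 85, 72, 77]

# Non-parametric omnibus test across the independent groups.
h_stat, p_kw = kruskal(vehicle, dose_1mg, dose_2mg)

# Post hoc pairwise tests against the control, only if the omnibus
# test is significant at the study's alpha of 0.05.
if p_kw < 0.05:
    for label, group in [("1 mg/kg", dose_1mg), ("2 mg/kg", dose_2mg)]:
        _, p_mw = mannwhitneyu(group, vehicle, alternative="two-sided")
        print(f"{label} vs vehicle: P = {p_mw:.4f}")
```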
Structure-activity relationship study
In order to identify the amino acid residues or functional groups required for opiorphin inhibitory potency toward both AP-N and NEP human ectopeptidases, the molecular relationship of structure to activity, namely the Structure-Activity Relationship (SAR), of the opiorphin native peptide was first investigated. The inhibitory activity of each modified compound was evaluated toward human recombinant NEP (rh-NEP) and AP-N (rh-AP-N); the residual enzyme activity was measured by continuous fluorimetric assays in the presence of the specific fluorescent substrate.
This analysis first demonstrated the importance of the free C-terminal carboxyl group of the QRFSR-COOH peptide for inhibitory potency toward rhNEP, in particular rhNEP CarboxyDiPeptidase activity. Indeed, amidation of the C-terminus (QRFSR-CONH2) gives rise to a compound displaying diminished inhibitory potency toward rhNEP.
It also demonstrated the key role played by the aromatic side chain of the Phe3 residue (QRFSR) in the inhibitory potency of opiorphin toward rhNEP and rhAP-N activities. Indeed, substitution with a Tyr residue (QRYSR) led to a compound displaying up to an 8-fold decrease in rhAP-N inhibitory potency and a slight decrease in rhNEP inhibitory potency. Substitution by an Ala residue led to a compound with completely abolished inhibitory potency toward both rhNEP and rhAP-N [18].
The central RFS residues of the QRFSR peptide are also important for the inhibitory potency of opiorphin toward rhNEP. The compounds QRGPR, QHNPR, and QRFPR displayed equivalent inhibitory potency toward rhAP-N but a low or totally abolished inhibitory potential toward rhNEP.
Finally, the guanidinium side chains of the Arg2 (R2) and Arg5 (R5) residues are important for the inhibitory potency of opiorphin toward rhAP-N. Indeed, their respective substitution by the ε-amine side chain of a Lys residue (QKFSR and QRFSK) led to compounds displaying more than a 10-fold decrease in rhAP-N inhibitory potency while showing equivalent rhNEP inhibitory potency. Their respective substitution by an Ala residue confirmed these results [18].
In summary, there is a clear structural selectivity in the functional interaction of opiorphin with both human NEP and AP-N ectoenkephalinases. The aromatic residue of Phe 3 plays a critical role in the interactions of opiorphin with both targets. In addition, the C-terminal FSR tri-amino acids constitute the minimal active sequence for NEP inhibition; moreover, FSR-peptide is 10 times more active than the natural QRFSR-peptide in its inhibition potency toward rhNEP. Conversely, it seems that the entire amino acid sequence of opiorphin is required for full rhAP-N inhibition. In general, our results demonstrate that any change in the intra-peptide sequence inhibits or even abolishes at least one of the two inhibitory activities. In contrast, addition of an amide link with a Tyr residue at the N-terminal position of the peptide ([Y]-QRFSR) does not reduce the inhibitory potency of the peptide toward either human target and does not affect its antinociceptive potency in a pain rat model [1].
Metabolism of the native opiorphin peptide
In order to evaluate the half-life of circulating opiorphin in the human bloodstream, the fate of the natural peptide was analyzed using in vitro kinetic models. The metabolic profile of the opiorphin native peptide in human plasma as a function of incubation time at 37°C is shown in Figure 1A. The major metabolism products were generated following a 60-min incubation period of 1 µg QRFSR/QR[³H-F]SR per 500 µl of fresh human plasma; among them, the pGlu1-RFSR peptide, formed by cyclization of the N-terminal Gln, appeared to be hydrolyzed by plasma aminopeptidases at a rate comparable (Figure 1A) to the parent Gln1-RFSR-peptide. This result does not concur with a previous report indicating that pGlu formation (in enzymatic or non-enzymatic processes) minimizes susceptibility to degradation by aminopeptidases [19]. It is also interesting to point out that the pGlu1-RFSR peptide is an efficient NEP inhibitor. To a lesser extent, a more hydrophilic molecular population was also observed on the HPLC profile, reaching a maximum from the 2-min time point and remaining stable at about 12% over the 30-min incubation period. The chromatographic and kinetic behaviors of this population lead us to suggest that it could result from an opiorphin-related product binding to a human plasma component.
Selection of potent bioactive opiorphin peptidomimetics:
The peptidomimetic strategy consists of altering the physical characteristics of a peptide without changing its biological activity. Here we wished to design and select functional derivatives of opiorphin that would display in vivo bioavailability properties superior to the native peptide, in particular increased resistance against proteolytic degradation. Several modifications are known to improve the metabolic stability of peptides. Conventional modifications consist of protecting the NH2- and COOH-terminal ends by N-acetylation and C-amidation, respectively. However, the SAR studies (see above) reveal that these modifications inhibit or even abolish opiorphin inhibitory potencies. Alternatively, amino acids can be selectively substituted with non-natural amino acids, most notably by a D-enantiomer or β-amino acid [20]. However, as previously reported, changes in the structural conformation of the N- and C-terminal amino acids (N- and C-terminally homologated opiorphin, β2hGln-Arg-Phe-Ser-β3hArg), while increasing by about 7-fold the metabolic half-life of the modified opiorphin in human plasma, reduced by up to 10-fold its inhibitory potency toward both targets. This indicated that relocation of the terminal carboxy and/or amino groups has an impact on opiorphin interaction with the enkephalin-inactivating NEP and AP-N [21].
A third possibility to increase the enzymatic stability of peptides is to reduce their peptide character (pseudopeptides), substituting peptide bonds with isosteric surrogates. The isosteres most frequently used are the reduced peptide bond (methyl-amino, CH2-NH), the retro-inverso link, the aza group, or polyethylene chain spacers such as (CH2)6 or (CH2)12. Depending on the chemical residue incorporated, the most direct consequences are increased resistance to the lytic action of circulating peptidases and an increase in lipophilicity that serves to facilitate transport across biological barriers [20,22]. However, such chemically stabilized peptides can lose some, if not all, of their biological activity, such as the retro-inverso D-amino acid opiorphin analog that lost its ability to inhibit NEP (unpublished observations by Rougeot C).
Consistent with the above, most of the QRFSR-peptide changes failed to reproduce the biological activity of the natural peptide. However, a series of opiorphin derivatives were screened and selected step by step on the basis of their dual inhibitory potency for hNEP and/or hAP-N. To test for specificity, hit compounds were further tested with respect to other members of the metallo-ectopeptidase family, such as DPPIV and ECE. Here we present only the functional opiorphin derivatives displaying significant in vitro inhibitory activity toward human NEP and AP-N.

The opiorphin-related molecular forms generated in fresh human plasma were isolated by RP-HPLC in conjunction with PDA (224 nm) and radiometric detection and semi-quantified using ChromQuest software. The data are expressed as relative peak height. The addition of a tracer quantity of tritiated opiorphin established the drug plasma concentration with high precision, even for small amounts of compound not usually detected using standard PDA detection. Finally, analyses with Kinetica software were used to predict, from the concentration-time course, the metabolic half-life of the native compound, resulting either from plasma-induced hydrolysis and/or chemical changes.
Native QRFSR-peptide disappears from human plasma with a metabolic half-life evaluated at 5 min (R²=0.88, n=5 time points over the 8-min time course of incubation). One metabolite appeared as early as 1 min after incubation, reaching maximum relative levels after 30 min of incubation. Its appearance inversely correlated with the disappearance of the Gln1-RFSR native peptide (Figure 1A). The maximal appearance after 30 min incubation was blocked in the presence of 150 µM bestatin, a selective inhibitor of aminopeptidases (Figure 1B). In contrast, its appearance was not affected by 150 µM GEMSA, a selective carboxypeptidase B inhibitor (Figure 1B). This result suggests that opiorphin is primarily hydrolyzed to an RFSR-peptide metabolite by the activity of a plasma exo-aminopeptidase, potentially a glutamyl peptidase. Interestingly, the RFSR-peptide is about 3-fold less inhibitory than the native QRFSR-peptide toward both rhNEP and rhAP-N. To increase the 5-minute half-life of native opiorphin, changes were designed at the level of this sensitive site.
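The sketch below shows how such a half-life can be estimated from a concentration-time course under a first-order (log-linear) decay assumption; the time points and remaining fractions are hypothetical, and the study itself used Kinetica for this prediction.

```python
import numpy as np

# Hypothetical fraction of parent peptide remaining in plasma over time (min).
t_min = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
fraction = np.array([1.00, 0.76, 0.57, 0.44, 0.33])

# First-order decay: ln(fraction) = -k * t, fitted by linear least squares.
slope, intercept = np.polyfit(t_min, np.log(fraction), 1)
half_life_min = np.log(2.0) / -slope
print(f"Estimated metabolic T1/2 = {half_life_min:.1f} min")  # about 5 min
```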
Two additional radioactive molecular populations were distinguished on the RP-HPLC chromatograms during the time course of incubation.
[C]-QRFSR peptide
We previously showed that addition of a Tyr residue at the N-terminal position of the opiorphin-peptide does not affect its in vitro inhibitory potency or its in vivo antinociceptive properties [1]. Potent NEP and AP-N inhibitors were designed on the basis that the molecules contain a strong metal-coordinating group [23]. These observations were also used to design an opiorphin peptidomimetic carrying at the N-terminal moiety a Cys-thiol functional group that is a strong Zn atom-coordinating group.
[C]-[amino-hexanoic-acid spacer]-QRFSR peptide
In an attempt to protect the opiorphin derivative against degradation by circulating aminopeptidases and thus increase its metabolic stability, a [CH2]6 polyethylene bridge [amino-hexanoic-acid spacer] was substituted for the peptide bond joining the Zn-chelating Cys0 and the Gln1 amino acids.
Surprisingly, the additional polyethylene bridge between the Cys0 and Gln1 residues of the [C]-QRFSR peptide caused a decrease in inhibitory potency, of more than one order of magnitude relative to the [C]-QRFSR peptide, toward the NEP enzyme, and particularly toward NEP-carboxypeptidase activity, whereas no difference in affinity toward AP-N was detected relative to the [C]-QRFSR peptide.
QRF-[S-O-Octanoyl]-R peptide
Comparative conformational analyses of the opiorphin peptide revealed that the hydroxyl group of the Ser4 residue does not seem to play a critical role in its bioactive conformation for hNEP [18]. Therefore, we initially tested the product resulting from esterification of the serine hydroxyl group of the QRFSR peptide by octanoic acid [(CH2)8].
As shown in Figure 2, this derivative potently inhibited the recombinant human target ectopeptidases. In the biologically relevant in vitro assay, using substance P, the physiological NEP substrate, and human cell membranes as a source of native human NEP, the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R peptide prevented, in a concentration-dependent manner, substance P cleavage mediated by membrane-bound hNEP-Endopeptidase (mhNEP-Endo) activity, with an IC50 at 1.6 ± 0.4 µM (r²=0.95, n=13 determination points) (Figure 5). Under the same assay conditions, it appears to be at least five times more potent than the opiorphin natural peptide toward hNEP [1]. In addition, using fluorescent substrates with human cell membranes as a source of native hNEP, the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R peptide inhibited, in a concentration-dependent manner, mhNEP-Endo activity with an IC50 at 1.6 ± 0.4 µM, and mhAP-N activity with an IC50 at 0.9 ± 0.1 µM (Figure 5). Thus, the designed analog presents similar affinity toward human NEP and AP-N, whether they are in a native membrane-anchored or recombinant soluble conformation.
In vitro assays using human recombinant DPPIV or ECE-1 revealed that the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R compound did not inhibit rhECE1 or rhDPPIV ectopeptidase activities, even at 100 µM final concentration. These results indicate that, similarly to opiorphin, the opiorphin analog shows excellent selectivity with respect to related zinc-metallopeptidases, such as ECE1 (closely structurally related to NEP, with 40% sequence identity) and DPPIV, which is involved, among other endopeptidases including NEP, in the inactivation of substance P and bradykinin.
Di-peptide analogs can be metabolically more resistant to peptidase degradation. We tested the cystine-dipeptide (a single disulfide bond connecting the Cys residues of two peptide monomers).
[dCys]-QRF-[Ser-O-octanoyl]-[dArg]
Another strategy to protect peptide compounds against degradation by circulating peptidases is the replacement of the N-terminal and C-terminal amino acid residues, which are major targets for degradation by circulating exopeptidases, by their respective D-enantiomers.
As shown in Figure 6, the [dC]-QRF-[S-O-(CH2)8]-[dR] derivative peptide inhibited, in a concentration-dependent manner, rhNEP-Endo activity with an IC50 at 4 ± 1 µM (r²=0.97, n=30 determination points) and rhNEP-CDP activity with an IC50 at 21 ± 1 µM (r²=0.99, n=30 determination points). Strikingly, this derivative was at least 200 times more potent against rhAP-N activity than against rhNEP, with an IC50 at 0.022 ± 0.002 µM (r²=0.98, n=43 determination points). We also used human cell membranes as a source of native membrane-bound hNEP and hAP-N and confirmed that the [dC]-QRF-[S-O-(CH2)8]-[dR] peptide displays an unbalanced inhibitory profile. Indeed, it showed a dose-dependent inhibition of mhNEP-Endo activity with an IC50 at 9 ± 1 µM (r²=0.98, n=21 determination points) and of mhNEP-CDP activity with an IC50 at 37 ± 5 µM (r²=0.95, n=21 determination points). In addition, it appeared to be 30-100 times more potent toward mhAP-N activity than toward mhNEP (IC50 at 0.3 ± 0.1 µM, r²=0.95). Furthermore, substitution of the L-Arg5 by its respective D-enantiomer clearly affected the inhibitory potency of the compound toward hNEP-carboxydipeptidase. The related [dC]-QRF-[S-O-(CH2)8]-R peptide inhibited mhNEP-CDP activity with an IC50 at 2.6 ± 0.3 µM (r²=0.98, n=30 determination points), about ten times more potent than its D-Arg5 counterpart. Such a difference leads us to propose the existence of a stereochemical requirement for optimal interaction of the peptide with the catalytic site of NEP. Conversely, substitution of the L-Cys0 by its respective D-enantiomer clearly enhanced the inhibitory potency of the compound toward hAP-N (about 50 times more potent than the L-Cys0 counterpart), which may be due to the fact that its spatial conformation provides tight binding to the AP-N target.
This [dC]-QRF-[S-O-(CH2)8]-[dR] derivative probably displays some superior in vivo bioavailability properties compared to the native opiorphin peptide, such as a possible gain in resistance to circulating amino- and carboxy-peptidases. However, its very modest gain in hNEP inhibitory potency, combined with a distinctly unbalanced bioactive profile, eliminated it as a suitable candidate molecule. Therefore, only the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R derivative was retained for further exploration.
Metabolism and Toxicity profile of the best performing opiorphin functional derivative
Metabolism in fresh human plasma: We established overall in vitro pharmacokinetic and metabolic parameters, based on an in vitro time-dependent system, using opiorphin or its derivative incubated in human plasma. Kinetica software was used in this study to predict the metabolic half-life (T½) of the parent peptide from the concentration-time course.
As shown above, in vitro kinetic analyses in human plasma revealed that the native QRFSR-peptide disappears with a half-life evaluated at 5 min. Its disappearance results in part from the cyclization of Gln1 (16% maximum) but mainly from the hydrolytic removal of both the Gln1- and pGlu1-peptides by plasma aminopeptidases (reaching a maximum of 84% of the parent peptide at 60 min incubation), and also, to a small extent (12%), from potential complex formation. In contrast, the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R derivative is more metabolically stable in human plasma than the opiorphin native peptide. In addition, it is important to point out that the major biotransformation product of the parent derivative, the cystine-dipeptide, is as active as the parent peptide in fluorescence-based NEP and AP-N assays.
Altogether, the data showed that, as expected, the opiorphin derivative is more resistant than the native peptide to degradation in human plasma.

Drug Absorption and in vitro Cytotoxicity: A range of in vitro ADME-Tox assays provided by Cerep Laboratories (Celle L'Evescault, France) allowed us to evaluate a number of factors, including drug absorption and membrane permeability, with the A-B permeability and P-glycoprotein ATPase efflux systems [24]. The Caco-2/TC7 (pH 6.5/7.4) human cell line gives an indication of the intestinal epithelial transport potential of compounds [24]. Metabolic stability, using human liver microsomes, and in vitro cytotoxicity, in cell-based assays that measure cellular parameters such as cell viability, nuclear size, and mitochondrial membrane potential using the HepG2 human cell line, can also be evaluated.
Our data demonstrate that, compared to the reference positive and negative controls, no apparent in vitro human cell toxicity is observed for either the QRFSR native peptide or the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R derivative peptide, incubated at 10, 30, and 100 µM final concentrations for 72 h at 37°C. For example, relative to controls at 100 µM, the peptides increased cell proliferation by 1 and 12%, respectively, and reduced nuclear size and mitochondrial membrane potential by only 1 and 5%, respectively. However, there is a clear decrease in the metabolic stability of the opiorphin derivative in the presence of human liver microsomes compared with the native opiorphin peptide: at 10 µM final concentration and after 60 min incubation, 2.5% of the parent [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R compound remains, versus 47% remaining in the case of opiorphin. Surprisingly, the opiorphin derivative, although endowed with higher lipophilicity than the opiorphin native peptide, did not display significantly increased trans-membrane cell permeability over the 60-min incubation period at 37°C, as the apparent permeability coefficient of both tested compounds was <0.2×10⁻⁶ cm/s (10 µM test concentration and HPLC-MS/MS detection method). However, this result is probably due to the cellular model used, namely TC7 human epithelial intestinal cells derived from the Caco-2 cell line, known to express membrane-bound NEP and AP-N ectoenzymes. The cell line, therefore, is not an appropriate model for permeability studies of NEP- and/or AP-N-inhibitor ligands. Indeed, the mean recovery of the compounds in donor samples was dramatically low (0% for QRFSR and 14% for the derivative), due mainly to binding to TC7 cell membranes.
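For reference, the apparent permeability coefficient quoted above is conventionally computed as Papp = (dQ/dt) / (A × C0); the sketch below applies this standard formula to hypothetical numbers, not to the study's raw data.

```python
def apparent_permeability(dq_dt_nmol_per_s, area_cm2, c0_nmol_per_cm3):
    """Apparent permeability Papp = (dQ/dt) / (A * C0), returned in cm/s."""
    return dq_dt_nmol_per_s / (area_cm2 * c0_nmol_per_cm3)

# Hypothetical Caco-2/TC7 assay numbers: 10 µM donor concentration
# (= 10 nmol/cm^3), 1.12 cm^2 insert area, and 2e-6 nmol/s of compound
# appearing in the receiver compartment.
papp = apparent_permeability(2e-6, 1.12, 10.0)
print(f"Papp = {papp:.1e} cm/s")  # ~1.8e-7 cm/s, i.e. below 0.2e-6 cm/s
```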
We then tested in vivo acute toxicity using a rat model provided by CERB (Centre de Recherches Biologiques, Baugy, France). The CERB experimental conditions are based on a stepwise procedure; each step used 3 male rats per compound. No mortality occurred among the animals treated with the QRFSR natural peptide at the 100 mg/kg maximum dose, administered as a bolus in the caudal vein. This dose is 100-fold the effective I.V. dose in the rat pain model. In contrast, the rat treated with a 100 mg/kg dose of the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R analog died 3 minutes after treatment; however, no mortality occurred among the 3 animals treated at 30 mg/kg I.V. These animals were further observed for general clinical and neurobehavioral signs, based on the Irwin method, for 14 days [25]. No clinical signs were observed during the course of the study of both peptides. Body weight gain was normal and no gross organ or tissue changes were detected by necropsy.
In conclusion, under the experimental conditions adopted by CERB, the opiorphin natural QRFSR peptide administered intravenously at 100 mg/kg, and the designed analog at 30 mg/kg, produced no apparent sign of acute toxicity. In the formalin test, the duration of formalin-injected paw licking and the number of inflamed paw flinches and body tremors were recorded over the 60-min test period. The formalin test measures the behavioral response to chemically induced inflammatory nociception, which induces two distinct nociceptive phases separated by a stationary interphase: an early acute phase (first 10 min after formalin injection) followed by a late phase in which a more tonic pain is elicited.
Here, we demonstrate that the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R functional opiorphin analog inhibits, in a dose-dependent manner, the pain behavior induced by long-acting chemical stimuli, with significant antinociceptive effects at 0.5, 1, and 2 mg/kg I.V. doses over the early and late phases of the test (Figure 8). Thus, compared to the control vehicle rats, the opiorphin analog-treated rats at the 1 and 2 mg/kg doses spent significantly less time in paw licking over the first 10-min test period, from 161 ± 19 sec (vehicle) to 89 ± 15 sec (1 mg/kg) and 77 ± 6 sec (2 mg/kg) (P<0.05 and 0.01 vs vehicle by Mann-Whitney U-test, MWT, n=8 rats/group), as did morphine-treated rats at the 2 mg/kg I.V. dose (43 ± 13 sec, P<0.01 by MWT). The 1 and 2 mg/kg-treated rats also spent significantly less time in paw licking over the second, 10-30 min period, from 468 ± 48 sec (vehicle) to 287 ± 28 sec (1 mg/kg) and 224 ± 27 sec (2 mg/kg) (P<0.01 and 0.001 vs vehicle by MWT, n=8 rats/group). The 0.5 mg/kg-treated rats also spent at least 30% less time in inflamed paw licking over the pain periods: 98 ± 10 sec (early phase) and 334 ± 54 sec (late phase), compared to vehicle-treated rats at 161 ± 19 sec and 468 ± 48 sec, respectively (P<0.05 vs vehicle by MWT, n=8 rats/group). From 30 min post-formalin injection, the duration of paw licking decreased in a parallel manner in both vehicle- and opiorphin analog-treated rats, and differences in their behavioral responses to the test compound as well as to morphine were not significant. Conversely, during this 30-60 min period, the control vehicle rats exhibited an important increase in the total number of formalin-injected paw flinches and body tremors, and systemic administration of the opiorphin analog at 2 mg/kg significantly reduced this pain behavioral score throughout the 30-60 min time period, from 300 ± 31 (vehicle) to 218 ± 15 (P ≤ 0.05 vs vehicle by MWT, n=8 rats/group).
This model was previously used for testing native opiorphin activity and we demonstrated that opiorphin, at 1 and 2 mg/kg I.V. doses inhibits nociception in both acute early and tonic late phases of the test by primarily activating µ-opioid pathways [1,10].
Thus, our data clearly indicate that the [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R opiorphin analog inhibits nociception induced by acute and long-acting chemical stimuli in the rat model. Strikingly, although metabolically more resistant and more potent in its ability to inhibit enkephalin-degrading ectopeptidases, the opiorphin analog induced a pain reduction in the formalin test similar to that of the opiorphin natural peptide, in terms of dose effect, delay, and duration of action. This could be due to the loss of a significant proportion of the active derivative by dimerization and/or by hepatic metabolism in vivo in rats.
Conclusion
The goal of the study described here was to design and characterize functional analogs of opiorphin that display in vivo bioavailability properties superior to the native peptide. The inhibitory potencies of the main functional derivatives toward human NEP and AP-N are summarized in Table 1. A close structural selectivity in the functional interaction of opiorphin with both human NEP and AP-N targets was first demonstrated by SAR studies, thus limiting the possibilities for chemical changes. Nevertheless, the results of the study clearly demonstrate that addition of an N-terminal Zn-chelating group (a Cys-thiol group), replacement of the first labile peptide bond by a polyethylene surrogate (a [CH2]6 linker), and, finally, substitution of Ser4 by an octanoyl-Ser (Ser-O-[CH2]8) in the native opiorphin amino acid sequence produced a high-performing [C]-[(CH2)6]-QRF-[S-O-(CH2)8]-R derivative. This designed analog displays reinforced inhibitory potency toward hAP-N activity (more than 10-fold increase) and toward hNEP-Endopeptidase and CarboxyDiPeptidase activities (more than 40-fold increase) relative to the QRFSR natural peptide. Moreover, the analog shows increased stability in human plasma compared to unmodified opiorphin. Finally, we demonstrate that it retains the full analgesic activity characteristic of the opiorphin native peptide, in terms of delay of action and effective doses, in the behavioral formalin-induced pain rat model. If we consider that the maximum effective analgesic dose for the two compounds is 1 mg/kg I.V., the safety-effectiveness ratio is estimated at 30 for the designed analog and at 100 for the native peptide.
Legged Robotic Systems
Introduction
Walking machines have been attempted since the beginning of the technology of transportation machinery with the aim to overpass the limits of wheeled systems by looking at legged solutions in nature. But only since the last part of the 20-th century very efficient walking machines have been conceived, designed, and built with good performances that are suitable for practical applications carrying significant payload with relevant flexibility and versatility. In this chapter we have presented a survey of the variety of current solutions and prototypes of walking machines and we have illustrated fundamental characteristics and problems for their design and operation. The worldwide feasibility of walking machines is presented by discussing the activity at LARM: Laboratory of Robotics and Mechatronics in Cassino (Italy) as concerning with low-cost easy-operation solutions that can really make the walking machines available to non expert users for many applications.
Walking in Nature
Movement is a fundamental distinguishing feature of animal life. The locomotion over a surface by means of limbs or legs can be defined as walking, whatever the number of limbs or legs that are used. Different ways of walking have been achieved by the evolutionary process in nature. The vertebrate animals have a spinal column and one or two pairs of limbs, located beneath the body, that are used for walking. Arthropoda animals, including crustaceans, insects, centipedes, millipedes, symphylans, pauropodans and trilobites, are characterized by a segmented body that is covered by a jointed external skeleton (exoskeleton), with paired jointed limbs on each segment, so that they can have a high number of limbs. In this type of animal a stable walking can be achieved with a minimum of six limbs, which are located at the sides of the animal's body, since these animals cannot use the flexibility of the spinal column for regulating the positions of the masses during walking.

A large variety of efficient mechanical and physiological designs have evolved in nature in order to fit the characteristics of a given physical environment and different locomotion modes. Animals seem to have evolved to be as fast as possible, to have the best possible acceleration, maneuverability and endurance, and to have energy consumption as low as possible. However, these objectives are not always compatible with each other. For example, tortoises are designed to walk with energy consumption as low as possible, but they cannot be fast. Similarly, an animal that has been adapted to sprint as fast as possible does not have good endurance. Usually, evolution in nature can be expected to have preferred compromises between the requirements of speed, endurance, and energy consumption. Thus, a wide range of different solutions can be found in nature.

The locomotion of a legged animal can be analyzed in terms of three main components: the power source, the transmission system, and the power outputs. The power source is located in the muscles, where chemical energy is converted into mechanical energy. The power outputs are the parts of an animal that are directly in contact with the environment and produce the motion. The transmission system transmits the mechanical energy from the muscles to the power outputs. For vertebrate animals this transmission system is composed of bones and articulations, and the power outputs are usually the feet.

Legged locomotion systems that have evolved in nature show very good performances in terms of stability, payload capability, and dynamic behavior. Thus, they are usually considered a very important source of inspiration for designing legged robotic systems, for aspects ranging from the mechatronic design to path planning and gait generation. Several researchers have stressed these topics by using a multidisciplinary approach. For example, several studies have addressed the transmission system of vertebrate legged animals from a kinematic point of view. In fact, bones and articulations can be easily modeled as links and joints of a kinematic architecture. Examples of biped, quadruped and hexapod locomotion in nature are shown in Figs. 1 to 5 with their simplified kinematic architectures. Those animals have been, and still are, an inspiration both for the design and for the operation of walking legged systems. In the following, the main features are reported for each animal, but more details can be and have been considered when drawing inspiration from, or mimicking, legged systems in nature.

In particular, Fig. 1a) shows the most attractive biped locomotion: a human being. Figure 1b) shows a kinematic scheme of a human being. Each leg in Fig. 1b) can be considered to have seven degrees of freedom. In a human being the muscles are distributed so that the forward motion is more efficient than the backward and side motion. The maximum speed is about 11.0 m/s during a 100 meter run. The average weight of a human being is 650 N. Main characteristics of the human being in terms of biped locomotion are reported in Table 1, in which the data refer to general common operation. Maximum values of performance strongly depend on situations, environments of life, and training, and they can reach values even higher than those in Table 1.

Figure 1. Biped locomotion by a human being: a) a picture; b) a kinematic scheme

Figure 2a) and b) show an ostrich and a simplified kinematic scheme of its two legs, respectively. Its kinematic scheme is similar to that of a human being, and each limb can be considered to have seven degrees of freedom. Even in this case the muscle distribution facilitates forward motion. The average weight is 1,800 N, but the mass distribution provides a lower center of mass and a better attitude to running compared with humans. Even if ostriches can be classified as birds, they have lost their ability to fly; nevertheless, they can still escape predators with their fast running, at a maximum speed of 19.4 m/s during an 800 meter run. Main characteristics of the ostrich are reported in Table 1.

It is worth noting that the stability of a body in space can be guaranteed with a minimum of three points in contact with the ground. In this case, walking stability is obtained if the projection of the center of mass of the body lies within the area obtained by connecting the contact points. However, biped locomotion can provide only one or two limbs in contact with the ground. Thus, biped locomotion cannot be considered statically stable. Indeed, bipeds do not fall down because they can control the posture of their upper body in order to keep the balance in dynamic conditions. This requires clever control strategies based on the feedback of several vision, auditory, and tactile sensors. In addition, a control of the compliance through the spinal cord, muscles, and feet is used for the compensation of dynamic effects. Besides the complexity of stable biped walking (and running), the biped structure shows the most flexible locomotion in terms of obstacle avoidance and fast reshaping of the walking mode.

Figure 3a) and b) show a young child as an example of quadruped locomotion and a simplified kinematic scheme of his four limbs, respectively. In this case, feet and arms are both used as limbs for the locomotion. A clever example of using arms like legs can be recognized in monkeys, which are often considered an inspiration for robotic systems with variable capabilities. This type of locomotion is used by human beings in their first months of life, since it is statically stable and requires less sensory feedback. It is worth noting that a young child cannot achieve a very efficient quadruped motion, since human arms are not equipped with proper muscles. Thus, he starts to use biped locomotion as soon as his body and brain are capable of keeping the equilibrium in dynamic conditions. More characteristics of a young child in terms of quadruped locomotion are reported in Table 1.
Figure 4a) and b) show a horse and a simplified kinematic scheme of its four limbs, respectively. In the case of horses, quadruped locomotion can be considered very efficient. In fact, horses can be considered among the fastest legged animals, with a maximum speed of 21.1 m/s during an 800 meter run. Moreover, they show a good payload capacity and attitude to jumping. More details are reported in Table 1. Figures 5 and 6 show two examples of hexapod locomotion, a spider and a cockroach, together with simplified kinematic schemes of their six limbs. The use of six limbs provides these animals with a very good ability to move on rough terrain and also a surprisingly high payload capability. For example, a giant cockroach can move a weight of even 800 times its own weight. Nevertheless, hexapod locomotion is mainly used by animals of small size and weight. This is probably due to the complexity of the distribution of muscles for all the limbs and also to the complexity of the control strategy for the walking modes.
Existing Walking Machines
The kinematic models of Figs. 1 to 6 have inspired, and still inspire, the mechatronic design and operation of several biped, quadruped and hexapod walking machines.
In Table 2 the main characteristics are reported, with values that are indicative of design and operation performance. In the following, synthetic descriptions are given to outline basic problems and solutions in the current state of walking machines.

ASIMO (Advanced Step in Innovative MObility), Fig. 7a), was built at Honda in the year 2000. It is a biped humanoid robot having a total of 26 degrees of freedom. Its size, weight and ranges of mobility have been conceived to mimic as much as possible a human child and to move freely within the human living environment. In particular, it would be applicable to the welfare field as a walking wheelchair or as a walking support machine that is able to walk up and down stairs carrying or assisting a human. It is equipped with an on-board Nickel Metal Hydride battery for a continuous operating time of approximately 1 hour.

The RIMHO II walking robot, Fig. 9a), has been developed by the Industrial Automation Institute-CSIC and CIEMAT in Madrid since 1993. It is a quadruped walking machine of the insect type. Its four legs are based on a three-dimensional Cartesian pantograph mechanism. The RIMHO walking robot can perform both discontinuous and wave gaits over irregular terrain, including slopes and stairs, and has also been tested over natural terrain, as shown in Fig. 9a).

SCOUT II, Fig. 9b), has been developed at the Ambulatory Robotics Laboratory in Montreal since 1998. It is composed of four legs, each with only one active degree of freedom. A spring and a passive knee are added in order to provide two additional passive degrees of freedom for each leg. These passive degrees of freedom make SCOUT II capable of achieving dynamic running similar to gallop and trot. SCOUT II is fully autonomous, having on-board power, computing and sensing. Other features include an on-board pan-tilt camera system and laser sensors.

Generally, legged systems can be slow and more difficult to design and operate with respect to machines that are equipped with crawlers or wheels. But legged robots are more suitable for rough terrain, where obstacles of any size can appear. In fact, the use of wheels or crawlers limits the size of the obstacle that can be climbed to half the diameter of the wheels. On the contrary, legged machines can overcome obstacles that are comparable with the size of the machine leg. Therefore, hybrid solutions that have legs and wheels at the same time have also been developed, as shown for example in Fig. 8c) and d). This type of walking machine may range from wheeled devices to true walking machines with a set of wheels. In the first case, the suspensions are arms working like legs to overcome particularly difficult obstacles. In the second case wheels are used to enhance the speed when moving on flat terrain.

The advantage of legged systems over wheeled systems can be understood by looking at their kinematic capability and static performance. These can be deduced from the schemes of Figs. 14 and 15 for legged and wheeled systems, respectively. In Fig. 14 a biped system is represented by taking into account its weight P, the weight PL of a leg, the forward acceleration a, the reaction force R at the ground, and the actuating torque CL for a leg. The geometry of the system is modeled through the distances shown in Fig. 14, among which dL represents the step capability and h is the step height. The walking capability is given by the size of the distance dL that avoids falling of the system, together with the motion capability of each leg, which is given by the mobility range and actuating torque.
In particular, looking at the instantaneous equilibrium gives the computation of necessary conditions for stable walking and the evaluation of the maximum step height. In the sagittal plane, the equilibrium can be expressed by Eq. (1), where Rh and Rv are the horizontal and vertical components of R, and CinS is the sagittal component of the inertial torque due to the waist balancing movement. The point Q is assumed as the foot contact point about which the system would rotate in a possible fall. In the frontal plane the equilibrium can be expressed by Eq. (2), where Rl is the lateral component of R and Cinl is the lateral component of the inertial torque of the waist balancing movement. The point S is assumed as the foot contact point about which the system would rotate in a possible fall. The components Rh and Rf refer to friction actions at the foot contact area. By using Eqs. (1) and (2) it is possible to compute conditions for design and operation in an environment with obstacles of height h. From the geometric viewpoint, an obstacle/step of height h can be overpassed when the leg mobility satisfies Eq. (3), in which l1 and l2 are the lengths of the leg links, whose angles θ1 and θ2 are measured with respect to a vertical line.

Similarly, for a wheeled system the instantaneous equilibrium can be expressed, referring to Fig. 15, by Eq. (4) for case a) and by Eq. (5) for case b), with the condition of Eq. (6) for pure rolling (without sliding), where f is the sliding friction coefficient and fv (<< f) is the rolling friction coefficient; R is the reaction force at the contact point on the step wedge, with horizontal and vertical components Rh and Rv; Q is the load and weight on the wheel axis; and C is the actuating torque due to the force F that maintains the forward velocity v. The geometric limit for overpassing an obstacle/step of height h by rolling a wheel can be expressed by

r > h    (7)

when the actuating torque acts alone. Alternatively, a force must push the wheel upward or, as commonly done, the step is reshaped as an inclined plane, as shown in Fig. 15b).

An intense activity is carried out not only in designing and prototyping walking machines but also in debating and exchanging information and experiences. The first activity is carried out in research laboratories at universities, but also in research centres of governmental institutions and company divisions. Very recently, some prototypes have even been commercialized and are available in the market, for example the AIBO robot for leisure/companionship applications or the Sony SDR-4X robot for advertising purposes. The success of these preliminary attempts at practical use of walking machines stimulates more and more the development of systems for a variety of potential users. Similarly enthusiastic is the activity of circulating and publishing results and information both on research activity and on practical applications. This activity is mainly concentrated in forums such as congress events and journal publications, which can also be considered sources for further reading and continuous updating of the state of the art. The topic of walking machines is discussed in specific conferences and journals, but also in important sections of more general events. The following list is not exhaustive (also because new initiatives are continuously started), but it gives the main sources and the size of the publication activity on walking machines. Conference Event Series: International Conference on Climbing and Walking Robots (CLAWAR)
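Returning to the climbing conditions above, a minimal sketch in Python follows. The wheel test implements Eq. (7) as given; the leg test is an assumed reading of Eq. (3), whose explicit form is not reproduced in this copy, taking the available foot lift as the shortening of the two links flexed from the vertical.

```python
import math

def wheel_can_climb(r: float, h: float) -> bool:
    """Eq. (7): a driven wheel of radius r can roll over a step of height h
    under actuating torque alone only if r > h; otherwise the wheel must be
    pushed upward or the step reshaped as an inclined plane (Fig. 15b)."""
    return r > h

def leg_can_climb(l1: float, l2: float, theta1: float, theta2: float, h: float) -> bool:
    """Assumed reading of Eq. (3): the foot lift obtained by flexing the two
    leg links (angles in radians, measured from the vertical) must exceed
    the step height h."""
    lift = (l1 + l2) - (l1 * math.cos(theta1) + l2 * math.cos(theta2))
    return lift > h

# A wheel of radius 0.15 m fails on a 0.20 m step, while a leg with two
# 0.30 m links flexed to 60 degrees lifts the foot 0.30 m and clears it.
print(wheel_can_climb(0.15, 0.20))                                           # False
print(leg_can_climb(0.30, 0.30, math.radians(60), math.radians(60), 0.20))   # True
```

This illustrates the point made above: the wheel is limited by its radius, while the leg can overcome obstacles comparable with its own size.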
Gait Analysis and Design Problems
A gait is a pattern of locomotion characteristic of a limited range of speeds, described by quantities of which one or more change discontinuously at transitions to other motion patterns. A duty factor can be defined as the fraction of the duration of the stride for which each foot is on the ground. In walking, each foot is on the ground for more than half the time, and in running for less than half the time. As speed increases, the duty factor falls gradually from about 0.65 in slow walks to about 0.55 in the fastest walks; but at the change to running it drops to around 0.35. Also the forces that are exerted on the ground can significantly change as speed increases and during the transition from one locomotion pattern to another. There are several types of gaits in nature that are suitable for walking machines. The following can be considered feasible gaits:
- human-like walking behavior;
- horse-like tolt behavior;
- horse-like trot behavior;
- horse-like pacing behavior;
- horse-like canter behavior;
- horse-like gallop behavior;
- crab-like walking behavior.
In the human-like walking behavior each foot leaves the ground at a different time. This type of gait can be achieved with two or more legs. Figure 16 shows the movements of the limbs in a four-leg walk, together with the footfall formula representation. In the footfall formula representation the limbs that are in contact with the ground surface are shown as black circles in a table in which the entries represent the possible foot contacts with the ground. A similar behavior is achieved in the horse-like tolt behavior, which is a running walk used when covering broken ground. This gait can be achieved with four legs.
In the horse-like trot behavior of Fig. 17 the legs move in diagonal pairs, with a moment of suspension between each stride.
This type of gait can be achieved with four or more legs. Figure 17 shows the movements of the limbs in a four-leg trot, together with the footfall formula representation.
In the horse-like pacing behavior the legs on the same side move simultaneously. Thus, the two left legs or the two right legs are in contact with the ground while the other two legs move forward simultaneously. In the horse-like canter behavior of Fig. 18 the legs move in the following 4 phases: one hind leg; the other hind leg together with the diagonal fore leg; the other fore leg; then there is a period in which all four feet are off the ground. This type of gait can be achieved with four or more legs. Figure 18 shows the movements of the limbs in a four-leg canter, together with the footfall formula representation. In the horse-like gallop behavior of Fig. 19 the legs move in the following 4 phases: one hind leg; the other hind leg; the diagonal fore leg; the other fore leg; then there is a period in which all four feet are off the ground. This type of gait can be achieved with four or more legs. Figure 19 shows the movements of the limbs in a four-leg gallop, together with the footfall formula representation.
There are also other horse-like gaits in which there are lateral movements. Crab-like walking behavior is similar to the human-like behavior, but it makes lateral walking possible instead of forward or backward.
Increasing the number of limbs increases the number of feasible gaits and the flexibility of the motion, but at the same time it increases the complexity of the control for the coordination of the movements of each limb. Also the choice of the most convenient gait, and the transition from one gait to another, can be a difficult task for systems having a high number of limbs. These aspects are crucial issues when one tries to mimic animal or insect gaits. A simple encoding of the support patterns discussed above is sketched after this section.
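As an illustration, the footfall formulas and the duty factor can be encoded directly. The trot pattern below is a simplification of Fig. 17 and is ours, not taken from the chapter; the duty-factor thresholds are those quoted in the text.

```python
# Footfall formulas as boolean support patterns: one tuple per phase, one
# entry per limb (LF, RF, LH, RH); True marks a foot on the ground.
# Illustrative simplification of Fig. 17: diagonal pairs alternate with
# full suspension phases.
TROT = [
    (True, False, False, True),    # first beat: left fore + right hind
    (False, False, False, False),  # first suspension: all feet off the ground
    (False, True, True, False),    # second beat: right fore + left hind
    (False, False, False, False),  # second suspension
]

def statically_stable(support: tuple) -> bool:
    """Necessary condition from the text: at least three feet in contact
    (whether the center of mass projects inside the support polygon is
    not checked here)."""
    return sum(support) >= 3

def duty_factor(contact_time: float, stride_time: float) -> float:
    """Fraction of the stride for which a foot is on the ground: above 0.5
    in walking (about 0.65 down to 0.55), around 0.35 in running."""
    return contact_time / stride_time

print([statically_stable(phase) for phase in TROT])  # trot is never statically stable
print(duty_factor(0.4, 0.62))                        # ~0.65: a slow walk
```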
Low-Cost Designs at LARM in Cassino
A challenge for the practical use of walking machines, both in industrial and non-industrial environments, can be recognized in the development of design solutions and operation modes with low-cost, easy-operation features. This is the approach that has been, and still is, used at LARM to develop projects, experience, and teaching on walking machines. In this section these aspects are illustrated to show the feasibility of carrying out activity on walking machines at any level of expertise and funding.
At LARM, activity on walking machines has been addressed to biped robots and modular leg designs with low-cost components and easy operation via PLC programming. Three prototypes are illustrated: EP-WAR, the 1-DOF leg, and the leg module. The second action is performed for a right turn of an angle α about the vertical axis across the right foot.
Thus the left foot turns from (c) to (d) and is then moved to the ground in (e). Subsequently, the suction cups of the left foot in (e) are operated and those of the right foot in (a) are switched off, so that the ground contact is moved from one foot to the other. Then the right foot turns from (a) to (f) and becomes parallel to the left foot. Subsequently it moves forward and up from (f) to (g) and finally to (m). This right-turn module ends when the suction cups of the right foot in (m) are operated again and those of the left foot in (e) are switched off. Thus EP-WAR reaches the start position again and the next walking module can be performed. The corresponding diagrams for suitable flexible programming in a subroutine for the PLC are reported in Fig. 22.
The length L of a general straight path can be given by a value q* so that

L = q* p    (8)

and the subroutine represented by Fig. 22b) is repeated until the counter variable q is equal to q*. Similarly, the turning displacement of the walking robot can be performed to the right or to the left by using the corresponding Grafcet diagrams, to be repeated r* and s* times, in order to obtain a right turn angle equal to

αr = r* α    (9)

or, similarly, a left turn angle with s*, where α is the modular angular turn corresponding to a step displacement. The modular angular turn is determined by the rotation capability of the rotative cylinders, so that the radius R of the turn is fixed and equal to

R = p / (2 sin(α/2))    (10)

Finally, a general program for a generic trajectory can be easily obtained by using the abovementioned subroutines, and a user need only assemble a suitable number of each subroutine to achieve a desired trajectory of the walking robot. The control of the walking robot in a digital environment has been obtained by using a commercial PLC. The central unit can be connected as a remote terminal of the PLC with a common personal computer through a serial port RS-232 for off-line programming, even for updating the subroutines when additional features are provided to EP-WAR.

At LARM the abovementioned approaches have been extended to design new leg systems with better features, both in terms of low-cost properties and easy operation programming. Basic considerations for a low-cost leg design can be outlined as follows: the leg should generate an approximately straight-line trajectory of the foot with respect to the body; the leg should have an easy mechanical design; and, if specifically required, it should possess the minimum number of DOFs that ensures the motion capability. Among many different structures, the so-called Chebyshev-pantograph leg has been developed at LARM. The proposed leg mechanism is shown in Fig. 23. Its mechanical design is based on the use of a Chebyshev four-bar linkage, a five-bar linkage, and a pantograph mechanism. For such a mechanism, the leg motion can be performed by using 1 actuator only. The leg has been designed by considering compactness, modularity, light weight, and a reduced number of DOFs as basic objectives for achieving the walking operation. Numerical and experimental results show that good kinematic features can be obtained when points C and P in Fig. 23 are not coincident. The main characteristic of the proposed leg design consists in a fully-rotative actuation at point L to obtain a suitable trajectory of point B with one motor only, which runs continuously without any speed regulation. Furthermore, the trajectory of point B and, consequently, of point A can be suitably modified by changing the design parameters shown in Fig. 23b). In particular, better features can be obtained if the transmission angles have suitable values. The dimensions of the leg prototype are 400 mm in height and 40 mm x 250 mm in cross-section, so that it has a maximum lift of 80 mm and a step of 470 mm. In Figs. 24 and 25 the basic operation features of the Chebyshev-pantograph leg mechanism are reported through simulation results, to show the feasibility of the low-cost, easy-operation design that has been experienced successfully by using a commercial DC motor without motion control equipment.

At LARM a so-called modular anthropomorphic leg has been obtained by defining a single link module that can be easily connected with other modules and can contain all the needed actuators, transmissions and sensors. Figure 26 shows the proposed design for a single link module by using conic gears or timing-belt transmissions. The main components of a single link module are:
- the body of the module;
- a DC motor with reduction gear train;
- two conic gears or a timing-belt transmission;
- two mechanical switches.
It is worth noting that the number of link modules can be decided according to the needed number of degrees of freedom. The link modules can also be properly oriented with respect to each other in order to achieve the required pitch, yaw or roll motions. A link module can also be easily modified in order to drive a wheel. The built prototype leg, which is composed of 3 modules and one wheel in the foot, is 500 mm high and has a cross-section of 60 mm x 60 mm. The built leg prototype in Fig. 26 has a maximum lift of 155 mm and a step of 310 mm. The maximum rotation for each joint is +/- 90 deg. In Fig. 27 the programming of the walking is reported through the scheme of the analysis of elementary actions and the corresponding Grafcet for a PLC that controls the operation of the actuators by using signals from suitable switches for the leg mobility. In Fig. 28 an example is shown of using the leg for a hexapod design that has been simulated and is under construction at LARM with low-cost, easy-operation features.
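Returning to the EP-WAR path programming of Eqs. (8)-(10), a minimal sketch follows; the equations are used as reconstructed above, and the step length and modular turn values are hypothetical.

```python
import math

def straight_modules(path_length: float, step: float) -> int:
    """Eq. (8): repeat the straight-walk subroutine q* times so that
    L = q* p (rounded up here to cover the whole path)."""
    return math.ceil(path_length / step)

def turn_modules(turn_angle: float, module_turn: float) -> int:
    """Eq. (9): repeat the turn subroutine r* times so that alpha_r = r* alpha,
    where alpha is the modular angular turn fixed by the rotative cylinders."""
    return math.ceil(turn_angle / module_turn)

def turn_radius(step: float, module_turn: float) -> float:
    """Eq. (10): R = p / (2 sin(alpha/2)), the fixed radius of the turn."""
    return step / (2.0 * math.sin(module_turn / 2.0))

# Hypothetical values: a 0.2 m step module and a 15-degree modular turn.
alpha = math.radians(15)
print(straight_modules(3.0, 0.2))         # 15 straight modules for a 3 m path
print(turn_modules(math.pi / 2, alpha))   # 6 turn modules for a 90-degree right turn
print(turn_radius(0.2, alpha))            # ~0.77 m turning radius
```

This mirrors the user-level programming described above: a desired trajectory is assembled by choosing how many times each PLC subroutine is repeated.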
Conclusions
The variety of currently available walking machines offers many solutions to the problem of artificial walking for many applications. Properly combining suitable designs of mechanical architecture, actuation systems, and sensor equipment can give walking machines very good characteristics and performances, making prototypes very promising and even already available in the market. However, the challenge for the future assessment of walking machines as convenient transportation machinery can be recognized in the development of low-cost, easy-operation systems whose designs try to mimic the legged systems in nature while surpassing their performance and reducing their complexity in functionality.
Figure 2. Biped locomotion by an ostrich: a) a picture; b) a kinematic scheme
Figure 3. Quadruped locomotion by a young child: a) a picture; b) a kinematic scheme
Figure 4. Quadruped locomotion by a horse: a) a picture; b) a kinematic scheme
Figure 5. Hexapod locomotion by a spider: a) a picture; b) a kinematic scheme
Figure 6. Hexapod locomotion by a cockroach: a) a picture; b) a kinematic scheme
Figure 9. Examples of four-legged walking machines: a) RIMHO2; b) SCOUT II
Figure 14. A scheme for performance evaluation of biped walking machines: a) sagittal view; b) front view
Figure 16. Movements of the limbs in a four-leg walk with footfall formula representation (black circles stand for the limbs in contact with the ground surface): a) first beat; b) second beat; c) third beat; d) fourth beat
Figure 17. Movements of the limbs in a four-leg trot with footfall formula representation (black circles stand for the limbs in contact with the ground surface): a) first beat; b) first support phase; c) first suspension phase (all feet are off the ground); d) second beat; e) second support phase; f) second suspension phase
Figure 19. Movements of the limbs in a four-leg gallop with footfall formula representation (black circles stand for the limbs in contact with the ground surface): a) first beat; b) second beat; c) third beat; d) fourth beat; e) preparing for the suspension phase; f) suspension phase (all feet are off the ground)

Each leg mechanism of EP-WAR, shown in Fig. 20, is composed of a pantograph and a double articulated parallelogram. The pantograph has the fundamental function of transmitting the trajectory of foot point C to the actuation point H and reducing the movement size. The pantograph proportions give the same kinematic properties between points H and C of Fig. 20b), with a scaling factor equal to 4. The parallelogram mechanism ensures a pure translation of the foot in a vertical plane. A complex control for leg coordination and dynamic stability has been avoided by using suction cups beneath the feet to obtain static walking. Thus, EP-WAR can follow a general polygonal trajectory in the plane of motion thanks to a suitable mechanical design of each ankle joint with an axial ball bearing and a pneumatic rotation actuator. In order to mimic the human gait, a suitable path is required for foot point C. The foot path is imposed through the actuation point H by using the scale factor 4 of the pantograph mechanism.

Figure 20. EP-WAR (ElectroPneumatic WAlking Robot) built at LARM in Cassino: a) a prototype; b) a diagram for the leg design
Figure 21. A walking analysis for a module of the right-turn trajectory of EP-WAR by using elementary actions: a) sagittal view; b) top view
Figure 22. Flowcharts for easy programming of the right-turn walking module of Fig. 21: a) sequences of the elementary actions; b) Grafcet diagram
Figure 24. Simulation of the walking characteristics of the 1-DOF leg in Fig. 23 with p = 20 mm and h = -30 mm: a) point C trajectory; b) point A velocity; c) acceleration aAX; d) acceleration aAY; e) acceleration aAx; f) acceleration aAy
Figure 26. The so-called modular anthropomorphic leg developed at LARM: a) a built prototype; b) the mechanical design

The reported examples in Figs. 7 to 13 give an illustrative view of the variety of walking systems that have been developed all around the world with different solutions for different applications. It is worth noting that most of them (with the exception of AIBO) are not yet available in the market, but they are under further development in research labs.
Comparison of Total Endoscopic Ear Surgery and Microscopic Postauricular Canal-Wall-Down Approach on Primary Acquired Cholesteatoma
Background: This study aimed to compare total endoscopic ear surgery (TEES) and microscopic postauricular canal-wall-down tympanomastoidectomy (CWD) in cholesteatoma surgery in our clinic. Methods: This study included 59 patients, of whom 30 and 29 were operated on with CWD in 2016-2018 and TEES in 2019-2021, respectively, and compared the groups regarding intraoperative findings, hearing outcomes, long-term outcomes, and recidivism rates. This study excluded patients in stage IV according to the European Academy of Otology and Neurotology/Japan Otological Society Staging System on Middle Ear Cholesteatoma, patients aged < 18, those with congenital cholesteatoma, and those who underwent revision surgery. Results: Two patients in the TEES group had recidivism (6.9%), with recurrent disease observed in both patients and residual disease in none, whereas 3 patients in the CWD group had recidivism (10%), including recurrent disease in 2 and residual disease in 1 patient. Tympanic membrane perforation occurred in 2 (6.9%) and 1 (3.3%) patients in the TEES and CWD groups, respectively. The 2 groups revealed no significant difference in terms of recidivism and perforation rates (P = 1.000, P = .612). The CWD group had a longer mean operation time (225.54 ± 47.86 minutes) than the TEES group (160.55 ± 24.98 minutes) (P < .001). The 2 groups demonstrated no significant difference regarding pre- and postoperative air–bone gap (ABG) and ABG gain (P = .105, P = .329, P = .82, respectively). Conclusion: Total endoscopic ear surgery provides results similar to the microscopic CWD approach in terms of hearing, recidivism, and long-term outcomes. However, the CWD approach remains important, especially in patients at advanced stages.
INTRODUCTION
Different theories have been proposed for the pathogenesis of primary acquired cholesteatoma, but with no general agreement. 1,2 Surgery is the only treatment option for cholesteatoma; however, consensus on the ideal surgical approach remains unestablished.
In the past, microscopic postauricular or transcanal approaches were classically used for the treatment of cholesteatoma; however, the endoscopic transcanal approach has been widely adopted recently because of its wider view, lack of a separate incision, and less invasiveness. 3 Endoscopic ear surgery can be performed in patients with advanced stages and complications, and its limitations and indications have changed over time. 4 On the other hand, the postauricular microscopic approaches remain of importance, especially for cases where the cholesteatoma extends beyond the lateral semicircular canal (LSSC).
MATERIAL AND METHODS
This study included 59 patients, of whom 30 and 29 were operated on with CWD in 2016-2018 and TEES in 2019-2021, respectively. Before 2019, CWD was routinely performed in all patients with cholesteatoma; however, after 2019, TEES was performed in all patients, excluding the limited number with extratemporal complications and those with cholesteatoma extensively spreading to the mastoid. In this study, the patients who underwent CWD before 2019 were compared with those who underwent TEES after 2019. This retrospective study reviewed patient records through our hospital's computer program (Probel, Izmir). The local ethics committee of İzmir Bozyaka Training and Research Hospital approved this study (Decision number: 2022/97). The study was conducted following the Declaration of Helsinki principles. This study included patients aged >18 years who were operated on for cholesteatoma with CWD or TEES and whose cholesteatoma was preoperatively confirmed by high-resolution temporal computed tomography (HRCT) and otoendoscopy. This study excluded patients in stage IV according to the European Academy of Otology and Neurotology/Japan Otological Society (EAONO/JOS) Staging System on Middle Ear Cholesteatoma, 5 patients aged <18, those with congenital cholesteatoma, those who underwent revision surgery, and those without proper postoperative follow-up and adequate data.
This study used the EAONO/JOS Staging System to stage the included cases. According to this system, in stage I the cholesteatoma is located in the primary site, either the attic (A) for pars flaccida cholesteatoma or the tympanic cavity (T) for pars tensa cholesteatoma, congenital cholesteatoma, and cholesteatoma secondary to a tensa perforation; in stage II the cholesteatoma is found in 2 or more sites; in stage III extracranial complications or pathologic conditions associated with the cholesteatoma are present; and in stage IV intracranial complications are present. This study excluded individuals in stage IV to ensure greater similarity between the groups. A rule-based sketch of this staging is given below.
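The sketch below is an illustrative reading of the staging rules as summarized above; it collapses the site coding (attic 'A' vs. tympanic cavity 'T') into a site count and is not a clinical tool.

```python
def eaono_jos_stage(n_sites: int,
                    extracranial_complication: bool,
                    intracranial_complication: bool) -> int:
    """Rule-based reading of the EAONO/JOS staging: stage I for a single
    primary site, stage II for 2 or more sites, stage III for extracranial
    complications, stage IV for intracranial complications."""
    if intracranial_complication:
        return 4
    if extracranial_complication:
        return 3
    return 2 if n_sites >= 2 else 1

# The study design excluded stage IV cases:
print(eaono_jos_stage(1, False, False))  # 1: cholesteatoma in the primary site only
print(eaono_jos_stage(2, True, False))   # 3: extracranial complication present
```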
All patients in the study were operated on under general anesthesia. Endoscopic ear surgery was performed as total TEES, described by Cohen et al 6 as class 3, using 0º and 30º, 3 mm rigid endoscopes (Karl Storz), a camera system, and a high-resolution monitor (Karl Storz). Tragal cartilage was harvested as graft material, and cartilage and perichondrium were used. Superior and inferior canal incisions were made in the external auditory canal skin at the 6 and 12 o'clock positions following local anesthetic infiltration. The epitympanum and mesotympanum were exposed by raising the tympanomeatal flap. The cholesteatoma was removed after the atticotomy with a curette or a drill if required, and ossiculoplasty was performed in case of ossicular chain destruction. The operation was terminated following tympanic membrane reconstruction with the perichondrium and tympanomeatal flap repositioning. There was no need to switch to a microscope in the TEES group. In cases where the cholesteatoma extended to the mastoid, transcanal inside-out mastoidectomy and atticotomy were performed in the TEES group to reach the cholesteatoma sac. Only 1 case required an endoscopic CWD procedure. In that case, mastoid obliteration and posterior canal wall reconstruction with cartilage and fascia were performed after the removal of the cholesteatoma. In other cases, in which the posterior canal wall was intact but atticotomy was performed, reconstruction was done with tragal cartilage according to the size of the defect. Figures 1A, B, and C show a right attic cholesteatoma and retraction pocket, the oval window after the removal of the cholesteatoma, and ossiculoplasty with a total ossicular reconstruction prosthesis (TORP) during TEES.
In the CWD group, the graft was harvested from the temporal muscle fascia or tragal cartilage following the postauricular incision.Subsequently, tympanomeatal flap elevation, cholesteatoma sac exposure, and canal-wall-down mastoidectomy were performed with the microscope (Zeiss OPMI Vario 700, Jena, Germany), and the cholesteatoma was removed.Meatoplasty was performed after ossiculoplasty and tympanic membrane reconstruction with the temporal muscle fascia or tragal cartilage perichondrium.
Age, gender, side, graft material used, and tympanoplasty type were recorded for all patients, as well as postoperative complications, such as facial palsy, vertigo, otorrhea, and perforation, and intraoperative features, such as facial nerve dehiscence, labyrinthine fistula, and tegmen defect. Patients' complaints, preoperative temporal bone HRCT images, pre- and postoperative otoscopic and endoscopic examination notes, surgery notes, and pre- and postoperative audiograms were examined. All patients were assessed with pure-tone audiograms and speech discrimination scores preoperatively and 6 months postoperatively. The 4-tone pure-tone average was used to determine hearing levels. The audiological outcomes were reported following the Committee on Hearing and Equilibrium criteria. 7 Diffusion-weighted magnetic resonance imaging (MRI) was used for patients with suspected recurrent or residual cholesteatoma at postoperative examinations.
Our study defined recurrent disease as tympanic membrane/attic retraction detected on clinical examination, and residual disease as cholesteatoma detected behind an intact tympanic membrane on diffusion MRI or at second-look surgery, as described by Killian et al. 8 Recidivism was used to denote the sum of recurrent disease and residual disease. 9 Auditory success was defined as an air-bone gap (ABG) of ≤20 dB or a 10 dB gain in air conduction on postoperative audiometry. Patients with facial paralysis were staged according to the House-Brackmann facial nerve grading system.
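These outcome definitions can be written compactly; the sketch below simply restates them and the example values are hypothetical.

```python
def recidivism(recurrent: bool, residual: bool) -> bool:
    """Recidivism as defined above: recurrent disease (retraction seen on
    examination) or residual disease (cholesteatoma behind an intact drum
    on diffusion MRI or second-look surgery)."""
    return recurrent or residual

def auditory_success(post_abg_db: float, pre_ac_db: float, post_ac_db: float) -> bool:
    """Auditory success: postoperative air-bone gap of <= 20 dB, or a gain
    of at least 10 dB in air conduction."""
    return post_abg_db <= 20 or (pre_ac_db - post_ac_db) >= 10

# Hypothetical patient: ABG of 25 dB postoperatively, but a 13 dB air
# conduction gain still counts as a success under the second criterion.
print(auditory_success(post_abg_db=25, pre_ac_db=55, post_ac_db=42))  # True
```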
Statistical analysis was performed with IBM Statistical Package for the Social Sciences Statistics (SPSS), version 23.0 (IBM SPSS Corp., Armonk, NY, USA). The numerical variables were presented as mean ± SD, while categorical variables were described as numbers and percentages. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used for the normality assessment. Student's t-test, Fisher's exact test, and the chi-square test were used to compare continuous and categorical variables between the groups as appropriate. The paired t-test was used for paired variables. P-values < .05 were considered statistically significant.

MAIN POINTS
• Total endoscopic ear surgery (TEES) is a good alternative to microscopic approaches in appropriate cases, improving access to hidden areas such as the sinus tympani.
• Total endoscopic ear surgery is a safe approach for the removal of cholesteatoma, with low recidivism and complication rates.
• Even complicated cases may be operated on with TEES as experience increases.
• The postauricular microscopic approach is still important, especially for cases where the cholesteatoma extends beyond the lateral semicircular canal.
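As a sketch of this analysis scheme, the comparison of a continuous variable between the two groups could look as follows; the non-parametric fallback is our assumption, since the paper does not name one, and the example values are hypothetical.

```python
from scipy import stats

def compare_continuous(x, y, alpha: float = 0.05):
    """Assess normality (Shapiro-Wilk; the paper also used
    Kolmogorov-Smirnov), then compare two independent groups with
    Student's t-test; otherwise fall back to Mann-Whitney U."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        return "Student t-test", stats.ttest_ind(x, y).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(x, y).pvalue

# Hypothetical operation times (minutes) for two small groups:
tees = [150, 162, 171, 148, 166, 158]
cwd = [210, 232, 198, 247, 225, 219]
print(compare_continuous(tees, cwd))  # expected: a highly significant difference
```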
Table 2 shows the intraoperative findings and clinical outcomes of the patients. Facial nerve dehiscence was observed during the operation in 3 (10.3%) and 9 (30%) patients in the TEES and CWD groups, respectively. A lateral semicircular canal fistula was present in 1 (3.4%) and 4 (13.3%) patients in the TEES and CWD groups, respectively. Following bone removal with curettes and burrs in the attic part of the external auditory canal, the LSSC fistula was seen with angled endoscopes and accessed with curved instruments in the TEES group. Intraoperative stapes dislocation and perilymph fistula occurred in 1 (3.3%) patient in the CWD group. In comparison, bleeding occurred in 1 (3.4%) patient in the TEES group due to a high jugular bulb as an intraoperative complication. Recurrent disease and recidivism were encountered in a total of 2 (6.9%) patients in the TEES group, with no residual disease (0%) during the follow-up period. Recidivism was observed in 3 (10%) patients in the CWD group, including residual disease in 1 patient (3.3%) and recurrent disease in 2 patients (6.6%). In the CWD group, an epitympanic cholesteatoma was detected in 1 patient and tympanic membrane retraction occurred in 1 patient. No significant difference was observed between the recidivism rates of the 2 groups (P = 1.000). Perforation occurred in 2 (6.9%) patients in the TEES group and in 1 (3.3%) patient in the CWD group, with no significant difference (P = .612) (Figure 2).
Preoperative grade 5 facial paralysis in 1 patient in the TEES group completely resolved postoperatively. Of the 2 patients who had preoperative facial palsy in the CWD group, one was preoperatively grade 3 and completely recovered postoperatively, while the other improved from preoperative grade 5 to postoperative grade 2.
Table 3 shows the mean pure-tone preoperative and postoperative air conduction and bone conduction thresholds, the ABG, and the ABG closure (preoperative ABG − postoperative ABG) of the patients. No difference was observed in the preoperative bone conduction pure-tone thresholds between the 2 groups (P = .135), but a significant difference was found in the preoperative air conduction thresholds (P = .031). A significant difference was found in the postoperative bone conduction and air conduction thresholds between the 2 groups (P = .038 and P = .035, respectively). No significant difference was found in the pre- and postoperative ABG and ABG closure (P = .105, P = .329, and P = .82, respectively). A significant difference was found in the preoperative and postoperative word recognition scores (WRS) between the 2 groups (P = .005, P = .005).
No statistically significant difference was found between the preoperative and postoperative paired measurements of either group's bone conduction thresholds, air conduction thresholds, WRS, and ABG (P > .05). Auditory success was achieved in 15 (51.7%) and 17 (56.7%) patients in the TEES and CWD groups, respectively, with no significant difference between the 2 groups (P = .703).
DISCUSSION
Our study revealed no difference between the TEES and CWD groups regarding hearing reconstruction and complication rates.Total endoscopic ear surgery is very useful and effective, especially for cases in the early stages.The postauricular and transcanal microscopic approaches were used extensively in ear surgery in our clinic before 2019.On the other hand, endoscopic ear surgery has been increasingly used for tympanoplasty, stapes surgery, and cholesteatoma since 2019.
Canal-wall-down tympanomastoidectomy effectively treats cholesteatoma, but it can disrupt normal anatomy and physiology. The posterior canal wall may be reconstructed, but this does not always give successful results. Canal-wall-down tympanomastoidectomy often leaves a cavity that cannot clean itself and necessitates the avoidance of water sports. Canal-wall-up mastoidectomy (CWU) with a microscopic postauricular approach is another alternative, where a separate incision is needed, and posterior tympanotomy may be required to access hidden areas, such as the sinus tympani and facial recess. The microscopic transcanal or endaural approach is effective in limited cholesteatomas, but its angle of view is less than that of endoscopes. Healthy mastoid cells and mucosa are also needed for opening the ventilation routes, and gas exchange can be disrupted in transmastoid approaches. 2 With the endoscopic transcanal approach, healthy mucosa is protected and obstructions in the ventilation routes can be opened during surgery.
Another disadvantage of microscopic CWU approaches is their higher recidivism rates compared with CWD. 10-12 Killian et al 8 compared microscopic CWU and EES and reported that residual disease was 17% in both groups and that disease recurrence on clinical examination was 18% and 20% in the EES and CWU groups, respectively. Alicandri-Ciufelli et al 3 revealed residual disease at 20% and recurrence at 12%, including both the exclusive endoscopic approach and the combined (endoscopic/microscopic) approach. The retrospective EES series by Glikson et al 13 reported 10% residual disease and 8.3% recurrence. Magliulo and Iannella 14 compared the endoscopic and microscopic approaches in 80 patients with attic cholesteatoma and reported no recurrence in either group. Our study revealed a 6.9% and 10% recidivism rate in the TEES and CWD groups, respectively. The recidivism rate was lower in the TEES group, although with no significant difference.
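As a check on the recidivism comparison, a Fisher exact test on the counts reported above (2 of 29 TEES patients, 3 of 30 CWD patients) can be run directly:

```python
from scipy.stats import fisher_exact

# Recidivism counts from this study: events vs. non-events per group.
table = [[2, 29 - 2],   # TEES: 2 of 29
         [3, 30 - 3]]   # CWD:  3 of 30
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(round(p_value, 3))  # consistent with the reported P = 1.000
```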
Das et al 15 compared the transcanal microscopic approach with the endoscopic transcanal approach in terms of exposure and access to hidden areas using the middle ear structural visibility index (MESVI) and revealed better exposure in the endoscopic group. Wu et al 16 also found a higher MESVI in the endoscopic group. Better visualization with the endoscope of hidden areas, such as the sinus tympani, which is difficult to see with microscopic approaches, provides better clearance of the cholesteatoma. We managed to remove the cholesteatoma in a minimally invasive way in the TEES group, often without the need for bone removal.
The extent of the cholesteatoma is essential in the choice of surgery. Microscopic approaches and canal-wall-down tympanoplasty retain their significance, especially in cases with extensive cholesteatoma accompanied by complications, such as an LSSC fistula. Conversely, publications in the literature state that EES can now be applied in patients with complications. 4,17 Our endoscopic case series included 1 patient with preoperative grade 5 facial palsy and 1 with an LSSC fistula. In general, EES is successfully performed in patients with limited attic or pars tensa cholesteatoma and complications without extensive cholesteatoma, even with an LSSC fistula.
Magliulo and Iannella 14 compared EES and the microscopic approach in treating attic cholesteatoma and revealed that the microscopic approach is faster (87.8 and 69.7 minutes, respectively). Similarly, Das et al 15 indicated that the endoscopic approach is faster than the microscopic transcanal approach (122.83 ± 16.69 and 143.94 ± 9.97 minutes, respectively). Our study revealed that the operative time was shorter in the TEES group (160.55 ± 24.98 and 225.54 ± 47.86 minutes, respectively). Performing an external incision, meatoplasty, and posterior canal wall reconstruction in the CWD group may prolong the operation time. Additionally, the surgeon's experience stands out as an important factor in operation time. Operation time may be prolonged if good bleeding control is not achieved, due to one-handed manipulation in endoscopic surgery. The operation time may also be extended due to the wider spread of cholesteatoma in patients at advanced stages.
Wu et al 16 revealed less postoperative pain with EES compared to microscopic ear surgery (MES). Kakehata et al 18 compared TEES with MES and revealed that TEES was associated with less postoperative pain and less non-steroidal anti-inflammatory drug use. Choi et al 19 and Magliulo et al 14 indicated that the reduced pain in the TEES group can be attributed to the absence of drilling on the mastoid bone and the lack of an external incision.
Das et al 15 revealed no significant difference in hearing results in terms of ABG closure in their study investigating the endoscopic and microscopic transcanal approaches. Similarly, Bae et al 20 revealed hearing improvements of 6.67 dB in the EES group and 1.75 dB in the microscopic group; however, they did not reveal a significant difference between the 2 groups. Moreover, other studies in the literature revealed no significant difference in hearing gains between the EES group and the microscopic CWU group. 8,16 Additionally, our study revealed no significant difference in the hearing gains between the 2 groups.
The one-handed technique, the two-dimensional view, and the lack of depth perception are major disadvantages of endoscopic surgery. 21 Endoscope-holding systems have been developed, but their use in cholesteatoma surgery is minimal. 21 Bleeding is another problem, and it is crucial to provide hemostasis. Hypotensive anesthesia by the anesthesia team, topical vasoconstriction, and the use of epinephrine-soaked cottonoids are effective in controlling bleeding. Our series demonstrated severe bleeding due to a high jugular bulb in 1 patient in the endoscopic group, and the bleeding was endoscopically controlled. We had no bleeding-related complications, except for the previously mentioned patient. Thermal tissue damage is a complication that can be encountered at the beginning of the learning curve, especially in relation to experience. It can be prevented by intermittent washing and keeping the lens tip away from the tissue. 13,21,22

Our study has some limitations. First, the groups were not randomized. Second, the TEES group may experience more recidivism over time due to the varying follow-up periods between the 2 groups. Third, including a CWU group in the microscopic group would have yielded more accurate results. However, the study included cases that underwent CWD because microscopic cholesteatoma surgery was performed with CWD in our clinic. Last, this is a retrospective study.
Total endoscopic ear surgery has come to the fore recently for reasons such as being more minimally invasive, preserving normal physiology, less postoperative pain, and shorter recovery time. However, the microscopic postauricular approaches and CWD remain important in cases with extensive cholesteatoma extending to the mastoid and in cases with complications. The use of the appropriate technique in the right patient will increase postoperative success rates.
Prospective randomized studies with a larger number of patients are needed on this subject.
Figure 1. (A) Right ear attic cholesteatoma and retraction pocket eroding the long crus of the incus. (B) Following cholesteatoma removal from the oval window. p: promontorium. o: oval window. Arrow: granulation tissue on the facial canal and epitympanum. (C) Tympanic membrane reconstruction with tragal cartilage and perichondrium, and ossiculoplasty with a titanium TORP. Arrow: TORP. Arrowhead: tragal cartilage. TORP, total ossicular reconstruction prosthesis.
Table 1 shows the demographic and clinical characteristics of the patients included in the study. The mean age of the TEES and CWD groups was 36.37 ± 14.82 and 42.06 ± 13.03 years, with 12 males (41.4%) and 17 females (58.6%), and 21 males (70%) and 9 females (30%), respectively. The groups demonstrated no difference in terms of mean age (P = .123) but a significant gender difference (P = .027). The average follow-up period was 33.06 ± 10.80 months (minimum: 19, maximum: 58) in the TEES group and 48.16 ± 13.27 months (minimum: 21, maximum: 84) in the CWD group. The follow-up period was significantly longer in the CWD group (P < .001).
Table 1. Characteristics of Patients. a Stage according to the European Academy of Otology and Neurotology/Japan Otological Society Staging System on middle ear cholesteatoma.
Table 2. Intraoperative Findings and Clinical Outcomes. The bold font indicates P < .05. CWD, canal-wall-down tympanomastoidectomy; LSSC, lateral semicircular canal; PORP, partial ossicular reconstruction prosthesis; SD, standard deviation; TEES, total endoscopic ear surgery; TORP, total ossicular reconstruction prosthesis.
Strontium promotes osteogenic differentiation by activating autophagy via the AMPK/mTOR signaling pathway in MC3T3-E1 cells
Strontium (Sr) is an alkaline earth metal that exerts the dual effect of improving bone formation and suppressing bone resorption, resulting in increased bone apposition rates and bone mineral density. However, the mechanisms through which Sr exerts these beneficial effects on bone have yet to be fully elucidated. The present study aimed to reveal the underlying molecular mechanisms associated with Sr-induced osteogenic differentiation. The effects of Sr on cell proliferation and osteogenic differentiation were analyzed by MTT assay, RT-qPCR, western blot analysis, and alkaline phosphatase (ALP) and Alizarin red staining assays. The extent of autophagy was determined by monodansylcadaverine (MDC) staining and western blot analysis of two markers of cellular autophagic activity, the steatosis-associated protein sequestosome-1 (SQSTM1/p62) and the two isoforms of microtubule-associated protein 1 light chain 3 (LC3), LC3-I/II. The expression levels of AMP-activated protein kinase (AMPK) and mammalian target of rapamycin (mTOR) were also detected by western blot analysis. Sr at a concentration of 3 mM exerted the most pronounced effect on osteogenic differentiation, without any apparent cell toxicity. At the same time, cellular autophagy was active during this process. Subsequently, autophagy was blocked by 3-methyladenine, and the enhancement of osteogenic differentiation in response to Sr was abrogated. Additionally, the phosphorylation level of AMPK was significantly increased, whereas that of mTOR was significantly decreased, in the Sr-treated group. Taken together, the findings of the present study demonstrate that Sr stimulates AMPK-activated autophagy to induce the osteogenic differentiation of MC3T3-E1 cells.
Introduction
Bone is a metabolically dynamic tissue that undergoes continuous renewal. The skeletal reconstruction process consists of bone resorption by osteoclasts and bone formation by osteoblasts. The occurrence of resorption and formation ensures basal bone metabolism, thereby maintaining bone homeostasis. Several factors, including systemic hormones, growth factors, minerals and trace elements, have been shown to routinely regulate the balance between these two processes. One of the trace elements involved in these processes is strontium (Sr), which has long been of particular interest due to its dual skeletal effects (1,2). Strontium ranelate [RanSr; a strontium(II) salt with ranelic acid] has been demonstrated to function as a medication for postmenopausal osteoporosis (3-5). This drug has been extensively used to inhibit massive bone loss (6-8). Strontium(II) exhibits a dual mechanism of action, inhibiting bone resorption and stimulating bone formation. Although the beneficial effects of Sr on osteogenesis in different models have been corroborated by numerous previously published studies (1,9,10), the mechanisms underpinning Sr action on bone reconstruction have yet to be fully elucidated; indeed, an incomplete understanding of the mechanism presents one of the major obstacles to the successful application of Sr in clinical practice.
Macro-autophagy (henceforth referred to as autophagy) is known to be a ubiquitous intracellular degradation process through which cells protect themselves. During autophagy, the autophagosome, which contains dysfunctional proteins and futile macromolecules, fuses with a lysosome to form the autolysosome, where degradation occurs. In response to multiple stresses, such as nutrition deficiency, tumor formation or aging, autophagy tends to exert its function as a cell survival mechanism (11). In addition, accumulating evidence has demonstrated that autophagy is also involved in osteogenesis and bone development (12,13). Furthermore, emerging evidence has suggested that AMP-activated protein kinase (AMPK) and mammalian target of rapamycin (mTOR) are crucial for autophagy (14)(15)(16). The present study aimed to investigate the interaction between Sr-induced osteogenic differentiation and autophagy. The underlying mechanisms, and the connection of the AMPK/mTOR signaling pathway with this process, were also explored in this study.
Materials and methods
Cell culture. MC3T3-E1 osteoblastic cells (subclone 14) were purchased from the National Infrastructure of Cell Line Resource (no. 3131C0001000300015) and routinely cultured at a density of 10 5 cells/well in Hyclone™ α-modified Eagle's medium (α-MEM) (Thermo Fisher Scientific, Inc.) supplemented with 10% (v/v) Gibco™ fetal bovine serum (FBS) (Thermo Fisher Scientific, Inc.), 100 U/ml penicillin and 100 µg/ml streptomycin (Sigma-Aldrich; now a brand of Merck KGaA) at 37˚C. To induce differentiation, the cells were cultured in osteoinductive medium comprising α-MEM, 10% FBS, 1% penicillin-streptomycin, 10 mM β-glycerophosphate and 100 µg/ml ascorbic acid. For the experimental group, 3 mM SrCl 2 (Merck KGaA) was dissolved in normal saline before being added to the medium, and the cells were incubated for different numbers of days (3, 7 or 21 days). Cells grown in osteoinductive medium containing the identical components, but without Sr, were used as the negative control.
MTT assay. Cell proliferation was assessed by MTT assay, according to the manufacturer's protocol (Nanjing KeyGen Biotech Co., Ltd.). The cells (5x10 3 /ml) were seeded into 96-well plates. Following 24 h of incubation at 37˚C, various concentrations of Sr (0, 3, 6, 12, 24, 48, or 96 mM) were added to the cells. After a further 72 h, the absorbance of the cells was measured at 490 nm on a microplate reader (Synergy™ 2; BioTek Instruments, Inc.).
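As a rough sketch of how such readings are usually normalized, the snippet below expresses blank-corrected absorbances as percent viability relative to the untreated control; the function name and the absorbance values are hypothetical, not taken from the study.

```python
import numpy as np

def viability_percent(a490_treated, a490_control, a490_blank=0.0):
    """Percent viability: blank-corrected treated wells relative to the control mean."""
    treated = np.asarray(a490_treated, dtype=float) - a490_blank
    control = np.mean(np.asarray(a490_control, dtype=float)) - a490_blank
    return 100.0 * treated / control

# Illustrative 490 nm readings for one Sr concentration vs untreated wells
print(viability_percent([0.82, 0.79, 0.85], [0.84, 0.86, 0.83], a490_blank=0.05))
```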
Alkaline phosphatase (ALP) staining and Alizarin red staining.
For the analysis of mineralization, the MC3T3-E1 cells (5x10 4 cells/well) were seeded into 6-well plates. Following 24 h of culture at 37˚C, various concentrations of Sr (i.e., 3, 6 or 12 mM) were added to the wells. ALP activity was determined after a further 7 days by staining with 5-bromo-4-chloro-3-indolyl phosphate/nitro blue tetrazolium (BCIP/NBT) ALP staining solution for 30 min at room temperature (Beyotime Institute of Biotechnology, Haimen, China). Mineralized nodules were stained with Alizarin Red solution (Merck KGaA) for 30 min at room temperature after 21 days.
RT-qPCR.
MC3T3-E1 cells (5x10 4 cells/well) were seeded in 12-well plates. Following 24 h of culture at 37˚C, various concentrations of Sr (3, 6 or 12 mM) were added to the wells, and the cells were cultured for 72 h. Total RNA was extracted using Invitrogen® TRIzol™ reagent (Thermo Fisher Scientific, Inc.). A Nanodrop™ 2000c spectrophotometer (Thermo Fisher Scientific, Inc.) was used to quantify the concentration of total RNA. Aliquots (2 µg) of total RNA were employed in RT reactions using a FastQuant RT Super Mix kit (TianGen Biotech Co., Ltd., Beijing, China). RT-qPCR on the Stratagene® MX3005P system (Agilent Technologies, Inc.) was performed using SuperReal PreMix Plus (SYBR-Green; TianGen Biotech Co., Ltd.) according to the manufacturers' protocol. The thermal conditions were as follows: 95˚C for 15 min, followed by 40 cycles at 95˚C for 15 sec, 60˚C for 20 sec and 72˚C for 20 sec. The expression levels were normalized to GAPDH. The data obtained were analyzed using the 2^−ΔΔCq method, where ΔΔCq is the ΔCq value of the treated sample minus the ΔCq value of the control sample, and Cq is the threshold cycle (17). The primers used in the present study were obtained from General Biosystems and were as follows: Runt-related transcription factor 2 (RUNX2) forward, 5'-GCT ATT AAA GTG ACA GTG GAC GG-3' and reverse, 5'-GGC GAT CAG AGA ACA AAC TAG G-3'; osteocalcin (OCN) forward, 5'-AAG CAG GAG GGC AAT AAG GT-3' and reverse, 5'-CAA GCA GGG TTA AGC TCA CA-3'; and GAPDH forward, 5'-CGT CCC GTA GAC AAA ATG GT-3' and reverse, 5'-AAT GGC AGC CCT GGT GAC-3'.
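The relative-expression arithmetic is compact enough to sketch in code. The following minimal Python example uses the standard Livak 2^−ΔΔCq formulation with GAPDH as the reference gene; the Cq values are invented for illustration and are not data from this study.

```python
import numpy as np

def fold_change(cq_target_treated, cq_ref_treated,
                cq_target_control, cq_ref_control):
    """Relative expression by the 2^-ddCq method.

    dCq = Cq(target) - Cq(reference); ddCq = dCq(treated) - dCq(control).
    """
    dcq_treated = np.mean(cq_target_treated) - np.mean(cq_ref_treated)
    dcq_control = np.mean(cq_target_control) - np.mean(cq_ref_control)
    return 2.0 ** (-(dcq_treated - dcq_control))

# Illustrative Cq triplicates: a target gene vs GAPDH, Sr-treated vs control
print(fold_change([22.1, 22.3, 22.0], [17.5, 17.6, 17.4],
                  [23.9, 24.1, 24.0], [17.5, 17.4, 17.6]))  # ~3.6-fold up
```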
Monodansylcadaverine (MDC) staining. MC3T3-E1 cells (10 5 cells/well) were plated in 6-well plates and treated with 3 mM Sr. The cells were cultured with osteogenic medium for 3 days at 37˚C, and were then stained with MDC for 45 min at room temperature (Nanjing KeyGen Biotech Co., Ltd.) according to the manufacturer's protocol. The fluorescence of the wells containing the attached cells was measured using a fluorescence microscope (512 nm emission wavelength; Nikon Corp.). The presence of acidic vesicles, indicating activated autophagosomes, was determined by measuring the level of green fluorescence.
Treatment with 3-methyladenine (3-MA). 3-MA (100 mM; Merck KGaA) was dissolved in DMSO and stored at -20˚C prior to use. The stock was heated to 65˚C in order to obtain a clear solution, and subsequently diluted with α-MEM. Prior to 3-MA treatment, the MC3T3-E1 cells were cultured in 12-well plates at 37˚C until they reached ~80% confluence; 10 mM 3-MA was then added to the cells, which were cultured for a further 6 h. Following pre-incubation with 3-MA, osteoinductive medium with or without 3 mM Sr was added. The cells were subsequently incubated for an additional 3 days.
Treatment with dorsomorphin (compound C). The AMPK inhibitor compound C was purchased from Selleck Chemicals, dissolved in DMSO (1 mM), and stored at -20˚C prior to use. The stock was diluted to 5 µM with α-MEM. The MC3T3-E1 cells were cultured in 12-well plates at 37˚C until they reached ~80% confluence, and were then pre-incubated with 5 µM compound C for 12 h. Following pre-incubation, osteoinductive medium with or without 3 mM Sr was added. The cells were subsequently incubated for an additional 3 days.
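Both inhibitor working solutions follow the usual C1V1 = C2V2 dilution arithmetic; a minimal sketch, assuming a hypothetical 1 mL final volume of medium per well:

```python
def stock_volume(c_stock, c_final, v_final):
    """C1*V1 = C2*V2: volume of stock needed (all concentrations in the same unit)."""
    return c_final * v_final / c_stock

# 3-MA: 100 mM stock diluted to a 10 mM working concentration in 1000 uL
print(stock_volume(100.0, 10.0, 1000.0), "uL of 3-MA stock")        # 100.0 uL
# Compound C: 1 mM (1000 uM) stock diluted to 5 uM in 1000 uL
print(stock_volume(1000.0, 5.0, 1000.0), "uL of compound C stock")  # 5.0 uL
```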
Statistical analysis. All the results are expressed as the means ± SD for a minimum of 3 independently performed experiments. All data were analyzed using GraphPad Prism 7.0 software (GraphPad Software, Inc.). Statistical analysis was performed using either a two-tailed Student's t-test or one-way ANOVA followed by post-hoc Tukey's test for multiple comparisons. P<0.05 was considered to indicate a statistically significant difference.
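For reference, the same analysis pipeline can be reproduced outside Prism; a minimal sketch using scipy for the one-way ANOVA and statsmodels for Tukey's post-hoc test, with invented triplicate values standing in for real measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicates for three experimental groups (illustration only)
control = [1.00, 1.05, 0.97]
sr = [1.62, 1.55, 1.70]
sr_3ma = [1.08, 1.02, 1.11]

f_stat, p_value = stats.f_oneway(control, sr, sr_3ma)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise multiple comparisons at alpha = 0.05
values = np.concatenate([control, sr, sr_3ma])
labels = ["control"] * 3 + ["Sr"] * 3 + ["Sr+3-MA"] * 3
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```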
Results
Effect of Sr on the viability of MC3T3-E1 cells. The effects of Sr over a wide concentration range on the viability of the MC3T3-E1 cells were investigated by MTT assay. As shown in Fig. 1, no significant effect on cell viability was observed when the cells were treated with 3-12 mM Sr for 3 days. However, the exposure of MC3T3-E1 cells to >24 mM Sr markedly decreased the viability of the cells. As the viability of the MC3T3-E1 cells was markedly decreased upon exposure to high levels of Sr, the safe range of Sr concentrations was selected to be 3-12 mM for use in subsequent experiments to observe the osteogenic differentiation of MC3T3-E1 cells.
Effect of Sr on the osteogenic differentiation of MC3T3-E1 cells. Several assays were applied to examine the effects of treatment with a low concentration of Sr on osteogenic differentiation. First, the expression of genes associated with osteogenic differentiation, namely RUNX2 and OCN, was assessed by RT-qPCR. As shown in Fig. 2A and B, treatment with 3 mM Sr significantly increased the expression of both genes. The protein level of OCN was correspondingly increased under 3 mM Sr treatment (Fig. 2C and D). Although Sr at the concentrations of 6 and 12 mM exerted no toxic effects on cell growth in the experiments shown above, these concentrations had no marked effect on RUNX2 expression compared to the control and suppressed the expression of OCN. In addition, similar trends were observed based on the experiments involving ALP staining and Alizarin Red staining. Following 7 days of Sr treatment, the MC3T3-E1 cells exhibited the highest ALP activity at the concentration of 3 mM Sr compared with the other groups (Fig. 2E). The Alizarin Red staining results revealed that the cells treated with 3 mM Sr exhibited the optimal quantity and intensity of color (Fig. 2F). Taken together, these experiments revealed that 3 mM Sr elicited the most pronounced effects on the osteogenic differentiation of MC3T3-E1 cells; therefore, 3 mM Sr was selected for use in subsequent experiments.
Autophagy participates in the process of Sr-induced osteogenic differentiation. Two markers of cellular autophagic activity, the two isoforms of LC3, LC3-I/II, and SQSTM1/p62, were used to examine the effects of autophagy on Sr-mediated osteogenic differentiation. The essential autophagy-associated protein SQSTM1/p62 functions as a ubiquitin-binding protein, and is degraded during selective autophagy progression. Upon the induction of autophagy, LC3-I becomes acylated (i.e., it is converted into LC3-II), and inserts itself into the autophagosomal membrane (18). In the present study, based on the western blot analysis experiments, the conversion rate of LC3-I into LC3-II was significantly increased, and this represents the most critical event in autophagosome formation (Fig. 3A and B). Simultaneously, the expression level of SQSTM1/p62, a protein involved in autolysosome degradation (18), was significantly decreased (Fig. 3A and C). To corroborate the current results, the cells were also stained with MDC. These results confirmed that the fluorescent Sr-treated cells exhibited a more obvious punctate shape, which corroborated the protein expression results (Fig. 3D).
Inhibition of autophagy suppresses Sr-induced osteogenic differentiation. 3-MA has been shown to inhibit the progression of autophagy by blocking autophagosome formation via the inhibition of type III phosphoinositide 3-kinase (PI3K) (19). Thus, in this study, to further confirm whether autophagy is involved in Sr-induced osteogenic differentiation, the MC3T3-E1 cells were incubated with 10 mM 3-MA for 6 h prior to the addition of 3 mM Sr. As shown in Fig. 3A-C, 3-MA successfully suppressed the autophagy of the MC3T3-E1 cells with/without Sr. Subsequently, osteogenic parameters were measured in order to demonstrate the osteogenic conditions upon treatment with 3-MA. These results indicated that treatment with 3-MA alone exerted no significant effect on osteogenic differentiation. However, the osteogenic differentiation induced by Sr was inhibited in the presence of 3-MA. Additionally, no significant differences were noted with regard to the expression levels of RUNX2 and OCN, the activity of ALP, and the formation of mineralized nodules in the 3-MA + Sr experimental group compared with the control group (i.e., the cells cultured only with osteogenic media) (Fig. 4).
Sr-induced autophagy is activated by the AMPK/mTOR signaling pathway. Finally, the molecular mechanisms involved in the association between Sr-induced osteogenic differentiation and autophagy were explored. To meet this aim, the AMPK/mTOR pathway following Sr treatment was investigated. As shown in Fig. 5, compared with the control, the phosphorylation level of AMPK was significantly increased in the cells exposed to 3 mM Sr for 3 days. The higher ratio of phosphorylated AMPK to AMPK suggested the activation of autophagy. In addition, a low phosphorylation level of the downstream molecule mTOR was observed in the 3 mM Sr-treatment group, which was consistent with the AMPK results. To further investigate the role of the AMPK/mTOR signaling pathway in Sr-induced autophagy, the AMPK inhibitor compound C was used in the subsequent experiments. The results of western blot analysis revealed that pre-incubation with compound C markedly inhibited the phosphorylation of AMPK and blocked the formation of LC3-II (Fig. 6A-D). The conversion of LC3-I into LC3-II was not observed, representing the inhibition of Sr-induced autophagy. Moreover, the expression level of OCN was significantly decreased in the MC3T3-E1 cells upon treatment with compound C (Fig. 6E). Taken together, these data indicate that the AMPK/mTOR pathway is involved in the mechanisms through which Sr induces autophagy and the osteogenic differentiation of MC3T3-E1 cells.
Discussion
In the present study, the toxic effects of various concentrations of Sr (3-96 mM) on MC3T3-E1 cells were investigated, and the experiments confirmed that Sr in the concentration range of 3-12 mM exerted no marked effect on cell viability, whereas as the concentration increased, a toxic effect on the cells was noted. The results of the RT-qPCR, western blot analysis, ALP and Alizarin Red staining also confirmed the effects of Sr on osteogenic differentiation and mineralization, and these were consistent with recently published studies (20,21). Previous studies have reported that Sr is able to positively modulate osteogenic differentiation at concentrations ranging from 1-10 mM (22,23). In the present study, the RT-qPCR results of cells treated with 1 mM Sr exhibited no significant changes in osteogenic differentiation compared with the control (Fig. S1). Combining the PCR, western blot analysis and Alizarin Red staining results, it was possible to confirm that 3 mM Sr plays a positive role in the osteogenic induction of MC3T3-E1 cells. In addition, the activity and differentiation of cells at Sr concentrations of 6 and 12 mM were not as effective as those in the 3 mM Sr-treatment group. On the basis of these data, 3 mM Sr was therefore selected as the appropriate concentration of Sr, whereas concentrations of Sr >12 mM could be toxic to cells.

[Figure 5 caption: Sr-induced autophagy is activated via the AMPK/mTOR signaling pathway. (A-C) Western blot analysis results for AMPK, p-AMPK, mTOR and p-mTOR. Quantitative analysis of p-AMPK to AMPK and p-mTOR to mTOR is also shown, expressed as the means ± SD (n=3 for each group). *P<0.05 compared with the control group. Sr, strontium chloride; AMPK, AMP-activated protein kinase; mTOR, mammalian target of rapamycin.]
During the course of the past decade, an increasing number of studies have reported on the development of Sr utilization, including pharmacological induction and biomaterial substitution studies (3,24,25). Evidence from in vitro and in vivo studies has shown that Sr may promote osteogenic differentiation and mineralization in the dental pulp via PI3K/Akt signaling (26,27). In addition, Wnt/β-catenin signaling has been shown to mediate the protective effects of Sr in mice (28)(29)(30). Although the beneficial effects of Sr have been demonstrated in numerous studies (1,9,10), no drug comprising Sr has yet been approved by the Food and Drug Administration for osteoporosis treatment in the USA (4,20,31). During 2014, Sr also lost its pre-eminent status with the European Medicines Agency owing to mounting concerns regarding the occurrence of cardiovascular events associated with its long-term use (32). As previously reported by Atteritano et al, a 12-month treatment with strontium ranelate did not alter hemostasis factors or markers of cardiovascular risk (33). The precise mechanisms of the drug-induced effect on cardiovascular risk are complex and require further investigation in the future. Currently, Sr is only cautiously allowed to be administered in the treatment of severe osteoporosis. However, the balance between treatment benefits and side-effect risk should always be considered in drug application. Therefore, research into the molecular mechanisms of Sr is urgently required in terms of its clinical application.
Autophagy is an evolutionarily conserved cellular pathway mediating cell metabolism under different conditions (34,35). In addition to the function of autophagy in cellular metabolism, survival and death, emerging evidence has suggested an association between autophagy and cell development (35)(36)(37). Liu et al (36) demonstrated that the suppression of autophagy leads to osteopenia in mice via the inhibition of osteoblast differentiation. The activation of autophagy by Forkhead box O3 (FOXO3) has been reported to regulate redox homeostasis during osteogenic differentiation (37). Kang et al (38) reported that a deficiency in autophagy may impair chondrogenesis via the PERK-ATF4-CHOP axis. Furthermore, lipopolysaccharide-induced autophagy plays an important role in osteoclastogenesis (39). An exploration of the observed effects of autophagy and Sr in the literature stimulated our investigation of the autophagy levels in Sr-treated MC3T3-E1 cells in the present study. The results of LC3-I/II conversion, SQSTM1/p62 expression, and MDC staining in our study suggested that the autophagic process may be activated during Sr-induced osteogenic differentiation. In addition, the expression levels of LC3-II were significantly decreased in the 6-12 mM Sr-treated cells, indicating that autophagy was not activated (Fig. S2). These results may help to account for the osteogenic differentiation results determined previously with the high-dose Sr group. The osteogenic differentiation induced by Sr was attenuated when cell autophagy was inhibited by 3-MA. Taken together, these data suggest that autophagic events in MC3T3-E1 cells are essential in terms of the Sr-induced osteogenic differentiation process.
It has been well established that AMPK plays a critical role in the regulation of osteogenic differentiation (40)(41)(42). Several studies have demonstrated that pharmacological AMPK activators induce the osteogenic differentiation and mineralization of osteoblastic cell lines and bone marrow progenitor cells, whereas AMPK gene knockdown can reduce bone mass in mice (43)(44)(45). It is noteworthy that AMPK is also a well-established regulator of autophagy via the inhibition of mTOR (14,(45)(46)(47). In vitro studies have reported crosstalk between the processes of osteogenic differentiation and autophagy in human mesenchymal stem cells mediated via the AMPK/mTOR signaling pathway (14,46). In the present study, the phosphorylation level of AMPK in the Sr-treated cells was observed to increase, suggesting that activation of the autophagy process had occurred. In addition, low phosphorylation levels of the downstream molecule mTOR were observed in the induction group, which remained consistent with the preliminary results. As it had already been observed that compound C could inhibit AMPK phosphorylation, the western blot analysis results revealed the occurrence of reduced autophagy and decreased osteogenic differentiation in cells upon treatment with both compound C and Sr. Therefore, it is evident that the AMPK/mTOR pathway is a pivotal regulator involved in Sr-induced autophagy and osteogenic differentiation. However, opposite results have also been reported, i.e., that mTORC1 signaling promotes the maturation and differentiation of pre-osteoblasts (47). The reason(s) for such a discrepancy remains unclear, although there are two types of mTOR complexes (mTORC1 and mTORC2) that possess different characteristics, and the mTOR signaling pathway may play distinctly different roles during different stages of osteoblast differentiation.
In conclusion, the findings of the present study demonstrate that the AMPK/mTOR signaling pathway is involved in the mechanisms of the autophagy process associated with the Sr-induced osteogenic differentiation of MC3T3-E1 cells. Further clarification of the mechanism of Sr action associated with autophagy may provide novel opportunities for both drug development and proper clinical application in bone regeneration.
Acknowledgements
Not applicable.
Funding
This study was supported by the Natural Science Foundation of Tianjin (grant no. 15JCYBJC27400) and the Natural Science Foundation of Tianjin (grant no. 14JCZDJC38500).
Availability of data and materials
All data generated or analyzed during this study are included in this published article or are available from the corresponding author on reasonable request.
Leaf Rust Resistance and Molecular Identification of Lr 34 Gene in Egyptian Wheat
Within the last twenty years, wheat has become the most important crop in Egypt. Egypt seeks to increase productivity and yields in order to meet the target of producing 75% of its own wheat needs [1]. Leaf rust or brown rust caused by Puccinia triticina (formerly known as Puccinia recondita f. sp. tritici) has been the most frequent disease in wheat producing areas [2]. Studies in Egypt estimated crop losses of up to 50% due to leaf rust infection [3].
Introduction
The cultivation of resistant varieties remains the most economic and environmentally preferable method to manage this disease. To date, 80 genes and alleles of leaf rust resistance genes in wheat have been mapped to chromosome locations and given gene designations [4]. Some of the resistance genes are effective at the seedling stage and are race specific [5]. Several of these genes may become ineffective due to the emergence of new virulent races and also because of the rapid evolution and adaptation of the pathogen [6]. In contrast, others are effective through the adult plant stage and are referred to as slow-rusting genes; they are race non-specific and provide durable resistance against a broad spectrum of races. However, a cultivar that only has slow-rusting resistance to leaf rust will display a susceptible infection type response throughout the entire lifecycle of the plant [7]. Slow-rusting resistance can be measured in the field by recording disease severity at weekly intervals and then calculating the area under the disease progress curve (AUDPC) [8].
One of these important race-nonspecific resistance genes is Lr 34. It is located on the short arm of chromosome 7D and encodes an ATP-binding cassette (ABC) transporter [9]. It is also associated with leaf tip necrosis, adult plant resistance to stem rust, the adult plant stripe rust resistance gene Yr 18, and tolerance to barley yellow dwarf virus [10].
Efficient incorporation of Lr 34 into adapted germplasm using traditional methods has been difficult because of its quantitative inheritance. Thus, the use of molecular marker techniques is the best alternative methodology to identify and consequently to incorporate this important gene into economically important genotypes. Available information about the Lr 34 gene sequence has provided a good tool to develop markers, to track its introgression in different genotypes and, consequently, to pyramid it in commercial varieties [9]. Therefore, the development of a molecular marker for the Lr 34/7DS region has been a major objective for marker-assisted selection (MAS) for this important gene. [11] were able to utilize the available knowledge about this locus to develop a specific codominant marker, namely csLV34. This marker had the ability to diagnose the Lr 34 gene in diverse cultivar backgrounds [12]. It revealed a bi-allelic nature, where a 79 bp deletion in an intron sequence was accompanied by the presence of Lr 34 gene resistance. Several other markers that differentiate among the alleles of Lr 34 have been described [9,13].
Because of the superiority of molecular markers in MAS for genes in different genetic backgrounds, even in highly bred cultivars under any environmental conditions, different types of molecular markers based on genetic variations have been developed in Egyptian wheat cultivars [14][15][16][17]. Also, haplotype polymorphism among Egyptian wheat varieties for the Lr 34/7DS region had been studied using microsatellite markers [16]. Therefore, the objectives of the present investigations were: (1) to evaluate Egyptian wheat varieties for leaf rust resistance at the adult plant stage under field conditions, and (2) to identify the presence of the Lr 34 gene with the csLV34 specific marker in Egyptian wheat varieties.
DNA extraction and PCR reaction
Young leaves were collected from two-week-old plants of all genotypes and subjected to the CTAB protocol for genomic DNA extraction, based on the method of [23]. DNA concentration was estimated and the DNA used as PCR template. DNA samples were visualized on 1-2% agarose. Polymerase chain reaction (PCR) was conducted to detect the specific Lr 34 gene fragment using a specific primer pair, namely csLV34, as described in [11]. The sequence of the forward primer is 5'-GTTGGTTAAGACTGGTGATGG-3' and that of the reverse primer is 5'-TGCTTGCTATTGCTGAATAGT-3'. PCR was undertaken in a 50 μL total volume containing 5 μL of 10X PCR buffer, 4 μL of 25 mM MgCl 2 , 1 μL (10 ng) of DNA, 1 μL (100 ng, 125 picomole) of each primer (forward and reverse) and 1 unit of Taq DNA polymerase. PCR amplification conditions were: initial denaturation at 95°C for 5 min; 35 cycles of denaturation at 95°C for 1 min, annealing at 55°C for 30 s and extension at 72°C for 1 min; and a final extension at 72°C for 5 min. The PCR products were analyzed by electrophoretic separation in a 1-2% agarose gel. A 100 bp DNA ladder marker was added on one side of the gel to determine the size of the DNA fragments. The gel was stained with ethidium bromide.
Microsoft Excel 2010 (Microsoft Corporation, USA) computer program was used to draw the standard curve and to estimate fragment size.
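The standard-curve step can equally be done with a semi-log fit, since migration distance in agarose is approximately linear in log10 of fragment size. In the sketch below the ladder migration distances are hypothetical placeholders, not measured values:

```python
import numpy as np

# Hypothetical migration distances (mm) for six bands of the 100 bp ladder
ladder_bp = np.array([100, 200, 300, 400, 500, 600])
ladder_mm = np.array([42.0, 35.5, 31.6, 28.9, 26.8, 25.1])

# Fit log10(size) as a linear function of migration distance
slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

def fragment_size(distance_mm):
    """Estimate fragment size (bp) from its migration distance (mm)."""
    return 10 ** (slope * distance_mm + intercept)

# Bands migrating 38.5 and 34.5 mm would be sized near the two csLV34 alleles
print(round(fragment_size(38.5)), "bp and", round(fragment_size(34.5)), "bp")
```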
Field inoculation of wheat genotypes and disease assessment
The tested wheat genotypes were sown in rows and were surrounded by spreader plants (Morocco and Thatcher), which were moistened by a fine water spray and dusted with a mixture of leaf rust urediniospores and talcum powder at a ratio of 1:20 (v/v). The inoculation of all plants was carried out at the booting stage [18]. Adult plant response was scored as rust severity (%) for each genotype from disease onset until the early dough stage according to the scale proposed by [19]. Rust severity of each genotype was recorded every seven days after the appearance of initial infection, using the modified Cobb's scale [20]. Final rust severity (FRS) was recorded as outlined by [21], as the disease severity (%) when the highly susceptible check variety was severely rusted and the disease had reached its highest severity. Also, the area under disease progress curve (AUDPC) was estimated to compare the different responses of the tested genotypes using the following equation, as described by [22]: AUDPC = D [(Y1 + Yk)/2 + Y2 + Y3 + ... + Y(k-1)], where D = days between readings, Y1 = first disease recording and Yk = last disease recording.
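Because the readings were taken at a fixed seven-day interval, the AUDPC formula above is equivalent to trapezoidal integration of severity over time; a minimal sketch with illustrative severity values:

```python
def audpc(severities, interval_days=7):
    """AUDPC = D * [(Y1 + Yk)/2 + Y2 + ... + Y(k-1)] for equally spaced readings."""
    y = list(severities)
    return interval_days * ((y[0] + y[-1]) / 2.0 + sum(y[1:-1]))

# Hypothetical weekly rust-severity readings (%) for a fast-rusting variety
print(audpc([5, 20, 40, 60, 80]))  # 1137.5
```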
Results
To assess the leaf rust disease resistance of some Egyptian wheat varieties, final rust severity (FRS %) and area under disease progress curve (AUDPC) were determined.
Season 2011/12
Data presented in Table 2 showed that the tested genotypes could be classified into three main groups on the basis of FRS (%) and AUDPC values. The first group included the wheat varieties with race-specific resistance, which displayed the lowest values of FRS (%) and AUDPC. This group included the wheat varieties Sids 12, Misr 1, Misr 2, Shandaweel 1, Beni Sweif 4 and Beni Sweif 5, which were immune and showed zero percent rust severity, and Sids 13 (Tr MR) at the Shibin El-Kom location. Meanwhile, at the Itay El-Baroud location, the wheat varieties Shandaweel 1 and Beni Sweif 5 showed zero percent rust severity, while Sids 12, Sids 13, Misr 1 and Beni Sweif 4 (each with Tr MR) and Misr 2 (5 MR) showed the lowest values of FRS (%). Moreover, these varieties showed the lowest values of AUDPC, ranging from 0 to 49 at the two locations. The second group included the wheat genotypes which displayed low values of FRS (%) and AUDPC (less than 300); therefore, they were characterized as slow-rusting or partially resistant varieties. This group included the wheat genotypes Lr 34, Giza 165, Giza 168, Sakha 8, Sakha 94, Sakha 95, Gemmeiza 5, Gemmeiza 7, Gemmeiza 9, Gemmeiza 10, Gemmeiza 11 and Sohag 3.

Season 2012/13

Results given in Table 3 showed that the tested wheat varieties exhibited different levels of final rust severity (%), ranging from 0 to 70 S at the Shibin El-Kom location and from 0 to 80 S at the Itay El-Baroud location. According to their responses, the tested genotypes were divided into the same three groups as in Table 2.

Molecular marker detection

For further resistance evaluation of the wheat genotypes under investigation, the presence of Lr 34 was investigated. The STS marker csLV34 for the Lr 34 gene was used to identify the presence of the resistance allele in the genotypes under study. The csLV34 is a PCR-based marker that was mapped 0.4 cM from this gene and validated in many genotypes from different parts of the world [11]. In other words, this marker is capable of differentiating among lines with/without this gene. The csLV34 primer amplified two fragments of 150 and 229 bp in the positive and negative controls, respectively. The csLV34a allele (229 bp) was detected in the check cultivar Giza 139 and the csLV34b allele (150 bp) was detected in the near-isogenic line Thatcher Lr 34 (Figure 1), in addition to a large fragment that was unrelated to the gene Lr 34 in the tested varieties (Figures 2 and 3). Moreover, the diploid Aegilops tauschii, the progenitor of the D genome in cultivated wheat, was tested and the presence of the csLV34a allele was demonstrated.
Discussion
A set of Egyptian wheat varieties released from 1979 to 2014, and an older variety, namely Giza 139, were tested for leaf rust resistance and for variation at the Lr 34 locus. Rust incidence as final rust severity (FRS%) was recorded for each of the tested genotypes. The wheat varieties Sids 12, Sids 13, Misr 1, Misr 2, Shandweel 1, Beni Sweif 4 and Beni Sweif 5 were very resistant during the two growing seasons 2011/12 and 2012/13 at both locations, i.e., Shibin El-Kom and Itay El-Baroud. Therefore, it was concluded that the resistance in these varieties was mainly due to race-specific resistance gene(s) against leaf rust.
Slow-rusting resistance at the adult plant stage to leaf rust in the tested wheat varieties can be accurately measured by using the area under disease progress curve (AUDPC) parameter, which is considered the most convenient and reliable estimator of the amount of rust infection that occurred during an epidemic. Furthermore, AUDPC in particular is the result of all factors that influence disease development, such as differences in environment, varieties and the population of the pathogen [22]. [24] reported that disease development and AUDPC are the best estimators of partial resistance in wheat to leaf rust.
According to the obtained results, and depending on the values of AUDPC, it could be stated that the wheat genotypes Lr 34, Giza 165, Giza 168, Sakha 8, Sakha 94, Sakha 95, Gemmeiza 5, Gemmeiza 7, Gemmeiza 9, Gemmeiza 10, Gemmeiza 11 and Sohag 3 have a high level of slow-rusting resistance under field conditions through the two growing seasons at both locations. These genotypes showed the lowest AUDPC values (less than 300); therefore, this group of genotypes was characterized as the slow-rusting resistant group. On the other hand, the wheat varieties Giza 160, Giza 163, Giza 64, Sakha 69, Sakha 93, Sids 1 and Giza 139 were severely rusted, showing the highest values of AUDPC (up to 840). Consequently, these varieties were classified as the highly susceptible or fast-rusting group. [25] found that the wheat cultivar Agra Local showed the highest AUDPC value (1300), the wheat cultivar Kundan showed the least AUDPC value (217), while the wheat cultivars Trap (317), Galvez-78 (344), Mango (412), Chris (504) and PBW-348 (737) were intermediate. [26] reported that the wheat cultivars Chenab 70, WL 711 and Pak. 81 were fast-rusting cultivars, while the cultivars Pavon, FSD and INQ-91 were slow-rusting cultivars. [27] found that the wheat varieties Giza 168 and Gemmeiza 7 showed partial resistance, as they showed the lowest values of FRS (%) and AUDPC (not more than 250). Marker-assisted selection offers the opportunity to select desirable lines on the basis of genotype rather than phenotype [28], especially in the case of combining different genes in a single genotype. Results of this study showed the usefulness of the csLV34 marker for identification of the leaf rust resistance gene Lr 34 in the tested wheat genotypes.
However, the evaluation of the tested genotypes for two seasons at two locations gave evidence of the presence of slow-rusting resistance gene(s). Therefore, using marker-assisted selection to confirm the presence of the resistance gene Lr 34 was significant. The wheat varieties Giza 165, Giza 168, Gemmeiza 5, Gemmeiza 7, Gemmeiza 9, Gemmeiza 10 and Gemmeiza 11 did not show the 150 bp band, but the AUDPC values showed that these varieties carry slow-rusting resistance gene(s). The resistance in these varieties appeared to be based on gene(s) other than Lr 34; these gene(s) may be the slow-rusting resistance genes Lr 46 and/or Lr 68. [27] found that partial resistance in the two wheat varieties Gemmeiza 9 and Giza 168 was mainly due to the presence of the adult plant resistance gene Lr 46, which was confirmed by genetic analysis. Moreover, [29] found that adult plant resistance to leaf rust (Puccinia triticina) in line Parula is governed by at least three independent slow-rusting resistance genes, i.e., Lr 34, Lr 46 and the gene Lr 68 on 7BL. [30] found that the partial resistance in the wheat cultivar HD2009 is similar in expression to that conferred by the gene Lr 34, but cultivar HD2009 did not show leaf tip necrosis, a morphological marker tightly linked to the leaf rust resistance gene Lr 34. On the other hand, the two varieties Sids 13 and Shandweel 1 were shown to carry the gene Lr 34; however, they showed race-specific resistance. The resistance in these two varieties may be due to resistance genes other than the slow-rusting Lr 34. Many leaf rust resistance genes show race-specific resistance at the seedling stage and remain effective at the adult stage, such as Lr1, Lr10 and Lr21 [33]. Since these two varieties were recently released, they may contain one of these genes. Moreover, resistance to leaf rust in these varieties is mainly due to race-specific resistance gene(s). [34] found that individual major genes for adult plant resistance to leaf rust can enhance the effectiveness of resistance when combined in wheat cultivars. Therefore, the presence of adult plant resistance gene(s) in the two varieties Sids 13 and Shandweel 1 may have masked the effect of the gene Lr 34.
The molecular results obtained with the csLV34 marker, in combination with knowledge of the origin of the varieties under study, may indicate the most likely origin of the important gene Lr 34 in Egyptian wheat genotypes. Results of this research proved that Sakha 8 carries this gene; previous results in our laboratory [35] came to the same conclusion using genetic analysis. Sakha 8 was released in 1987. In that era, the 1970s, Akakomughi, of Japanese origin, appeared in the pedigree of all released Egyptian cultivars [36]. Akakomughi is a grandparent of the spring wheat variety Frontana, which was used widely as a source of Lr 34 [6]. Therefore, it may be concluded that the Lr 34 gene was first introduced into Egyptian varieties back in the 1970s. Also, Sakha 8 may have become the donor of this gene in subsequent derivatives of crosses which led to many recent varieties such as Sakha 94, Sakha 95, Sids 13 and Shandweel 1.
Finally, T. aestivum is hexaploid with a genome constitution of AABBDD, and was formed about 8,000 years ago from hybridization between T. turgidum (AABB) and A. tauschii (DD) [37]. Also, the csLV34 marker is very specific for T. aestivum and the D genome progenitor A. tauschii [11]. Therefore, it was investigated in the A. tauschii diploid genome. The results of this research confirmed the presence of the csLV34b allele and, consequently, the presence of the Lr 34 resistance gene in the diploid D genome progenitor. The presence of Lr 34 in the current A. tauschii suggests that this resistance gene may have arisen before hexaploid synthesis.
Equal-Interval Splitting of Quantum Tunneling in Single-Molecule Magnets with Identical Exchange Coupling
The equal-interval splitting of quantum tunneling observed in simple-Ising-model systems of Ni$_{4}$ (3D) and Mn$_3$ (2D) single-molecule magnets (SMMs) is reported. The splitting is due to the identical exchange coupling in the SMMs, and is simply determined by the difference between the two numbers of the spin-down $n_{\downarrow}$ and spin-up $n_{\uparrow}$ molecules neighboring to the tunneling molecule. The splitting may be presented as $(n_{\downarrow}-n_{\uparrow})JS/{g\mu_{0}\mu_{B}}$, and the number of the splittings follows $n+1$ where $n=n_{\downarrow}+n_{\uparrow}$ is the coordination number. Besides, since the quantum tunneling is heavily dependent on local spin environment, the manipulation of quantum tunneling may become feasible for this kind of system, which may shed new light on novel applications of SMMs.
Single-molecule magnets (SMMs) have been used as model systems to study the interface between classical and quantum behaviors, and are considered to be the most promising systems for applications in quantum computing, high-density information storage and magnetic refrigeration [1][2][3][4][5] due to the quantum tunneling of magnetization (QTM) observed in these systems [6][7][8][9]. Recent research on the impact of intermolecular exchange couplings upon the QTM has focused on whether the exchange coupling may change the quantum tunneling in SMMs. An SMM dimer system is reported to have quantum behavior different from that of the individual SMMs, due to the intermolecular exchange couplings between the two components [10,11]. It is also reported that, in an SMM dimer with a 3D network of exchange couplings, the QTM is not suppressed [12]. In this letter, we demonstrate that, for SMMs with identical exchange coupling (IEC), the quantum tunneling behavior is much simpler and the QTM might be conveniently manipulated by controlling the magnetization.
In the following, we report a unique quantum tunneling effect observed in the single-molecule magnets [Ni(hmp)(CH 3 CH 2 OH)Cl] 4 (hereafter Ni 4 ) [13,14] and [Mn 3 O(Et-sao) 3 (MeOH) 3 (ClO 4 )] (hereafter Mn 3 ) [15,16]. Ni 4 SMM is a crystal with a 3D network of exchange coupling, in which each molecule is coupled with four neighboring molecules by Cl···Cl contacts (which contribute to the exchange coupling), forming a diamond-like lattice. The Ni 4 crystal has S 4 symmetry, which ensures that the four exchange couplings between each molecule and its four neighboring molecules are identical throughout the crystal. Mn 3 SMM is a crystal with a 2D network of exchange coupling, in which each molecule is coupled with three neighboring molecules by hydrogen bonds (which contribute to the exchange coupling) in the ab plane, forming a honeycomb-like structure viewed down the c-axis. The Mn 3 crystal has C 3 symmetry, which ensures that the three exchange couplings between each molecule and its three neighboring molecules are identical throughout the crystal. We notice that both Ni 4 and Mn 3 SMMs are crystals with IEC and model systems of the simple Ising model [17]. We have observed the equal-interval splitting of quantum tunneling induced by IEC in these two systems by ac susceptibility and hysteresis loop measurements.
Considering the low blocking temperature, we studied the quantum tunneling effects of Ni 4 SMM by ac susceptibility measurements with a home-made compensation measurement setup [18]. Fig. 1 demonstrates the temperature dependence of the quantum tunneling behavior in Ni 4 SMM. Apparently, the peak at zero field disappears at 0.75K and 0.5K, which is consistent with the missing step at zero field in the magnetization hysteresis loops at 40mK [13]. As a result of different orientations, the step positions are different from those mentioned in Ref. [13]. We measured the quantum tunnelings at different orientations and found that the resonant fields along the easy axis of the sample are −0.21T, −0.11T, 0T, 0.11T and 0.21T, as shown in Fig. 1. It is seen that the tunneling peaks appear with equal interval. The shift of the tunneling peaks from higher to lower field with increasing T is due to the enhancement of the effect of thermal activation upon tunneling [19,20]. The higher blocking temperature allows us to study the hysteresis loops above 1.6K for Mn 3 SMM. Fig. 2 shows the typical step-like hysteresis loops of Mn 3 SMM at different temperatures. The blocking temperature estimated from the ZFC (zero field cooling) and FC (field cooling) curves shown in the inset is around 3K. The sweep-rate-dependent magnetization curves at 1.6K are shown in Fig. 3, with only the dM/dH curve at the sweeping rate of 0.0005T/s presented for simplicity. A series of quantum tunneling peaks with an equal interval of 0.36T are observed in the dM/dH curves, which is similar to those observed in Ni 4 SMM.
With IEC taken into account, the molecules are not isolated, and the spin Hamiltonian of each molecule may be presented as $\hat{H} = -D\hat{S}_z^2 - g\mu_0\mu_B H_z \hat{S}_z + J\hat{S}_z \sum_{i=1}^{n} \hat{S}_{iz}$ (1), where D is the axial anisotropy constant, n is the coordination number, J is the exchange interaction constant, and $\hat{S}_z$ and $\hat{S}_{iz}$ are the easy-axis spin operators of the molecule and its ith exchange-coupled neighboring molecule. For Ni 4 , S = 4, D = 0.86K, g = 2.12 [13,14]; while for Mn 3 , S = 6, D = 0.98K, g = 2.06 [16]. In Ni 4 SMM, every Ni 4 molecule has four AFM exchange-coupled neighboring molecules, and hence for each molecule there are five different kinds of local spin environment (LSE), which may be labeled by (n ↓ , n ↑ ), where n ↓ and n ↑ represent the numbers of the neighboring molecules which occupy the S z = −4 (hereafter |−4⟩) and S z = 4 (hereafter |4⟩) spin states, respectively (the excited spin states are not considered here, because most of them are not populated at our measurement temperatures). At negative saturated field, all the molecules initially occupy |−4⟩ in the same LSE (4, 0) shown in Fig. 4a (left). According to equation (1), the |−4⟩ and |4⟩ spin states in the LSE (4, 0) are degenerate when the field reaches 4JS/gµ 0 µ B ; therefore those molecules which occupy the |−4⟩ spin state in the LSE (4, 0) (Fig. 4a) have the same probability to undergo tunneling at 4JS/gµ 0 µ B , leading to the resonant tunneling peaks at −0.21T as seen in Fig. 1. Following this resonant quantum tunneling, some molecules will occupy the |4⟩ spin state, and the LSE of the molecules will no longer be identical. When the field reaches 2JS/gµ 0 µ B (corresponding to −0.11T as seen in Fig. 1), the resonant tunneling takes place from |−4⟩ to |4⟩ in the LSE (3, 1) (Fig. 4b). As a matter of fact, at zero field the tunneling of the molecules in the LSE (2, 2) (Fig. 4c) will change neither the Zeeman energy nor the exchange interaction energy, which gives rise to the macroscopic quantum tunneling observed at zero field at relatively higher temperatures shown in Fig. 1 [21]. At temperatures obviously below T N , the spins of the molecules will be anti-parallel to their neighbors, i.e., the molecules are in the LSEs (0, 4) and (4, 0) instead of the LSE (2, 2), which causes the missing of quantum tunneling at zero field at T ≤ 0.75K as seen in Fig. 1. However, in the vicinity of the transition temperature, some molecules are still in the LSE (2, 2) due to thermal fluctuation; thus there is still evidence of resonant quantum tunneling at zero field at 0.85K, as shown in Fig. 1.

[Fig. 4 caption: spin configurations corresponding to the tunnelings in Fig. 1, respectively. Other equivalent spin configurations are not listed here for simplicity. The tunneling molecule is marked in black, with a black arrow indicating its spin state; its four neighbors are marked in gray, with green and red arrows indicating spin-up and spin-down states, respectively; the blue lines between molecules represent the exchange couplings.]
In both Mn 3 and Ni 4 SMMs, with the presence of IEC, the tunneling between the two ground spin states |±S⟩ is split by an equal-interval field of 2|J|S/gµ 0 µ B . Generally, according to equation (1), the tunneling from |−S⟩ to |S − l⟩ is split by the same equal-interval field, and the split tunneling field may be simply expressed as $H_z = \frac{lD}{g\mu_0\mu_B} + \frac{(n_\downarrow - n_\uparrow)JS}{g\mu_0\mu_B}$ (2). The first term comes from the internal spin states in each molecule, and the second term is the tunneling splitting induced by IEC. The splitting is simply determined by the difference between the two numbers of the spin-down (n ↓ ) and spin-up (n ↑ ) molecules neighboring the tunneling molecule. According to equation (2), the number of splittings equals the number of different kinds of (n ↓ , n ↑ ) LSEs, and hence may be expressed as $\binom{n+1}{1} = n + 1$ by combinatorics, where n = n ↓ + n ↑ .
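Equation (2), as reconstructed above, can be checked numerically against the parameters quoted in the text (treating the field as magnetic flux density in tesla and J in kelvin); a minimal sketch using scipy's physical constants:

```python
from scipy.constants import physical_constants, k as k_B

mu_B = physical_constants["Bohr magneton"][0]  # J/T

def splitting_interval(J_kelvin, S, g):
    """Equal-interval splitting 2|J|S/(g*mu_B), with the exchange J given in kelvin."""
    return 2.0 * abs(J_kelvin) * k_B * S / (g * mu_B)

# Mn3 parameters from the text: J = -0.041 K, S = 6, g = 2.06
print(f"{splitting_interval(0.041, 6, 2.06):.2f} T")  # ~0.36 T, as observed

# The number of split resonances is n + 1 for coordination number n:
for n in (3, 4):  # Mn3 honeycomb (2D) and Ni4 diamond-like (3D) lattices
    print(f"n = {n}: {n + 1} resonances")
```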
According to equation (2), when D > n|J|S and |H z | < (D − n|J|S)/gµ 0 µ B , any quantum tunneling with l ≠ 0 is not allowed; while according to equation (1), when the first excitation energy of a molecule D(2S − 1) ≫ kT, almost all molecules will occupy the two ground spin states |±S⟩. Therefore, under the above conditions, equation (1) may be simplified as $\hat{H} = -g\mu_0\mu_B H_z S\sigma_z + JS^2 \sigma_z \sum_{i=1}^{n} \sigma_{iz}$ (3), with σ z = ±1 labeling the two ground states, which is just the Hamiltonian of the simple Ising model [17]. For both Ni 4 and Mn 3 , D > n|J|S; thus Ni 4 and Mn 3 SMMs are good model systems of the simple Ising model at low temperature and low field, which is important for the studies of quantum tunneling behavior and related applications. Since the intermolecular exchange couplings are identical in the system, the magnitude T of a tunneling may be simply factorized into an intermolecular contribution N (n ↓ ,n ↑ ) and an intramolecular contribution P |m i ⟩→|m f ⟩ as $T = \alpha N_{(n_\downarrow,n_\uparrow)} P_{|m_i\rangle\to|m_f\rangle}$ (4), where N (n ↓ ,n ↑ ) is the number of molecules with the LSE (n ↓ , n ↑ ), P |m i ⟩→|m f ⟩ is the tunneling probability of the molecule from the spin state |m i ⟩ to |m f ⟩, and α is a constant. N (n ↓ ,n ↑ ) strongly depends on the magnetization M and may be easily modulated, while P |m i ⟩→|m f ⟩ is determined by the tunneling barrier between |m i ⟩ and |m f ⟩ inside molecules and is hardly controllable. Therefore, for SMMs-with-IEC, with the dependence of T on N (n ↓ ,n ↑ ) , the manipulation of quantum tunneling should be rather simple. The quantum tunnelings from the same initial states |m i ⟩ to the same final states |m f ⟩ but with different LSEs are referred to as a tunneling set. The five tunneling peaks of Ni 4 SMM in Fig. 1 belong to the same set |−4⟩ → |4⟩ and have the same P |m i ⟩→|m f ⟩; thus the intensities of the five peaks are proportional to N (n ↓ ,n ↑ ) , which means that N (n ↓ ,n ↑ ) may be monitored by macroscopic measurements of the tunneling peaks. For Mn 3 SMM, the AFM exchange coupling constant J is calculated to be J = −0.041K according to the field interval of the |−6⟩ → |6⟩ tunneling set (Fig. 3). However, the axial anisotropy constant D = 0.98K [16] of Mn 3 SMM happens to be close to 4|J|S, which results in the overlap of two adjacent tunneling sets, demonstrated by the overlapped dotted lines shown in Fig. 3. The tunneling steps at 0.18T and 0.54T are the combinations of the tunnelings from |−6⟩ to |6⟩ with the LSEs (1, 2) and (0, 3) (marked by red dotted lines) and the quantum tunnelings from |−6⟩ to |5⟩ with the LSEs (3, 0) and (2, 1) (marked by blue dotted lines), respectively. Similarly, all subsequent tunneling steps are combinations of quantum tunnelings in different tunneling sets with different local spin environments. It may be worth a mention that the tunnelings are expected to occur at 1.62T and 1.98T (marked by green and orange dotted lines) at lower temperatures as well, although not observed in these curves.
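The near-coincidence of the two tunneling sets at the 0.18T step can likewise be verified from the reconstructed equation (2) with the Mn 3 parameters quoted above; both resonance fields come out within a few millitesla of each other:

```python
from scipy.constants import physical_constants, k as k_B

mu_B = physical_constants["Bohr magneton"][0]  # J/T
D, J, S, g = 0.98, -0.041, 6, 2.06             # Mn3 parameters (kelvin)

def resonance_field(l, n_down, n_up):
    """Eq. (2): H = [l*D + (n_down - n_up)*J*S] / (g*mu_B), inputs in kelvin."""
    return (l * D + (n_down - n_up) * J * S) * k_B / (g * mu_B)

# |-6> -> |6> with LSE (1, 2) vs |-6> -> |5> with LSE (3, 0):
print(f"{resonance_field(0, 1, 2):.3f} T")  # ~0.178 T
print(f"{resonance_field(1, 3, 0):.3f} T")  # ~0.175 T, overlapping near 0.18 T
```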
Of the overlapped tunnelings mentioned above, due to the dependence of tunneling on the local spin environment, the contribution of each individual tunneling changes as the field sweeping rate varies. For example, the tunneling step at 0.18T is the combination of the tunneling from |−6⟩ to |6⟩ with the LSE (1, 2) and the tunneling from |−6⟩ to |5⟩ with the LSE (3, 0); therefore the tunneling magnitude is determined by $N_{(3,0)}P_{|-6\rangle\to|5\rangle} + N_{(1,2)}P_{|-6\rangle\to|6\rangle}$, where N (3,0) and N (1,2) strongly depend on the magnetization M. As shown in Fig. 3, for the tunneling at 0.18T, M/M s increases with decreasing field sweeping rate, which suggests that N (1,2) is increasing while N (3,0) is decreasing, and hence the contribution of the tunneling from |−6⟩ to |6⟩ with the LSE (1, 2) eventually takes dominance over the contribution of the tunneling from |−6⟩ to |5⟩ with the LSE (3, 0).
Due to the strong dependence of a tunneling on N (n ↓ ,n ↑ ) based on equation (4), the subsequent quantum tunneling heavily depends on the preceding quantum tunnelings in SMMs-with-IEC. As shown in Fig. 3, the tunneling at −0.54T (from |−6⟩ to |6⟩ with the LSE (3, 0)) is inherited by the tunneling at −0.18T (from |−6⟩ to |6⟩ with the LSE (2, 1)); the tunnelings at −0.54T and −0.18T are further carried on by the next tunneling, and the process continues as the LSE changes. In fact, the history dependence is not prominent for Ni 4 SMM, because the measurements were performed at temperatures much higher than its blocking temperature, where the thermally activated effect ruins the memory of history. Apparently, the subsequent quantum tunneling is more heavily dependent on the preceding quantum tunnelings in SMMs-with-IEC when the thermally activated effect is severely suppressed as the temperature drops sufficiently. This indicates a new way of manipulating quantum tunneling.
In summary, we performed detailed ac susceptibility and hysteresis loop measurements on Ni 4 and Mn 3 single crystals, respectively, and have observed the equal-interval splitting of quantum tunneling in both systems. The splitting of quantum tunneling is given by (n ↓ − n ↑ )JS/gµ 0 µ B , and the number of splittings follows n + 1, where n = n ↓ + n ↑ is the coordination number. Since the splitting is induced by the IEC between the molecules, these rules should be universally applicable to all single-molecule magnets with IEC. Besides, it is demonstrated that the manipulation of quantum tunneling may become feasible for this kind of system, which may shed new light on novel applications of SMMs.
We thank Prof. Dianlin Zhang, Lu Yu, and Li Lu for helpful discussions. We also thank Shaokui Su for experimental assistance. This work was supported by the National Key Basic Research Program of China (No.2011CB921702) and the Natural Science Foundation of China (No.11104331).
Ascorbic Acid Retention in Fresh-Cut Broccoli Florets during Hyperbaric Storage
We investigated the efficacy of hyperbaric storage for preserving ascorbic acid (AsA) in fresh-cut broccoli florets. The samples were stored in a container pressurized at 0.3 or 2.1 MPa of air at 8 ℃ for 14 d. Florets stored under atmospheric pressure (0.1 MPa) were used as a control. We assayed AsA content and the activities of enzymes involved in AsA degradation and recycling, including ascorbate peroxidase (APX), dehydroascorbate reductase (DHAR), and glutathione reductase (GR), as well as antioxidant enzymes such as superoxide dismutase (SOD) and catalase (CAT). Changes in the partial pressures of O 2 and CO 2 in the storage container were also determined. AsA content was successfully maintained for 14 d under both of our hyperbaric treatments and was approximately twice as high as the AsA content in the control treatment. Activities of CAT, APX, GR and SOD, but not DHAR, increased at 0.3 MPa, whereas florets stored at 2.1 MPa showed almost no enzymatic activity. Respiration was slowed in florets stored under hyperbaric conditions. Our results suggest that the physiological response of fresh-cut broccoli florets to hyperbaric conditions varied with the magnitude of the pressure applied, and in particular that the enhancement of CAT enzyme activity leads to AsA retention at 0.3 MPa.
INTRODUCTION
The market for fresh-cut produce has been growing continuously in response to an increase in consumer demand for convenience. At the same time, consumers have become aware of the importance of consuming a diet high in fresh fruits and vegetables. However, it is well known that fresh-cut produce has a short shelf life due to damage caused by minimally processing it (i.e., wounding during processing causes an increase in respiration rate and ethylene production); these changes lead to an acceleration of the senescence process and degradation of nutritional compounds (Abe and Chachin, 1995;Martiñon et al., 2014). Fresh-cut broccoli, an important minimally-processed vegetable, contains a high amount of bioactive compounds, especially ascorbic acid (AsA). However, a rapid decline in AsA content has been observed during storage (Raseetha et al., 2013). The decline in AsA content is the most important change associated with quality deterioration. Thus, proper postharvest treatments are essential for maintaining AsA content and thus extending the shelf life of fresh-cut broccoli.
Generally, low temperature can slow the reduction of AsA in intact and fresh-cut produce. The ability to preserve AsA content at low temperatures can be improved by combining refrigeration with chemical treatments such as edible coatings (Bal, 2013; Hassan et al., 2014; Sohail et al., 2015). However, the use of chemical treatments has been declining in response to consumer demand for safer foods. Rather than using chemical treatments, physical postharvest techniques such as UV irradiation, hot water treatment, modified atmosphere packaging (MAP), and controlled atmosphere (CA) storage have been successfully applied to preserve AsA in fresh produce (Agar et al., 1997; Nunes et al., 1998; Barry-Ryan and O'Beirne, 1999; Moretti et al., 2003; Mirdehghan et al., 2006; Koukounaras et al., 2008; Zenoozian et al., 2011a; 2011b; Sucharitha et al., 2012).
Recently, storing fresh produce under high-O2 conditions has been suggested as an alternative way to maintain AsA content, reduce microbial growth, and inhibit enzymatic browning of intact and fresh-cut produce (Kader and Ben-Yehoshua, 2000; Jacxsens et al., 2001; Allende et al., 2002; Chunyang et al., 2010; Zhang et al., 2013; Banda et al., 2015). Hyperbaric treatment also involves the use of O2 at a partial pressure greater than that of the atmospheric condition. This technique increases the partial pressure of O2 by injecting high-pressure air into a storage container to increase the total pressure of its gases. The use of hyperbaric treatment with compressed air has been reported to inhibit microbial growth, reduce weight loss of produce, reduce CO2 and ethylene production, and delay the ripening of fresh produce (Baba and Ikeda, 2003; Goyette et al., 2012; Liplap et al., 2013a; 2013b; Fernandes et al., 2015). However, only a few studies have been conducted on the effects of hyperbaric treatment on antioxidant compounds such as AsA. Moreover, no studies have ever been conducted on fresh-cut produce.
In this study, fresh-cut broccoli florets were continuously stored under high-pressure conditions at 8 ℃ for 14 d. We investigated the usefulness of hyperbaric storage for AsA retention by measuring respiratory O2 consumption and CO2 production and the activities of enzymes involved in AsA degradation and recycling, such as ascorbate peroxidase (APX), dehydroascorbate reductase (DHAR), and glutathione reductase (GR), as well as antioxidant enzymes including superoxide dismutase (SOD) and catalase (CAT). Furthermore, we discuss the mechanism of AsA retention in light of these measured variables.
MATERIALS AND METHODS
Plant material and sample preparation
Fresh broccoli heads (Brassica oleracea L.) were obtained from a wholesale market. We selected broccoli heads that were of uniform color and had no visual defects. The heads were then cut into individual florets with a sharp knife.
Broccoli florets were subjected to three different pressurized-air storage treatments (all at 8 ℃): 0.1, 0.3 and 2.1 MPa. Under the two hyperbaric storage treatments, approximately 21 g of fresh-cut broccoli florets were put into a 1.5 × 10⁻⁴ m³ high-pressure container (TVS-1, Taiatsu Techno Co., Tokyo, Japan). The container's lid was equipped with a pressure meter and a needle valve. To enable high-pressure air injection, the intake port of the container was connected via a gas injection tube to a high-pressure gas cylinder. Air was then forced into the container until the internal pressure reached the required magnitude (0.3 or 2.1 MPa, respectively). The needle valve was then closed and the gas injection tube was removed from the intake of the pressure container. For the control treatment, the fresh-cut broccoli florets were put in a beaker covered with plastic wrap punctured with small holes to prevent water loss under the atmospheric condition (0.1 MPa). All samples were moved to an incubator (MIR-154-PJ, Panasonic Healthcare Holdings Co., Ltd., Tokyo, Japan) set at 8 ℃ for storage. After storage, the fresh-cut broccoli samples were immediately frozen in liquid nitrogen and stored at −50 ℃ for analysis of AsA content and enzyme activities. This experiment was replicated three times.
Determination of the partial pressures of O2 and CO2 before and after hyperbaric treatment
First, the total pressure in the high-pressure container was obtained from the reading of the pressure meter. Then, headspace gas in the container was collected in a glass bottle using the water displacement method after 0 and 14 d of storage. A 0.2 mL sample of the gas was withdrawn from the bottle and injected into the GC analyzer (GC-14A, Shimadzu Co., Kyoto, Japan). O2, N2, and CO2 in the gas sample were separated using Molecular Sieve 5A and Porapak Q columns and detected with a thermal conductivity detector. Helium was used as the carrier gas. The O2 and CO2 concentrations obtained by GC analysis were converted to partial pressures according to the ideal gas law.
Estimation of the total amount of respiratory O2 consumption and CO2 production during storage
For the fresh-cut broccoli florets stored under the unpressurized condition (0.1 MPa), the rates of respiratory O2 consumption and CO2 production were measured by a closed-system method after 14 d of storage. Approximately 2.5 g (precisely weighed) of sample was put into a 5.5 × 10⁻⁵ m³ hermetic glass bottle and moved to an incubator set at 8 ℃. A 0.2 mL sample of headspace gas was withdrawn with a gastight syringe at 30-min intervals for 2 h and injected into the GC for determination of the O2 and CO2 concentrations. The free volume of the bottle was estimated by subtracting the sample volume, which was measured by water displacement. The rates of O2 consumption and CO2 production were calculated using Eq. (1):

R_{O2,CO2} = (ΔC_gas / 100) · P V_f / (R T W)   (1)

where R_{O2,CO2} is the rate of O2 consumption or CO2 production (mmol kg⁻¹ h⁻¹), ΔC_gas is the change of the gas concentration (O2 or CO2) in the bottle (% h⁻¹), V_f is the free volume (m³), W is the weight of the sample (kg), P is the atmospheric pressure (= 0.1 MPa), R is the universal gas constant (= 8.314 J K⁻¹ mol⁻¹), and T is the absolute temperature (K).
The total amount of respiratory O2 consumption and CO2 production under the ambient-pressure condition during storage was estimated by multiplying the respiration rate obtained at day 14 by the total storage hours (= 336 h), assuming that the respiration rate was constant during the 14 d of storage. For the hyperbaric conditions, it was calculated using Eq. (2), assuming that the changes of the O2 and CO2 partial pressures in the high-pressure container came from the respiration of the sample.
Q_{O2,CO2} = Δp_gas · V_f / (R T W)   (2)

where Q_{O2,CO2} is the total amount of O2 consumption or CO2 production by the produce during storage (mmol kg⁻¹), and Δp_gas is the change in the partial pressure of O2 or CO2 between the beginning (i = 0 d) and the end (a = 14 d) of the storage period (MPa).
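Purely as an illustration of the unit handling in the two reconstructed equations above, the following sketch evaluates them with the ideal gas law; the function names and the example inputs are ours, not the authors'.

```python
# Minimal sketch of Eqs. (1) and (2) as reconstructed above.
R_GAS = 8.314            # universal gas constant, J K^-1 mol^-1
T_STORE = 273.15 + 8.0   # storage temperature (8 degC), K

def respiration_rate(dC_gas, V_f, W, P=0.1e6, T=T_STORE):
    """Eq. (1): rate (mmol kg^-1 h^-1) from a concentration change dC_gas in % h^-1."""
    # P*V_f/(R*T) is the molar amount of gas in the free volume (ideal gas law)
    return (dC_gas / 100.0) * P * V_f / (R_GAS * T * W) * 1e3

def total_gas_exchange(dp_gas, V_f, W, T=T_STORE):
    """Eq. (2): total exchange (mmol kg^-1) from a partial-pressure change dp_gas in MPa."""
    return (dp_gas * 1e6) * V_f / (R_GAS * T * W) * 1e3

# e.g. a 0.01 MPa drop of O2 over 14 d, assuming ~1.3e-4 m^3 free volume, 21 g sample
print(total_gas_exchange(0.01, 1.3e-4, 0.021))   # ~27 mmol kg^-1 over the period
```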
Determination of ascorbic acid content
AsA was extracted and analyzed as described by Al-Ani et al. (2007) and Kapur et al. (2012), with some modifications. For the extraction of AsA, 2 g of frozen broccoli florets was placed into a 50 mL plastic centrifuge tube containing 8 mL of 5% (w/v) metaphosphoric acid and then homogenized on ice using a physcotron homogenizer (NS-52K, Microtec, Chiba, Japan). The homogenate was then filtered through 5A quantitative filter paper.
To determine the total AsA in the extract, 90 µL of 0.2% (w/v) 2,6-dichlorophenolindophenol was added to 1 mL of the extracted sample solution to oxidize L-ascorbic acid (L-AsA) to dehydroascorbic acid (DHAA). Next, we added 1 mL of thiourea and 0.5 mL of 2% (w/v) 2,4-dinitrophenylhydrazine-sulfate solution. All samples and blank solutions were incubated in a water bath at 50 ℃ for 30 min. After incubation, they were cooled on ice for 30 min. Then, 2.5 mL of cooled 85% (v/v) sulfuric acid was slowly added to all test tubes and mixed with a vortex mixer. The solution was kept at room temperature for 30 min, and the absorbance at a 540 nm wavelength was then measured using a spectrophotometer (Shimadzu UV-1600, Kyoto, Japan). DHAA analysis was conducted using the same protocol, except that 2,6-dichlorophenolindophenol was not used. The difference between the total AsA and DHAA contents was calculated to obtain L-AsA. AsA content was expressed as mg of L-AsA and DHAA per 100 g of fresh weight (FW) of fresh-cut broccoli florets. The AsA measurement was conducted in triplicate.
Enzyme activity assays
To extract enzymes, 0.5 g of frozen broccoli florets was homogenized in 10 mL of cold 50 mM phosphate buffer (pH 7) with the homogenizer set at 20,000 rpm for 30 s. The resulting homogenate was passed through filter paper and then centrifuged (Kubota 1720, RA-48J rotor, Tokyo, Japan) at 15,500 ×g for 25 min at 4 ℃. The resulting supernatant was collected as the crude extract and kept on ice for the determination of enzyme activities.
Ascorbate peroxidase (APX) activity was determined according to Cakmak and Marschner (1992), with slight modifications. The reaction mixture contained 50 mM potassium phosphate with 1 mM EDTA disodium salt (pH 7.5), 10 mM L-AsA, and the crude enzyme extract in a total volume of 2.9 mL. The reaction was started by adding 0.1 mL of 15 mM H2O2. The oxidation rate of L-AsA was measured at 25 ℃ by monitoring the absorbance at 290 nm using a spectrophotometer (UV1600, Shimadzu Co., Kyoto, Japan) for 2 min at 10-s intervals. Specific activity was calculated using an extinction coefficient of 2.8 mM⁻¹ cm⁻¹.
Superoxide dismutase (SOD) activity was determined according to Beyer and Fridovich (1987), with slight modifications. The reaction mixture contained 2.4 mL of sodium carbonate buffer (pH 10.2), 0.1 mL of 0.75 mM nitroblue tetrazolium (NBT), 0.1 mL of 3 mM EDTA disodium salt, 0.1 mL of 0.15% (w/v) bovine serum albumin (BSA), and 0.1 mL of the crude enzyme extract. The reaction mixture was pre-incubated at 25 ℃ for 10 min, after which 0.1 mL of xanthine oxidase was added to the solution to start the reaction. The reaction mixture was then incubated at 25 ℃ for 20 min. To stop the reaction, 0.2 mL of 10 mM CuCl2 was added. The reference solution contained all reagents except the tissue extract; a phosphate buffer was used instead of the crude enzyme extract. Absorbance at 560 nm was measured to determine the rate of NBT reduction. One unit of SOD was defined as the enzyme activity that inhibited the NBT reduction process by 50%. SOD activity was expressed as units of activity per milligram of protein (units mg⁻¹ protein).
Catalase (CAT) activity was determined as described by Aebi (1974), with some modifications. Enzyme activity was assayed by measuring the rate of elimination of hydrogen peroxide (H2O2). The reaction mixture consisted of 1 mL of H2O2 (10 mM; pH 7) and 2 mL of the crude enzyme extract, which had been diluted 1:4 in the extraction buffer. The rate of the reaction was determined by monitoring the decline in absorbance at 240 nm. Specific activity was calculated using an extinction coefficient of 0.04 mM⁻¹ cm⁻¹.
Dehydroascorbate reductase (DHAR) activity was determined according to Kato et al. (1997), with slight modifications. The reaction mixture contained 0.8 mL of potassium phosphate buffer (50 mM; pH 7), 0.1 mL of 0.1 mM EDTA disodium salt, 1 mL of 2.5 mM reduced glutathione, 1 mL of 0.2 mM DHAA, and 0.1 mL of the crude enzyme extract. The absorbance was monitored at 265 nm. The rate of non-enzymatic reduction of DHAA was determined as a blank. Specific activity was calculated using an extinction coefficient of 14 mM⁻¹ cm⁻¹.
Glutathione reductase (GR) activity was determined as described by Hodges et al. (1997), with slight modifications. The reaction mixture contained 1.6 mL of 100 mM phosphate buffer (pH 7.8), 0.2 mL of 100 mM oxidized glutathione, 0.2 mL of 15 mM EDTA disodium salt, 0.04 mL of 10 mM NADPH in 1% (w/v) NaHCO3, and 0.6 mL of the crude enzyme extract. Activity was determined by following the oxidation of NADPH at 340 nm. Specific activity was calculated using an extinction coefficient of 6.2 mM⁻¹ cm⁻¹.
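The APX, CAT, DHAR and GR assays above all share the same Beer–Lambert arithmetic: an absorbance slope is converted to a reaction rate with the stated extinction coefficient and normalized to the protein content. A minimal sketch of that conversion follows; the helper name and the example values are ours and purely illustrative, not taken from the paper.

```python
def specific_activity(abs_slope_per_min, ext_coeff_mM_cm, path_cm,
                      v_total_mL, v_extract_mL, protein_mg_per_mL):
    """Convert an absorbance slope (min^-1) to umol min^-1 mg^-1 protein."""
    dC_mM_per_min = abs_slope_per_min / (ext_coeff_mM_cm * path_cm)  # Beer-Lambert
    rate_umol_per_min = dC_mM_per_min * v_total_mL   # mM * mL = micromol
    protein_mg = v_extract_mL * protein_mg_per_mL    # protein in the assayed extract
    return rate_umol_per_min / protein_mg

# e.g. an APX-like run: A290 slope of 0.05 min^-1, eps = 2.8 mM^-1 cm^-1, 1 cm cell,
# 3 mL reaction volume, 0.1 mL extract at 1.2 mg protein mL^-1 (illustrative values)
print(specific_activity(0.05, 2.8, 1.0, 3.0, 0.1, 1.2))
```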
Protein content was measured using a Pierce ® BCA protein assay kit (Thermo Scientific, USA). Bovine serum albumin was used as the standard protein for calculating specific enzyme activity.
Statistical analysis
The AsA and enzyme assays were replicated three times per storage treatment. The effect of hyperbaric storage on each response variable was tested with one-way ANOVA (followed by a Tukey's post-hoc test at the 5% level of significance) on the R software platform v.3.1.0 (R Foundation for Statistical Computing).

RESULTS AND DISCUSSION
Effect of hyperbaric storage on AsA content in fresh-cut broccoli florets
Figure 1 shows the change in AsA content of fresh-cut broccoli florets stored under the various pressure treatments at 8 ℃ for 14 d. After 14 d of storage, the broccoli florets stored at 0.1 MPa (the unpressurized condition) showed a 60% reduction in AsA content, whereas the broccoli florets stored under the 0.3 and 2.1 MPa conditions exhibited only 20% and 21% reductions in AsA, respectively. There was no significant difference in AsA content between fresh-cut broccoli florets stored at 0.3 and 2.1 MPa. However, in the broccoli florets stored at 2.1 MPa, only DHAA was detected. In order to understand the mechanism of AsA retention under the hyperbaric treatments, the activities of the enzymes involved in the AsA degradation and recycling system and of the antioxidant enzymes were assayed.
Effect of hyperbaric storage on activities of enzymes involved in AsA degradation and recycling system and antioxidant enzymes in fresh-cut broccoli florets
The activities of the enzymes involved in the AsA degradation and recycling system and of the antioxidant enzymes in fresh-cut broccoli florets stored under the three pressure treatments are shown in Fig. 2. The activities of the APX, DHAR, GR, SOD and CAT enzymes in fresh-cut broccoli florets stored at 0.1 MPa (the unpressurized condition) did not change significantly after 14 d of storage. Conversely, the activities of the APX, GR, SOD and CAT enzymes in broccoli florets stored at 0.3 MPa were 2.1, 1.3, 2.3, and 4.5 times higher, respectively, than those at 0.1 MPa, whereas DHAR activity decreased by half. When the broccoli florets were stored at 2.1 MPa, the activities of APX, DHAR, SOD and CAT were almost completely lost, and the activity of GR also decreased.
The induction of enzyme activity in broccoli florets stored at 0.3 MPa could have resulted from storage under a high partial pressure of O2. According to Duan et al. (2011), 0.1 MPa O2 induces the activities of SOD, CAT, and APX in litchi fruit. Similarly, Liu and Wang (2012) found high SOD and CAT activities in mushrooms stored at 0.08 MPa O2. This induction of antioxidant enzyme activity is a protective reaction to the accumulation of reactive oxygen species (ROS) in plant tissue caused by high-O2 conditions. In fact, the rates of O2·⁻ and H2O2 production have been reported to increase when broccoli heads are stored at 0.1 MPa O2 (Guo et al., 2013). On the other hand, the activities of all enzymes were suppressed when the broccoli florets were stored at 2.1 MPa, suggesting that there might have been an imbalance between ROS detoxification and ROS production, resulting in protein oxidation and enzyme inactivation.
In order to maintain substantial levels of L-AsA, both the AsA degradation system, including the antioxidant enzymes, and the AsA recycling system, including the MDHAR, DHAR and GR enzymes, play a crucial role, as shown in Fig. 3. In our observations, increased APX and decreased DHAR activities were found in the broccoli florets stored at 0.3 MPa; however, these observations contradict the high L-AsA and low DHAA contents shown in Fig. 1. These results lead to the hypothesis that MDHAR is also induced by the high partial pressure of O2. The elevated activity of this enzyme may rapidly convert MDHA back to L-AsA before the disproportionation of MDHA into DHAA can occur. Another possible reason for the preservation of a high level of L-AsA in broccoli florets stored at 0.3 MPa is the induction of CAT activity (approximately 4.5 times), which eliminates H2O2 more effectively than the induced APX activity (2.1 times) or the action of L-AsA itself. Even though all enzyme activities were suppressed at 2.1 MPa, we still observed remaining DHAA content in the broccoli florets (Fig. 1). This is caused by the difference between the L-AsA oxidation rate and the DHAA decomposition rate. Under the pressurized condition, H2O2 produced by the high partial pressure of O2 is non-enzymatically eliminated by L-AsA itself, leading to the production of DHAA. Normally, DHAA is then hydrolyzed to 2,3-diketo-L-gulonate by L-dehydroascorbate lactonohydrolase. However, considering the suppression of all enzyme activities in broccoli florets stored at 2.1 MPa (Fig. 2), it can be assumed that the activity of L-dehydroascorbate lactonohydrolase is suppressed as well. In addition, DHAA cannot be converted back into L-AsA because of the suppression of the enzymes involved in the recycling system, resulting in the persistence of DHAA.
Based on our results, the response of these enzymes to hyperbaric conditions differs with the magnitude of the applied pressure, which in turn leads to different retention mechanisms for AsA in fresh-cut broccoli florets.
Effect of hyperbaric storage on respiration of fresh-cut broccoli florets
Table 1 shows the partial pressures of O2 and CO2 in the high-pressure containers holding fresh-cut broccoli florets stored under the three pressure treatments, before and after 14 d of storage. A decline in the partial pressure of O2 and an increase in the partial pressure of CO2 were observed in the pressure containers at the end of the storage period. These changes in partial pressures were caused by respiration in the fresh-cut broccoli florets. Harvested fruits and vegetables are still alive, and they respire to obtain the energy needed to continue their metabolic processes. During respiration, glucose is oxidized to CO2, while O2, which serves as the electron acceptor, is reduced to H2O. Thus, CO2 is released into the surrounding atmosphere, resulting in an accumulation of CO2 in the container, whereas O2 in the container is depleted.
The change in the AsA content of broccoli after harvest is related to the total amount of O2 intake or CO2 production by respiration (Techavuthiporn et al., 2008). Therefore, considering the relationship between respiration and AsA is also important for a better understanding of the mechanism of AsA retention under hyperbaric conditions. The total amounts of O2 consumption and CO2 production of samples stored under the two tested hyperbaric conditions were significantly suppressed relative to samples stored under the 0.1 MPa condition. Comparing 0.3 and 2.1 MPa, there was no significant difference in CO2 production, whereas the O2 consumption at 0.3 MPa was significantly lower than that at 2.1 MPa (Table 1). The decline in respiration of fresh-cut broccoli stored at 0.3 MPa is possibly due to the increase in the total pressure of the surrounding atmosphere. Inside the plant tissue, the intercellular spaces form a connected network filled with gas (Kuroki et al., 2004), and these interconnected air spaces are necessary for the respiration of the tissue (Woolley, 1983). Under hyperbaric conditions, the broccoli tissue is compressed by pressure from all directions. This may cause a subdivision of the gas-filled intercellular spaces in the broccoli tissue, limiting the diffusion of O2 between the inside and outside of the cells and leading to a lower respiration rate. From this point of view, the gas exchange rate between the outside and the inside of the tissue would be expected to decrease as the applied pressure increases. However, in our observations, the total O2 consumption of broccoli florets stored at 2.1 MPa was higher than that at 0.3 MPa. This result cannot be explained by the pressure-induced subdivision of the gas-filled intercellular spaces. One possible reason might be an enhancement of oxidative reactions, such as the auto-oxidation of lipids, due to physical cell damage caused by the excess pressure. Overall, however, it remains difficult to explain the effect of hyperbaric pressure on the respiration rate clearly. Apart from the pressure effect, the decreased respiration may also have resulted from the elevated partial pressure of CO2 at both 0.3 and 2.1 MPa. A reduction in respiration rate was observed in broccoli stored at 0.06 MPa CO2 + 0.02 MPa O2 + 0.02 MPa N2 compared with broccoli stored under an ordinary air condition (Kubo et al., 1989). Gunes et al. (2001) also found that an increase in the partial pressure of CO2 from 0 to 0.03 MPa suppressed the respiration rate and ethylene production of fresh-cut apple slices. It is well known that ROS are produced during respiration (Tripathy and Oelmüller, 2012) and are subsequently scavenged by L-AsA. In our study, fresh-cut broccoli florets stored under hyperbaric conditions showed decreased O2 consumption and CO2 production, suggesting that the total pressure and the elevated partial pressure of CO2 under pressurized-air storage slow down the respiratory activity of fresh-cut broccoli florets, which is one of the major causes of AsA degradation in fresh produce.
CONCLUSION
In this study, we investigated the effect of hyperbaric storage on AsA retention in fresh-cut broccoli florets. AsA was successfully maintained by storing broccoli under pressurized-air storage conditions, which also suppressed the respiration of the broccoli florets during storage. In contrast, AsA content decreased by 60% when the broccoli florets were stored under ambient pressure (0.1 MPa). The responses of the enzymes involved in AsA degradation and recycling, and of the antioxidant enzymes, differed depending on the magnitude of pressure applied. A hyperbaric pressure of 0.3 MPa has the potential to maintain L-AsA content by promoting CAT enzyme activity and reducing respiratory activity. Furthermore, hyperbaric storage may provide a means for the fresh-cut produce industry to improve the nutritional value of its produce.
The 0.1 MPa treatment was the unpressurized condition. The total amounts of O2 consumption and CO2 production during 14 d of storage were estimated by multiplying the obtained respiration rate by the storage hours (see "Materials and methods" for the explanation). Values represent the means of three replicates with standard deviations (SD). Values within columns followed by the same letter are not significantly different at P = 0.05 (ANOVA with a Tukey's post-hoc test).
A New Recurrence Formula for Efficient Computation of Spherical Harmonic Transform
A new recurrence formula to calculate the associated Legendre functions is proposed for efficient computation of the spherical harmonic transform. This new recurrence formula makes the best use of the fused multiply–add (FMA) operations implemented in modern computers. The computational speeds in calculating the spherical harmonic transform are compared between a numerical code in which the new recurrence formula is implemented and another code using the traditional recurrence formula. This comparison shows that implementation of the new recurrence formula contributes to a faster transform. Furthermore, a scheme to maintain the accuracy of the transform, even when the truncation wavenumber is huge, is also explained.
Introduction
In many global atmospheric general circulation models, e.g., OpenIFS (ECMWF 2017) and DCPAM (Takahashi et al. 2016), the spectral method with spherical harmonics is used for horizontal discretization. To evaluate the nonlinear terms in these models, the transform method, introduced by Orszag (1970), is used for computational efficiency. The transform method requires two kinds of transforms: grid-point data are transformed to coefficients of the spherical harmonics in the forward transform, and the reverse is done in the backward transform. We call these two transforms the spherical harmonic transform hereinafter. Because the spherical harmonic transform must be computed many times per time step in the time integration of a general circulation model, it is a fundamental tool, and it is desirable to reduce its computational time. Note that the spherical harmonic transform is also used in other research areas, e.g., cosmic microwave background studies (Reinecke et al. 2006) and planetary dynamos (Wicht and Tilgner 2010).
The spherical harmonic transform consists of the discrete Fourier transform in the longitudinal direction and, in the latitudinal direction, a transform between coefficients of associated Legendre functions and function values at specified nodes, which we call the associated Legendre function transform hereinafter. The discrete Fourier transform can be computed with the fast Fourier transform (FFT) algorithm, and its computational complexity is O(M² log₂ M), where M is the truncation wavenumber of the spherical harmonic transform. The associated Legendre function transform has a computational complexity of O(M³), and so it is one of the most time-consuming parts of the dynamics computation in a spherical spectral model when the horizontal resolution of the model becomes fine. For the associated Legendre function transform, no exact fast algorithm has been discovered. However, several fast algorithms have been proposed to compute the associated Legendre function transform approximately, e.g., Healy et al. (2003) and Seljebotn (2012). In particular, in Seljebotn (2012), a fast algorithm with a computational complexity of O(M² log₂ M) was proposed, and its implementation was described in detail. Although such approximate fast algorithms may be promising when M is large because of the asymptotic behavior of their computational complexity, they have disadvantages: for example, their implementations are very complicated, they conduct the transform only approximately within a given accuracy, and they require a huge memory area when M is large.
As a strategy for dealing with the computational complexity of the associated Legendre function transform other than exploring a fast algorithm, optimizing the corresponding numerical code is a natural choice. Schaeffer (2013) showed that his highly optimized numerical code for the spherical harmonic transform spent much less computational time than other competitive codes, including numerical codes based on fast algorithms. While several tuning techniques have been incorporated into his numerical code, one of the most essential is that the associated Legendre functions are computed "on-the-fly" during the transform rather than computed and stored before the transform. The reason why this on-the-fly computation contributes to speed is that it requires much less memory (O(M²)) than computing in advance (O(M³)), and it therefore takes advantage of the cache memory on modern computers. However, computing the associated Legendre functions on-the-fly has a computational complexity of the same order as that of the transform itself (O(M³)). Because the associated Legendre functions are computed using a recurrence formula, a more efficient recurrence formula requiring less computation would be beneficial.
In this paper, we propose a new, efficient recurrence formula for the associated Legendre functions. We show that the recurrence formula is especially effective on recent computers that have fused multiply-add (FMA) operations. The recurrence formula also has an advantage in maintaining accuracy compared with the commonly used recurrence formula. To demonstrate that the adoption of the new recurrence formula contributes to the transform speed, we develop a numerical code that adopts the proposed recurrence formula and compare its speed with that of the code proposed by Schaeffer (2013). In developing the code, care must be taken to maintain the accuracy of the transform when the truncation wavenumber is huge. Accordingly, we also introduce a scheme to maintain the transform accuracy. Furthermore, we explain several tuning techniques incorporated into the developed numerical code.
The remainder of the present paper is organized as follows. After we describe the fundamentals of the spherical harmonic transform in Section 2, we propose a new recurrence formula for the associated Legendre functions and explain why the new formula is efficient compared to the commonly used recurrence formula in Section 3. In Section 4, a scheme for the transform accuracy to be maintained even when the truncation wavenumber is huge is introduced, and several tuning techniques implemented in developing a numerical code for the transform are described in Section 5. In Section 6, the speed and accuracy performance of the developed numerical code are compared with those of another numerical code based on Schaeffer (2013). A summary and a discussion are presented in Section 7.
Fundamentals of the spherical harmonic transform
In the spherical spectral method, dependent variables in the governing equations of the model are expanded using spherical harmonics as follows:

f(λ, μ) = Σ_{m=−M}^{M} Σ_{n=|m|}^{M} s_n^m Y_n^m(λ, μ),   (1)

where f is a dependent variable, such as temperature, s_n^m is the expansion coefficient, λ is the longitude, μ = sin φ, φ is the latitude, and M is the truncation wavenumber. The spherical harmonics Y_n^m are defined as

Y_n^m(λ, μ) = P_n^{|m|}(μ) e^{imλ}.   (2)

Here, P_n^m(μ) is the associated Legendre function, defined as

P_n^m(μ) = sqrt( ((2n+1)/2) · ((n−m)!/(n+m)!) ) · ((1−μ²)^{m/2}/(2^n n!)) · (d^{n+m}/dμ^{n+m}) (μ² − 1)^n.   (3)

Note that P_n^m(μ) is normalized so as to satisfy the following orthogonality relation:

∫_{−1}^{1} P_n^m(μ) P_{n′}^m(μ) dμ = δ_{nn′}.   (4)

Here, δ_{nn′} is the Kronecker delta. By this orthogonality, the following "inverse" of (1) holds:

s_n^m = (1/(2π)) ∫_{−1}^{1} ∫_{0}^{2π} f(λ, μ) Y_n^{m*}(λ, μ) dλ dμ.   (5)

Because Y_n^{−m} = (Y_n^m)*, where ( )* indicates the complex conjugate, s_n^{−m} = (s_n^m)* must be satisfied if f(λ, μ) is a real function. In the following, it is assumed that this constraint is satisfied.
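As a quick numerical sanity check on the normalization in (3) and the orthogonality relation (4), the following sketch builds the fully normalized functions from SciPy's unnormalized lpmv and integrates their products with Gauss–Legendre quadrature (the helper name is ours; SciPy's lpmv includes the Condon–Shortley phase, which does not affect these integrals):

```python
import numpy as np
from math import sqrt, exp, lgamma
from scipy.special import lpmv

def pbar(n, m, x):
    """Fully normalized associated Legendre function: int_{-1}^{1} pbar^2 dmu = 1."""
    c = sqrt((2*n + 1) / 2.0 * exp(lgamma(n - m + 1) - lgamma(n + m + 1)))
    return c * lpmv(m, n, x)

# Gauss-Legendre quadrature is exact for these polynomial integrands
x, w = np.polynomial.legendre.leggauss(64)
m = 2
for n, n2 in [(3, 3), (3, 5), (4, 4)]:
    print(n, n2, np.sum(w * pbar(n, m, x) * pbar(n2, m, x)))  # -> 1, 0, 1 (Eq. 4)
```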
In most spherical spectral models, the expansion (1) is evaluated on grid points (λ_k, μ_j) (k = 0, 1, …, K−1; j = 1, 2, …, J). That is,

f(λ_k, μ_j) = Σ_{m=−M}^{M} Σ_{n=|m|}^{M} s_n^m Y_n^m(λ_k, μ_j).   (6)

Here, μ_j (j = 1, 2, …, J) are called the Gaussian nodes, which are defined as the zero points (sorted in ascending order) of P_J^0(μ), and λ_k = 2πk/K (k = 0, 1, …, K−1). When both K > 2M and J > M hold, the discrete counterpart of (5), that is, the "inverse" of (6), is given as

s_n^m = (1/K) Σ_{j=1}^{J} Σ_{k=0}^{K−1} w_j f(λ_k, μ_j) Y_n^{m*}(λ_k, μ_j).   (7)

Here, w_j (j = 1, 2, …, J) is called the Gaussian weight and is defined as

w_j = (2J + 1) / [ (1 − μ_j²) { dP_J^0(μ_j)/dμ }² ].   (8)

In this paper, we call the transform from s_n^m to f(λ_k, μ_j) described by (6) the backward transform, and the transform from f(λ_k, μ_j) to s_n^m described by (7) the forward transform.
The backward transform (6) and the forward transform (7) can each be divided into two stages, as follows:

g^m(μ_j) = Σ_{n=|m|}^{M} s_n^m P_n^{|m|}(μ_j),   (9)

f(λ_k, μ_j) = Re[ Σ_{m=0}^{M} (2 − δ_{m0}) g^m(μ_j) e^{imλ_k} ],   (10)

G^m(μ_j) = (1/K) Σ_{k=0}^{K−1} f(λ_k, μ_j) e^{−imλ_k},   (11)

s_n^m = Σ_{j=1}^{J} w_j G^m(μ_j) P_n^{|m|}(μ_j).   (12)

The operation Re( ) indicates that the real part of the argument is taken. Although the computations of (10) and (11) can be done at relatively low cost using the FFT, for which highly optimized libraries are available, e.g., FFTW (Frigo and Johnson 2005), no exact fast algorithm has been found for (9) and (12), and hence optimization becomes important for their computation.
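The latitudinal stages (9) and (12) form an exact transform pair on the Gaussian grid whenever J > M, because the quadrature in (12) is exact for the polynomial integrands involved. The sketch below demonstrates this round trip for a single order m, with nodes and weights from NumPy's Gauss–Legendre routine (the FFT stages (10) and (11) are omitted; pbar is the same illustrative helper as above):

```python
import numpy as np
from math import sqrt, exp, lgamma
from scipy.special import lpmv

def pbar(n, m, x):
    c = sqrt((2*n + 1) / 2.0 * exp(lgamma(n - m + 1) - lgamma(n + m + 1)))
    return c * lpmv(m, n, x)

M, m = 15, 3
J = M + 1
mu, w = np.polynomial.legendre.leggauss(J)                # Gaussian nodes and weights

P = np.array([pbar(n, m, mu) for n in range(m, M + 1)])   # (M-m+1, J) matrix

s = np.random.default_rng(0).standard_normal(M - m + 1)   # coefficients s_n^m
g = P.T @ s                 # backward Legendre stage, Eq. (9)
s_back = P @ (w * g)        # forward Legendre stage, Eq. (12)
print(np.max(np.abs(s_back - s)))   # ~1e-14: the quadrature is exact
```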
A new recurrence formula
In the computation of the associated Legendre function transforms (9) and (12), the simplest implementation is to compute all of the needed P_n^m(μ_j) in advance and store them in the computer memory to be used repeatedly. With this implementation, however, the amount of data that must be loaded is of the same order as the number of required operations; such an implementation fails to take advantage of the cache memory on modern computers. To overcome this problem, it is natural to compute all of the associated Legendre functions "on-the-fly", that is, to compute them anew with every transform. This is one of the most essential points proposed in Schaeffer (2013), although the idea of computing the associated Legendre functions on-the-fly had been adopted previously in other studies, such as Ishioka et al. (2000), Rivier et al. (2002), and Reinecke (2011). In these on-the-fly computations, the following recurrence formula is adopted:

P_{n+1}^m(μ) = (1/ε_{n+1}^m) ( μ P_n^m(μ) − ε_n^m P_{n−1}^m(μ) ),  with  ε_n^m = sqrt( (n² − m²)/(4n² − 1) ).   (13)

This recurrence formula requires three multiplications and one addition per step, even if (−ε_n^m) and 1/ε_{n+1}^m are computed and stored in advance. On the other hand, in computing (9) and (12), two multiplications and two additions are required per n for m ≠ 0, because both the real and imaginary parts of s_n^m and (w_j g_j^m) must be treated. That is, the traditional recurrence formula (13) requires the same number of operations per n as the transform itself. Note that a multiplication and an addition have the same computational cost on modern computers. Furthermore, modern computers have FMA operations, and on such platforms a multiply-add pair has the same cost as a single multiplication or addition. Therefore, on computers with FMA, the traditional recurrence formula (13) costs more than the transform itself: the recurrence requires the equivalent of three FMAs per step, while the transform itself requires two. In the following, we derive a new recurrence formula to reduce the number of required operations.
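A direct implementation of the traditional recurrence (13) looks as follows; the function names are ours, and the starting values are the closed-form P_m^m (evaluated in log form) together with P_{m+1}^m = sqrt(2m+3) μ P_m^m, which follows from (13) with n = m since ε_m^m = 0:

```python
import numpy as np
from math import sqrt, log, exp, lgamma

def eps(n, m):
    # coupling coefficient of the traditional recurrence, Eq. (13)
    return sqrt((n*n - m*m) / (4.0*n*n - 1.0))

def pnm_column(m, M, mu):
    """Normalized P_n^m(mu) for n = m..M via the traditional recurrence (13)."""
    # closed-form starting value P_m^m(mu), evaluated in log form
    log_pmm = (0.5*log((2*m + 1) / 2.0) + 0.5*lgamma(2*m + 1)
               - m*log(2.0) - lgamma(m + 1) + 0.5*m*log(1.0 - mu*mu))
    p = np.empty(M - m + 1)
    p[0] = exp(log_pmm)
    if M > m:
        p[1] = sqrt(2*m + 3) * mu * p[0]      # P_{m+1}^m
    for n in range(m + 1, M):
        # three multiplications and one addition per step, as noted above
        p[n - m + 1] = (mu*p[n - m] - eps(n, m)*p[n - m - 1]) / eps(n + 1, m)
    return p

# spot check against the closed form P_1^0(mu) = sqrt(3/2) mu
print(pnm_column(0, 3, 0.5)[1], sqrt(1.5) * 0.5)
```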
The traditional recurrence formula (13) can be rewritten as follows:

ε_{n+1}^m P_{n+1}^m(μ) = μ P_n^m(μ) − ε_n^m P_{n−1}^m(μ).   (14)

By multiplying both sides of (14) by μ and applying (14) recursively on the right-hand side, we obtain

μ² P_n^m(μ) = ε_{n+1}^m ε_{n+2}^m P_{n+2}^m(μ) + { (ε_{n+1}^m)² + (ε_n^m)² } P_n^m(μ) + ε_n^m ε_{n−1}^m P_{n−2}^m(μ).   (15)

In this form of the recurrence, the associated Legendre functions whose lower suffixes are even are treated independently of those with odd suffixes. First, let us consider the Legendre functions P_n^m(μ) for which (n − m) is an odd number, which means that they are odd functions of μ. If we write n = m + 2l + 1 (l = 0, 1, …), the recurrence formula (15) can be rewritten in terms of the suffix l as (16), and the functions p_l^m(μ) introduced by the scaling (17) are even functions of μ. Substituting (17) into (16) gives (18). To simplify (18), we impose the condition (19) on α_l^m, which means that the coefficients multiplying p_{l+1}^m(μ) and p_{l−1}^m(μ) on the right-hand side of (18) have the same absolute value but opposite signs. From (19), equation (20) is derived.
Applying (20) recursively yields (21), whose starting value we choose as (22); the reason for this choice is explained later. Substituting (22) into (21) gives (23). By multiplying both sides of (18) by (−1)^l α_l^m and then using (23), we can rewrite (24) as the new recurrence formula (26), which takes the form

p_{l+1}^m(μ) = (a_l^m μ² + b_l^m) p_l^m(μ) − p_{l−1}^m(μ).   (26)

Using the new recurrence formula (26), the lower suffix l of p_l^m(μ_j) can be increased by one with two FMA operations if μ_j², a_l^m, and b_l^m are computed and stored in advance; to store them, only an O(M² + J) memory area is required. That is, one FMA operation is required per unit increase of the corresponding suffix n of P_n^m, and this FMA cost is one-third that of the traditional recurrence formula (13). One might think that the efficiency of the new recurrence formula (26) would be lost if it also had to be used in the case that n − m is even. However, this problem can be circumvented as follows. The backward transform (9) can be rearranged into the forms (27) and (28); note that, in the derivation of (28), we used the fact (14). Therefore, the backward associated Legendre function transform can be computed using (29), and the forward transform can be treated analogously. As described above, this completes the computational procedure of the backward and forward associated Legendre function transforms with the new recurrence formula. The coefficients α_l^m (l = 0, 1, …) can be computed recursively using (21) from the starting point given by (22). To compute p_l^m(μ) (l = 0, 1, …) using (26), the starting points p_0^m(μ) and p_1^m(μ) must be given. From definition (3), explicit expressions hold for P_m^m(μ) and P_{m+1}^m(μ); therefore, by considering (17) and (22), the starting points p_0^m(μ) and p_1^m(μ) can be obtained.
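The μ²-coupling (15), which underlies the even/odd splitting above, can be verified numerically; the sketch below checks it against SciPy's associated Legendre functions (the Condon–Shortley phase in lpmv cancels because every term shares the same order m; the helper names are ours):

```python
import numpy as np
from math import sqrt, exp, lgamma
from scipy.special import lpmv

def eps(n, m):
    return sqrt((n*n - m*m) / (4.0*n*n - 1.0))

def pbar(n, m, x):
    c = sqrt((2*n + 1) / 2.0 * exp(lgamma(n - m + 1) - lgamma(n + m + 1)))
    return c * lpmv(m, n, x)

n, m = 7, 3
x = np.linspace(-0.9, 0.9, 5)
lhs = x**2 * pbar(n, m, x)
rhs = (eps(n + 1, m)*eps(n + 2, m)*pbar(n + 2, m, x)
       + (eps(n + 1, m)**2 + eps(n, m)**2)*pbar(n, m, x)
       + eps(n, m)*eps(n - 1, m)*pbar(n - 2, m, x))
print(np.max(np.abs(lhs - rhs)))   # ~1e-16, confirming Eq. (15)
```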
Maintaining accuracy
As described in Enomoto (2015), using the traditional recurrence formula (13) naively leads to an accuracy problem when the truncation wavenumber is large, such as M ≳ 1000. This is because the absolute values of the starting points for (13), P_m^m(μ) and P_{m+1}^m(μ) given in (38), become too small to be expressed in finite-precision arithmetic, which causes an underflow when m is large and |μ| ≈ 1. This problem also appears even when the new recurrence formula (26) is used. To overcome it, we adopt the following strategy, although other strategies exist, e.g., the "enhanced exponent" approach (Reinecke 2011). First, we design the transform algorithms so that the on-the-fly computation of p_l^m(μ_j) is required only where underflow does not occur. In (31), (32), (35), and (36), the computations for l = 2γ and l = 2γ + 1 (γ = 0, 1, …) are paired with each other so that p_{2γ+2}^m(μ_j) and p_{2γ+3}^m(μ_j) for the next step can be computed from p_{2γ}^m(μ_j) and p_{2γ+1}^m(μ_j) using (26). In this procedure, we introduce j_γ^m (γ = 0, 1, …), where j_γ^m ≤ J/2 is the maximum integer such that |p_{2γ}^m(μ_j)| and |p_{2γ+1}^m(μ_j)| remain below a prescribed threshold ξ > 0 for all j < j_γ^m. We set ξ = 10⁻²⁰, with a margin in IEEE754 double-precision arithmetic, to determine which operations can be omitted. That is, for a given γ, computations that include p_{2γ}^m(μ_j) or p_{2γ+1}^m(μ_j) (j < j_γ^m) are omitted; for example, the computation in (36) for a given γ is carried out only for j ≥ j_γ^m. For the case of γ = 0, we define j_{−1}^m = J/2 + 1 for convenience. The memory area required to store these values is only O(J) for each m. Furthermore, the absolute values of the starting points for the on-the-fly computation are, by the design described above, large enough not to cause underflow. Note that the omission of the computations near the poles described above also reduces the total amount of computation required for the spherical harmonic transform; similar implementations were proposed in previous works, such as Juang (2004) and Schaeffer (2013). This kind of idea, reducing the computations corresponding to the polar regions, dates back to Hortal and Simmons (1991), where the reduced Gaussian grid proposed by Kurihara (1965) was used to reduce the computational cost of the time integration of spherical spectral models.
Second, to avoid underflow in the computation of the starting points defined above, we use the following procedure. From (38), when m is large and |μ_j| ≈ 1, the absolute value of p_0^m(μ_j) becomes very small and leads to underflow in finite-precision arithmetic. Instead, we introduce the scaled quantities q_0^m(μ_j) and q_1^m(μ_j). To avoid overflow in their computation (45), a scaling factor β_γ^m (γ = 1, 2, …) is introduced, and we define β_0^m = 1 for convenience. We set η = 10²⁷⁰, with a margin in IEEE754 double-precision arithmetic. Because log(p_0^m(μ_j)) can be computed without the risk of underflow as in (48), the starting points can be obtained safely. Although the use of the logarithmic and exponential functions might seem costly in the computations (48), this is not problematic because these computations must be done only once, in advance, to provide the starting points for the on-the-fly recurrence.
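The log-domain evaluation that makes this safe can be illustrated with the closed form of the normalized P_m^m(μ); the explicit expression in (38) is not reproduced here, so the helper below uses the standard normalized form, and the example values are ours:

```python
from math import lgamma, log, exp

def log_pmm(m, mu):
    """log of the fully normalized P_m^m(mu); finite even where exp() underflows."""
    return (0.5*log((2*m + 1) / 2.0) + 0.5*lgamma(2*m + 1)
            - m*log(2.0) - lgamma(m + 1) + 0.5*m*log(1.0 - mu*mu))

m, mu = 2000, 0.999          # large order, node near the pole
lp = log_pmm(m, mu)
print(lp)                    # about -6.2e3, far below the double-precision range
print(exp(lp) == 0.0)        # True: the naive starting value is flushed to zero
```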
Third, although it is not directly related to avoiding underflow, the computations that must be done in advance but need not be done on-the-fly can be performed in higher-precision arithmetic. For example, the coefficients α_l^m are computed in IEEE754 quadruple-precision arithmetic to keep the accuracy as high as possible. The Gaussian nodes and the Gaussian weights are also computed in IEEE754 quadruple-precision arithmetic, using a simple Newton iteration in μ and using (8), respectively, which provides sufficient precision for the spherical harmonic transform in IEEE754 double-precision arithmetic.
Implementation and optimization
On the basis of the new recurrence formula proposed in the previous sections, we provide an implementation of the spherical harmonic transforms (6) and (7) as Fortran77 subroutines in the open-source software ISPACK (ispack-2.1.3; Ishioka 2017). In the implementation, several subroutines are written in assembly language to fully utilize not only FMA operations but also single-instruction multiple-data (SIMD) operations on Intel CPUs. That is, in the computations of (31), (32), (35), and (36), the loops over the suffix j are implemented so that SIMD operations are used. The FFT is done using an originally developed compact FFT library, which shows performance comparable to that of FFTW. Furthermore, to utilize recent many-core CPUs, the backward and forward associated Legendre function transforms for m = 0, 1, …, M are computed in parallel using OpenMP with the dynamic scheduling option.
Speed and accuracy comparison
To evaluate the performance of the numerical library ISPACK, we compare its speed and accuracy with those of SHTns (SHTns-2.8; Schaeffer 2017), which was developed on the basis of Schaeffer (2013). The comparison methodology, proposed in Schaeffer (2013), is as follows. In the spherical harmonic transforms (6) and (7), we first set the value of each of the real and imaginary parts of each s_n^m (n = 0, 1, …, M; m = 0, 1, …, n) to a uniform random number in [−1, 1]. Next, we compute the backward transform (6). Then, the forward transform (7) is computed from f(λ_k, μ_j), and we write the obtained result as (s_n^m)′, because the input of the backward transform is not exactly equal to the output of the forward transform in finite-precision arithmetic. The speed of each library is evaluated by measuring the elapsed time required for each transform. The accuracy of each library is evaluated by calculating the L_∞-error and the L_2-error of (s_n^m)′ with respect to s_n^m.

First, we compare the speed. The computational platform is a Linux (Debian 9.1) server that has two Xeon E5-2699v4 CPUs. Each CPU has 22 cores, so the total number of cores of the server is 44. The Fortran compiler and compiling options for ISPACK are "gfortran 6.3.0" and "-O3 -march=native -fopenmp", respectively. The C compiler and compiling options for SHTns are "gcc 6.3.0" and "-O3 -march=native -ffast-math", respectively. The tested truncation wavenumbers are M = 1023, 2047, 4095, 8191, and 16383. The numbers of longitudinal and latitudinal nodes are set as K = 2(M + 1) and J = M + 1, respectively. Table 1 shows the comparison of the elapsed times with the number of OpenMP threads set to 44. Although the elapsed times increase approximately in proportion to M³ for both SHTns and ISPACK, the elapsed times for ISPACK are shorter than those for SHTns for each M, for both the forward and backward transforms. In particular, the superiority of ISPACK over SHTns in elapsed time is more significant for larger M. This superiority is thought to originate from the reduction of FMA operations for the recurrence computations in ISPACK, because SHTns adopts an on-the-fly computation strategy for the associated Legendre functions similar to that of ISPACK, but the recurrence formula it uses is the standard one shown as (13). Furthermore, the elapsed time for ISPACK is shorter for the forward transform than for the backward transform for each M, although the computational complexity is the same for the two transforms. The difference is due to the implementation of ISPACK, in which the memory access for the forward transform is smaller than that for the backward transform.
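The round-trip protocol described above is easy to reproduce; the sketch below times a backward/forward pair and reports the maximum and root-mean-square deviations of the recovered coefficients (the helper and the identity stand-ins are ours; real transform bindings would be substituted for them):

```python
import numpy as np
import time

def roundtrip_report(s, backward, forward):
    """Time a backward/forward transform pair and measure coefficient errors."""
    t0 = time.perf_counter(); f = backward(s); t1 = time.perf_counter()
    s2 = forward(f);                           t2 = time.perf_counter()
    err = np.abs(s2 - s)
    return {"backward_s": t1 - t0, "forward_s": t2 - t1,
            "max_err": float(err.max()),
            "rms_err": float(np.sqrt(np.mean(err**2)))}

rng = np.random.default_rng(1)
s = rng.uniform(-1.0, 1.0, 1000)   # random coefficients in [-1, 1]
# identity stand-ins keep the sketch runnable; swap in real transforms to benchmark
print(roundtrip_report(s, lambda x: x.copy(), lambda x: x.copy()))
```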
Second, we compare the accuracy. Table 2 compares the L_∞-error and the L_2-error for SHTns with those for ISPACK for M = 1023, 2047, 4095, 8191, and 16383. Although these error estimates depend on the chosen random numbers, and they become larger as M increases, the L_∞-error and the L_2-error for ISPACK are smaller than those for SHTns for each M. This is thought to be due to the reduction of the number of operations required by the recurrence formula adopted in ISPACK.

Table 2. Maximum (max) and root-mean-square (rms) errors for SHTns and ISPACK.

M       max (SHTns)   max (ISPACK)   rms (SHTns)   rms (ISPACK)
1023    —             —              5.4 × 10⁻¹⁴   4.6 × 10⁻¹⁴
2047    3.4 × 10⁻¹²   1.2 × 10⁻¹²    1.3 × 10⁻¹³   9.4 × 10⁻¹⁴
4095    9.4 × 10⁻¹²   5.5 × 10⁻¹²    2.6 × 10⁻¹³   2.0 × 10⁻¹³
8191    5.5 × 10⁻¹¹   1.6 × 10⁻¹¹    7.0 × 10⁻¹³   4.5 × 10⁻¹³
16383   6.4 × 10⁻¹¹   3.9 × 10⁻¹¹    1.0 × 10⁻¹²   8.3 × 10⁻¹³
Summary and discussion
In the present paper, we presented a new recurrence formula for calculating the associated Legendre functions to efficiently compute the spherical harmonic transform. The new recurrence formula was derived to take advantage of the fused multiply-add operation implemented on modern computers. We compared the computational speed and accuracy of a numerical code, ISPACK, which is based on the new recurrence formula for the spherical harmonic transform, with those of another numerical code, SHTns, which was developed based on Schaeffer (2013), and showed that ISPACK was superior to SHTns in both speed and accuracy. We did not compare ISPACK with any fast algorithms. However, in Schaeffer (2013), SHTns was compared with a fast algorithm (Healy et al. 2003) and shown to be several times faster than the fast algorithm. Furthermore, Schaeffer (2013) showed that SHTns was about four times faster than libpsht (Reinecke 2011) at M = 4095, and Seljebotn (2012) showed that his newer fast algorithm, Wavemoth, was about six times faster than libpsht in the case corresponding to M = 4095. On the other hand, as shown in Table 1, ISPACK is about 1.5 times faster than SHTns at M = 4095, considering the average elapsed time for the backward and forward transforms. Therefore, ISPACK is estimated to have a speed comparable to that of Wavemoth at M = 4095, although this estimate is a rough one because the computing platforms used in these papers all differ. A faster numerical code for the spherical harmonic transform is desired not only for constructing global atmospheric general circulation models using the spherical spectral method, as mentioned in Section 1, but also for the analysis of global data, even when the data are generated by a finite-difference model or by observation. This is because the spherical harmonics are eigenfunctions of the horizontal Laplacian on a sphere, and spherical harmonic expansions are useful for analyzing global atmospheric dynamics. Hence, we believe that the new recurrence formula proposed in the present study will contribute not only to constructing global spectral models but also to global atmospheric data analysis. In particular, when the horizontal resolution of global data is very fine, an on-the-fly computation strategy for the associated Legendre functions is inevitable because of the limitations of computer memory. Thus, the new recurrence formula has the potential to become a fundamental technique. Furthermore, owing to its low memory usage, the on-the-fly computation strategy combined with the proposed new recurrence formula could enable the use of higher-resolution global atmospheric general circulation models or larger ensemble sizes on a wide range of machines, from workstations to supercomputer systems.
Home-prepared food, dietary quality and socio-demographic factors: a cross-sectional analysis of the UK National Diet and nutrition survey 2008–16
Background Evidence suggests eating home-prepared food (HPF) is associated with increased dietary quality, while dietary quality varies across socio-demographic factors. Although it has been hypothesised that variation in HPF consumption between population sub-groups may contribute to variation in dietary quality, evidence is inconclusive. This study takes a novel approach to quantifying HPF consumption, describes HPF consumption in a population-representative sample, and determines variation between socio-demographic groups. It tests the association between HPF consumption and dietary quality, determining whether socio-demographic characteristics moderate this association. Methods Cross-sectional analysis of UK survey data (N = 6364, aged ≥ 19; collected 2008–16, analysed 2018). High dietary quality was defined as 'DASH accordance': membership of the quintile most accordant with the Dietary Approaches to Stopping Hypertension (DASH) diet. HPF consumption was estimated from 4-day food diaries. Linear regressions were used to determine the association between HPF consumption and socio-demographic variables (household income, education, occupation, age, gender, ethnicity and children in the household). Logistic regression was used to determine the association between HPF consumption and DASH accordance. Interaction terms were introduced, testing for moderation of the association between HPF consumption and DASH accordance by socio-demographic variables. Results HPF consumption was relatively low across the sample (mean (SD) % of energy from HPF = 26.5% (12.1%)) and lower among white participants (25.9% v 37.8% and 34.4% for black and Asian participants respectively, p < 0.01). It did not vary substantially by age, gender, education, income or occupation. Higher consumption of HPF was associated with greater odds of being in the most DASH-accordant quintile (OR = 1.2 per 10% increase in % energy from HPF, 95% CI 1.1–1.3). Ethnicity was the only significant moderator of the association between HPF consumption and DASH accordance, but this finding should be interpreted with caution due to the high proportion of white participants. Conclusions While an association exists between HPF consumption and higher dietary quality, neither HPF consumption nor its association with dietary quality varies substantially between socio-demographic groups. While HPF may be part of the puzzle, other factors appear to drive socio-demographic variation in dietary quality.
Introduction
Given its substantial contribution to the ever-growing burden of chronic disease, diet has become a public health priority. Evidence suggests that higher frequency of both cooking [1][2][3][4][5] and eating home-prepared meals [6] is associated with an improved dietary intake.
Policymakers and advocates have stressed the importance of home food preparation, and countries such as Brazil [7], Japan [8] and Canada [9] have included cooking and food and cooking skills in their dietary guidelines. Further downstream, cooking and food classes and workshops constitute popular public health interventions [10][11][12]. However, systematic reviews conclude that evidence of significant and lasting change in either dietary behaviours or related health outcomes as a result of these interventions is limited [10][11][12].
Cooking skills interventions often target groups known to have, in general, a lower dietary quality, such as men [13] and less affluent individuals [14], suggesting that worse dietary quality in these groups is suspected to be driven by different home food preparation behaviours. An implicit assumption that some groups either cook less, or that the meals they cook are somehow less healthy, seems to underpin this sort of intervention. Cultural and behavioural differences pertaining to class, ethnicity, gender and generation could mean that the meals prepared by some groups are less healthy than others. Alternatively, home food preparation may be less important to the dietary quality of more affluent groups, as the higher purchasing power wielded by these individuals may allow them broader choice in prepared and out of home food options, including some which may be healthier. However, this remains something of an open question: while research suggests healthier diets are more expensive, studies have generally focused on the relative cost of ingredients as opposed to prepared foods [15][16][17].
Definition and measurement issues surround home food preparation [18,19]. Most studies approach the issue by asking how frequently participants either make or eat a home-prepared meal [20]. Questions about how often participants prepare a meal at home target an individual behaviour, and, given the frequency of tasksharing in many households [21,22], this question does not represent a good proxy for intake. If intake is the exposure of interest, then questions about what participants eat seem more relevant. Still, the social desirability of home-prepared food (HPF) [23,24] may make individuals overestimate the number of home-prepared meals they consume. In addition, qualitative studies suggest that not everyone interprets terms like 'home-prepared' in the same way [25]. Food diaries with sufficiently detailed information might present an opportunity to derive a more 'objective', or, at least, internally consistent, measure of HPF consumption.
This study will answer the following questions:
1. What is the proportion of total energy derived from HPF in the UK population, and does this vary by socio-demographic characteristics?
2. Is the proportion of total energy derived from HPF associated with dietary quality?
3. Do socioeconomic position and demographic variables moderate the relationship between the energy derived from HPF and dietary quality?
Methods
This study represents a cross-sectional analysis of dietary surveillance data from the UK National Diet and Nutrition Survey (NDNS) 2008-16 (May 2018 release) [26]. It is reported according to the STROBE-nut recommendations [27]. NDNS is an annual cross-sectional survey which collects information on food consumption and nutritional and health status of free-living individuals in the UK. Sampling, recruitment and data collection are carried out in a consistent manner, allowing data from different survey years to be combined for cross-sectional analysis. A detailed account of the NDNS recruitment and sampling protocol has been published elsewhere [28][29][30]. Individuals aged ≥19 years at the time of participation who completed three or 4 days of the food diary were included in the analyses.
Dietary assessment
Participants completed unweighed food diaries, including all food and beverages consumed both inside and outside the home. This process is described in detail elsewhere [31]. Participants also recorded where the food was eaten, for example at home, in a restaurant or café, or at work. This variable included a specific category for food eaten at work but brought from home.
Characterisation of food-related variables
As previously, food items listed in food diaries were classified by the authors as either requiring or not requiring home preparation [32]. All foods were classified as home-prepared except those listed in Table 1. Foods which should not be classified as being home-prepared were decided by the authors a priori.

Table 1 Foods not classified as home-prepared:
- Foods prepared and eaten outside the home (e.g. food eaten in a restaurant or café)
- Foods prepared outside the home and eaten in the home (e.g. takeaway and delivery foods)
- Foods eaten as purchased (e.g. crisps, sweets, granola bars, juice and soft drinks, store-bought sandwiches, prepared and whole pieces of fruit)
- Foods requiring the application of heat or the addition of hot water but no other preparation (e.g. frozen and refrigerated ready meals, tinned soup, instant noodles, instant oats)

Definitions of 'cooking' have been discussed extensively and remain contested [18,33,34], with many definitions not deeming the application of heat to be a necessary part of this process [34,35]. As a result, 'home food preparation' and 'home-prepared food' seem more accurate and are the concepts deployed here. Different, but related, conceptualisations exist, such as food 'prepared from scratch' [36] or food that is not 'from outside the home' [37]. The conceptualisation of HPF used here reflects several conceptions of 'cooking', or home food preparation, drawn from qualitative studies [38,39], as well as behaviours which are habitually enquired about in studies of 'cooking', such as blending, mixing, boiling, chopping, roasting and pan frying [19]. From this conceptualisation of home food preparation as a set of behaviours, we defined home-prepared foods as the products of these behaviours.
Food classification was carried out using food diary variables as illustrated in Fig. 1, with foods which were not classified as home-prepared being successively removed until only food included in home-prepared dishes remained. The proportion of energy from HPF was then calculated for each participant by summing the energetic content of foods classified as home-prepared and dividing this by the participant's total energy intake.
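To make the diary-based derivation concrete, a minimal sketch in Python/pandas follows (the study's analyses were run in Stata; the dataframe and column names here are illustrative assumptions, with home_prepared flagging items classified by the Fig. 1 rules):

```python
import pandas as pd

# diary: one row per food-diary item, with (assumed) columns
#   participant_id, energy_kcal, home_prepared (bool, per the Fig. 1 rules)
def hpf_energy_share(diary: pd.DataFrame) -> pd.Series:
    """% of each participant's total energy intake from home-prepared food."""
    total = diary.groupby("participant_id")["energy_kcal"].sum()
    hpf = (diary[diary["home_prepared"]]
           .groupby("participant_id")["energy_kcal"].sum()
           .reindex(total.index, fill_value=0.0))
    return 100.0 * hpf / total
```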
Dietary quality was determined by quantifying accordance with the Dietary Approaches to Stopping Hypertension (DASH) dietary pattern, using a method adapted for use with NDNS [40] from an existing index [41]. The DASH diet has been shown to lower blood pressure [42] and reduce low-density lipoprotein cholesterol levels [42], and it is associated with a lower risk of stroke and coronary heart disease [41]. The score is based on foods and nutrients emphasised or minimised in the DASH diet and has eight components: high intakes of fruits, vegetables, nuts and legumes, low-fat dairy products, and whole grains; and low intakes of sodium, red and processed meats, and non-extrinsic milk sugars. The score is adjusted for overall energy intake. Components are evenly weighted, and three components (sodium, sugar, and red and processed meats) are reverse-scored, so that higher consumption lowers an individual's DASH score. The overall score ranges between 8 and 40, with higher scores indicating a diet with greater accordance with the DASH pattern.
This study models DASH accordance as a binary variable, with participants in the top quintile of DASH score being considered the most DASH-accordant, a method which has been previously employed by a number of studies [40,43,44].
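A minimal sketch of this style of quintile-based scoring follows, assuming eight energy-adjusted component intakes per participant. The component names and the 1-to-5-per-quintile scoring are illustrative of the index described above rather than a reproduction of the published algorithm.

```python
import pandas as pd
import numpy as np

rng = np.random.default_rng(0)
n = 500
components = pd.DataFrame(
    rng.random((n, 8)),
    columns=["fruit", "veg", "nuts_legumes", "lowfat_dairy",
             "wholegrains", "sodium", "red_proc_meat", "nme_sugars"],
)
REVERSED = {"sodium", "red_proc_meat", "nme_sugars"}  # higher intake -> lower score

def quintile_score(col: pd.Series, reverse: bool) -> pd.Series:
    q = pd.qcut(col, 5, labels=False) + 1          # quintile rank 1..5
    return 6 - q if reverse else q

# Evenly weighted sum over the eight components; total ranges from 8 to 40
dash = sum(quintile_score(components[c], c in REVERSED) for c in components)
dash_accordant = dash >= dash.quantile(0.8)        # binary: top quintile
print(dash.min(), dash.max(), dash_accordant.mean())
```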
Socio-demographic variables
Age, sex, ethnicity, and the presence of children in participant households were determined using self-reported survey responses. Socioeconomic position was also assessed using self-reported survey responses, and was characterised using three markers: occupation (among employed participants; occupation was classified using the simplified three-class version of the National Statistics Socioeconomic Classification described by the UK's Office for National Statistics [45]), highest educational attainment, and quintile of annual household income equivalised for household composition. Evidence suggests these socioeconomic markers present different associations with dietary intake, and are not necessarily interchangeable [46].
Analysis
Analysis was conducted in 2018. Variables were weighted using weights provided by the NDNS study team, which sought to mitigate bias resulting from the survey design and from differential non-response by individual participants [47].
The mean proportion of energy from HPF consumed by participants was determined. Linear regression was used to determine how this proportion varied by socio-demographic characteristics, with socio-demographic characteristics as the exposure variables and proportion of energy from HPF as the outcome variable.
Logistic regression was used to determine the association between proportion of energy from HPF and DASH accordance. Interaction terms were introduced to test for effect modification by socio-demographic characteristics. If any interaction term was significant, models stratified by the socio-demographic variable in question were run to determine the association between energy from HPF and DASH accordance in each population sub-group.
All regressions were mutually adjusted for all socio-demographic variables. All analyses were conducted using Stata (version 14; Stata Corp.). An alpha level of 0.01 was used throughout to test for statistical significance, in order to compensate for multiple testing.
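Although the analysis itself was run in Stata, the modelling strategy can be sketched in Python as below. The data are simulated, the column names are illustrative, and statsmodels' freq_weights only approximates Stata's survey-weighting machinery.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "dash_top":  rng.integers(0, 2, n),           # 1 = most DASH-accordant quintile
    "hpf10":     rng.normal(2.65, 1.21, n),       # energy from HPF, in 10% units
    "sex":       rng.choice(["M", "F"], n),
    "ethnicity": rng.choice(["White", "Asian", "Black", "Other"], n),
    "weight":    rng.uniform(0.5, 2.0, n),        # NDNS-style survey weight
})

# Weighted logistic regression with an interaction term for effect modification
model = smf.glm(
    "dash_top ~ hpf10 * C(ethnicity) + C(sex)",
    data=df, family=sm.families.Binomial(),
    freq_weights=df["weight"],
).fit()

# Odds ratio per 10% of energy from HPF, tested at alpha = 0.01
print(np.exp(model.params["hpf10"]), model.pvalues["hpf10"] < 0.01)
```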
Results

The mean percentage of energy derived from HPF in the sample was relatively low (mean (SD) = 26.5% (12.1%)). Table 2 describes the proportion of energy derived from HPF by population sub-group, and presents the results of a linear regression with socio-demographic characteristics as the exposures and proportion of energy from HPF as the outcome. The proportion of energy from HPF did not vary substantially by socio-demographic variables. A small increase was associated with being female versus male (27.1 v 25.8%, p < 0.01), and a small decrease was associated with having 12-13 years of education or <11 years of education relative to having a university degree (26.4 and 25.6 v 27.8% respectively, both p < 0.01). More substantial variation was associated with ethnicity, with Black participants (37.8%), Asian participants (34.4%) and participants belonging to other ethnic groups (34.6%) consuming substantially more HPF than White participants (25.9%, all p < 0.01).
Meanwhile, the expected associations between socio-demographic characteristics and dietary quality were found (methods and results reported in Appendix 2). Table 3 shows the results of a logistic regression with proportion of energy from HPF as the exposure and DASH accordance as the outcome, before and after adjustment for age, sex, ethnicity, presence of children in the household, income, education and occupation (full reporting of the adjusted model in Appendix 1). In the unadjusted model, there was a small but statistically significant association between the variables, with a 10% increase in energy from HPF associated with a 20% increase in the odds of being DASH-accordant. This remained unchanged after adjustment. Given the low mean value of energy from HPF, a 10% increase would represent a substantial change, slightly lower than a change of one standard deviation (12.1%). The interaction term for Asian participants relative to White participants was significant (p < 0.01), suggesting the association between proportion of energy from HPF and DASH accordance was different in this group. Although the interaction term for Asian ethnicity was statistically significant, stratified regression was not performed: due to the small number of non-White participants in the NDNS sample (see Table 2), the interpretation of the interaction term was challenging, and running fully adjusted logistic regressions for each sub-group was not feasible. While there may be a difference in the association between HPF consumption and DASH accordance in different ethnic groups, a more ethnically diverse sample would be required to properly examine it.
All other interaction terms were non-significant (p > 0.01); further analyses were therefore not performed.
Discussion
This study took a novel approach to quantifying HPF consumption, deriving estimates from 4-day food diaries. The proportion of energy from HPF was relatively low across the sample (mean (SD) = 26.5% (12.1%)). Consumption of HPF did not vary substantially by any of the socio-demographic variables considered here, with the exception of ethnicity. Meanwhile, dietary quality varied extensively across socio-demographic variables, in ways similar to those seen in other studies, with women, older participants, more affluent participants and non-White participants displaying higher dietary quality than their counterparts.
An association between HPF consumption and dietary quality appeared across the sample: a 10% increase in energy derived from HPF was associated with a 20% increase in the odds of falling in the most DASH-accordant quintile. However, it must be acknowledged that a 10% increase is large given the low contribution of HPF to the energetic intake of most participants (close to one standard deviation, at 12%). Socio-demographic variables did not moderate the association between consumption of HPF and dietary quality, except potentially in the case of ethnicity.
Non-White participants consumed a greater proportion of energy from HPF, and had a higher dietary quality. In addition, moderation analysis suggested that the association between consumption of HPF and dietary quality may differ across ethnicities. However, it is difficult to ascertain this: small numbers in other ethnic groups precluded stratified analysis. This could be investigated through further research.
When weighted, the NDNS sample is representative of the UK population, giving this study broad generalisability. However, a similar analysis conducted in different national contexts might yield different results, particularly in countries where 'traditional' food patterns remain stronger than they seem in the UK, such as countries where a substantial proportion of the population adheres to the Mediterranean dietary pattern. Comparative research on, for example, the UK and France suggests that, while certain convergent patterns emerge in both countries, such as increased use of convenience foods and reported lack of time to cook, there are also ways in which home food practices remain distinct between countries, such as the absence of totally pre-prepared ready meals among French participants and an increased propensity to cook 'from scratch' [49]. Meanwhile, a comparative analysis of trends in time spent eating at home in five different countries found that time spent decreased in all countries except France [50]. It would be interesting to see how the association found here might differ across contexts where food practices diverge.
This study uses the DASH score, a well-evidenced and relatively comprehensive measure of dietary quality. The food-related variables in this study were derived from unweighed, self-reported food diaries. While evidence suggests that food diaries are a more accurate measure of dietary intake than other common measures such as food frequency questionnaires [51], misreporting in self-measured dietary instruments is a well-documented limitation [48,52].
In addition, there is potential for residual confounding due to characteristics that were not adjusted for in this analysis, such as food insecurity or characteristics of the food environment. Although there is evidence that both of these factors are associated with dietary quality, the evidence on how they are related to home food preparation is more limited. One study of home food preparation in low-income, food-insecure women in Canada found that households that were more food insecure reported less complex home food preparation, though not less frequent preparation of meals 'from scratch' [53]. It is not clear whether this suggests a protective effect of home food preparation against food insecurity, a decrease in home food preparation in response to the stresses attendant on becoming food insecure, or some further factor. Regarding food environments, a study set in urban regions across five European countries (including the UK) found that greater access to restaurants was associated with reduced self-reported frequency of cooking [54]. Both these exposures are also likely to be socio-economically patterned, and may associate with some of the socio-economic indicators examined here. Further work could consider how they might affect the association between HPF consumption and dietary quality. Finally, this was a cross-sectional analysis of the association between HPF consumption and dietary quality; further, longitudinal work could verify how HPF consumption relates to diet-related health outcomes.
The relatively low proportion of energy from HPF is reflective of our measure: many common breakfast choices (such as toast or cereal) and lunch choices (sandwiches) are not classified as home-prepared. While our choices regarding classification could be debated, our measure has the advantage of internal consistency, with the definition of what is home-prepared being the same for all participants. In addition, our classification is informed by the literature, reflecting qualitative conceptualisations [33,39] and behavioural measures used in quantitative studies of home food preparation [19].
Many studies of dietary quality and food preparation have focused on home food preparation frequency [1,36,[55][56][57][58][59] and skills [56,[60][61][62][63], as opposed to HPF consumption. Some studies of HPF consumption and dietary quality exist, but it is difficult to compare results due to the diversity of measures of dietary quality in use. One study using a UK-based cohort examined the association between self-reported frequency of consuming home-prepared meals and several indices of dietary quality, including DASH score [6], estimating that eating a home-prepared main meal more than five times a week, as opposed to fewer than three times a week, was associated with a 0.61 increase in DASH score. Due to the relative nature of the DASH index used here [41], and the different approaches to modelling both DASH score and consumption of HPF, an exact comparison is difficult, other than to say that both associations are statistically significant but moderate.
Quantitative studies of HPF consumption and socio-demographic variables are limited, although analyses of home preparation skill and frequency do exist [64][65][66]. Studies generally find that women cook more frequently than men [64,65], consistent with the slightly higher HPF consumption among women in this dataset. Two studies from the United States found households with lower household income and educational attainment were more likely to cook always or never, compared to more affluent households, who were more likely to sometimes cook at home [1,67]. These analyses also found that Black households reported cooking less frequently, whereas the reverse is suggested by our data. However, the different historical, cultural and national origins of Black populations in the US and the UK make distinct dietary patterns unsurprising. Black British populations are dominated by individuals of Caribbean and West African ancestry, communities which themselves have distinct dietary patterns [68], despite being grouped together within this study due to limited ethnic diversity in our sample.
These results confirm an association between HPF consumption and dietary quality, although the association is relatively small. As interventions to increase home food preparation encounter issues of cost and scalability, and show equivocal evidence of long-term impact on participants [10][11][12], it is unclear whether this association justifies further policy action aimed at improving dietary quality. Our previous work suggests that it is possible to eat healthily while consuming very little HPF [32]; while an association with home food preparation exists, other behavioural routes to high dietary quality may exist as well. In addition, the small contribution of HPF to the energetic intake of most participants suggests that changing home food preparation practices might have more limited potential to impact overall dietary quality than might be assumed.
These results further suggest that differences in levels of consumption of HPF may not be key drivers of dietary inequalities along the socio-demographic axes examined here, and although this could be further explored, it does not appear that HPF consumption mediates the association between socio-demographic factors and dietary quality.
In addition, most socio-demographic variables do not appear to moderate the association between consumption of HPF and dietary quality, suggesting that different groups are eating HPF with similar nutritional properties, although other dietary components may be compensating in some systematic way.
Overall, it appears that neither the amount nor the nature of HPF consumed by different population subgroups is contributing substantially to the inequalities in dietary quality known to exist across these groups (and demonstrated again in this data). One exception to this may be in the case of variation across ethnicities, although the nature of this sample makes this difficult to comment upon.
This study presents a comparison between a nutrition-based characterisation of diet, DASH accordance, and a behaviour-based one, consumption of HPF. Other behaviour-based characterisations of diet exist, such as food 'cooked from scratch' or 'traditional recipes'. More might be developed through qualitative work delving into how individuals conceptualise the food they prepare and eat. In order to understand which behaviours are most important for dietary quality, it is worth continuing to think about diet not only in nutritional terms but also in behavioural terms reflecting people's daily practices, and understanding how these drive dietary intake.
Although consumption of HPF shows a small association with dietary quality, it does not appear to drive dietary inequalities between population sub-groups. This suggests that the remaining components of the diet, food consumed outside the home, and food consumed at home that is not home-prepared, may be driving dietary inequalities, which could be examined through further research. Some interventions have already sought to target these food sources, including supermarket interventions aiming to promote purchases of healthier snacks [69], and restaurant menu labelling providing information on the nutrition and energetic content of various dishes [70].
Conclusion
This study suggests relatively low levels of consumption of HPF across the population-representative sample, and confirms a statistically significant but moderate association between consuming HPF and dietary quality. In addition, neither the amount nor the type of HPF consumed appeared to contribute substantially to inequalities in dietary quality across population subgroups. These results suggest that the potential of changing HPF consumption as a means of improving dietary quality overall, and particularly for addressing diet-driven health inequalities, may be relatively limited. Further research may help to determine which other dimensions of food practices make a more substantial contribution to dietary quality and dietary inequalities.
Appendix 2
Variation in dietary quality by socio-demographic characteristics

Table 5 shows the results of a logistic regression with socio-demographic characteristics as the exposure and classification in the top quintile for DASH accordance as the outcome.
As in previous studies, DASH accordance varied extensively by demographic variables, with older people (OR 9.9 (95% CI 4.7-21.0) for participants aged 65 and over relative to participants aged 19-24), women (OR 1.7 (95% CI 1.4-2.1) relative to men) and Asian participants (OR 5.1 (95% CI 3.2-8.3) relative to White participants) being significantly more likely to be in the most DASH-accordant quintile. Participants with lower educational attainment were less likely to be in the top quintile (OR 0.3 (95% CI 0.2-0.4) for participants with fewer than 11 years of education relative to participants with degree-level education), as were participants in the lowest quintile of household income (OR 0.6 (95% CI 0.5-0.9) relative to the top income quintile). Participants in intermediate roles were less likely to be DASH-accordant than their counterparts in professional or managerial roles (OR 0.7 (95% CI 0.6-0.9)).
Appendix 1
Association between home-prepared food consumption and DASH accordance
Comparison of chromosomal and array-based comparative genomic hybridization for the detection of genomic imbalances in primary prostate carcinomas
Background In order to gain new insights into the molecular mechanisms involved in prostate cancer, we performed array-based comparative genomic hybridization (aCGH) on a series of 46 primary prostate carcinomas using a 1 Mbp whole-genome coverage platform. As chromosomal comparative genomic hybridization (cCGH) data was available for these samples, we compared the sensitivity and overall concordance of the two methodologies, and used the combined information to infer the best of three different aCGH scoring approaches. Results Our data demonstrate that the reliability of aCGH in the analysis of primary prostate carcinomas depends to some extent on the scoring approach used, with the breakpoint estimation method being the most sensitive and reliable. The pattern of copy number changes detected by aCGH was concordant with that of cCGH, but the higher resolution technique detected 2.7 times more aberrations and 15.2% more carcinomas with genomic imbalances. We additionally show that several aberrations were consistently overlooked using cCGH, such as small deletions at 5q, 6q, 12p, and 17p. The latter were validated by fluorescence in situ hybridization targeting TP53, although only one carcinoma harbored a point mutation in this gene. Strikingly, homozygous deletions at 10q23.31, encompassing the PTEN locus, were seen in 58% of the cases with 10q loss. Conclusion We conclude that aCGH can significantly improve the detection of genomic aberrations in cancer cells as compared to previously established whole-genome methodologies, although contamination with normal cells may influence the sensitivity and specificity of some scoring approaches. Our work delineated recurrent copy number changes and revealed novel amplified loci and frequent homozygous deletions in primary prostate carcinomas, which may guide future work aimed at identifying the relevant target genes. In particular, biallelic loss seems to be a frequent mechanism of inactivation of the PTEN gene in prostate carcinogenesis.
Background
Prostate cancer is a frequent and heterogeneous malignancy with few established prognostic markers. Increased knowledge on the genetic basis of this condition is expected to significantly improve the clinical management of these patients. Most of the genetic data currently available on this malignancy has been obtained using chromosomal comparative genomic hybridization (cCGH), a whole-genome screening methodology well established in the scientific field [1]. We have recently published a statistical dissection of the cCGH data available in the literature and proposed two main genetic pathways involved in prostate carcinogenesis, starting either with 8p or 13q deletions [2]. We showed that 8q gain and 13q loss were good predictors of progression into locally invasive disease and that losses of 6q and 10q were significantly associated with metastatic cancers. In addition, some of these genetic changes have shown prognostic value independently of tumor grade and stage [3][4][5][6].
The recent advent of microarray-based platforms for the detection of genome-wide copy number changes promises to uncover novel recurrent genetic aberrations and provide a more accurate delineation of genomic regions previously known to be altered in different cancer types. However, there is still no consensus regarding the scoring of array-based comparative genomic hybridization (aCGH) results, making it difficult to objectively compare findings obtained by different platforms and analysis tools. A few aCGH studies of prostate cancer cell lines have been reported [7][8][9][10][11], but most cell lines grow as stable, uncontaminated cell populations with clonal karyotypes. This makes the comparison of different platforms and scoring methods easier than for clinical samples, which often contain varying degrees of non-neoplastic cell contamination and thus fail to show the fluorochrome ratio intensities expected for low-level copy number changes. Whole-genome aCGH findings have been reported in small subsets of primary prostate carcinomas [12][13][14], and high-resolution platforms have been developed to study recurrently affected genomic regions [14,15]. However, Paris et al. were the first to use the aCGH methodology to study a larger series of clinical prostate cancer samples [16,17]. The particular scoring methodology used in those studies resulted in the detection of a large percentage of single clone alterations of unclear significance. Furthermore, the concordance between the previously established chromosomal CGH and the new array-based CGH platforms could not be conclusively evaluated, since genetic information obtained with the former method was available only for a small subset of the samples.
In the present study, we systematically compared aCGH and cCGH profiles of 46 primary prostate carcinomas and determined the best aCGH scoring methodology to delineate genomic copy number changes relevant for prostate carcinogenesis.
Quality control
Clones that failed to produce a result in more than 60% of the sample set were removed from further analysis, as were those displaying copy number changes in at least two negative controls. Clones with known polymorphic regions were not present in the array. Additionally, analysis of the dye-swap experiments and negative controls suggested a dye-specific affinity of several clones on chromosome X and Y, which are rich in repetitive sequences. As these seemed to produce copy number aberrations (not previously detected by cCGH) in all samples, we chose to remove them from the analysis. From the 3568 clones in the microarray, 2787 passed these stringent quality criteria. The median percentage of clones remaining per sample (out of 2787) was 97% in the negative controls, 96% in the biopsy samples and 89% in the prostatectomy series.
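A minimal sketch of this clone-level filter is shown below, assuming a clones x samples matrix of log2 ratios with NaN marking failed spots. The |log2| > 0.3 threshold used here to flag copy number changes in the negative controls is an assumption for illustration, not a value taken from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
log2 = pd.DataFrame(rng.normal(0, 0.1, (3568, 57)))   # 46 tumors + 11 controls
log2.iloc[rng.choice(3568, 300), :] = np.nan          # simulate failed clones
controls = log2.columns[-11:]

# Rule 1: no result in more than 60% of the sample set
failed = log2.isna().mean(axis=1) > 0.60
# Rule 2: apparent copy number change in at least two negative controls
aberrant_in_controls = (log2[controls].abs() > 0.3).sum(axis=1) >= 2

clean = log2[~(failed | aberrant_in_controls)]
print(f"{len(clean)} of {len(log2)} clones retained")
```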
Comparison of scoring methods
Sample-specific fixed thresholds, even when stringently determined, provided a fragmented genetic profile in which several low-level copy number changes (CNCs) were not scored, while a large number of single-clone aberrations (average of 5.8 per sample) as well as false positive findings (average of 3.7 CNCs per control) were obtained. The data segmentation approach provided by CGH-Plotter was also affected by low intensity ratios, resulting in most gains and several deletions, confirmed to be present using cCGH, being missed. On the other hand, the number of single-clone aberrations (0.5 per sample), as well as false positive findings (0.1 CNCs per control), was greatly reduced. aCGH-Smooth, by focusing on the detection of contiguous groups of clones with similar mean intensities, was able to score a large number of gains and losses with intensities that did not reach the theoretical ratios for a stroma-free tumor sample. This strategy thus detected twice as many CNCs as the previous ones, with the advantage of producing very few single-clone aberrations (average of 1.4 per sample) and virtually no false positive findings (average of 0.1 CNCs per control). Due to their uncertain significance, the few single-clone aberrations were not included in the final scoring. Figure 1 provides a schematic representation of the individual profiles produced by the three aCGH scoring methodologies tested on a sample with known copy number aberrations.
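The contrast between per-clone thresholding and segment-level calling can be illustrated on a toy profile, as in the sketch below. The median filter stands in for aCGH-Smooth's breakpoint estimation and the specific thresholds are assumptions, so this is a conceptual illustration rather than the published algorithm.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(3)
profile = np.zeros(200)
profile[80:120] = -0.25            # low-level loss, damped by stromal admixture
profile += rng.normal(0, 0.15, 200)

fixed_calls = np.abs(profile) > 0.3                  # per-clone fixed threshold
smooth_calls = np.abs(medfilt(profile, 21)) > 0.15   # call on local segment level

print("fixed-threshold clones called:", fixed_calls.sum())
print("smoothed calls inside the true deletion:", smooth_calls[80:120].sum())
```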
Comparison between cCGH and aCGH findings
aCGH confirmed 95% of the 146 copy number changes detected by cCGH in the 46 prostate carcinomas (Figure 2). Most of the non-confirmed aberrations involved single chromosomal bands located at chromosomal ends. Seven cases without copy number changes by cCGH were found to have genomic aberrations upon aCGH analysis, representing a 15.2% increase in the detection of abnormal cases. Regarding individual aberrations, aCGH detected 2.7 times more copy number changes than cCGH (347 versus 146). Forty-five percent of the gained regions spanned more than 50 clones, whereas a large proportion of lost regions involved 20 to 50 clones (33.8%). Overall, 73.2% of all gains and 70% of all losses involved at least 10 clones, which corresponds roughly to the 10 Mb resolution level estimated for cCGH. Up to 60% of the gains and 50% of the losses larger than 10 clones had been detected using cCGH. Specifically, deletions of 5q (p = 0.066), 6q (p = 0.065), 12p (p = 0.014), and 17p (p = 0.072) were particularly overlooked by cCGH, whereas deletions at 8p and gains of 7 and 8q were detected by both techniques in almost identical proportions.
(Figure 1: Comparison of aCGH score results for sample "Bp22" using the different automated scoring approaches.)
FISH and mutation analyses of TP53
Hybridization was successful in all 10 paraffin-embedded core biopsies analyzed by dual-color FISH. These corresponded to samples with (n = 7) and without (n = 3) deletions at 17p13 detected by aCGH. FISH results confirmed the loss of one or more copies of the TP53 probe (compared to the control probe) in all but one of the cases with 17p loss (Figure 4); the exception was a case deemed uninformative due to the small size of the paraffin section analyzed. The remaining three samples displayed a normal fluorescent pattern with two signals for both the centromeric and the 17p probes. Regarding TP53 mutation screening of the 51 samples, including the nine with 17p losses by cCGH, aCGH, or FISH, only one mutation (exon 5, codon 177, CCC->CTC, Pro->Leu) and a known polymorphism (exon 6, codon 213, CGA->CGG, Arg->Arg, detected in four samples) were found, all in cases without a 17p13 deletion.
Discussion
In this work we used array-based CGH to assess the genomic profile of a large series of primary prostate carcinomas. As these samples had previously been analyzed using chromosomal CGH, we were able to compare the two techniques in terms of sensitivity and overall performance, and to test distinct automated scoring approaches for aCGH data. Whereas most scoring methods will achieve concordant results if a given sample is pure and the hybridization quality is excellent, clinical samples usually contain non-neoplastic cell populations that influence the interpretation of the results. In the particular case of the prostate gland, the enriched cellular content of the stromal component should not be underestimated. Combined with the variability within chromosome spreads (in cCGH), labeling efficiency, and hybridization behavior, a certain level of methodological noise/variability is expected that may seriously influence the analysis. In a recent paper by Lai et al. [18], several automated scoring methodologies were compared using datasets recreating distinct aberrations and background noise, and only a few were able to reliably score low-level copy number changes. Taking this information into account, and using our cCGH data as a starting point, we compared three freely available analysis tools representing common aCGH scoring methodologies. We found fixed thresholds to be extremely affected by the quality of the hybridization and the presence of normal cells, which resulted in known alterations being missed completely or scored only partially. The large number of single-clone aberrations obtained also rendered the distinction between true copy number changes and false positive results subject to interpretation and additional validation. The data segmentation approach of CGH-Plotter produced minimal levels of single-clone aberrations, but was unable to detect most low-intensity changes. Finally, aCGH-Smooth consistently detected low-level copy number changes with only residual levels of single-clone aberrations and false positive findings, thus providing a more sensitive and reliable approach to the scoring of our 1 Mb BAC array data.

(Figure 2: Genomic findings in 46 primary prostate carcinomas.)
Using this analytical tool, 95% of the changes detected using cCGH were confirmed by aCGH. The theoretical 10-fold increase in resolution of aCGH resulted in an increase of 15.2% in the proportion of genetically abnormal prostate carcinomas and in the detection of 2.7 times more copy number aberrations. Strikingly, of the aberrations involving more than 10 Mb (the estimated resolution limit of cCGH), 45% had not been scored using cCGH. We believe this discrepancy reflects two limitations of cCGH, namely the lack of sensitivity in detecting low-intensity alterations (independently of the size of the aberration) and the inherent difficulty in scoring regions of metaphase chromosomes of smaller size and variable hybridization behavior (17p, 18p, 19, 20, 21, and 22). It is noteworthy that deletions at 8p and gains at 8q and 7 were equally detected by both methodologies, whereas deletions at 5q, 6q, 12p, and 17p were particularly overlooked using cCGH. Paris et al. [17] have previously reported such a comparison in a series of 20 formalin-fixed, paraffin-embedded prostate cancers. In their work, 90% of the cCGH copy number changes were confirmed by aCGH, which detected ~3.4 times more alterations. As they used fixed thresholds to score their data, however, 44% of the aCGH findings consisted of single-clone aberrations, thus requiring careful interpretation and validation.
The overall profile obtained for our prostate cancer samples was comparable to that described in previous aCGH studies of clinical samples [13,16,17]. It is noteworthy that most gains were detected in the overall more advanced group of carcinomas sampled by biopsy, whereas 77% of the alterations in the prostatectomy series corresponded to deletions, which according to the literature are the most common events in prostate carcinogenesis [2]. Interestingly, seven prostate cancers sampled by prostatectomy (early-staged tumors) did not display copy number changes even at this level of resolution, whereas losses at 8p and 16q and gain at 8q were already present in a considerable percentage of clinically confined carcinomas, indicating these alterations arose early during tumor progression. We and others have in fact shown that 8q gain is significantly associated with increased tumor grade and worse patient outcome [3,5,6,19], therefore suggesting that some early cancer foci already carry genetic features of bad prognosis, whereas others do not display copy number changes at all and possibly correspond to a subset of less aggressive or even latent lesions.

(Figure: Examples of homozygous deletions revealed by aCGH (arrowheads).)
Loss of 17p, another alteration previously associated with poor prognosis and frequently overlooked by cCGH, often occurred together with 8q gain in our sample set. This led us to perform FISH and mutation screening for the most likely candidate at this location, the TP53 gene. Mutation frequencies for TP53 are extremely variable in prostate cancer studies (ranging from 3-45%), but overall there is consensus that most clinically confined tumors have no mutations, whereas metastatic and androgen-independent cancers harbor a high frequency of TP53 mutations [20,21]. We detected only one mutation in our set of clinically confined carcinomas, which is in accordance with the literature and suggests that this genetic event is more important for the progression, rather than the establishment, of prostatic carcinomas. Loss of 8p, on the other hand, is an early and frequent finding in prostate cancer, with no significant differences between cCGH and aCGH. It involves a minimum region of overlap spanning 12 Mb (8p21.2 to 8p22) that encompasses over 50 confirmed genes with distinct cellular functions, making it difficult to pinpoint single candidate targets. Our study and previous aCGH studies have been unable to find homozygous deletions on this chromosome arm, suggesting that many genes in this region may be working together in a dosage-dependent manner to induce the initial stages of prostate carcinogenesis [22,23].
Deletions at chromosomal region 10q are also a frequent finding in prostate cancer cells, albeit associated with advanced disease. In 12 out of 15 cases with 10q loss in our series, a common region at 10q23.31 (~1 Mbp long) was affected. Strikingly, in seven of these 12 carcinomas the deletion was homozygous. The only cancer-relevant gene among the few candidates at this location is PTEN/MMAC1 [24,25], as it has already been shown that PTEN expression is reduced in a large subset of advanced prostate cancers [26,27]. Recent work on mouse models [28][29][30] suggests that the absence of functional PTEN confers on proliferating cells the ability to escape apoptosis even when subjected to apoptotic stimuli. Haploinsufficiency of PTEN seems to already have a dramatic influence on the cellular response to apoptosis [28], with the loss of the second allele being actively selected for during disease progression [30]. Interestingly, analyses of this multifunctional protein phosphatase generally describe very low mutation frequencies [31][32][33][34], which further indicates that homozygous deletions, rather than mutations or epigenetic silencing, are the major mechanism of gene inactivation at this locus. This hypothesis has recently been strengthened by the recurrent finding of homozygous deletions encompassing the PTEN region in several prostate cancer cell lines and xenografts [35,36], as well as in primary tumors [37]. Homozygous deletions affecting 5q were also relatively frequent in our series of primary prostate carcinomas, but these were heterogeneous and the potential target genes remain unknown.

(Figure: Genomic findings using cCGH, aCGH, and FISH in three selected biopsy samples.)
Conclusion
We conclude that aCGH can significantly improve the detection of genomic aberrations in cancer cells as compared to previously established whole-genome methodologies, although stromal contamination may significantly influence the sensitivity and specificity of most automated scoring approaches. The increased resolution of aCGH revealed several previously undetected aberrations and refined the breakpoints of those already found by cCGH. The recurrent regions of copy number gains and losses in primary prostate carcinomas highlighted in this study, as well as the novel amplified loci and frequent homozygous deletions, may guide future work aimed at identifying the relevant target genes.
Methods

Prostate carcinoma samples
We have previously reported the genomic findings detected by cCGH in a series of prostatectomy specimens containing cancer [2] and in a series of fine-needle biopsies from prostate cancer suspects [6]. For the present aCGH study, 24 samples from the former and 22 samples from the latter series were selected, because we wanted to include early-staged tumors as well as samples from more advanced, genetically complex cancers. From the prostatectomy series, in which all samples contained >70% tumor cells, cases were selected to equally represent different Gleason score categories. From the biopsy series, only samples with morphological evidence of tumor were used, and selection was based mostly on DNA availability. Of the selected samples, 6/22 biopsies and 9/24 prostatectomies had displayed no copy number changes using cCGH. The same DNA stocks were used for cCGH and aCGH. Additionally, a total of 51 carcinomas for which good quality DNA was available (46 samples from Ribeiro et al., 2006a, including the 24 selected for this aCGH study, and 5 biopsy samples from Ribeiro et al., 2006b, also included in the present report) were evaluated for TP53 gene mutations. Several paraffin-embedded tissue blocks corresponding to biopsy samples analyzed by aCGH were also selected for FISH validation studies.
Array-based comparative genomic hybridization

Clone set
We used the Human 4 k Genome-wide 1 Mb resolution Arrays provided by the Norwegian Microarray Consortium (National technology platform supported by the functional genomics program of the Research Council of Norway [38]). Each slide consists of 3568 BAC/PAC probes positioned along the genome at an average resolution of 1 Mb, printed in duplicate onto two identical blocks in the array, for a total of four replicates per clone. Probe DNA was obtained from the 1 Mb BAC/PAC clone set kindly provided by Dr. Nigel Carter at the Wellcome Trust Sanger Institute, UK [39], amplified using DOP-PCR, and spotted onto CodeLink slides (Amersham Biosciences, Chalfont St Giles, UK) using a MicroGrid II arrayer (BioRobotics, Boston, USA). Mapping information (clone location and cytogenetic bands) was retrieved from the Ensembl Human Genome Browser v36, December 2005 freeze [40].
Labeling and hybridization

DNA from the 46 prostate samples had been extracted using standard methods. The same commercially available male control DNA (Promega Corporation, Madison, WI) was used as reference for all samples. For each experiment, 500 ng of test and reference DNA were digested with Dpn II (New England Biolabs, Ipswich, MA), purified using the QIAquick PCR purification kit (Qiagen Inc, Valencia, CA), and labeled with Cy3-dCTP (test) or Cy5-dCTP (reference) (PerkinElmer, Boston, MA) in a random-primer reaction with the BioPrime Array CGH Genomic Labeling Kit (Invitrogen, Paisley, UK). Unincorporated nucleotides were removed using micro-spin G50 columns (Amersham Biosciences, Chalfont St Giles, UK). Labeled DNAs were combined, mixed with 135 μg of human Cot-1 DNA (Invitrogen, Paisley, UK), precipitated using ethanol, and resuspended in hybridization buffer containing 50% formamide, 10% dextran sulphate, 2 × SSC, 4% SDS, and 10 μg/μL yeast tRNA (Invitrogen, Paisley, UK). Samples were denatured at 72°C for 10 minutes and incubated at 37°C for 60 minutes before being hybridized onto the slides in a GeneTAC Hybridization station (Genomic Solutions Ltd, Huntingdon, UK). Hybridization took place over 36 hours, followed by automated post-hybridization washes in 50% formamide/2 × SSC (45°C), 2 × SSC/0.1% SDS (37°C), and PN buffer (37°C). Slides were dried by centrifugation after a brief wash in 0.05 × SSC and scanned with an Agilent G2565BA microarray scanner (Agilent Technologies, Palo Alto, CA). Five control hybridizations (normal male versus normal female DNA) were performed, as well as five dye-swap experiments using randomly selected samples. Data from 11 additional negative controls run during the same period with different batches of reference DNA were kindly provided by the Microarray Core Facility to validate the clone set.
Image analysis and processing
Analysis of the microarray images was performed in GenePix Pro 6.0 (Axon Instruments Inc., Foster City, CA), with the median pixel intensities for each channel (with background subtraction) being calculated for each spot. For each sample, GenePix results were exported as a TAB-delimited "GPR" file into Normalisation Suite [41], where background-subtracted channel intensities were normalized (local linear normalization) and combined to produce the final intensity ratios for each feature. For the automated scoring of copy number aberrations, three methods were compared: sample-specific fixed thresholds, calculated as 2.5 times the baseline noise levels for each sample (Normalisation Suite [42]); a data segmentation approach using K-means clustering (CGH-Plotter [41]); and breakpoint estimation (aCGH-Smooth [43]). The final choice for automated scoring fell upon aCGH-Smooth. Graphical visualization of the log-2 ratios for each sample and the overall results for all samples (clones indexed by their physical location along the genome) were generated in Normalisation Suite and Microsoft Excel, respectively. Amplifications were scored whenever the log-2 intensity ratio was larger than 0.75. For the determination of homozygous deletions, the average log-2 intensity ratio over deleted regions was calculated for each sample, and clones reaching at least twice this value were scored.
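The two explicit scoring rules above (amplification at log2 > 0.75, homozygous deletion at twice the sample's mean deleted-region ratio) can be sketched as follows on a simulated clone profile. In practice the deleted-region mask would come from the aCGH-Smooth segmentation; the simulated values here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
log2 = rng.normal(0, 0.1, 500)
log2[100:150] = rng.normal(-0.4, 0.1, 50)     # heterozygous-level deletion
log2[120:125] = -1.1                          # candidate homozygous deletion
log2[300] = 0.9                               # candidate amplification
deleted = np.zeros(500, dtype=bool)
deleted[100:150] = True                       # segmentation-derived mask (assumed)

amplified = log2 > 0.75
mean_del = log2[deleted].mean()               # average ratio over deleted clones
homozygous = deleted & (log2 <= 2 * mean_del) # at least twice the deleted mean

print("amplified clones:", np.flatnonzero(amplified))
print("homozygous-deletion clones:", np.flatnonzero(homozygous))
```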
Fluorescent in situ hybridization
For ten selected biopsy samples, four-micron-thick sections from a representative paraffin-embedded block were cut onto SuperFrost Plus Adhesion slides (Menzel-Glaser, Braunschweig, Germany). Sample processing, hybridization, and analysis were performed as described previously [6]. A locus-specific probe for the TP53 gene (17p13.1) and a control probe for the centromere of chromosome 17 (Vysis, Downers Grove, IL) were applied onto each sample, and fluorescent images corresponding to DAPI, SpectrumGreen (CEP17), and SpectrumOrange (17p13.1) were sequentially captured using the same equipment described for cCGH analysis. Only intact, non-overlapping nuclei were scored. An abnormal population was considered representative when at least three nuclei within the same microscope field presented a given aberration and at least 40 nuclei presented that particular alteration in the whole sample.
TP53 mutation status
Of the 51 samples subjected to mutation analysis, direct sequencing (sense and anti-sense) was performed for each of exons 5-8 in 14 samples. The remaining 37 samples were screened for aberrant bands using the temporal temperature gradient electrophoresis (TTGE) method for exons 5, 6, and 8, whereas exon 7 was directly sequenced. The TTGE method is more sensitive than sequencing, and aberrant bands may be detected in a sample with <5% mutated alleles [44].
Inhibition of RON kinase potentiates anti-CTLA-4 immunotherapy to shrink breast tumors and prevent metastatic outgrowth
ABSTRACT The advent of immune checkpoint blockade as a new strategy for immunotherapy has changed the outlook for many aggressive cancers. Although complete tumor eradication is attainable in some cases, durable clinical responses are observed only in a small fraction of patients, underlining an urgent need for improvement. We previously showed that RON, a receptor tyrosine kinase expressed in macrophages, suppresses antitumor immune responses, and facilitates progression and metastasis of breast cancer. Here, we investigated the molecular changes that occur downstream of RON activation in macrophages, and whether inhibition of RON can cooperate with checkpoint immunotherapy to eradicate tumors. Activation of RON by its ligand, MSP, altered the gene expression profile of macrophages drastically and upregulated surface levels of CD80 and PD-L1, ligands for T-cell checkpoint receptors CTLA-4 and PD-1. Genetic deletion or pharmacological inhibition of RON in combination with anti-CTLA-4, but not with anti-PD-1, improved clinical responses against orthotopically transplanted tumors compared to single-agent treatment groups, with complete tumor eradication in 46% of the animals. Positive responses to therapy were associated with higher levels of T-cell activation markers and tumor-infiltrating lymphocytes. Importantly, co-inhibition of RON and anti-CTLA-4 was also effective in clearing metastatic breast cancer cells in lungs, resulting in clinical responses in nearly 60% of the mice. These findings suggest that RON inhibition can be a novel approach to potentiate responses to checkpoint immunotherapy in breast cancer.
Introduction
The ability of tumors to evade the immune system is achieved through a variety of mechanisms. Engagement of co-inhibitory T-cell receptors, also known as checkpoint molecules, is a common event in tumor immunoevasion. Two well-studied checkpoint receptors on T-cell surfaces are CTLA-4 and PD-1, for which clinically approved inhibitors are now available. 1,2 CTLA-4 and PD-1 can bind to CD80/CD86 and PD-L1/L2, respectively, to counteract activation signals initiated by the T-cell receptor (TCR). Blocking immune checkpoints with "checkpoint inhibitor" therapy was shown to be a powerful approach to release T cell inhibition in preclinical models and is now approved by the FDA for the treatment of certain cancers. 3,4 In some cases, long-term durable responses and complete remissions are achieved with checkpoint inhibition, but only a fraction of patients mount a productive anti-tumor immune response and benefit from the treatment. To this end, numerous approaches have been proposed to potentiate responses to immunotherapy including combination treatment with various immune-modulating drugs, or co-treatment with chemotherapy or radiotherapy. [5][6][7] A better understanding of the context in which checkpoint inhibitors are successful, and new strategies to make them more effective, are needed to fully realize the potential of these promising new drugs.
Breast cancer is the most common form of invasive cancer in women, and it is the second leading cause of cancer-related deaths. 8 Although the emergence of targeted therapies has resulted in improved clinical outcomes overall, more than 40,000 people succumb to this disease every year in the U.S. alone, highlighting an urgent need for developing better treatments. It is now appreciated that the immune system plays important roles in determining breast cancer outcomes. 9 Bolstered by the success of checkpoint inhibitors in other types of cancer, numerous immunotherapy trials have been launched for hormone receptor-positive and triple-negative subtypes of breast cancer. Preliminary results in trials involving inhibitors of PD-1 or its ligand PD-L1 suggest that some breast cancer patients can benefit from checkpoint immunotherapy, although response rates were only 10-20%. [10][11][12][13] In preclinical studies, immune-competent transgenic mouse models recapitulate many key features of human cancers and are most appropriate for immunotherapy experiments. The MMTV-PyMT transgenic mouse is a well-studied experimental model in which the Polyomavirus Middle T (PyMT) oncogene is expressed under the control of the tissue-restricted MMTV promoter in an immunocompetent background. MMTV-PyMT mice develop mammary tumors with high penetrance and mimic the molecular features of hormone receptor-negative breast adenocarcinomas. 14 This model has been instrumental for studying tumor-host interactions and anti-tumor immune responses. [15][16][17][18] The MMTV-PyMT model has been used to test CTLA-4 or PD-1 checkpoint inhibition in combination with other therapeutic approaches such as irradiation or inhibition of the tyrosine kinase Axl. [19][20][21] In these studies, combination approaches provided clinical benefit, whereas single-treatment with checkpoint inhibitors was not efficacious. These data suggest the utility of the PyMT model in discovering synergistic immunotherapeutic drug combinations.
RON (also known as Macrophage Stimulating-1 Receptor, MST1R) is an understudied receptor tyrosine kinase that shares similar structure with its well-studied relative c-MET. 22 RON is expressed in resident tissue macrophages, and in tumor cells of various origins. 23,24 RON can be activated by aberrant overexpression or by binding the active form of its ligand, macrophage stimulating protein (MSP), a constitutively secreted proprotein found in serum. In tumor cells, overexpression of RON results in proliferation, migration, and a more aggressive phenotype. [25][26][27][28] Particularly, overexpression of RON in breast tumors is associated with increased metastasis and poor clinical outcomes. 16,26 In macrophages, activation of RON by MSP results in attenuation of immune responses. [29][30][31][32][33] Previously, we and others have shown that host RON signaling negatively regulates antitumor immunity in mouse models of cancer. 31,34,35 Importantly, host RON was critical for conversion of micrometastatic lesions into overt clinical metastases, a step that is thought to be rate-limiting in the deadly metastatic cascade. 34 Inhibition of RON activity with a MET/RON dual kinase inhibitor BMS777607 (also known as ASLAN002) 36 also blocked metastatic outgrowth in a manner that was dependent on CD8 + T cells. 34 BMS777607/ASLAN002 has been in clinical trials and exhibited a good tolerability profile and demonstrated biological effects on RON-mediated activities. [37][38][39] Based on the known immunosuppressive role of RON in tumor models, we posited that RON inhibition might provide a novel therapeutic modality in combination with checkpoint blockade in cancer. In this study, we discovered control of checkpoint ligand molecules by RON, and determined that inhibition of RON functions in combination with checkpoint inhibitor immunotherapy to attain better clinical responses.
MSP-RON signaling activates an immunomodulatory gene expression signature
Within the immune system, RON expression is restricted to macrophages, where it regulates inflammatory phenotypes, 23,31,32 but a comprehensive understanding of the consequences of RON activation is lacking. To elucidate specific changes in gene expression downstream of MSP-RON signaling in macrophages, we performed RNA sequencing (RNAseq) analysis on magnetically sorted naïve F4/80+ resident peritoneal macrophages from wildtype (WT) mice or syngeneic mice that lack the RON tyrosine kinase domain (RON TK-/-), 40 after 7 hours of treatment with recombinant MSP. As expected, RON TK-/- macrophages did not respond to MSP treatment and displayed gene expression that was very similar to untreated WT macrophages. Biological replicates of MSP-treated WT macrophages clustered together and exhibited a distinct gene expression signature, as shown by the RNAseq sample distance matrix (Figure 1a). With a threshold of a 2-fold change and FDR q < 0.01, 2934 genes were found to be differentially expressed in MSP-treated WT macrophages in comparison with MSP-treated RON TK-/- macrophages (1193 upregulated, 1741 downregulated). With a 4-fold change threshold (and FDR q < 0.01), this number was reduced to 823 differentially expressed genes (396 upregulated, 427 downregulated) (Figure 1b). Pathway enrichment analysis revealed that many of these differentially expressed genes belonged to immune cell trafficking and inflammatory response pathways (Fig S1a,b). When MSP-treated RON TK-/- macrophages were compared with untreated WT macrophages, the gene expression pattern was largely similar, confirming that our results were dependent on the intact kinase domain of RON (Figure 1a and S1c).
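The fold-change and FDR cutoffs described above can be applied with a short filter like the sketch below, which assumes a generic differential-expression results table (for example, one exported from DESeq2); the column names and simulated values are illustrative only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
res = pd.DataFrame({
    "gene": [f"g{i}" for i in range(5000)],
    "log2fc": rng.normal(0, 1.5, 5000),
    "qval": rng.uniform(0, 0.05, 5000),       # FDR-adjusted p-values
})

def de_genes(table: pd.DataFrame, fold: float) -> pd.DataFrame:
    """Genes passing FDR q < 0.01 and |fold change| >= the given threshold."""
    hits = table[(table["qval"] < 0.01) & (table["log2fc"].abs() >= np.log2(fold))]
    return hits.assign(direction=np.where(hits["log2fc"] > 0, "up", "down"))

for fold in (2, 4):
    hits = de_genes(res, fold)
    print(fold, len(hits), hits["direction"].value_counts().to_dict())
```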
Macrophage polarization is conceptually divided into proinflammatory (M1) and immunomodulatory (M2) phenotypes. 41 RON activation has previously been reported to skew macrophages to the M2 phenotype and attenuate inflammation. 42 When we compared expression levels of differentially regulated genes in our study with previously published datasets corresponding to defined M1 versus M2 states, 43 we found that RON activation did not strictly correlate with either the M1 or M2 state (Fig S2). However, we found that RON activity was associated with an overall immunosuppressive state. For example, macrophage genes known to be upregulated by the inflammatory stimulus lipopolysaccharide (LPS) 44 were downregulated in the presence of MSP stimulation (Fig S3a,b). Differentially expressed genes in our study were mostly in agreement with findings reported by Chaudhuri et al., who used microarray technology to investigate gene expression changes in macrophages 20 hours after MSP treatment 31 (Fig S4). In sum, data gathered to date suggest that MSP-RON signaling is not a simple molecular switch for an M2 macrophage state. Rather, it activates a complex immunomodulatory gene expression signature that ultimately suppresses CD8 + T cell activity to facilitate tumor progression and metastasis. 31,34,35

MSP-RON upregulates CD80 and PD-L1 expression through MAPK signaling

In the course of our analysis, we noted that MSP-RON signaling resulted in upregulation of several immune checkpoint ligand mRNAs, with CD80 and PD-L1 being the most differentially expressed (9.7- and 3.6-fold, respectively; Figure 1c). We analyzed protein levels of CD80 and PD-L1, as well as the related ligands PD-L2 and CD86, with flow cytometry (Fig S5). MSP treatment resulted in significant upregulation of surface levels of PD-L1 and CD80 on macrophages, whereas PD-L2 remained unchanged and CD86 was (Figure 1d,e). MSP-RON-mediated upregulation of CD80 and PD-L1 was blocked when either of two RON-selective kinase inhibitors, BMS777607 or merestinib (LY2801653), 36,45 was added (Figure 1f).
To investigate the kinetics and biological requirements for CD80 and PD-L1 protein upregulation downstream of MSP-RON, we performed a time-course analysis of MSP-treated macrophages. Upregulation of CD80 was evident by 6 hours, while increases in PD-L1 were not detected until 12 hours after treatment (Figure 2a). Short-term treatment of the macrophages with inhibitors of transcription or translation (actinomycin D or cycloheximide, respectively) prior to MSP stimulation completely abrogated MSP-mediated upregulation of CD80 and PD-L1 (Figure 2a). Since RON can signal through both MAPK and PI3K pathways to control gene expression, 26,31 we tested the activity of, and the requirement for, these two signaling nodes in the regulation of CD80 and PD-L1. Western blot analysis of MSP-treated macrophages revealed phosphorylated forms of ERK and AKT as surrogates of MAPK and PI3K pathway activation, respectively (Figure 2b). MSP-RON driven phosphorylation of ERK and AKT was blocked when MEK1/2 or PI3K inhibitors were added to the culture (Figure 2b). MAPK pathway inhibitors abrogated the upregulation of both CD80 and PD-L1, whereas PI3K inhibition was only effective in fully blocking the upregulation of CD80 (Figure 2c).
Activation of JAK-STAT signaling has been shown to upregulate PD-L1 and CD80 expression in various cell types including cancer cells and macrophages. [46][47][48][49] We assessed induction of STAT1/3/5 phosphorylation in MSP-treated macrophages in the presence of MEK1/2 or JAK inhibitors. Upon MSP stimulation, we detected higher levels of STAT1 serine-727 phosphorylation, a residue known to be a direct target of MAPK signaling 50 (Figure 2d). Phosphorylation of Ser727 was unaffected by JAK inhibition, but was reduced to background levels in the presence of the MEK1/2 inhibitor. MSP stimulation did not activate phosphorylation of the STAT1-Tyr701, STAT3-Tyr705, and STAT5-Tyr694 residues, which are phosphorylated by JAK kinases 51 and whose signals were relatively faint (Figure 2d). We then assessed whether inhibition of JAK and/or MEK1/2 could block MSP-mediated upregulation of CD80 and PD-L1 in macrophages in culture. Interestingly, either the MEK inhibitor or the JAK inhibitor reduced PD-L1 upregulation to near background levels, whereas the JAK inhibitor had a statistically significant but smaller effect on CD80 and CD86 regulation (Figure 2e). These data suggest that both JAK-mediated tyrosine phosphorylation and MAPK-mediated serine phosphorylation of STAT proteins may contribute to the regulation of checkpoint ligands downstream of MSP/RON, but only the MAPK-mediated effects are a direct consequence of RON activation.
Inhibition of RON cooperates with anti-CTLA-4 to boost antitumor immunity
We previously demonstrated that host RON signaling suppresses CD8+ T-cell responses against breast cancer cells, resulting in metastatic progression in mice. 34 The ability of RON signaling to upregulate CD80 and PD-L1, amongst a large collection of other immunosuppressive molecules, led us to hypothesize that inhibition of RON might cooperate with approved checkpoint inhibitors such as anti-CTLA-4 (aCTLA-4) and anti-PD-1 (aPD-1) to enhance anti-tumor responses. We utilized our previously described MMTV-PyMT model 16,34 to investigate this question. However, to be able to track antigen-specific CD8+ T-cell responses, we also engineered the tumor cells to express a model antigen: a fragment of Lymphocytic Choriomeningitis Virus (LCMV) nucleoprotein that produces an immunodominant MHC-I-associated peptide, NP118 (RPQASGVYM), in FVB hosts; 52 these cells are hereafter referred to as PyMT-NP tumor cells. We confirmed immunodominance of NP118 in FVB hosts by flow cytometric analysis of IFNγ production in peripheral blood CD8+ T-cells from LCMV-infected mice (Fig S6).
We first studied the effects of RON inhibition and immunotherapy on established tumors growing in the mammary fat pad. We transplanted PyMT-NP cells orthotopically into the mammary fat pad and waited until tumors reached 100 mm3 before randomizing mice into four experimental groups: vehicle (DMSO), RON inhibitor (BMS777607/ASLAN002; hereafter referred to as RONi), aCTLA-4, and RONi+aCTLA-4 combination treatment. Treatments were applied for three weekly cycles in which RONi was administered orally five days of the week, and aCTLA-4 immunotherapy was delivered intraperitoneally on a twice-weekly schedule. Expression of NP118 did not impair tumor growth in immunocompetent hosts, as evidenced by aggressive tumor growth in vehicle-treated mice (Figure 3a, black lines). Response to therapy was assessed using two metrics: tumor growth rate and the number and proportion of mice experiencing clinical benefit (complete or partial response to treatment; see Methods).
Mirroring findings in the clinic, 4,10 some subjects (mice) did not respond to immunotherapy at all, while others experienced slower tumor growth or eradication of the tumor altogether. RON inhibition alone (RONi single agent) did not result in appreciable clinical benefit, and it did not significantly reduce tumor growth rate compared to the vehicle group (Figure 3a-c). Treatment with aCTLA-4 as a single agent was more effective in controlling tumor growth, resulting in clinical benefit for 46% of the mice; 23% of the mice in the aCTLA-4 single-agent group experienced eradication of the tumor (i.e., complete response). Strikingly, combining the RON inhibitor and aCTLA-4 therapy doubled the frequency of complete responders to 46% and provided clinical benefit in 92% of the animals (Figure 3a-c). Moreover, the combination-treated group demonstrated tumor shrinkage in most mice, while the aCTLA-4 single-agent group exhibited a positive tumor growth rate as a whole, albeit at a significantly lower magnitude than the vehicle or RONi single-agent groups (Figure 3c and S7a). We also tested whether RONi could cooperate with aPD-1 treatment in the same model. Treatment with aPD-1 as a single agent was mostly ineffective at reducing PyMT-NP tumor growth, and combining RONi with aPD-1 did not result in any enhancement of tumor control (Figure 3b,c and S7a,b).
To investigate the generality of RONi-mediated potentiation of aCTLA-4 immunotherapy, we utilized the MC38 colon adenocarcinoma model in the C57BL/6 genetic background. 53,54 In this model, single agent RONi or aCTLA-4 treatment did not affect subcutaneous tumor growth (Fig S8a-c). However, while tumor shrinkage was not observed in any of the mice, combination of RONi and aCTLA-4 significantly reduced the tumor growth rate (Fig S8c), suggesting RON inhibition can potentiate immunotherapy responses in other types of cancers.
It should be noted that BMS777607/ASLAN002 can also inhibit MET and a few other kinases, and/or could potentially affect tumor growth by acting on tumor cells directly. 36 To definitively determine whether the antitumor responses in combination with aCTLA-4 were specifically due to blockade of host RON signaling, we transplanted PyMT-NP cells (containing wild-type RON) into syngeneic RON TK-/- recipients 34,40 and followed the same treatment schedule. Treatment of RON TK-/- mice with vehicle did not have a significant effect on tumor growth, which was consistent with our data in WT recipients treated with RONi (Figure 3d).

Therapeutic efficacy in RONi+aCTLA-4 treated mice is associated with improved CD8+ T-cell responses

To investigate the immune landscape of mice in each treatment group, we analyzed secondary lymphoid organs and tumor-infiltrating lymphocytes. The frequency of CD8+ T cells in the spleen was significantly higher in mice treated with RONi+aCTLA-4 compared with vehicle or single-treatment groups (Figure 4a and S9). Within the CD8+ population, the proportion of PD-1-expressing cells was also higher, suggesting that the cells were antigen-experienced. Percentages of activated PD-1(+)CD62L(low) CD4+ and CD8+ T-cells were also greater in mice treated with RONi+aCTLA-4 (Figure 4b,c and S9). At the experimental endpoint (day 24 post-treatment), intratumoral CD8+ T-cell infiltration was analyzed by immunofluorescence in cases where there was tumor remaining to analyze. Despite the lack of data from the best responders due to tumor eradication, we found that intratumoral CD8+ T cell infiltration was significantly elevated when the animals were treated with aCTLA-4 single therapy or with RONi+aCTLA-4 combination therapy (Figure 4d and S10). The frequency of intratumoral CD8+ cells trended higher in the combination-treated mice compared to aCTLA-4 single treatment, although this difference was not statistically significant, perhaps due to the inability to acquire data from complete responders. It is important to note that the highest CD8+ cell infiltration was observed in mice that responded to therapy in both treatment groups; two mice in the aCTLA-4 group had very high CD8+ T cell infiltration into the tumors, and these mice had partial responses (Figure 4d).
Cytotoxic T-cells eliminate tumor cells through perforin/granzyme B-mediated lysis. 55 Intracellular staining for perforin also revealed significantly higher levels of perforin+ CD8+ T-cells in spleens of mice treated with RONi+aCTLA-4 (Figure 4e and S11). The proportion of perforin-expressing cells within the splenic CD8+ compartment inversely correlated with tumor size at the endpoint across the treatment groups (Figure 4f), particularly in the RONi+aCTLA-4 combination treatment group. PMA-ionomycin restimulation of splenocytes also revealed more TNFα-producing CD8+ T cells in the combination treatment group (Figure 4g). Importantly, tumor-infiltrating CD8+ T-cells from mice treated with RONi+aCTLA-4 displayed higher levels of antigen-specific IFNγ production following restimulation with the NP118 peptide, demonstrating enhanced anti-tumor immune responses (Figure 4h). These findings demonstrate that efficacious RONi+aCTLA-4 combination therapy correlates with improved anti-tumor T cell responses in mice.
Inhibition of RON cooperates with aCTLA-4 immunotherapy to prevent metastatic outgrowth
Metastasis is the deadliest feature of aggressive cancers, owing to the drug resistance of metastatic tumors. Many tumors have spawned micrometastases to other organs before the primary tumor is diagnosed. These micrometastases can then grow into overt metastatic disease at distant sites, 56 and adjuvant chemotherapy is intended to eliminate microscopically seeded cells. Distant recurrence rates for breast cancer are still between 20% and 30%, 57 indicating the need for better therapies to kill previously seeded micrometastases. Our previous work revealed that host RON signaling suppressed anti-tumor CD8+ T-cell responses, allowing metastatic outgrowth of seeded tumor cells, but RON inhibitor therapy did not completely prevent outgrowth. 34 Our present results prompted us to examine the combination of RONi and aCTLA-4 in the micrometastatic setting as a potential therapeutic regimen for adjuvant therapy. MMTV-PyMT tumor cells were injected via the lateral tail vein into wild-type or RON TK-/- hosts to seed breast cancer cells in the lungs. Again, RON TK-/- hosts were used to control for off-target effects of RONi, and as a test for complete loss of host RON tyrosine kinase activity. To model established micrometastatic disease, tumor cells were allowed to grow for seven days before starting drug treatment. Mice were euthanized after three weekly cycles of treatment with RONi and aCTLA-4 as single agents or in combination. At the endpoint, clinical response was analyzed by assessing tumor burden in the lungs and by quantifying the number of mice that had visible metastatic lesions versus those that had no apparent macrometastases. Histology was then performed to determine the extent of tumor clearance.
Wild-type animals in the vehicle treatment group exhibited aggressive metastatic tumor growth, with 100% of mice having overt metastasis in which approximately 40% of the lung area was covered with tumors (Figure 5a-c and S12a). Single-agent RONi treatment did not result in complete tumor clearance in any of the mice (Figure 5a,b and S12b). The metastatic tumor area was lower with RONi treatment, but this difference did not reach statistical significance (Figure 5b; p = 0.0555). Unlike our findings in the orthotopic tumor transplantation model, aCTLA-4 immunotherapy did not provide a significant clinical benefit in wild-type animals over the vehicle control, with only 1/12 (8%) of mice protected from metastasis (Figure 5a-c and S12c). In contrast, although combining RONi and aCTLA-4 was not sufficient to completely eliminate metastases in any of the wild-type animals, it significantly reduced the area occupied by metastases (Figure 5a-c and S12d). To assess the effect of complete RON kinase loss of function in this setting, and to control for any off-target effects of RONi, we carried out similar experiments with RON TK-/- host animals. Similar to our findings in wild-type mice treated with RONi, lack of host RON signaling alone (RON TK-/- hosts treated with vehicle control) was not enough to provide tumor clearance, although it reduced metastatic area significantly (Figure 5a-c and S12e). This observation is consistent with our previously published results. 34 Importantly, treatment of RON TK-/- animals with RONi did not make a significant difference, indicating that off-target effects are not a concern in this model (Figure 5a-c and S12f). Strikingly, treatment of RON TK-/- mice with aCTLA-4 (dual inhibition of RON and CTLA-4) resulted in 42% of mice experiencing complete clearance of macrometastases, and significantly lowered tumor burden in the remaining mice, to under 2% of lung area (Figure 5a-c and S12g). Upon histological analysis, we could find occasional micrometastases in some of these animals (see Fig S13 for representative images), suggesting that, although dual inhibition of RON and CTLA-4 led to remarkable tumor control in most mice, some of the tumor cells were not killed. As expected, combination treatment with RONi and aCTLA-4 in RON TK-/- hosts did not significantly enhance responses compared to aCTLA-4 single-agent treatment in RON TK-/- mice (Figure 5a-c and S12h).
We proceeded to analyze systemic and local immune responses in these mice via flow cytometry and immunofluorescence. Similar to our findings in the orthotopic tumor experiments, intracellular staining of CD8+ splenocytes revealed the highest frequencies of perforin expression in RON TK-/- mice treated with aCTLA-4 (or with RONi+aCTLA-4) (Figure 5d), which correlated with tumor eradication (Figure 5a-c). Wild-type and RON TK-/- mice treated with vehicle had similar frequencies of activated, PD-1-expressing CD62L(low) splenic CD4+ and CD8+ T cells (Fig S14a,b). However, treatment with aCTLA-4 immunotherapy alone, or RONi+aCTLA-4, resulted in significantly elevated levels of PD-1(+)CD62L(low) CD4+ and CD8+ splenocytes (Fig S14a,b). Interestingly, combination therapy in RON TK-/- mice resulted in slightly higher levels of these populations, suggesting potential off-target or on-target non-RON effects of the BMS777607 inhibitor in the immune system (Fig S14a,b) that did not improve tumor control (Figure 5a-c). When we analyzed the infiltration of CD8+ cells into the metastatic nodules (despite the lack of data from the best responders due to complete tumor eradication), we observed higher levels of infiltration when the mice were treated with RONi+aCTLA-4 combination therapy (Figure 5e,f). The highest levels of CD8+ cell infiltration were observed in RON TK-/- animals treated with aCTLA-4 alone, or in combination with RON inhibitor. As expected, RON inhibitor did not provide an advantage over aCTLA-4 single therapy in these mice, again suggesting that off-target effects of the RON inhibitor are not a concern in this model. To determine whether aCTLA-4 immunotherapy was working through effects on T cells as expected, we performed a similar experimental metastasis study using NOD-SCID mice, which lack functional T and B cells. As expected, aCTLA-4 was ineffective in these mice, and combination of aCTLA-4 with RON inhibitor did not provide a significant therapeutic advantage over RON inhibitor single therapy (Fig S15a,b). In summary, combining the RON inhibitor BMS777607 with aCTLA-4 immunotherapy in the PyMT model of aggressive breast cancer resulted in remarkable control of metastatic breast tumor growth in the adjuvant setting.
Discussion
The considerable excitement around success with cancer immunotherapy centers on activating a dynamic and adaptable immune response against heterogeneous and rapidly evolving tumors, in order to provide a sustained positive clinical outcome. Indeed, patients who respond to immune checkpoint inhibitor therapy often experience durable benefit and even cures, although they represent only a small fraction of treated patients. Significant effort is currently focused on improving responses to immunotherapy by combining various treatment approaches. 5 Combining two immune checkpoint inhibitors, ipilimumab (aCTLA-4) and nivolumab (aPD-1), was approved by the FDA for the frontline treatment of melanoma in 2015. Unfortunately, grade 3 and 4 treatment-related adverse events were observed in 55% of patients, twice as frequently as in the single-treatment arms. 4 These data emphasize that discovering cooperating immune pathways that can be safely modulated to provide synergistic clinical benefit will be a key step in improving immunotherapy outcomes.
In this study, we show that RON, a non-essential receptor tyrosine kinase with potent immunosuppressive functions, induces an immunomodulatory gene expression signature in macrophages upon ligand-mediated activation. Among the many differentially upregulated genes were several immune checkpoint ligands, including PD-L1 and CD80, both of which required MAPK signaling for expression downstream of RON. Pharmacological inhibition of RON with BMS777607/ASLAN002, or complete loss of host RON signaling through genetic means, improved clinical responses to aCTLA-4 therapy in an immunocompetent mouse model of breast cancer. Combining RON inhibition with aCTLA-4 immunotherapy provided clinical benefit for primary tumors in 92% of the animals (46% complete response), and was significantly better at preventing progression from micrometastatic to macrometastatic disease in the adjuvant setting. This was especially true when complete loss of RON kinase activity was achieved (RON TK-/- model + aCTLA-4), which had remarkable effects on clearance of metastases. Positive clinical outcomes in RONi+aCTLA-4-treated animals were associated with higher frequencies of splenic and intratumoral CD8+ T-cells and higher effector cytokine production by these cells, suggesting improvements in tumor-specific immune activation. Importantly, RON inhibition also improved response to aCTLA-4 therapy in a separate colon cancer model, suggesting that our findings are not limited to a single model or to breast cancer.
RON's role in upregulating checkpoint ligands in a MAPK-dependent manner raises the interesting possibility of combining checkpoint blockade with MEK inhibitors that are approved or currently in clinical trials. The MAPK signaling pathway is a key regulator of cell growth and survival, and it is overactivated in nearly 30% of human cancers. 58 Two MEK inhibitors, trametinib and cobimetinib, 59,60 are currently approved for the treatment of melanoma, and more than 10 other MEK1/2 inhibitors are in various stages of clinical testing across a spectrum of cancer types including breast cancer. 61 In addition to its cell-intrinsic protumorigenic effects, MAPK signaling has also been implicated in the upregulation of checkpoint ligands such as PD-L1. 62,63 Therefore, combination of immunotherapy with MEK1/2 inhibitors has great potential to block both tumor-intrinsic and immune-mediated protumorigenic effects of the MAPK signaling pathway. Although MAPK signaling is important for early steps of T cell activation, 64,65 several studies have shown that MEK inhibitors reprogram the tumor microenvironment and potentiate responses to checkpoint immunotherapy. [66][67][68] In these studies, tumor growth was reduced by the combination of MEK inhibitors and various forms of immunotherapy, but tumors were not completely eradicated. It would be interesting to test whether RON and MEK inhibitors can cooperate to provide a better therapeutic outcome in the context of checkpoint immunotherapy.
Our findings reveal that MSP-RON signaling not only upregulates PD-L1, but also upregulates CD80 while downregulating CD86. CD80 and CD86 can each bind to CTLA-4 to suppress T cell activation 69 or bind to CD28 to stimulate T cell activation. 70 These contrasting effects warrant further investigation into the mechanism of how RON inhibitors provide a therapeutic benefit when combined with aCTLA-4 immunotherapy. One possible explanation may lie in the fact that CD80 is the preferred molecule over CD86 in suppressing T cell responses. Although both CD80 and CD86 can bind to CTLA-4, they have distinct binding characteristics and biological effects on T cells. 71 CD80 was shown to have a higher affinity for CTLA-4 than CD86. 70,72 Using in vivo models, others have shown that the molecular signals delivered by CD80 and CD86 are not necessarily redundant. For instance, CD80 expressed in leukemic cells was found to suppress T cell immunity while CD86 was unable to do so. 73 Similarly, CD80, but not CD86, was found to induce allograft tolerance, demonstrating that CD86 cannot replace all biological functions of CD80. 74,75 The profound upregulation of CD80 downstream of MSP-RON signaling, therefore, may potently drive CTLA-4-mediated immunosuppression even when its other known ligand, CD86, is concomitantly downregulated. Further, blocking RON would prevent upregulation of CD80 and PD-L1, and downregulation of CD86, potentially allowing CD86 to perform its co-stimulatory function to activate T cells in the presence of anti-CTLA-4 immunotherapy. Our ongoing studies focus on further delineating the mechanisms of MSP-RON-mediated immunosuppression, and this possibility will be formally investigated in future work.
Our results are largely consistent with previous investigations utilizing immune checkpoint therapy in the PyMT breast cancer model. Two other studies showed no effect of aPD-1 therapy on PyMT tumor growth. 20,21 However, in these reports, aCTLA-4 single-agent treatment was also shown to be ineffective in controlling tumor growth, unlike our results, which showed approximately 50% clinical benefit. This difference may be due to variations in the treatment regimens and/or the utilization of models of different genetic backgrounds (C57BL/6 mice as opposed to the FVB mice used in this study). The latter possibility is especially intriguing since it was previously reported that host genetic makeup can affect tissue-associated macrophage responses and antitumor immunity. 31 In this context, evaluating new immune-targeting therapies in various mouse models is essential for informing clinical development of new compounds and biologics.
Currently, several clinical trials are underway that aim to elucidate the clinical safety and efficacy of immune-checkpoint inhibition in breast cancer. At least 15 of these trials involve aCTLA-4, and more than 50 others involve aPD-1, as single agents or in combination with other treatment approaches (www.clinicaltrials.gov). Importantly, the RON-selective tyrosine kinase inhibitor used in this study, BMS777607 (also known as ASLAN002), has completed a Phase I trial with a good tolerability profile in patients with advanced solid tumors. Analysis of serum samples from patients treated with this RON inhibitor in the Phase I study revealed reduced levels of bone turnover markers, a surrogate for RON-dependent activity in osteoclasts. 39 These results suggest that therapeutic doses of BMS777607/ASLAN002 can be achieved without significant side effects, and support further clinical investigations in various cancer settings, now to include combination studies with aCTLA-4.
The mechanism by which host RON signaling regulates antitumor immunity has still not been fully elucidated. In the immune system, RON expression is restricted to terminally differentiated macrophages. RON is reported to be expressed by resident macrophages in the bone, peritoneal cavity, and the lungs, and also in tumor-associated macrophages (TAMs). 23,35 TAMs were reported to suppress anti-tumor CD8+ T-cell responses in a mouse model of prostate cancer via RON signaling. 35 On the other hand, RON can also attenuate inflammatory responses in alveolar macrophages and protect mice from mortality following lung injury, 30 suggesting potent control of immune responses in a localized manner. Supporting this view, local expression of MSP by breast cancer cells has been shown to recruit TAMs and polarize them into an immunosuppressive phenotype, resulting in enhanced tumor growth in mice. 76 Therefore, while the mechanisms of action have not been fully elucidated, owing to our inability to selectively inhibit or knock out RON in specific macrophage populations (e.g., TAMs vs. resident lung macrophages), it is clear that inhibition of RON has strong potential in the cancer setting through its dual roles in tumor cells and in the tumor microenvironment. 31,34,35 In particular, the present study shows that targeting RON signaling in combination with anti-CTLA-4 immunotherapy may slow or prevent breast tumor growth, including the emergence of metastatic disease.
Mice and tumors
All animal procedures were carried out in accordance with University of Utah Institutional Animal Care and Use Committee approval. RON TK-/- mice in the FVB background were described previously. 40 Wild-type (FVB) and FVB RON TK-/- female mice aged 4-6 weeks were used in immunotherapy experiments. For macrophage studies, 6-8-week-old wild-type (FVB) and FVB RON TK-/- female mice were used.
Macrophage isolation and analysis
F4/80+ peritoneal macrophages were magnetically sorted (Miltenyi Biotec, GmbH) from peritoneal lavage fluid from female wild-type and RON TK-/- mice, into chilled tubes precoated overnight with 5% FBS in PBS. For RNAseq experiments, 750,000 F4/80+ macrophages per well were cultured in 24-well plates in 500 μL of DMEM medium supplemented with 10% FBS and penicillin/streptomycin (Gibco). RON signaling was activated by the addition of 100 ng/mL recombinant human MSP (R&D Systems). After 7 hours of culture in MSP- or vehicle-containing medium, total RNA was isolated using the RNeasy Mini Kit (Qiagen, MD, USA) according to the manufacturer's protocol. RNA quality was measured using the TapeStation system (Agilent, CA, USA). RNAseq library preparation was performed with polyA selection (TruSeq stranded mRNA library preparation kit, Illumina) prior to 50-cycle single-end sequencing on the Illumina HiSeq 2500 sequencing platform. Sequence reads were mapped to the mouse genome, and the DESeq2 algorithm 77 was used to assess differentially expressed genes. Differentially expressed genes in our study were compared with published gene sets by using Gene Set Enrichment Analysis (http://www.broad.mit.edu/gsea/). 78 Pathway enrichment was assessed by using Ingenuity Pathway Analysis software (IPA, Qiagen, MD, USA).
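The signature-comparison step above can be illustrated with a short analysis sketch. The snippet below is a minimal, hedged stand-in rather than the actual pipeline: the study used DESeq2 (in R) and GSEA, whereas here a simple one-sided hypergeometric overlap test compares an upregulated gene list against M1/M2-style signatures, and all gene lists and the universe size are invented placeholders.

```python
# Minimal sketch: testing overlap between MSP-upregulated genes and a
# published macrophage signature (analogous to the Fig. S2 comparison).
# Gene sets and universe size below are illustrative, not the real data.
from scipy.stats import hypergeom

def signature_enrichment(de_genes, signature, universe_size):
    """One-sided hypergeometric test for DE-gene/signature overlap."""
    overlap = len(de_genes & signature)
    # P(X >= overlap) when drawing len(de_genes) genes from a universe
    # that contains len(signature) signature genes.
    p = hypergeom.sf(overlap - 1, universe_size, len(signature), len(de_genes))
    return overlap, p

msp_up = {"Cd80", "Cd274", "Arg1", "Mrc1", "Il10"}      # placeholder DE list
m1_sig = {"Nos2", "Il1b", "Tnf", "Cd80", "Il12b"}       # placeholder "M1" set
m2_sig = {"Arg1", "Mrc1", "Retnla", "Chil3", "Il10"}    # placeholder "M2" set

for name, sig in [("M1", m1_sig), ("M2", m2_sig)]:
    k, p = signature_enrichment(msp_up, sig, universe_size=20000)
    print(f"{name}: overlap = {k}, hypergeometric p = {p:.3g}")
```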
For assessment of checkpoint ligand protein regulation, macrophages were stimulated with 100 ng/mL MSP for 24 hours. To test the requirement for RON kinase activity in this setting, cells were pre-incubated with 1 μM BMS777607 or merestinib (LY2801653) (Selleck Chem) for 1 hour prior to addition of MSP. To test the requirement for transcription or translation in ligand upregulation, cells were preincubated with actinomycin D (1 μg/mL) or cycloheximide (10 μg/mL) 30 minutes prior to addition of MSP, and cells were collected for flow cytometry analysis at time points between 0 and 24 hours later. To assess effects of MAPK and PI3K activity downstream of MSP-RON, cells were lysed with Pierce IP buffer (Thermo Fisher) 15 minutes after stimulation with 100 ng/mL MSP. Western blot analysis was performed with antibodies specific for pAKT, pan-AKT, pERK1/2, pan-ERK1/2 and GAPDH (1:1000 primary antibody dilution, 1:5000 HRP-conjugated secondary antibody dilution) (Cell Signaling Technology). Antibodies recognizing phosphorylated or pan STAT1/3/5 forms were used at 1:250 and 1:500, respectively. To test the involvement of various signaling pathways downstream of RON, the inhibitors BKM120 (PI3Ki), PD0325901 (MEK1/2i), and SCH772984 (ERK1/2i) were each added at 0.5 μM. Ruxolitinib (JAKi) was used at a final concentration of 1 μM. Cells were preconditioned with inhibitors for 1 hour prior to stimulation with 100 ng/mL MSP for Western blot and flow cytometric analysis.
Flow cytometry and histology
For macrophage experiments, cells were isolated from tissue culture plates by treating with 5 mM EDTA in PBS (pH 7.4) for 20 minutes at 37°C and by vigorous pipetting. For immunotherapy experiments, splenocytes were prepared directly from mice by physical dissociation via pressing between two microscope slides, followed by red blood cell lysis with ammonium-chloride-potassium (ACK) buffer and filtering through a 100 μm nylon mesh filter. Tumors at the endpoint were collected, minced with razor blades, and then digested with 1 mg/mL collagenase IV (Sigma) for 1 hour at 37°C. After enzymatic digestion, tumor-infiltrating lymphocytes were isolated by centrifugation in a 44%-56% discontinuous Percoll gradient with no brakes. Contaminating red blood cells were lysed with ACK buffer and samples were filtered through a 100 μm nylon mesh filter. Surface antigens were stained with fluorophore-conjugated antibodies on ice, in PBS supplemented with 2% FBS and 0.01% sodium azide. Intracellular antigens were stained after 4 hours of re-stimulation with PMA (50 ng/mL) and ionomycin (250 ng/mL), or with NP118 (0.1 μM), in the presence of brefeldin A (Cytofix/Cytoperm Kit, BD Biosciences). Stained cells were analyzed using an LSRFortessa cytometer (BD Biosciences, USA) and FlowJo software (TreeStar, USA). Prior to staining with fluorophore-conjugated antibodies, samples were treated with aCD16/32 Fc-blocking antibodies to reduce non-specific binding. Antibodies were purchased from BD Pharmingen and eBioscience (Thermo Fisher) and were used at 1:

Tumor-infiltrating CD8+ cells were assessed by immunofluorescence microscopy of tumor sections after 48-hour fixation in formalin-free zinc fixative (BD Biosciences). Tumor tissue microarrays (TMAs) were generated by punching paraffin-embedded tumor blocks with a 1.5 mm hollow needle at intact tumor cores (excluding necrotic areas and non-tumorous tissues as determined by prior hematoxylin/eosin staining). Two representative regions from each tumor sample were selected for transfer to recipient paraffin blocks as a TMA, unless the tumor was too small to be sampled twice. TMA blocks were then cut into 3 μm-thick sections. TMA sections were deparaffinized with CitriSolve solution and rehydrated with serial dilutions of ethanol, followed by heat-induced epitope retrieval (HIER) with 10 mM sodium citrate, pH 6.0. Non-specific binding was blocked by incubating the sections with 5% bovine serum albumin and 10% goat serum, together with FcR blocking reagent (Miltenyi Biotec, GmbH). After blocking, the sections were incubated with a 1:400 dilution of CD8a primary antibody (clone 4SM16) (eBioscience, Thermo Fisher) overnight at 4°C, followed by the corresponding secondary antibody conjugated with Alexa Fluor 488 fluorophore. Autofluorescence was blocked by immersing the slides in 0.1% Sudan black. Samples were counterstained with DAPI and mounted with 80% glycerol. Image acquisition was performed with an inverted wide-field microscope (ECLIPSE Ti-E, Nikon) integrated with an Andor Clara CCD camera, and entire TMA punches were analyzed using ImageJ-based FIJI software. CD8+ cells were counted manually and quantified per number of DAPI+ nuclei in non-necrotic areas. Quantification of tumor-infiltrating CD8+ T cells in experimental metastasis experiments was performed on 4 samples randomly selected from each treatment group, except in cases where treatment was so effective that any remaining tumor had to be specifically searched for and selected for quantification.
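As a complement to the manual counting just described, the CD8-per-nucleus quantification could in principle be automated. The sketch below is an assumed scikit-image approximation, not the authors' procedure; the file names, single-channel image layout, and use of Otsu thresholds are illustrative assumptions.

```python
# Rough automated stand-in for the manual CD8+/DAPI+ quantification.
from skimage import io, filters, measure

dapi = io.imread("tma_core_dapi.tif")   # nuclear channel (assumed file)
cd8 = io.imread("tma_core_cd8.tif")     # Alexa Fluor 488 channel (assumed)

# Segment and count nuclei in the DAPI channel.
nuc_mask = dapi > filters.threshold_otsu(dapi)
n_nuclei = measure.label(nuc_mask).max()

# Count CD8+ objects the same way and normalize per nucleus.
cd8_mask = cd8 > filters.threshold_otsu(cd8)
n_cd8 = measure.label(cd8_mask).max()

print(f"{n_cd8} CD8+ objects / {n_nuclei} nuclei = {n_cd8 / n_nuclei:.3f}")
```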
Lungs were processed and stained similarly to primary tumors, and 5 separate tumor fields of view containing infiltrated CD8+ T cells were imaged via a Leica SP8 white light laser confocal microscope at 20x magnification. Computer-assisted quantification of CD8+ signal was performed by setting a common threshold for all images using Fiji software. 79

aCTLA-4 and aPD-1 immunotherapy

20,000 PyMT-NP cells were transplanted unilaterally into the fourth inguinal mammary fat pads of 4-6-week-old mice. When the tumor reached 100 mm3, mice were randomized to the following four groups in a rolling enrollment: 1) vehicle (DMSO); 2) RONi (BMS777607; 50 mg/kg orally; 5 days on and 2 days off per weekly cycle); 3) aCTLA-4 (9D9; 10 mg/kg intraperitoneally; twice per weekly cycle); and 4) RONi+aCTLA-4 combination (same dose regimens). In applicable experiments, 4 mg/kg aPD-1 (4H2) was delivered intraperitoneally three times a week until the experimental end-point. Tumors were palpated and measured every other day with digital calipers. Treatments continued for 4 cycles, or until the tumor reached the IACUC-approved ethical endpoint of 3000 mm3. Mice were generally sacrificed on the 24th day of treatment. Tumor burden at the endpoint was assessed in individual mice using two metrics: 1) clinical response classification (complete response/partial response/refractory), and 2) tumor growth rate. For the first metric, "complete response" was defined as eradication of the tumor at the endpoint. "Partial response" was defined as up to a 300% increase in tumor size compared to the start of treatment (this group includes partial responders with some tumor shrinkage, stable disease, and mice with slow-growing tumors). "Refractory disease" was defined as any tumor exhibiting more than a 300% increase in size. In the vehicle-treated group, the average increase in tumor size was 2316% (±291%). To calculate the change in tumor growth rate, exponential tumor growth curves were log2-transformed to obtain linear tumor growth versus time. The slope of the linear regression was calculated for each mouse individually (Δlog2(tumor size)/Δdays of treatment) and averaged. Statistical differences were assessed by using one-way ANOVA and Tukey's multiple comparison correction.
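As a worked illustration of the two response metrics defined above, the sketch below computes a per-mouse log2 growth-rate slope and applies the complete/partial/refractory thresholds from the text. The measurements and group slopes are invented placeholders, and scipy supplies only the one-way ANOVA; Tukey's post-hoc correction, as used here, would require an additional package such as statsmodels.

```python
# Sketch of the tumor response metrics described above.
import numpy as np
from scipy.stats import linregress, f_oneway

def growth_rate(days, volumes):
    """Slope of log2(tumor volume) versus time, in doublings per day."""
    return linregress(days, np.log2(volumes)).slope

def classify(start_vol, end_vol):
    """Clinical response classes as defined in the text."""
    if end_vol == 0:
        return "complete response"          # tumor eradicated at endpoint
    pct_increase = 100.0 * (end_vol - start_vol) / start_vol
    return "partial response" if pct_increase <= 300 else "refractory"

# Example per-mouse fit (toy caliper measurements in mm3):
print(growth_rate([0, 4, 8, 12], [100.0, 320.0, 900.0, 2600.0]))

# Toy per-group slope lists (placeholders, not the real data):
vehicle = [0.42, 0.38, 0.45, 0.40]
actla4 = [0.15, 0.02, 0.20, -0.30]
combo = [-0.25, -0.40, 0.05, -0.35]
print(f_oneway(vehicle, actla4, combo))     # Tukey post-hoc would follow

print(classify(100, 0), classify(100, 350), classify(100, 900))
```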
For experiments with the MC38 colon carcinoma model, 100,000 cells were injected subcutaneously into 5-8-week-old C57BL/6 mice. RON inhibitor and aCTLA-4 treatment was initiated when the tumor became palpable (~7 days post-injection) using the same treatment regimen as in the PyMT experiments. Treatment continued for 2 weekly cycles and mice were euthanized at day 14.
Tumor growth rate was calculated using the same methods as in PyMT experiments, and statistical analysis was performed by using one-way ANOVA and Tukey's multiple comparison correction.
For metastasis studies, we modeled the adjuvant therapy setting by seeding tumor cells in the lung, waiting 7 days, and then initiating treatment. Mice were injected with 100,000 MMTV-PyMT cells suspended in HBSS via the lateral tail vein. After 7 days, treatment was initiated using the same doses and schedules that were used for the treatment of orthotopically transplanted tumors. Mice were euthanized on day 32 and lungs were harvested and fixed in formalin-free zinc fixative. Photographs of the metastatic lungs were taken and the images were imported into ImageJ for quantification. Metastatic tumor area was quantified as a ratio relative to normal lung using ImageJ software.
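The lung-burden readout just described (metastatic area relative to lung area, measured in ImageJ) can be approximated programmatically. The sketch below is an assumed scikit-image equivalent, not the authors' macro: the file name, the light-background assumption, the pale appearance of metastases, and the use of Otsu thresholds in place of a manually chosen ImageJ threshold are all illustrative.

```python
# Sketch: fraction of lung area covered by metastases from a photograph.
from skimage import io, color, filters

img = io.imread("lung_photo.tif")        # RGB photograph of a fixed lung
gray = color.rgb2gray(img)

# Separate lung from background, then tumor from normal lung tissue.
lung_mask = gray < filters.threshold_otsu(gray)     # assumes light background
tumor_thr = filters.threshold_otsu(gray[lung_mask])
tumor_mask = lung_mask & (gray > tumor_thr)         # assumes pale metastases

met_fraction = tumor_mask.sum() / lung_mask.sum()
print(f"Metastatic area: {100 * met_fraction:.1f}% of lung area")
```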
Acknowledgments

BMS777607, aCTLA-4 and aPD-1 were kindly provided by Bristol-Myers Squibb. Shared resources such as the High Throughput Genomics and Bioinformatics and Biostatistics Cores were supported by the HCI Cancer Center Support Grant (5P30CA042014-24; the content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute or the National Institutes of Health). We are grateful for use of the Flow Cytometry Facility (James Marvin) and the Fluorescence Microscopy Core Facility (Mike Bridge) at the University of Utah Health Sciences, both funded by the National Center for Research Resources (NCRR) of the National Institutes of Health under Award Number 1S10RR026802-01. Microscopy equipment was obtained with NCRR Shared Equipment Grant #1S10RR024761-01. We thank Andrew Stephen Baessler for his assistance with Western blots. We also thank members of the Welm Lab for their assistance in rapid and systematic tissue processing for large in vivo experiments.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
Disaster Policy, Participation, and Horizontal Conflict: Case Study Reconstruction Aid Funds of The Bantul Earthquake
This investigation aims to show that government policy made without a participatory approach causes horizontal conflict in society. The case study is financial aid in the reconstruction process for earthquake victims in Bantul Regency, 2006-2007. This research collects secondary data, such as government documents, journals, books, news, and other relevant sources. Based on the data analysis, we find that the absence of participation in policymaking means the policy cannot accommodate the social context of the target group. As a result, community leaders and community members faced problematic situations that created many problems during implementation. These problems ultimately led society into horizontal conflict. This finding may contribute to public policy and disaster management discourse, both theoretical and practical, and builds a path for future research.
INTRODUCTION
On May 27th, 2006, a major earthquake shook Bantul Regency, Yogyakarta Special Region. The earthquake had a magnitude of 5.9 on the Richter scale and had a devastating impact on local people (detiknews, 2006; Putri, 2021). Some sources recorded the earthquake's impact in detail. The disaster affected 26,569 people, of whom only about 32% suffered minor injuries. As seen in Table 1, 3,968 victims (about 15%) died and 13,989 (53%) were seriously injured. The impact on infrastructure was also severe. Table 1 shows the condition of inhabitants' houses: 209,494 houses were damaged, of which only 66,359 (31%) were slightly damaged. Another 71,763 (34%) houses were completely destroyed, while 71,372 (34%) were heavily damaged.
The situation in Bantul after the earthquake was dire: 17,957 people were either killed or seriously injured, and 143,135 houses were either completely destroyed or heavily damaged. This prompted the local and central governments to establish recovery policies, one of which was the reconstruction policy (Keputusan Presiden Republik Indonesia Tentang Tim Koordinasi Rehabilitasi Dan Rekonstruksi Wilayah Pasca Bencana Gempa Bumi Di Provinsi Daerah Istimewa Yogyakarta Dan Provinsi Jawa Tengah, 2006). The Government of Yogyakarta implemented this policy by issuing a governor's regulation (Peraturan Gubernur Daerah Istimewa Yogyakarta, 2006) to help local people rebuild and repair damaged houses. The government provided financial aid whose amount was based on damage categories: completely destroyed, heavily damaged, and slightly damaged.
The policy's results were contradictory. On one side, the reconstruction process was very fast; it was recognized as one of the fastest in the world (Nuswantoro, 2021), attracting many foreign countries to learn from it. On the other side, a social issue emerged: the distribution of financial support caused horizontal conflict in society. Several people were jealous of their neighbours because of the amounts of aid they had received, while others felt the policy was unfair. After the distribution of reconstruction aid, social harmony was damaged: the number of community activities decreased, and some people stopped talking to each other (Isnadi, 2011).
This research aims to explain the relationship between disaster policy and horizontal conflict using a participation perspective. Simply put, this research seeks to answer the question: "How did the public policy of financial support for earthquake victims in Bantul Regency cause horizontal conflict?" This question is important to answer for three reasons. First, the issue must be resolved to ensure that victims of a natural disaster do not suffer a second disaster: horizontal conflict. Second, horizontal conflict harms society, especially during a period when people must rebuild their lives from the beginning. Last, but not least, the government must ensure its policy solves the problem, rather than being part, or the cause, of the problem itself.
We use the 'participation' perspective in finding the answer, discussing and elaborating on the theory of participation in public policy for disaster. Participation has been an important concept in recent decades (Saguin & Cashore, 2022). Hossain described participation as a situation where people are involved in solving their own problems (Hossain, 2013). In this process, people identify the problems, find solutions, and implement strategies (Hossain, 2013; Paton & Johnston, 2001).
Participation has a pivotal function in disaster management. It encourages people to analyze their vulnerabilities, identify problems, develop solutions, and establish organizations for dealing with disaster (Chen et al., 2006; Pearce, 2003). Pandey and Okazaki stated that the involvement of local people in the disaster policy process is crucial because the people are the 'disaster front', those who first receive the impact of disaster (Pandey & Okazaki, n.d.).
Given the important roles of participation, disaster policy has been encouraged to transform, and the logic of participation has changed (Pearce, 2003). Picture 1 illustrates this transformation of disaster policy logic. The top-down approach has given way to a harmony of top-down and bottom-up approaches, embedding participation principles in the new logic. There is no 'single agency' dealing with disaster; instead, there should be a partnership between government and society. 'Planning for the community' has been transformed into planning with the community, in which government and community have an equal chance to design disaster policies. Finally, the logic of 'communicating to communities' has moved to 'communicating with communities'. In the new approach, government and community use their resources to design disaster policy through egalitarian principles. Disaster policy is not the government's exclusive domain; society has the right to be involved. Chen further emphasized that these actors not only create policy but also develop training and disaster scenario exercises.
Furthermore, some scholars have explained the correlation between participation, policy satisfaction, and horizontal conflict. When people have not been involved in the policy process, there are two main impacts. First, social values are harmed (Dorcey & McDaniels, 2001). Participation means developing norms of trust, reciprocity, tolerance, and inclusion, and activating networks; its absence means people have no chance to develop these social values. Second, people become frustrated (Rubin, 1991) because their ideas and notions are not accommodated by the policy. The ideas and interests of frustrated people may grow uncontrollably. Where there is no trust, reciprocity, tolerance, inclusion, or network, this uncontrolled situation may lead to horizontal conflict.
Based on the discussion above, we develop an assumption: a financial aid policy created without a participatory approach causes horizontal conflict. Without participation, the community does not develop social values around the policy. In the implementation process, various problems may arise that make people frustrated. In this situation, the feeling of dissatisfaction ends in horizontal conflict among community members.
METHOD
This investigation uses the case study method. We investigate the horizontal conflict that followed the distribution of financial support to victims of the 2006 earthquake in Bantul, paying careful attention to its context. Theory in this research is not meant to be proved; rather, it guides the researchers in finding the answer to the research question.
We used secondary data analysis. Government documents, journals, research reports, and digital news media were used, and we also collected data from other relevant sources, such as books and manuscripts. To ensure the validity of the data, we performed triangulation (Denzin & Lincoln, 2018): information from one source was confirmed against other sources.
In the data collection process, we mainly used the Universitas Gadjah Mada library's services, especially to access high-reputation international journals. In some cases, we also used open-access journals not subscribed to by Universitas Gadjah Mada. News reports from credible mass media and NGOs were also used. The mass media were Kompas, Tempo, detik, and KR (Kedaulatan Rakyat). The NGOs included Indonesia Corruption Watch (ICW), Mongabay, and others. In certain cases, the NGOs conducted independent research; in other cases, they republished media reports.
Moreover, the analysis process was conducted through coding, grouping, interpreting, and elaborating on the meanings and conclusions. Each datum was given a code based on its issue, and similar issues were grouped. We then interpreted each group to derive its meaning. Finally, we elaborated on the meaning of every group to obtain a conclusion that directly answers the research question.
We then connected and elaborated the conclusion with previous research to gain clearer and stronger insight. We found journal articles that investigated disaster policy, participation, and horizontal conflict in various cases around the world. Making comparisons led to valuable insights that helped us gain a deeper understanding of disaster public policy, participation, and horizontal conflict.
To help future researchers enrich the topics of disaster policy, participation, and horizontal conflict, we offer some interesting issues for future research. These may help those who are eager to reveal new crucial aspects of the relationship between disaster policy, participation, and conflict among society members.
RESULT AND DISCUSSION
Based on the analysis, we conclude that the absence of participation in policy formulation caused horizontal conflict in society. The financial support policy was developed through a top-down approach, without community participation. Accordingly, many problems in the implementation process left the recipients unsatisfied. This situation persisted until the distribution of financial support finished, and horizontal conflict emerged in most communities in Bantul Regency.
First and foremost, when the earthquake hit, Bantul was, as mentioned above, in a horrible situation. The local government, Bantul Regency's government, was paralyzed because most government agencies were also affected by the disaster, so the Yogyakarta Special Region government asked for the central government's intervention. Considering the situation, the central government established recovery policies, one of which was financial aid for the reconstruction process. This policy was made by the central government and used the national budget, while its implementation required local people's involvement.
In the implementation process, this policy followed several guidelines (Detiknews, 2006; Nuswantoro, 2021). First, local people had to identify and classify house damage into three categories, namely destroyed, heavily damaged, and slightly damaged. Second, each category received a different amount of financial aid: those whose houses were destroyed received Rp 15,000,000 (about $1,000), while heavy damage and slight damage received Rp 4,000,000 ($267) and Rp 1,000,000 ($67), respectively. Third, the recipients in every hamlet had to be organized into groups based on the three categories. Fourth, the community leaders submitted the recipient data (number of people, names, addresses, and categories) to the government. Fifth, the government verified the data. Finally, the government delivered the financial aid to each group.
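To make the allocation rule concrete, the sketch below encodes steps one to three as a small Python example: damage categories map to fixed aid amounts, and recipients are grouped per hamlet and category. Only the category amounts come from the policy described above; the household names and data are invented for illustration.

```python
# Illustrative encoding of the Bantul aid-distribution rule (steps 1-3).
AID_BY_CATEGORY = {
    "destroyed": 15_000_000,        # Rp, about $1,000
    "heavily_damaged": 4_000_000,   # Rp, about $267
    "slightly_damaged": 1_000_000,  # Rp, about $67
}

households = [  # invented example data
    {"name": "Family A", "hamlet": "Hamlet 1", "category": "destroyed"},
    {"name": "Family B", "hamlet": "Hamlet 1", "category": "slightly_damaged"},
    {"name": "Family C", "hamlet": "Hamlet 2", "category": "heavily_damaged"},
]

# Group recipients per hamlet and damage category, as in step 3.
groups = {}
for h in households:
    groups.setdefault((h["hamlet"], h["category"]), []).append(h["name"])

for (hamlet, category), members in groups.items():
    print(hamlet, category, members, f"Rp {AID_BY_CATEGORY[category]:,} each")
```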
When this mechanism ran, some issues emerged (Isnadi, 2011). The first concerned fairness in measuring the damage level. Many misperceptions and abuses of power influenced the measuring process: social power, family relationships, and illegal dealing were some of the many strategies used to inflate the recorded destruction level of houses. In one case, a recipient in the heavy-damage category destroyed his own house to gain higher aid, from Rp 4,000,000 (heavily damaged) to Rp 15,000,000 (destroyed). In another case, two neighbouring families came into conflict because a poor family with a bad house received the same amount of aid as a rich family with a stately home, namely Rp 15,000,000 each. The rich family felt the policy was unfair: with the financial support, the poor family could build a better house than before, while the rich family could only build a worse one.
Second, several community leaders decided the level of destruction independently. The local government had provided guidance for measuring house destruction, yet in some cases community leaders did not use it. Some took social considerations into account, while others tried to gain extra money from this 'project' (Kompas, 2010; Radio Star Jogja, 2013). Accordingly, in some cases, closeness to the community leaders fully determined the amount of financial aid received, and protests from community members emerged.
Third, there was a lack of transparency in managing the reconstruction aid. Disagreement arose between community members and community leaders over managing the aid, owing to a dilemma. On one hand, community leaders had to manage the process by spending their own time and money, even though they were also earthquake victims with many limitations. On the other hand, the government had not provided a special budget for community leaders to manage the distribution of financial aid. As a result, in some cases, community leaders took a certain amount of money from the aid, so that community members received less than the proper amount (Syaifullah, 2010). These problems provoked protests from community members.
Fourth, double counting occurred. The earthquake tragedy in Bantul Regency attracted international NGO aid, and some NGOs did not coordinate with the local government in distributing their financial aid. They came directly and communicated with the community leaders, so the decision of "who gets what" was the exclusive domain of those leaders. Therefore, in some cases, a close relationship with community leaders determined whether a family could gain aid from double sources (government and NGO). This problem triggered social jealousy and protest.
After the reconstruction fund distribution, many social issues arose. Many people in Bantul Regency lived in disharmony, indicated by declining mutual cooperation. Some people and community leaders were arrested for corruption committed during the distribution of financial support. Others acknowledged that they limited communication with some people in their neighbourhood. Simply put, some communities in Bantul were hit by a social disaster: they were victims of two disasters, natural and social (Isnadi, 2011).
What we found from the findings above is that the financial aid policy was made without participation. To respond quickly, the central government used a top-down approach in shaping the policy, leaving no chance for people to participate. This created special conditions for community leaders and community members. For community leaders, the situation was contradictory: on one hand, their position became strategic, since they could decide "who gets what"; on the other hand, they had no special budget to implement the policy, even though they were earthquake victims too. As a result, in many cases, community leaders abused power, either intentionally or unintentionally, and some of their decisions were not objective. Meanwhile, for community members, without participation there was no consensus and there were no shared social values for the implementation of the policy. The implementation process became complicated, which made people frustrated. Two families with different economic statuses received the same amount of money, which triggered social tension; and in some cases, the level of closeness to the community leaders determined whether a family could gain a higher, or even a double, source of aid.
This investigation finds that participation influences the quality of public policy outcomes, especially in distributing financial aid for earthquake victims in Bantul Regency. Without participation, community leaders and community members faced a dilemma. Community leaders had pivotal roles, yet no special budget supported them in implementing the policy. Community members lived in different social and economic circumstances and had different degrees of closeness to the community leaders, and some of them had the chance to gain more aid. Simply put, without participation, the policy could not accommodate the specific context of society.
CONCLUSION
To sum up, it can be stated that the absence of ideal participation in the financial aid policy caused horizontal conflict in society (Xu et al., 2019). Ma and Wen reached a similar conclusion: participation provides a chance for stakeholders to design priorities, so that conflict can be avoided (Ma & Wen, 2019).
Practically, participatory disaster policy is a key issue for government. When a government refuses to invite other stakeholders to shape policy, many problems arise in the implementation process that lead to horizontal conflicts. Conversely, participation gives the government many benefits, because the policy will accommodate the local context. This conclusion supports previous research, such as Bedessem et al., which states that participation encourages stakeholders to exchange their knowledge, influence choices, and create an inclusive decision-making process (Bedessem et al., 2022). Similar conclusions were also proposed by Tresiana et al. (Tresiana et al., 2022) and Cruz Ayala et al. (Cruz Ayala et al., 2022).
This conclusion opens a path for future research. There is a dilemma on the government's side: on the one hand, the government is pressed to make proper policy quickly; on the other hand, participation is time-consuming. Making policy both quickly and participatorily thus becomes an important challenge, and investigating cases where governments have made policy quickly through a participatory approach is an urgent task for the future.
Moving to another point, the utilization of information and communication technology (ICT) in designing disaster policy may be useful in the future, especially social media. As the internet has grown over recent decades, the use of social media has also risen in most areas of the world. In recent years, researchers have shown its benefits for increasing participation: Lin and Kant concluded that the penetration of social media affected the inclusiveness of participation, the number of participants, interaction among different levels of citizens' power, and participation effectiveness (Lin & Kant, 2021). Lin confirmed this conclusion, stating that social media can improve the inclusiveness of participation (Lin, 2022). How social media can be utilized to create ideal participation in disaster policy is an attractive topic for future research.
Protocol: analytical methods for visualizing the indolic precursor network leading to auxin biosynthesis
Background The plant hormone auxin plays a central role in regulation of plant growth and response to environmental stimuli. Multiple pathways have been proposed for biosynthesis of indole-3-acetic acid (IAA), the primary auxin in a number of plant species. However, utilization of these different pathways under various environmental conditions and developmental time points remains largely unknown. Results Monitoring incorporation of stable isotopes from labeled precursors into proposed intermediates provides a method to trace pathway utilization and characterize new biosynthetic routes to auxin. These techniques can be aided by addition of chemical inhibitors to target specific steps or entire pathways of auxin synthesis. Conclusions Here we describe techniques for pathway analysis in Arabidopsis thaliana seedlings using multiple stable isotope-labeled precursors and chemical inhibitors coupled with highly sensitive liquid chromatography-mass spectrometry (LC–MS) methods. These methods should prove to be useful to researchers studying routes of IAA biosynthesis in vivo in a variety of plant tissues. Supplementary Information The online version contains supplementary material available at 10.1186/s13007-021-00763-0.
Background
Plant life is characterized by strictly regulated developmental events that achieve optimum growth and reproduction. This is accomplished through an extremely complex hormonal signaling network in which the plant growth hormone auxin plays a central and defining role. To this end, auxin helps regulate almost all aspects of plant growth and development including embryogenesis, tissue architecture and tropic responses [1]. Maintenance of auxin homeostasis involves multiple pathways for the biosynthesis of indole-3-acetic acid (IAA), the principal auxin in plants, and several regulatory pathways as well as subsequent catabolic events. These additional input/output processes include conjugation and hydrolysis of sugar and cyclitol conjugates, amino acid, peptide and protein conjugates, formation and β-oxidation of indole-3-butyric acid as well as deactivation by ring oxidation of IAA and its amino acid conjugates [2,3]. Nevertheless, how much IAA is made and accumulates remains the critical regulatory event in many aspects of plant development [4].
Although several biosynthetic pathways for the bioactive auxin IAA have been proposed, many of them have not been well defined and flux information is largely lacking (Fig. 1). The predominant biosynthetic route to IAA in Arabidopsis thaliana is widely believed to be through the YUCCA pathway, in which the amino acid tryptophan (Trp) is converted to indole-3-pyruvic acid (IPyA), which is then converted to IAA by YUCCA flavin monooxygenase enzymes [5]. Species-specific evidence for the synthesis of IAA from Trp through indole-3-acetaldoxime (IAOx), which is converted to indole-3-acetamide (IAM) and sometimes an indole-3-acetonitrile (IAN) intermediate, has been shown in Arabidopsis [6,7]. Other potential intermediates of IAA synthesis downstream of Trp have been proposed, such as indole-3-acetaldehyde (IAAld) [8–10] and tryptamine (TAM) [11], though their places within the web of auxin biosynthesis have not been well detailed. A Trp-independent route has also been proposed based on tryptophan synthase mutants, metabolic flux analysis and in vitro analyses, in which indole or another upstream compound serves as the IAA precursor [1, 12–14]; however, unbound chemical intermediates, if they are involved in this pathway, have not yet been identified [15]. The purpose of this protocol is to describe improved techniques for characterization of the auxin metabolic network utilizing recently discovered chemical inhibitors and technical advances in mass spectrometry (Fig. 2). These tools will allow researchers to characterize auxin biosynthesis during specific developmental events or environmental responses.
Metabolic inhibitor approaches are complementary to genetic and biochemical studies and are particularly useful in studying IAA biosynthesis. While auxin biosynthesis mutants may have severe developmental defects that alter growth and confound comparisons to wild type plants [16], biosynthetic reactions can be turned off at specific developmental time points with chemical inhibitors. Additionally, genetic redundancy can be overcome by inhibiting an entire enzyme family with a single chemical treatment [17]. Such is the case with inhibitors targeting both steps in the YUCCA pathway. The YUCCA enzymes are encoded by multiple genes in Arabidopsis thaliana and mutations in small sets of these genes encoding the flavin monooxygenase proteins result in significant morphological defects [18]. A number of chemical inhibitors have been developed to inhibit the YUCCA pathway of auxin biosynthesis (Table 1), providing valuable tools to study the function of this pathway in different plant tissues and environmental conditions. Similarly, TAA1/TAR/ISS1/VAS1 (Tryptophan Aminotransferase of Arabidopsis 1/Tryptophan Aminotransferase Related/Indole Severe Sensitive 1 and reversal of sav3 phenotype 1) form a set of enzymes with overlapping biochemical functions that catalyze the penultimate step in the IPyA pathway [19]. Alternative aromatic amino acid substrates, such as L-kynurenine, can act as competitive inhibitors of tryptophan aminotransferase, and a series of potent inhibitors of pyridoxal phosphate-dependent enzymes has been developed with enhanced specificity toward TAA1 and related enzymes ("pyruvamines"; see Table 1) [20].

Fig. 1 Major pathways for IAA biosynthesis. Solid arrows refer to pathways with enzymes identified in at least one species, and dashed arrows to undefined ones. AMI1, indole-3-acetamide hydrolase-1; ANT, anthranilate; CHA, chorismic acid; IAAld, indole-3-acetaldehyde; CYP79B2/3, cytochrome P450 (79B2/3); IAM, indole-3-acetamide; IAN, indole-3-acetonitrile; IAOx, indole-3-acetaldoxime; IGP, indole-3-glycerol phosphate; INS, indole synthase; IPyA, indole-3-pyruvic acid; ISS1, Indole Severe Sensitive 1; NIT, nitrilase; Ser, serine; TAA1, tryptophan aminotransferase of Arabidopsis 1; TAR, tryptophan aminotransferase-related; TAM, tryptamine; Trp, tryptophan; TSA, tryptophan synthase α; TSB, tryptophan synthase β; YUCCA, Arabidopsis flavin monooxygenase.

The issues of redundancy with tryptophan synthase (TS) are a bit different. Arabidopsis and maize have two copies of the genes that encode each of the two proteins that form the αββα heterodimeric complex that catalyzes the formation of tryptophan from indole-3-glycerol phosphate and serine in the plastids. In addition, maize has genes BX1 and IGL for TSα-like cytosolic enzymes that serve as sources of free indole [21]. Arabidopsis also has a cytosolic TSα-like enzyme encoded by the indole synthase (INS) gene [22]. TS is, however, a well-researched and highly conserved bi-enzyme complex [23], such that inhibitors are available (Table 1) that specifically target TSα, TSβ, as well as the 25-Å-long tunnel to the β-subunit through which indole diffuses in order to participate in the TSβ pyridoxal 5′-phosphate-mediated β-addition reaction with serine. Determining the possibility of a tryptophan-independent pathway is largely dependent on having Trp auxotroph mutants, which are difficult to obtain due to redundancy of Trp synthase genes and the fact that mutations in both copies of TSβ are seedling lethal [12,13,24].
The protocols described here largely overcome these issues by employing chemical inhibitors, and can complement genetic studies. The tryptophan synthase inhibitors listed in Table 1 include:

• Indoleacrylic acid (trans-indole-3-acrylic acid): Trp synthase β and α; allosteric inhibitor [56,57]
• (1-Fluorovinyl)glycine (α-(1′-fluoro)vinyl glycine): Trp synthase β; PLP-enzyme mechanism-based inhibitor [58]
• Arylsulfide phosphonates: Trp synthase inter-subunit interface; allosteric inhibitor [62]
• Aryl sulfonamides ([F9]; N-(4′-trifluoromethoxybenzenesulfonyl)-2-aminoethyl phosphate): Trp synthase β; α-site allosteric ligand [63]
• Benzamide (N-(4-carbamoylbenzyl)-5-(3-chlorophenyl)-1,2-oxazole-3-carboxamide): Trp synthase α; α-site ligand [64]

Mass spectrometry (MS) has historically been and continues to be an important technique in deciphering routes of auxin biosynthesis, enabling accurate quantitation of IAA and its precursors, identification of intermediates, and tracking of isotopic labels through distinct pathways. Quantitative methods for IAA and precursor analysis by MS have been invaluable tools in elucidating auxin biosynthesis pathways and have continuously evolved over time with advances in analytical sensitivity and resolution [4, 25–29]. Stable isotope tracing experiments also lend insight into auxin biosynthesis when plant tissue is supplied with one or more labeled precursors, such as indole and/or anthranilate [30–32], and label incorporation into suspected downstream intermediates is monitored to determine whether synthesis from the labeled precursor has occurred. This approach can also provide information regarding direction of flow and flux through different steps [6]. Additionally, labeled precursors that are unique to one pathway in particular can be applied to measure contributions of a specific pathway to the IAA pool [5,33].
Results and discussion
In this paper, we describe methods utilizing metabolic inhibitors coupled with a modified approach of isotope dilution/tracing and using liquid chromatography-high resolution-mass spectrometry (LC-HR-MS) for qualitative and quantitative analysis of a comprehensive set of IAA precursors and IAA itself to characterize auxin biosynthesis in Arabidopsis (see Additional file 1). A distinct advantage of this method is its ability to resolve potential precursor compounds by chromatographic retention, absolute mass and by elemental composition, enabling complex mixtures of different stable isotopes (for example, multiple labeled compounds with 13C and 15N can be resolved) to be used in the experimental procedures (see Additional file 2). Readers may also consult a complementary paper that was published while this manuscript was in preparation [34]. Growing seedlings on fully 15N-labeled media as described here enables accurate quantitation of biosynthetic intermediates by reverse isotope dilution, using unlabeled internal standards, which are typically more readily available than isotopically labeled standards [35]. The addition of one or more 15N atoms, at a mass addition of 0.9970 each, can be resolved from the more abundant natural occurrence of 13C, which is 1.0034 heavier than 12C; this improves the utility of this approach when using high resolution analysis. Seedlings are first germinated on nylon mesh and are easily transferred onto media containing chemical treatments at the desired developmental time point.
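As a rough illustration of this resolution argument, the following R sketch computes the mass gap between a single 13C and a single 15N addition and the approximate resolving power needed to separate the two isotopologues; the isotope masses are standard values, and the [M+H]+ m/z of IAA is included only for orientation and is not a quantity taken from the protocol text.

```r
# Mass additions relative to the light isotopes (standard atomic masses)
d13C <- 13.0033548 - 12.0000000   # +1.0034 Da per 13C atom
d15N <- 15.0001089 - 14.0030740   # +0.9970 Da per 15N atom

mz_iaa <- 176.0706                # [M+H]+ of IAA (C10H10NO2+)
delta  <- d13C - d15N             # ~0.0063 Da between [13C1] and [15N1] forms

# approximate resolving power (m / delta-m) needed at this m/z
(mz_iaa + d13C) / delta           # ~28,000
```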
Next, stable isotope-labeled precursor compounds are fed to the plant. Labeled serine is used as a tracer for Trp-dependent biosynthesis specifically [33], while labeled indole and anthranilate can feed into both Trp-dependent and Trp-independent pathways [19,31,32] (Fig. 1). The techniques described here offer several advantages over previously described methods in their ease of preparation, high level of sensitivity, capacity for monitoring many compounds at once (see Additional file 2), and the ability of high resolution analysis to distinguish between different 'heavy' atoms, as might be required with [13C1]IAA and [15N1]IAA labeling products. As shown in Additional file 1, the use of multiple labels makes it easy to see that the addition of the tryptophan monooxygenase inhibitor YDF increases the incorporation of labeled indole into IAA but decreases labeling from labeled anthranilate and, to a lesser degree, from labeled tryptophan. Furthermore, this IAA labeling pattern for labeled indole and anthranilate is not reflected in any of the proposed intermediates following YDF treatment.
We also describe a technique for identifying novel intermediates based on the characteristic quinolinium ion produced from MS fragmentation of 3-substituted indolic compounds. This method involves using a series of injections of the same sample with increasingly narrow mass ranges, similar to the methods utilized by Yu et al. [36] and Tang et al. [37] where they targeted and identified novel indolic compounds. By monitoring exact masses of [13C8,15N1]- and [15N1]quinolinium ions after treatment with [13C8,15N1]- and [15N1]indole, this method can identify unknown compounds synthesized downstream from indole. A similar approach would likely be applicable in investigations of other classes of compounds that form characteristic signature ions. High resolution accurate mass analysis significantly reduces factors such as false negative molecular ions, low abundance ions, multiple isomers, and matrix effects, which otherwise would make it difficult to confirm possible compound identities.
Growing, labeling, and collecting plant material
Wild-type Columbia-0 ecotype Arabidopsis thaliana seeds or specific metabolic mutant lines need to be surface sterilized with sodium hypochlorite and then imbibed for 5-10 days at 4 °C to promote uniform germination. Typically, seeds would be sown in a single row onto 20 μm nylon mesh covering the agar growth medium.
IAA extraction
Homogenized samples are incubated on ice for 50 min to allow isotopic standard equilibration with the endogenous IAA. They are then diluted tenfold with water such that ion exchange will be effective, centrifuged to remove solid materials, and loaded onto two consecutive SPE micro spin column (TopTips) steps, first ion exchange on an amino phase and then on an epoxide support.
• Bondesil-NH2 resin (Agilent, 12213020) suspended in water, 1:4 w:v

Homogenized samples are incubated on ice for 50 minutes to allow isotopic standard equilibration with the endogenous compounds, diluted 10-fold with water to allow proper interaction with the solid phase, centrifuged to remove solids, and loaded onto a SPE micro spin column (TopTips) containing hydrophilic-lipophilic balanced (HLB) resin conditioned with acetonitrile followed by 20% acetonitrile in water. After loading, the spin columns are washed with 5% acetonitrile and compounds are eluted with 80% acetonitrile.
Indole extraction
Indole is a very lipophilic and somewhat volatile compound that cannot be purified using the techniques used for the other compounds. Thus, its purification involves a simple solvent partitioning. It was important to select an apolar solvent with a boiling point below the melting point of indole. We found pentane to be well-suited as its boiling point is 36.0 °C, well below the indole melting point of 52.5 °C.
LC-MS analysis
UPLC utilizes a column with an end-capped octadecylsilane fully porous 1.8 µm silica resin with high carbon loading (20%) in order to obtain highest sensitivity for indolic compounds (see Additional file 2).
Growing seedlings with inhibitor and stable isotope precursor treatments
Seedlings are grown in vitro on mesh squares, allowing them to be easily transferred to chemical inhibitor treatments at the desired timepoints. A liquid solution containing stable isotope-labeled precursors is then supplied to seedlings, and synthesis of isotopically labeled IAA and intermediates can be identified and distinguished by LC-HR-MS.
1. In a laminar flow hood, moisten sterile nylon mesh squares with sterile water and use forceps to place squares flat on germination media (see Note 7 and Table 2) in square Petri dishes.
2. Clean Arabidopsis seeds by shaking in 20% bleach solution for 5 min and rinsing 4 times with sterile water.
3. Sow seeds approximately 0.5 cm apart in a single row on mesh.
4. Store plates at 4 °C in the dark for 3-7 days to stratify seeds. Remove plates from cold and place vertically in growth conditions.
5. Transfer seedlings onto inhibitor media (Table 1) to begin auxin biosynthesis inhibition treatment (see Note 8). In a laminar flow hood, use forceps to gently lift mesh with seedlings from germination plates and lay flat onto plates containing inhibitor media. Cover plates and place vertically under growth conditions.
6. Begin isotopic labeling treatments by flooding plates with 3 mL of labeling solution (
Proposed IAA biosynthesis pathway intermediates: Anthranilate, Ser, IPyA, IAAld, IAOx, IAN, IAM
Samples are prepared for analysis of biosynthesis intermediates by SPE using an HLB resin. SPE is an effective sample preparation technique for these compounds because it provides a high level of recovery and is relatively easy to use with large sample sets. IAA can also be extracted using the following method, but with some loss of sensitivity compared to methods described in the previous section.
Unknown indolic compounds (double indole labeling samples)
An unbiased extraction method is used for discovery of unknown compounds synthesized from indole.
27. Transfer supernatant to a clean tube and centrifuge again at 25,000 g for 10 min. at 4 °C to remove all debris.
LC-MS analysis
Samples are analyzed using LC-HRAM-MS to chromatographically separate components of chemical matrix and obtain high resolution m/z data. Specific LC-MS methods are tailored for different sample types and analysis objectives (see method details in "Materials" section).
28. Carefully transfer each sample to a 50 μL glass insert so that no air pockets remain at the bottom of the insert. Assemble insert into autosampler vial with cap.
29. Inject 5-10 μL of sample for LC-MS analysis using methods described in the LC-MS analysis subsection of the Materials section.
Data analysis
IAA analysis

Extracted ion chromatograms (EICs) of labeled and unlabeled quinolinium ions generated by fragmentation of the labeled internal standard and unlabeled endogenous IAA are viewed (see Additional file 2). Narrow mass ranges are used to filter out background noise.
30. Under the "Ranges" tab in "Chromatogram Ranges" in Xcalibur, set the chromatogram viewing options to display two mass ranges: 130.0641-130.0661 (corresponding to the unlabeled quinolinium ion) and 136.0843-136.0863 ([13C6]quinolinium produced from the [13C6]IAA internal standard). Under the "Display" tab, check "Peak Area." Use the "peak selection" tool to select and calculate the area of the peaks corresponding to unlabeled IAA and the internal standard. Endogenous IAA levels can be calculated using isotope dilution [25,39].
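For readers unfamiliar with the arithmetic, a minimal isotope-dilution sketch in R follows; the peak areas, spike amount, and tissue mass are hypothetical values standing in for numbers read from the EICs above.

```r
area_endogenous <- 8.2e6   # EIC peak area, unlabeled quinolinium (m/z ~130.065)
area_standard   <- 1.1e7   # EIC peak area, [13C6] quinolinium (m/z ~136.085)
std_added_ng    <- 10      # ng of [13C6]IAA internal standard spiked in
tissue_mg       <- 25      # mg fresh weight extracted

# endogenous amount scales the known spike by the area ratio
iaa_ng <- std_added_ng * area_endogenous / area_standard
iaa_ng / tissue_mg         # ng IAA per mg fresh weight
```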
Targeted IAA precursor analysis

Peak areas from EICs of multiple compounds are determined using a script. Mass ranges surrounding the exact masses of ions produced from the compounds of interest, as well as their labeled forms synthesized from the supplied labeled precursors, are kept within a narrow window to exclude background noise.
31. Raw data files are converted to mzXML format using the msconvert tool from the ProteoWizard software [40] prior to input into R. Quantitative data for each indolic compound are extracted using the Metabolite-Turnover script developed in the Hegeman lab (https://github.com/HegemanLab/Metabolite-Turnover, [41]). In this script, the ProteinTurnover [42] and XCMS [43] packages are employed to extract EICs for each isotopomer of IAA and its intermediates. This quantification approach using linear regression [44] is preferred over that using peak area [39] when the MS data have high background noise due to low analyte abundance.
32. Exact masses for isotopomers of interest are calculated using the University of Wisconsin-Madison Biological Magnetic Resonance Data Bank exact mass calculator (http://www.bmrb.wisc.edu/metabolomics/mol_mass.php). Isotopomers of proposed IAA biosynthetic intermediates derived from several isotopic labeling strategies are listed in Table 3.
(See Note 12)
33. In the data output CSV files, the slope of each linear regression line represents the ratio of the respective isotopic trace to its monoisotopomer. This ratio is used to calculate the relative abundance of labeled compounds, allowing us to track label incorporation from upstream precursors into IAA intermediates through multiple pathways.
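The slope-based ratio in step 33 can be reproduced in a few lines of R; the intensity vectors below are hypothetical EIC traces across a chromatographic peak, not data from the study.

```r
mono    <- c(1200, 5400, 18100, 30100, 22500, 9800, 2100)  # monoisotopic EIC
labeled <- c( 190,  830,  2700,  4600,  3400, 1500,  330)  # labeled-trace EIC

fit   <- lm(labeled ~ mono + 0)   # regression through the origin
ratio <- coef(fit)[["mono"]]      # slope = labeled:monoisotopic abundance ratio
ratio
```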
Double indole labeling data analysis

Supplying plants with two differentially labeled forms of indole provides a way to identify indole-derived compounds, as downstream intermediates will incorporate both labels. These samples are analyzed in a series of LC-MS/MS injections, initially scanning broadly for formation of labeled quinolinium ions, and then narrowing in on precise ions in subsequent injections until a molecular ion can be identified and fragmented to provide further structural information.

Note 12: We recommend using a mass range window of the calculated m/z value ± 0.003.
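As a worked example of building the ± 0.003 windows from Note 12, the R sketch below generates target quinolinium m/z ranges for the two indole labels; the unlabeled quinolinium mass (C9H8N+, m/z 130.0651) and the isotope shifts are standard values rather than quantities taken from the protocol.

```r
d13C <- 1.0033548
d15N <- 0.9970349
mz_quinolinium <- 130.0651   # unlabeled C9H8N+ fragment

targets <- c("unlabeled"   = mz_quinolinium,
             "[15N1]"      = mz_quinolinium + d15N,
             "[13C8,15N1]" = mz_quinolinium + 8 * d13C + d15N)

data.frame(ion     = names(targets),
           mz_low  = round(targets - 0.003, 4),
           mz_high = round(targets + 0.003, 4),
           row.names = NULL)
```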
Statistically-estimated tree biomass, stem density, and basal area for the upper Midwestern United States at the time of Euro-American settlement
We present gridded 8 km-resolution data products of the estimated biomass, basal area, and stem density of tree taxa at the time of Euro-American settlement of the midwestern United States for the states of Minnesota, Wisconsin, Michigan, Illinois, and Indiana. The data come from settlement-era Public Land Survey (PLS) data (ca. 0.8-km resolution) of trees recorded by land surveyors. The surveyor notes have been transcribed, cleaned, and processed to estimate biomass, basal area, and stem density at individual points on the landscape. The point-level data are then aggregated within grid cells and statistically smoothed using a statistical model that accounts for zero-inflated continuous data with smoothing based on generalized additive modeling techniques and approximate Bayesian uncertainty estimates. We expect this data product to be useful for understanding the state of vegetation in the midwestern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data products (including both raw and statistically smoothed estimates at the 8-km scale) are being made available at the LTER network data portal as version 1.0.
Introduction
Terrestrial vegetation in midwestern North America changed drastically at the time of Euro-American settlement (McAndrews 1988; Rhemtulla et al. 2009). Before settlement, the midwestern United States was the location of a major ecological transition between the grasslands of the Great Plains and the forests of eastern and northern North America (Transeau 1935, Grimm 1984, Danz et al. 2013). These grasslands have now mostly been replaced by agriculture or pastoral land use, except in areas of prairie conservation or restoration. Forested areas were also heavily affected by clearance for agriculture and logging during and after settlement (Rhemtulla et al. 2009; Schulte et al. 2007). Historical datasets from this time period, collected during the time of land surveys and allotment, provide critical context for understanding terrestrial ecology, the carbon cycle, and vegetation-atmosphere feedbacks (Caspersen et al. 2000, Rhemtulla et al. 2009, Lawrence et al. 2016). They allow researchers to define 'baseline' conditions for purposes of conservation planning, to understand ecosystem processes at decadal and centennial scales, to track how vegetation changes with changing climate, and to understand changes in ecosystems after widespread land use change.
Here we present methods for and statistical estimates of biomass, basal area, and stem density on an 8 km grid across a set of states in the midwestern United States.
Euro-American settlement and subsequent land use change occurred over many decades across North America. During that time, land surveys were done to demarcate land for land tenure and use, usually involving recording and marking trees adjacent to survey corners. These data provide vegetation information that can be mapped and used quantitatively to represent forest composition and, sometimes, structure at the period of settlement. In the northeastern United States, early surveys provide only data at the township level (Cogbill et al. 2002, Thompson et al. 2013), which cannot be used to estimate biomass, basal area, or stem density, but which we have used to estimate composition. Later surveys after the establishment of the U.S. Public Land Survey System (PLS) by the General Land Office (GLO) provide point-level (i.e., corner-level) data along a regular grid, at every one-half mile (800 m) spacing, for Ohio and westward during the period 1785 to 1907 (Bourdo 1956, Pattison 1957, Schulte and Mladenoff 2001). At each point 2-4 trees were identified, and the common name, diameter at breast height (dbh), and distance and bearing from the point to the trees were recorded. Using statistical techniques (Cogbill et al. 2018), these data allow us to estimate biomass, basal area, and stem density at each point. These point-level data are quite noisy, but can be aggregated to coarser spatial resolution to more robustly estimate spatial patterns of vegetation. At the 8 km grid resolution, the estimates are still noisy, and there are some spatial gaps in the available data, so in this work we employ a spatial statistical model to smooth over the noise and impute in missing grid cells. The result is a statistical data product that provides statistical estimates of biomass and density with quantitative estimates of uncertainty.
In contrast to Paciorek et al. (2016) we estimate biomass and density rather than composition and we use an extended dataset with additional data cleaning steps that were applied more consistently across the region. Relative to Goring et al. (2016) we use a spatial statistical model to smooth over the noisy grid-level estimates; we extend the domain to include southern Michigan, Illinois, and Indiana; we use updated allometric scaling factors from Chojnacky et al. (2014); and we apply additional and more consistent data cleaning steps across the domain.
In Section Methods, we describe the procedures used to obtain and clean the PLS survey data at the survey points, followed by processing to homogenize the data across the states of interest, and finally the statistical methodology used to estimate biomass, basal area, and stem density, first at the individual survey points, and then on an 8 km by 8 km grid, stratified by taxon and in total. In Section Results we present the results of cross-validation work carried out to determine the optimal statistical smoothing approach, and present basic summaries of biomass and stem density. In Section Data Product we describe the various data products we have produced and archived. Finally, in Section Discussion we discuss the uncertainties estimated by the statistical model and the limitations of the model.
PLS data collection and cleaning
The PLS was developed to enable the division and sale of federal lands from Ohio westward. The survey created a 1 mile² (2.59 km²) grid (sections) on the landscape. At each half-mile (quarter-section) and mile (section) survey point, a post was set or a tree was blazed as the official location marker. PLS surveyors then recorded tree stem diameters, measured distances and bearings of the two to four trees adjacent to the survey point, and identified tree taxa using common (and often regionally idiosyncratic) names. In the Midwest, PLS data thus represent measurements by hundreds of surveyors from 1786 until 1907, with changing sets of instructions over time (Stewart 1935, White 1983). Survey procedures varied widely in Ohio and distance, diameter, and bearing information are not systematically available, so Ohio is not included in this work. The work presented here builds upon prior digitization and classification of PLS data for Wisconsin, Minnesota, and Michigan, with extensive additional cleaning and correction of the Michigan data and extensive additional digitization of Illinois and Indiana by the authors. Digitization of PLS data in Minnesota, Wisconsin and Michigan's Upper Peninsula and northern Lower Peninsula is essentially complete, with PLS data for nearly all 8 km grid cells. Data for the southern portion of Michigan's Lower Peninsula include the section points, but the quarter-section points have not been digitized yet except for the Detroit region, which is complete. Data in Illinois and Indiana represent a sample of the full set of grid cells, with survey record transcription ongoing at Notre Dame (see Fig. 1 for data availability).
As discussed in Paciorek et al. (2016), the surveys in our domain occurred over a period of more than 100 years (starting in 1799 in Indiana and ending in 1907 in Minnesota) as settlers from the United States and Europe settled what is now the midwestern United States. Our estimates are for the period of settlement represented by the survey data and therefore are time-transgressive; they do not represent any single point in time across the domain, but rather the state of the landscape at the time just prior to widespread Euro-American settlement and land use (Whitney 1996; Cogbill et al. 2002). These datasets do include the effects of Native American land use and early Euro-American settlement activities (e.g., Black et al. 2006), but it is likely that the imprint of this earlier land use is locally concentrated rather than spatially extensive (Munoz et al. 2014).
We used expert judgment (co-author CVC) and prior work to determine the current common names of surveyor-recorded vernacular terms and abbreviations of settlement-era common names. We then aggregated into taxonomic groups that are primarily at the genus level but include some monospecific genera. We use the following 20 taxa plus an "other hardwood" category (see Table 1): Ash (Fraxinus spp.), …, Walnut (Juglans spp.). Note that because of several cases of ambiguity in the common tree names used by surveyors (black gum/sweet gum, ironwood, poplar/tulip poplar, cedar/juniper), a group can represent trees from different genera or even families.
In Appendix A, we describe the specific data cleaning steps we applied to each sub-dataset as well as a variety of steps to standardize the dataset across states and minimize the potential effects of surveyor bias upon estimates of vegetation. Note that the division between northern and southern Michigan is caused by obtaining the data from different sources and can be seen in the black to grey transition in Fig. 1. In the remainder of this section, we briefly describe some of these key cleaning and standardization steps.
Following Goring et al. (2016), we excluded line and meander trees (i.e., trees encountered along survey lines as compared to trees located at section or quarter-section corners). Surveyor selection biases for tree size and species appear to have been more strongly expressed for line trees. Meander trees were used to avoid obstacles, such as water-bodies, and so have non-random habitat preferences (Liu et al. 2011).
We attempted to exclude points in water including points with information indicating wetlands without trees present; the specifics of how we did this varied by state as described in detail in Appendix A. Relative to Goring et al. (2016) we excluded some additional points in Wisconsin and Minnesota based on information indicating the presence of standing water. Note that points with a single tree might be in areas with low tree density such that the second tree was too far for the surveyors to mark it. However, in some cases a single tree may be marked because three of the quadrants were inaccessible, generally due to wet conditions. We generally excluded one-tree points based on information suggesting this was a result of water rather than because of low tree density, but in many cases it was hard to distinguish between these cases.
Relative to Goring et al. (2016) we carried out extensive additional quality control of the northern Michigan data, based on which we detected and subsequently fixed anomalies in the data. We also excluded a number of additional points with no tree data where the data were judged to be unreliable rather than indicative of low density.
As part of the overall PalEON project, we have been digitizing the PLS data from southern Michigan, Illinois, and Indiana. Goring et al. (2016) did not analyze southern Michigan, Illinois, or Indiana, while Paciorek et al. (2016) did analyze these areas, but used an earlier version of the PLS dataset only to estimate composition. The southern Michigan data were digitized from Mylar maps and found to have a variety of errors. For this work, based on the original field notes for southern Michigan, we fixed errors in some points and excluded some points with data judged to be unreliable. For Indiana and Illinois, we now have additional digitized PLS data that was unavailable in Paciorek et al. (2016).
Estimation of point-level density
We estimated stem density at each point with a Morisita plotless density estimator that uses the measured distances from each survey point to the nearest trees at the point location (Cogbill et al. 2018). The standardized approach for the Morisita method is well-validated. However, over time the survey design used by PLS surveyors changed as protocols were updated, which affects how we estimate density from the information at each point. Appendix B summarizes the changes in the information recorded and how we developed and applied correction factors to the Morisita estimator (Cogbill et al. 2018) to account for these changes when estimating stem density at a point.
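For intuition, here is a minimal R sketch of one common form of Morisita's plotless estimator for two-tree corners (nearest tree in each of k = 2 sectors); the distances are hypothetical, and the study's production estimator additionally applies the survey-design correction factors described in Appendix B.

```r
# per-point Morisita estimate: (k - 1) / pi * k / sum(r_i^2), with k = 2
morisita_density <- function(r1, r2, k = 2) {
  (k - 1) / pi * k / (r1^2 + r2^2)   # trees per square meter
}

r1 <- c(4.2, 11.8, 7.5)   # distance (m) to first tree at three corners
r2 <- c(6.9, 15.3, 9.1)   # distance (m) to second tree

stems_per_ha <- 1e4 * mean(morisita_density(r1, r2))
stems_per_ha
```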
We limited estimates of density to trees above 8 inches dbh because that is approximately the size below which surveyors tended to avoid sampling small trees. However, in many cases smaller trees were reported by surveyors. We included all trees that were surveyed in our initial density estimate (including those with missing diameters), giving a raw stem density estimate whose meaning (in terms of the implicit diameter threshold) varies spatially based on how surveyors selected for tree size in a given area. We then used a correction factor (see Cogbill et al., in progress) to scale the raw density estimates based on spatially varying estimates of the diameter distributions of PLS trees. This gives us a corrected stem density estimate for trees greater than 8 inches dbh.
Distances from the tree to the survey point were taken to be the distance from the survey notes plus one-half the diameter of the tree.
We used bearing angle information to screen and correct for points where surveyors may not have followed the PLS instructions. Specifically, we searched for and found 9602 four-tree points where the two nearest trees fell in the same quadrant. We excluded these points as this indicates the surveyors did not follow the survey instructions, and there is no rigorous way to use the Morisita estimator to estimate density for these points. In cases where information on the quadrant was missing for one or both trees we assumed they fell in different quadrants and did calculate stem density.
We removed 3629 points with one tree at a distance of zero or missing distance as it is unclear what density to estimate for such points. Many of these corners have a single corner tree with a zero distance (presumably a "corner tree" used as the corner post), which our density estimator would assign a density of infinity. We removed 1821 points with two trees and either distance missing. We also removed 131 points with two trees at distance zero. Note that points with one of two trees at distance zero do allow estimation of density using the Morisita estimator and were included.
We estimated density for one-tree points as 0.3146 stems per hectare. This density estimate is equal to one tree in a circle of radius equal to 500 links (approximately 100 m), which approximates how far a surveyor might have gone to find a second tree. Surveyors were instructed to find two trees, so the presence of only one tree generally indicates low density. While 500 links is arbitrary, our results should be insensitive to the exact value of the near-zero density that we use in such cases.
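The 0.3146 figure follows directly from the stated convention, as this quick R check shows (1 link = 0.201168 m).

```r
radius_m <- 500 * 0.201168          # 500 links ~ 100.6 m
area_ha  <- pi * radius_m^2 / 1e4   # circle area ~ 3.18 ha
1 / area_ha                         # ~ 0.3146 stems per hectare
```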
We truncated estimated densities at 10,000 stems per hectare (one tree per square meter) to reduce the influence of outlying high density values, truncating 139 points when estimating stem density itself and 246 points when estimating stem density for the biomass (and basal area) estimation (i.e., omitting the scaling to trees greater than 8 inches dbh as discussed below).
After all removals we estimated stem density at 66,648 Illinois points, 67,072 Indiana points, 113,801 Michigan points, 226,047 Minnesota points, and 159,058 Wisconsin points (Fig. 1).
Estimation of individual tree biomass
We use the aboveground biomass relationships (AGB, component 2: dry, live stump, stem, branches and foliage) provided in Chojnacky et al. (2014) to estimate aboveground biomass. The assignment of allometric coefficients (for simple linear regressions of log biomass (kg) on log dbh (cm)) to taxa is provided in our Github repository (https://github.com/PalEON-Project/PLS_products) and in Table 1. Note that some of the 21 taxa use the same allometric equations. Our original goal was to make use of the full set of allometric information in Jenkins et al. (2003) and Chojnacky et al. (2014) to incorporate uncertainty in scaling dbh to tree biomass, using the Bayesian statistical methods provided in the PEcAn software (Dietze et al. 2013) allometry module. However, even at the taxonomic aggregation inherent in our 21 taxa, there are often few allometries available for a given one of the taxonomic groupings and in many cases the allometries come from locations outside of our midwestern US spatial domain. Furthermore, although there are more allometries for stem biomass (component 6; note that this excludes branches) than for aboveground (component 2) or total biomass (component 1), most research focuses on aboveground biomass rather than stem biomass. As a result we felt that we could not robustly estimate the aboveground biomass allometries with uncertainty and have omitted this.
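The allometric form itself is simple, as this R sketch shows; the coefficient values below are placeholders for illustration only, and the actual taxon-specific assignments are those in Table 1 and the project Github repository.

```r
# Chojnacky-style allometry: ln(AGB, kg) = b0 + b1 * ln(dbh, cm)
agb_kg <- function(dbh_cm, b0, b1) exp(b0 + b1 * log(dbh_cm))

# hypothetical hardwood coefficients, for illustration only
agb_kg(dbh_cm = 30, b0 = -2.48, b1 = 2.48)   # ~385 kg for a 30 cm tree
```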
Estimation of point-level biomass and basal area
Here we describe how we estimate biomass at each PLS point. Calculations for basal area are equivalent.
In the usual case of having two trees, we calculated the point-level biomass using one-half the stem density multiplied by the estimated biomass of each tree. When the two trees were of different taxa, this produces point-level biomass values for two taxa, which were added to estimate total biomass. When the trees were of the same taxon, this is equivalent to averaging the tree-level biomass for the two trees and multiplying by stem density.
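A short R sketch of this calculation for a single two-tree corner follows; the density and per-tree biomass values are hypothetical carry-overs from the density and allometry sketches above.

```r
density_stems_ha <- 120   # Morisita point-level density (stems/ha)
agb_kg <- c(385, 210)     # per-tree aboveground biomass (kg), two trees

# each tree represents half the stems at the point; convert kg to Mg
biomass_Mg_ha <- 0.5 * density_stems_ha * agb_kg / 1000
biomass_Mg_ha        # per-taxon contributions (two taxa, or one taxon twice)
sum(biomass_Mg_ha)   # total point-level biomass, ~35.7 Mg/ha
```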
For simplicity we excluded all 3221 points with any tree-level missing biomass values (i.e., missing diameters), although we note that it is possible to estimate (1) total biomass based on having one of two trees with available biomass and (2) taxon-level biomass from the available tree. The exclusion puts two-tree points on a similar footing with one-tree points (for which missing biomass prevents estimation) with the goal of limiting bias at the grid cell level.
When estimating biomass, we used the original density estimate without using correction factors that scale to the density of trees greater than 8 inches dbh. This is necessary since the trees' biomass can only be calculated based on the original density. Thus the original density combined with biomass estimates for all individual trees (including those less than 8 inches dbh) gives an unbiased biomass estimate without an explicit size threshold. We recognize this introduces some imprecision, but we note that given the limited contribution of smaller trees to total biomass, the presence or absence of a diameter threshold should have minimal effect. In contrast, for density estimation it is critical to define a threshold in order to have a meaningful quantity.
Grid-level estimation
Before doing the statistical modeling at the 8-km grid scale, we aggregated the point-level data to the 8-km grid by averaging over point-level biomass, basal area, and stem density values for all points in a grid cell. In addition, for our statistical modeling to best account for the high abundance of points with either no trees or (for taxon-specific analyses) no trees of a given taxon, we also calculated the proportion of points in each grid cell with no trees (for taxon-specific analysis, the proportion of points with no trees of the taxon of interest).
Note that given heterogeneity in density values within a grid cell (i.e., density varies by stand), our estimates at the grid level must account for the species-density relationship. Traditionally, basal area has been calculated as the product of the mean density and mean tree basal area, but because of their negative correlation, this overestimates the average values (Bouldin 2008, 2010; Kronenfeld 2015). Therefore, our estimates of biomass and basal area in a grid cell are the mean of the point-level products of density and tree size (Cogbill et al. 2018). Similarly, our estimate of density for a given taxon is equal to the average of the point-level density estimates for that taxon, not the taxon's proportion of stems in a grid cell multiplied by the grid cell estimate of total density. Furthermore, the biomass of each taxon is not estimated from the taxon proportion multiplied by the overall biomass, but as the mean of the point-level biomass estimates.
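The bias from multiplying means can be seen in a toy R example with made-up point-level values in which density and tree size are negatively correlated.

```r
density <- c(300, 150, 80, 40)        # stems/ha at four points
tree_ba <- c(0.05, 0.09, 0.14, 0.20)  # mean per-tree basal area (m2)

mean(density * tree_ba)        # mean of point-level products: ~11.9 m2/ha
mean(density) * mean(tree_ba)  # product of means: ~17.1 m2/ha (biased high)
```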
Statistical smoothing
The major challenge of modeling biomass, basal area, and stem density data is that these quantities are both non-negative and continuous, with a discrete spike at zero; few statistical distributions are available for this type of data. The description below is specifically for biomass for concreteness and clarity of presentation, but the modeling structure is the same for basal area and stem density.
There are many zero-inflated models in the statistical literature, most focusing on count or proportional data (Lambert 1992, Hall 2000). In early efforts we considered a Tweedie model (Tweedie 1984, Jørgensen 1987) to deal with our zero-inflated continuous data. However, computational difficulties affected model convergence and the Tweedie model resulted in poor fits. Given this we developed a two-stage model to address the challenge of zero inflation in non-negatively valued distributions. Our model was motivated by the biological insight that local conditions may prevent a taxon from occurring in an area even though the taxon may be present at high density nearby. Thus we combine a model for "potential biomass", which reflects the large-spatial-scale patterns in biomass, with a model for "occupancy", which reflects the propensity for a given forest stand to contain the taxon. This model allows for zero inflation because a low value of the probability of occupancy can easily produce observations that are zero at the grid cell aggregation. Let m_p(s) be the potential (log) biomass process and θ_p(s) be the occupancy process for taxon p, both evaluated at grid cell s. The biomass in a grid cell can then be calculated as b_p(s) = θ_p(s) exp(m_p(s)), weighting the average biomass in "occupied points" by the proportion of points that contain the taxon.
First consider the occupancy model. The likelihood is binomial, n_p(s) ~ Bin(N(s), θ_p(s)). Note that the occupancy model represents the occupancy of points within a grid cell for taxon p, and that ∑_p θ_p(s) > 1 is possible because two taxa will often "occupy" the same point, since most PLS points have two trees. Next consider the (log) biomass process. We considered modeling potential biomass both on the original scale and on the log scale, where the scaling of the variance by 1/n_p(s) is the usual variance of an average. Note that this likelihood accounts for heteroscedasticity related to the number of points at which the taxon is observed (not the number of PLS points in the cell). Finally, for total (non-taxon-specific) biomass, n_p(s) above is simply the number of points with any trees. However, based on the delta method, the correct approximate distribution when working on the log scale would involve an additional term, depending on m_p(s), in the variance denominator. There is no clear means of accounting for this extra term when fitting the potential biomass on the log scale using generalized additive modeling software (see below). Despite this we found that working on the log scale produced more accurate point and uncertainty estimates based on cross-validation. This improved performance likely results from i) downweighting the influence of outliers and ii) the log-scale model inherently having the variance scale with the mean (when both are considered on the original scale), which we observe empirically in the raw grid-level data.
This two-stage model is able to account for the zero inflation produced by structural zeros (the taxon is not present because local conditions prevent it) through the use of the occupancy model. Through the potential model, it is also able to capture the smooth larger-scale variation in biomass. And by having both component models, we can account for the differential amounts of information in the face of the large number of zeros and different numbers of sampling points in each grid cell.
Note that m_p(s) is likely to be quite smooth spatially, at least for the PLS data, because when a point is occupied by a given taxon, the tree is likely to be of adult size, regardless of whether the tree is common in the grid cell. So most of the spatial variation in biomass may be determined by variability in occupancy. The potential biomass is meant to correct for the fact that density and tree size may vary somewhat, but probably not drastically, across the domain.
We fit the two component models using penalized splines to model the spatial variation, with the fitting done by the numerically robust generalized additive modeling (GAM) methodology implemented in the R package mgcv (Wood 2017), using the GAM implementation intended for large datasets encoded in the bam() function (Wood et al. 2015) in place of the usual gam() function.
We accounted for the heterogeneity in the number of occupied points per grid cell by setting the 'weights' argument in the bam() function equal to n_p(s). We also considered scaling all weights by dividing by 70, where 70 is the approximate number of points in a cell that was fully surveyed. This treats a fully-covered cell as having one 'unit' of information and scales the contribution to the likelihood from cells with a different number of points relative to that. However, the results with and without the division by 70 were identical for the point estimates and very similar for the uncertainty estimates, so our final results omit this scaling.
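A compact sketch of the two-stage fit with mgcv follows; the data frame and column names (cells, x, y, n_occ, n_pts, log_biomass) are hypothetical stand-ins, and the production code is the version in the project Github repository.

```r
library(mgcv)

# Stage 1: occupancy; binomial likelihood on occupied points per cell
fit_occ <- bam(cbind(n_occ, n_pts - n_occ) ~ s(x, y, k = 2500),
               family = binomial, data = cells)

# Stage 2: potential log biomass over occupied cells, weighted by n_occ
occ_cells <- subset(cells, n_occ > 0)
fit_pot <- bam(log_biomass ~ s(x, y, k = 3500),
               weights = n_occ, data = occ_cells)

# combine the two surfaces: b(s) = theta(s) * exp(m(s))
theta <- predict(fit_occ, newdata = cells, type = "response")
m     <- predict(fit_pot, newdata = cells)
biomass_hat <- theta * exp(m)
```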
We did not use covariates as predictors in our statistical model for several reasons. First we have fairly complete coverage (see Fig. 1), such that the use of covariates is expected to provide limited additional information. Second, covariates such as climate for the settlement time period are not available and we were reluctant to make assumptions that present-day values are sufficiently similar to values in the past. Finally, without developing complicated statistical models that allow the effect of covariates to vary spatially (so-called varying coefficient models), using regression coefficient estimates that are constant spatially can cause biases, such as inferring the presence of a taxon outside of its range boundary. For these reasons, we chose to rely only on spatial smoothing of the raw data. Future researchers could use our raw data products in combination with covariates.
Finally, to estimate total biomass, we fit the model above to raw total biomass values from the survey points, aggregating in the same fashion as described above for individual taxa, but including data from all trees.
Quasi-Bayesian uncertainty estimates
As discussed in Wood (2017), one can derive a quasi-Bayesian approach and simulate draws from an approximate Bayesian posterior by drawing values of the spline coefficients based on the approximate Bayesian posterior covariance provided by gam() or bam() and, for each draw, calculating a draw of θ_p(s) and similarly a draw of m_p(s) for the biomass process. We combined 250 draws from the occupancy and potential biomass processes (assuming independence between the processes) to produce biomass draws for each taxon and for total biomass. The procedure for stem density was analogous.
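In mgcv this can be sketched as follows, continuing the hypothetical objects from the previous sketch; the lpmatrix/coefficient-draw recipe follows Wood (2017).

```r
n_draws <- 250

draw_field <- function(fit, newdata, inv_link = identity) {
  Xp   <- predict(fit, newdata = newdata, type = "lpmatrix")
  beta <- mgcv::rmvn(n_draws, coef(fit), vcov(fit))  # posterior coef draws
  inv_link(Xp %*% t(beta))                           # cells x draws matrix
}

theta_draws <- draw_field(fit_occ, cells, inv_link = plogis)
m_draws     <- draw_field(fit_pot, cells)

# independence between the two processes is assumed, as in the text
biomass_draws <- theta_draws * exp(m_draws)
```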
Note that one major drawback of this methodology is that individual taxon estimates are not constrained to add to the total biomass values estimated from using our model on raw total biomass values because the taxa are fit individually. Further, as was the case in our related modeling of composition ), we do not capture correlations between taxa, in part to reduce computational bottlenecks and in part to avoid inferring the value of one taxon based on the value of another. While there are real correlations, the correlation structure likely varies substantially over space (e.g., two taxa that covary strongly can have different range boundaries such that the presence of one beyond the boundary of another does not indicate the second taxa is present). Since information is present on all taxa at any location with any data, there is little need to borrow strength across taxa (unlike the need to borrow strength across space to fill in missing areas and smooth over noise caused by limited data in each grid cell). This means that any downstream use of the results should avoid making use of the posterior covariance as an estimate of the correlation across taxa in the uncertainty of the estimates. Also note that one might scale the taxon-level point estimates to add to the total estimates, but there is no clear way to do this at the level of the posterior draws since the draws are computed independently between the total and all taxon fits.
In the GAM fits, we noticed some anomalies in the quasi-posterior draws for the occupancy model that were likely caused by numerical issues. In particular, for some taxa, there were draws of the occupancy probability that were more than five times as large as the probability point estimate. Most of these occurred for very low occupancy probabilities in areas outside the apparent range boundary for the taxa. There were also cases where draws of total (non-taxon-specific) occupancy probability were near zero even though the probability point estimate was essentially one. As ad hoc, but seemingly effective solutions, we made the following adjustments to the draws: 1. set draws where the taxon-specific occupancy probability is greater than five times the point estimate to be equal to the point estimate, and 2. set all draws where the total point estimate was greater than 0.999 to be equal to 1.
Choice of smoothing and scale of averaging
We used cross-validation at the grid scale to: 1. choose between estimating potential biomass on the log scale or the original scale, and 2. determine the maximum number of spline basis functions, denoted k. With regard to the maximum number of basis functions, while the generalized additive modeling methods of Wood (2017) choose the amount of smoothing based on the data, using a large number of basis functions can result in slow computation. We chose to limit the number of basis functions (and thereby impose an upper limit on the effective degrees of freedom of the spatial smoothing estimated from the data), with that number informed by cross-validation. However, the imposed upper limit to the number of possible basis functions was large enough to have little effect on the amount of smoothing, although possibly imposing slightly more smoothing than without the limitation.
We used 10-fold cross-validation, randomly dividing the grid cells into 10 sets and holding out each set in turn. This allows us to assess the ability of the model to estimate biomass for cells with no data (and also gives us a good sense of performance for cells with very few points). Note that even with our incomplete sampling in Indiana and Illinois (Fig. 1), most unsampled grid cells are near to other grid cells with data.
The metrics used in cross-validation were squared error loss for the point predictions relative to the grid-level raw data and statistical coverage of prediction intervals of the grid-level raw data.
We calculated squared error weighted by the number of PLS points in the held-out cell and truncated both held-out values and predictions to maximum values of 600 Mg/ha to avoid having very large values overly influence the assessment. This also allows us to work on the original (not log) scale in our evaluation, as we don't want to accentuate small differences at low biomass values.
We calculated 90% prediction interval coverage using a modified version of the quasi-Bayesian uncertainty procedure described previously. The modification to the sampling procedure involves drawing a random binomial value based on each draw of the occupancy probability, multiplied in turn by a random normal draw (exponentiated when fitting the model on the log scale) centered on the draw of the potential surface with variance equal to the residual variance from the potential model. The addition of the binomial draw and the residual variance produces a prediction interval for the data rather than the unknown process and allows us to assess coverage relative to an observed quantity. We calculated 90% prediction intervals using the 5th and 95th percentiles of the 250 draws for each held-out cell. Coverage was determined as the proportion of cells for which the observation fell into the interval, considering only grid cells with at least 60 PLS points. We also calculated the median length of intervals (and median log-length) to assess the sharpness of the intervals, as high coverage can always be trivially obtained from overly-wide intervals.
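The interval and coverage computation itself is straightforward; the R sketch below assumes a hypothetical matrix 'draws' (held-out cells by 250 predictive draws, constructed with the binomial and residual-noise modifications described above), observed raw values 'obs', and point counts 'n_pts'.

```r
lo <- apply(draws, 1, quantile, probs = 0.05)
hi <- apply(draws, 1, quantile, probs = 0.95)

keep <- n_pts >= 60   # only assess cells with at least 60 PLS points
coverage      <- mean(obs[keep] >= lo[keep] & obs[keep] <= hi[keep])
median_length <- median(hi[keep] - lo[keep])
c(coverage = coverage, median_length = median_length)
```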
Unfortunately the coverage results cannot directly assess the uncertainty estimates provided in the data product, which provides intervals for gridded biomass. This is because the true biomass is unknown and thus cannot be used to judge coverage. We can only judge coverage of prediction intervals for the data. Thus, under-or over-estimation of uncertainty for the true quantities may be masked by compensating over-and under-estimation of the residual error of the data around the truth.
Cross-validation was done for total biomass and density as well as on a per-taxon basis.
Model selection using cross-validation
The cross-validated weighted absolute error values for the biomass estimates can be seen in Table 2 for the model on the original scale and Table 3 on the log scale. With regard to coverage, the models that fit the outcome on the original scale without log transformation for the potential model had a poor tradeoff of coverage and interval lengths. For k=2500 for occupancy and k=3500 for potential, for a 90% uncertainty interval, the coverage was 94.8% with a median interval length of 188, compared to coverage of 85.5% with a median interval length of 97 for the potential model on the log scale. While the 85.5% coverage is less than the desired coverage of 90%, we judge that the modest undercoverage is acceptable in light of the much shorter interval lengths. In addition, the uncertainty was roughly constant regardless of the value of the point estimate when working on the original scale, while the models using the log scale had uncertainty that increased with the size of the point estimate. This scaling of variance with mean (similar to that in a Poisson distribution) when using the log scale makes intuitive sense given the lower bound of zero.
Cross-validation results for total stem density are qualitatively similar to those for biomass with regard to how values vary with the number of basis functions (not shown). With regard to comparing results on the original and log scales, for k=2500 for occupancy and k=3500 for potential, the median absolute error was 41.5 and 40.1 for the original and log scales, respectively. Coverage was 91.5% and 85.4%, respectively, and the median interval lengths were 205 and 185, respectively.
Cross-validation results for taxon-level estimates are harder to interpret because there is one value per taxon. Also for grid cells outside the range limit of a taxon, estimates and intervals are generally very close to zero. As a result it is difficult to know how best to aggregate across taxa for summarization. The variation in cross-validation results with respect to the number of basis functions is qualitatively similar to the results for total biomass (not shown). For k=2500 for occupancy and k=3500 for potential, the average coverage (across taxa) of 90% uncertainty intervals was 97.8% for the original scale and 93.6% for the log scale, with a mean (across taxa) of median interval lengths (across cells) of 11.1 and 3.9 for the original and log scales, respectively.
Based on the cross-validation results we chose to fit models on the log scale. We also chose k=2500 for the occupancy models (for biomass, basal area, and stem density, and for total and taxon-level fitting) and k=3500 for the potential models. While values of k>2500 for occupancy reduced the estimated absolute error loss (i.e., improved the fits) slightly (see Table 3), larger k values increased computational time, so we chose to use k=2500. We did not assess k>3500. Based on the diminishing reductions in the loss as k increases beyond 2000 or 2500, it is unlikely that larger values of k would produce substantively important improvements in prediction.
Estimated biomass and stem density
In Fig. 2, we show our estimated aboveground biomass and stem density, compared against the raw grid-level data (averaging over the point-level estimates), and with statistical uncertainty. Fig. 4 shows estimated biomass for all 21 taxonomic groups.
Data product
We provide the following data products via the LTER Network Data Portal. We will also soon provide point-level raw data for Indiana, Illinois, Michigan, and Minnesota. Point-level raw data for Wisconsin can be obtained by contacting co-author David Mladenoff.
The project Github repository (https://github.com/PalEON-Project/PLS_products) provides code for processing the point-level data and producing the data products above in the subdirectory named 'R'. In the subdirectory 'data/conversions', we provide:
• our translation tables for translating surveyor taxon abbreviations to modern common names, including aggregation for the raw gridded values and statistical modeling done in this work,
• correction factors for the subregions of the domain for estimating point-level tree density, and
• our assignments of allometric relationships for the PalEON taxa, based on Chojnacky et al. (2014) (also provided in Table 1).
Discussion
We have presented high-resolution estimates, with uncertainty, of biomass, basal area, and stem density at the time of Euro-American settlement for a large area of the midwestern United States. These estimates can be used to answer ecological questions, as inputs for other analyses, and as a baseline for understanding changes in ecosystems, including carbon storage, under anthropogenic change.
While our estimates have a variety of strengths, including relatively high resolution, relatively uniform data density, coverage of a large area, careful data cleaning, and the use of statistical methods tailored to the data, there are of course limitations. The 8-km grid resolution prevents one from understanding variation at finer scales, including the stand level and variation driven by smaller-scale effects such as local topography and small fire breaks. For example, our total biomass and stem density estimates show a portion of the Minnesota River valley in southwestern Minnesota (see Fig. 2), but cannot resolve riparian forest (relative to grassland or upland forest) in smaller valleys. Our estimates smooth over local variation, which can include sharp ecotone boundaries. In future work in this and other domains, we plan to make use of the point-level data without initial gridding to try to estimate finer-scale variation, though one will always be limited by the natural resolution of the PLS survey points.
Our statistical model cannot represent range boundaries, as it models variation in abundance as a continuously-valued spatial field with strictly positive (though often negligibly above zero) predicted biomass, basal area, and stem density, compounded by the smoothing mentioned above. That said, except where distinct boundaries in environmental drivers cause distinct range boundaries, range boundaries are generally fuzzy.
Our statistical model fits each taxon separately, for computational convenience and to limit the complexity of the spatial statistical models. Thus the uncertainty estimates do not capture any correlated uncertainty across taxa, and analyses that aggregate estimates across more than one taxon (such as comparing two taxa or summing across multiple taxa) will not be able to correctly characterize uncertainty. For sums, one could, as we have done for total biomass, basal area, and density, sum the raw values and then apply the spatial statistical model. Finally, the sum across taxa of the taxon-specific estimates for a grid cell does not equal the estimate of total biomass, basal area, or stem density for that grid cell.
In this work, as in Paciorek et al. (2016), we chose not to use environmental covariates, such as soils, firebreaks, and topography (Grimm 1984, Shea et al. 2014), when estimating biomass, basal area, and stem density. Instead we limited our model to capture variation solely by smoothing the data using Gaussian process techniques that rely on spatial distances. This avoids dependence on the environmental drivers of pre-settlement forest composition, which might cause circular reasoning in subsequent analyses that use our data products. In addition, the use of covariates could lead to predicting that a taxon is present well beyond its range boundary in places where data are sparse.
The estimates and raw data are available as public data products, and our methods are fully documented with code available in our Github repository.
Author contributions
CVC and CJP developed the statistical procedures for point-level estimation and spatial smoothing, respectively. JAP and CVC led the data cleaning and processing for Michigan, Illinois, and Indiana, while DJM provided the cleaned WI, MN and initial northern Michigan data. SJG and JWW led the initial processing of southern Michigan. JAP, CVC, and CJP did subsequent cleaning of data from both northern and southern Michigan. CJP and JAP wrote the code for data cleaning, data gridding and statistical estimation, with initial contributions from SJG and code/workflow review by AD. AD co-developed the workflow for biomass estimation using allometric equations. CJP wrote the paper with contributions from CVC and JAP and feedback from JWW, SJG, JSM, DJM, and AD.

Wisconsin

The Wisconsin data were digitized in earlier work by the Mladenoff lab (Radeloff et al. 2000, Manies and Mladenoff 2000, Liu et al. 2011). This constitutes the cleaned raw data used in this work, which can be obtained by contacting David Mladenoff.
We then processed the cleaned raw data as follows.
We excluded 4088 points with the first tree marked as 'QQ' by the Mladenoff lab, as this indicates the presence of water at the point. We used the vegetation code for each point (documented at the Wisconsin DNR website) to exclude points that might induce a negative bias in our estimates because trees were missing due to standing water. Specifically, we excluded 2803 points with one tree that were marked as Creek, Marsh, Swamp, Lake, or River. Note that we included points marked as Wet Prairie or "low land, low wet area", judging these areas to be terrestrial, albeit often wet.
Three areas southwest of Green Bay had no data because they were Menominee Native American lands and were not surveyed (see Fig. 1). We excluded these points and a few other points in Wisconsin (a total of 736 points) for which there was no information on trees at the point. Later surveys on these lands are not included here.
There were 670 points that were missing survey years. The survey year for these was imputed based on nearby points.
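A minimal sketch of one such imputation rule follows (simple nearest neighbour; the actual procedure may differ).

```r
# Impute a missing survey year from the nearest point with a known year.
impute_year <- function(x, y, year) {
  miss  <- which(is.na(year))
  known <- which(!is.na(year))
  for (i in miss) {
    d2 <- (x[known] - x[i])^2 + (y[known] - y[i])^2   # squared distances
    year[i] <- year[known[which.min(d2)]]
  }
  year
}
```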
Minnesota
Copies of the original submitted field notes from Minnesota are at the Minnesota Historical Society in St. Paul, Minnesota. These records have been digitized by three projects (Grimm 1981, Almendinger 1985, Minnesota County Biological Survey 1996) and are available online at the Minnesota DNR Bearing Tree Database ( https://gisdata.mn.gov/dataset/biota-original-pls-bearing-trees ). More details are given in Almendinger (1997). The data used in this work were obtained from the Minnesota DNR by the Mladenoff lab group in earlier work. This constitutes the cleaned raw data used in this work and provided in the data product.
We then processed the cleaned raw data as follows.
We used the vegetation code for each point (documented in Almendinger (1997)) to exclude points that might induce a downward bias in our estimates because trees were missing due to standing water. Specifically, we excluded 20560 points with no trees or one tree that were marked as Creek, Marsh, Swamp, Lake, or River. We retained 24243 points with two to four trees in such areas, as it is hard to know how much of the area surrounding these corners is under water; this may lead to some downward bias in our density estimates in Minnesota. We excluded 902 points with missing taxon information and missing ecotype, as these appeared unusable. Most occurred in far northern Minnesota (the Boundary Waters area) or along straight east-west lines, suggesting problems with the points. We excluded 1560 Forest, Grove, Bottom and Pine grove points with missing taxon information for all trees, as this is inconsistent with the presence of forest. We included points marked as Wet Prairie, judging these to be terrestrial, albeit often wet.
Northern Michigan
Copies of the original submitted field notes from northern Michigan are at the Michigan State Archives in Lansing, Michigan. These records have been microfilmed and are available online ( http://seekingmichigan.org/discover/glo-survey-notes or https://glorecords.blm.gov/default.aspx ). Michigan surveyor observations for the Upper Peninsula of Michigan and the northern section of the Lower Peninsula (see Fig. 1) were digitized by the Mladenoff lab group in earlier work. Co-authors CVC and JP added additional points in Ontonagon, Schoolcraft and Gogebic Counties. The northern Michigan data were further processed to keep one record for locations that had two georeferenced data entries with identical tree information. In cases where there were two georeferenced data entries in the same location, with either (a) one entry providing tree information and the other no tree information or (b) both entries having the same tree information except for one attribute (typically the bearing), the point with no information or with less information, respectively, was removed.
The exact point coordinates for Isle Royale appeared to be incorrect (some points are in Lake Superior) and some points appeared to be duplicated. We omitted all data from Isle Royale in the current analysis, but in future work we plan to re-enter the data from the original survey notes.
From initial spot checks in Dickinson County we determined that tree diameters and distances had been transposed during data transcription in many cases; these were corrected. Spot checks also indicated that distances in Iosco County (and at scattered points in other counties) that had been listed with decimal values needed to be converted from chains to links (multiplied by 100) to standardize with the rest of the database. There were some trees with outlying diameter values greater than 48 inches dbh. All of these were checked carefully in the original field notes and were corrected if necessary.
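Both fixes amount to simple vectorized operations; the sketch below is illustrative, with the decimal-value test and the 'transposed' flag serving as stand-ins for the exact criteria used.

```r
# Toy records; 'transposed' is a hypothetical flag for swapped fields.
trees <- data.frame(diam = c(12, 150), dist = c(0.35, 10),
                    transposed = c(FALSE, TRUE))

# Chains to links: 1 chain = 100 links; decimal-valued distances are treated
# as having been recorded in chains.
trees$dist <- ifelse(trees$dist %% 1 != 0, trees$dist * 100, trees$dist)

# Swap transposed diameter and distance values.
swap <- which(trees$transposed)
tmp <- trees$diam[swap]
trees$diam[swap] <- trees$dist[swap]
trees$dist[swap] <- tmp
```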
The cleaned raw data represent the data obtained and processed as described just above and provided in the data product. We then processed the cleaned raw data as follows.
Unlike in Minnesota and Wisconsin, we do not have vegetation codes, so we cannot distinguish one-tree points that occur because of water from one-tree points in areas with low density. Some points have qualitative notes but in general these do not indicate the presence of any water. All one-tree points were retained, but they may contribute to a downward bias in density and biomass.
There were 2810 points with no trees indicated (i.e., with no taxa noted) for which we attempted to determine whether these were truly surveyed points with no nearby trees. All such points without any surveyor notes (1316 points) were excluded. We included points where the notes indicated no trees (e.g., 'no witness trees', 'no trees convenient', 'no other tree data'), but excluded points where the notes indicated water (102 points), lost information (73 points), or that the tree was used as the corner and no other trees were recorded (874 points). In the latter case, we cannot compute a density estimate (it would be infinite) for a point with one tree at zero distance.
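For intuition about why such points are unusable, consider a basic plotless density estimator in the spirit of Pollard (1971); this toy version omits the kappa/theta/zeta/phi correction factors applied in the actual workflow.

```r
# 'r_q' holds each corner's distance to its q-th closest tree.
pollard_density <- function(r_q, q = 2) {
  n <- length(r_q)
  (q * n - 1) / (pi * sum(r_q^2))   # trees per unit area
}

# A corner whose only tree sits at distance zero contributes nothing to
# sum(r_q^2), so a per-point estimate from such a corner divides by zero --
# the "infinite density" problem that motivates excluding these points.
```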
Southern Michigan
Copies of the original submitted field notes from southern Michigan are at the Michigan State Archives in Lansing, Michigan. These records have been microfilmed and are available online ( http://seekingmichigan.org/discover/glo-survey-notes or https://glorecords.blm.gov/default.aspx ). Field notes were transcribed to topographic maps, allowing the points to be displayed geographically, and were then converted to Mylar maps by Denis Albert and Patrick Comer of the Michigan Natural Features Program for Albert et al. (2008). Ed Schools provided these Mylar maps temporarily to the Williams lab, who digitized them to a point-based ArcGIS shapefile; co-authors CVC and JP then conducted a number of checks described below, using the original field notes when necessary to check and make corrections.
There were some townships with no survey points (visible as small areas with fewer points per cell, albeit not zero points because the township and grid cell borders are not aligned, in south-central and southeastern Michigan in Fig. 1). Spot checks indicate these are generally caused by missing data from the original surveys, with the original field notes unavailable for unknown reasons.
We removed points already contained within the southern Michigan dataset. An initial assessment of the data digitized from the Mylar maps indicated that the diameters and distances were transposed during digitization; we corrected these. When the Mylar maps were created, points on the township boundaries were entered twice for approximately 4000 exterior township corners (mainly on the southern and eastern township borders), resulting in four trees noted per corner when only two were surveyed. We kept two of the four trees listed when they were located in quadrants inside the township. We excluded 469 points in cases where there was ambiguity because the trees were in quadrants outside the township; we plan to obtain data for these points from the original PLS field notes in future processing. An additional 68 interior township survey points had 3-4 trees when only two trees were truly surveyed. These were checked against the original PLS field notes and corrected. For entries with azimuths less than zero or greater than 360, we checked the survey notes and corrected as needed. There were some trees with outlying diameter values of greater than 48 inches dbh. All of these were checked carefully in the Mylar maps and/or original field notes and were either corrected or retained only if they were clearly noted in either source.
The cleaned raw data represent the data obtained and processed as described just above and provided in the data product. We then processed the cleaned raw data as follows.
Due to extensive incomplete data on the Mylar maps, 27 townships in the Detroit region (primarily Monroe and Lenawee Counties, with one township in Washtenaw) were re-entered and replaced by the McLachlan lab using the same protocol as used for the Indiana and Illinois data.
The Mylar maps for many areas of southern Michigan (outside of the Detroit region) have no quarter-section points. The areas with quarter-section points tend to be in savanna / low-density areas. Given this selection bias, we removed all 6593 quarter-section points that were present. This can be seen in the marked decrease in points per cell in the southern portion of the lower peninsula of Michigan in Fig. 1.
We excluded 40 points with three trees because surveyors in this area were instructed to mark only two trees. These points may be those with a corner tree plus two additional trees, but extracting valid data from them would require consulting the field notes.
Unlike in Minnesota and Wisconsin, we do not have vegetation codes, so we cannot distinguish one-tree points that occur because of water from one-tree points in areas with low density. Some points have qualitative notes but in general these do not indicate the presence of any water. All one-tree points were retained, but they may contribute to a downward bias in density and biomass.
Indiana and Illinois
Copies of the original submitted field notes from Indiana and Illinois are at the Indiana State Archives in Indianapolis and the Illinois State Archives in Springfield. These records have been microfilmed and are available online in the National Archives ( https://catalog.archives.gov/id/566714 ). Data from Indiana and Illinois were purchased from the Indiana State Archives (Commission on Public Records, Indiana State Archives, Indianapolis, Indiana) and Hubtack Document Resources ( http://hubtack.com/ab/index.php ), respectively, and processed by the McLachlan lab. Data entry for these states is ongoing. Originally, townships to digitize were chosen to provide an even distribution across both states. Since then, specific areas of each state (e.g., the Kankakee watershed, the Yellow River watershed, the savanna-closed forest transition north to south in Illinois and west to east in Indiana, and locations of US Forest Service Forest Inventory plots) have been chosen to complement ongoing projects in the lab. PLS land notes are transcribed by undergraduates in the lab. These data are then subjected to an initial QA/QC check by the original readers, followed by a second QA/QC check by a different individual in the lab, georeferenced to the section and quarter-section locations, and finally reviewed in a final set of QA/QC checks. The R code used for the QA/QC checks and georeferencing is available on Github at: https://github.com/PalEON-Project/IN_ILTownshipChecker .
The cleaned raw data represent the data obtained and processed as described just above and provided in the data product. We then processed the cleaned raw data as follows.
There were 18 points in the Illinois data near the Illinois-Wisconsin border that were very close to points in Wisconsin. These were removed to avoid near-duplication of points. A small number of points were missing survey years. The survey year for these was imputed based on nearby points.
Points recorded as Water (where surveyor notes indicated standing water) or having no data (which is not the same as having no trees) were omitted. Points recorded as Wet (where surveyor notes indicated the water was ephemeral) were included, as these were judged to be terrestrial. All one-tree survey points were retained, as the survey information (including qualitative notes by the surveyors) indicates that such points reflected low density and not the presence of water.
Other notes
We excluded 127 points in Minnesota, Wisconsin, and northern Michigan in which at least one tree was marked as dead (or 'dry') or where the taxon was judged 'indeterminable' by the Mladenoff lab. In almost all these cases, there would be fewer than two live trees after excluding the dead or unknown cases.
There are four correction factors: kappa accounts for the sampling design (two- versus four-tree points and where the two trees are located relative to quadrants and halves), theta for sector bias, zeta for azimuthal censoring, and phi for inclusion of trees less than 8 inches dbh, as discussed in Goring et al. (2016) and Cogbill et al. (in progress). The factors vary by survey date, internal versus external point, section versus quarter-section, and two- versus four-tree points, and can be found in our Github repository.

Fig. 1 (caption fragment). Lighter grey in southern Michigan is caused by the lack of quarter-section points. Illinois and Indiana digitization is ongoing.

Fig. 2. Raw data, predictions and uncertainty for total biomass (top row) and total stem density (bottom row). Point estimates from the raw data in each cell are based on the average point-level biomass (left column), predictions are estimates from the statistical smoothing model (middle column), and uncertainty estimates are from the standard deviation of the quasi-Bayesian posterior draws (right column). Note that in the raw data plots, grey indicates data were not available for a grid cell (this occurs rarely, except in Illinois and Indiana).
Investigation of the Morphology and Electrical Properties of Graphene Used in the Development of Biosensors for Detection of Influenza Viruses
In this study, we discuss the mechanisms behind changes in the conductivity, low-frequency noise, and surface morphology of biosensor chips based on graphene films on SiC substrates during the main stages of the creation of biosensors for detecting influenza viruses. The formation of phenylamine groups and a change in graphene nano-arrangement during functionalization cause an increase in defectiveness and conductivity. Functionalization leads to the formation of large hexagonal honeycomb-like defects up to 500 nm, the concentration of which is affected by the number of bilayer or multilayer inclusions in the graphene. The fabricated chips allowed us to detect influenza viruses over a concentration range of 10−16 g/mL to 10−10 g/mL in PBS (phosphate buffered saline). Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed that these defects are responsible for the inhomogeneous aggregation of antibodies and influenza viruses over the functionalized graphene surface. This non-uniform aggregation is responsible for a weak nonlinear logarithmic dependence of the biosensor response versus the virus concentration in PBS. This feature of graphene nano-arrangement affects the reliability of detection of extremely low virus concentrations at the early stages of disease.
Introduction
The rapid spread of the coronavirus disease 2019 (COVID-19) during the pandemic and regularly occurring epidemics of influenza, which have killed hundreds of millions of people and caused significant damage to the global economy, have shown the need to create highly sensitive biosensors that allow quick (within minutes) detection of extremely low concentrations of antigens (viruses) at early stages in these diseases. The fabrication of such biosensors would make it possible to elucidate the mechanism of the spread of COVID-19. Recent reports have proposed graphene as a prospective material for these biosensors [1][2][3][4][5][6] thanks to its unusual physical properties, which differ from those of 3D bulk sensor materials. Graphene is a two-dimensional single atomic layer of sp 2 bonded carbon atoms arranged in a honeycomb lattice. Two-dimensional (2D) materials are characterized by strong in-plane bonding but weak interplanar interaction. Interfaces between neighboring 2D layers, or between 2D overlayers and substrate surfaces, provide intriguing confined spaces for chemical processes, which have stimulated a new area of "chemistry under 2D cover". The interaction between electrons and the honeycomb carbon lattice causes the electrons to behave as massless fermions, which gives rise to novel physical phenomena such as an anomalous room-temperature quantum Hall effect, extraordinarily high carrier mobility, high surface area per unit volume, and low noise [7][8][9][10]. Thus, graphene is a very promising material for the manufacture of various types of sensors. However, the combination of these properties means that even a minimal amount of impurity on the graphene surface can noticeably change the conductivity of the graphene film.
When it comes to the registration of influenza viruses and the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) that causes COVID-19, several reports have shown the great promise of chips based on graphene films with two contacts, i.e., graphene resistors [1,2]. The main stages of creating biosensors for the influenza and SARS-CoV-2 viruses are similar. Therefore, influenza viruses can act as an affordable, safe, and better-studied model material. Most studies of graphene-based biosensors are aimed at designing a sensor topology that provides a quick and reliable response to the presence of antigens on the graphene surface.
Significant improvements in the detection of low concentrations of influenza viruses and COVID-19 by graphene biosensors have been achieved recently. Some works [1,2,7] report the detection of viral concentrations of ~1 fg/mL, which is comparable to or below the detection limits of modern laboratory enzyme-linked immunosorbent assays. However, solving the above-mentioned practical tasks requires highly reproducible viral detection.
Reproducibility issues relating to the structural and physicochemical properties of graphene resistors are discussed in [1,7,11,12]. The inhomogeneity of the properties of graphene films over the chip area has led to the need to use several duplicate resistors in one biosensor. This makes it possible to neutralize the effect of the inhomogeneity of resistance values on the results of virus detection by the biosensor [1,7,11]. These works show that the hook effect [13], related to a nonlinear dependence of the detected signal versus analyte concentration, may cause a lack of reproducibility of detection. The nonlinear dependence of the detected signal versus analyte concentration is usually observed at both low (2-20 fg/mL) and high (1-10 ng/mL) concentrations [7]. Moreover, this effect can be observed not only in graphene but also in other 2D materials that have a honeycomb structure [13]. The reasons behind the phenomenon are not yet clear. It would appear that the 3D properties of the analyte and structural features of the biosensor materials might lead to this effect [13]. In addition, improvements in synthesis, processing, and integration are necessary to implement the large-scale and widespread manufacture of 2D devices for health-related applications [14]. In particular, achieving large-scale uniformity of material properties is essential in order to implement the mass production of biosensors.
The commonly accepted concept of viral biosensor production is based on the creation of conditions for the antibody-antigen (virus) immune reaction on a graphene surface. It can include controlled treatment (functionalization) of the graphene surface to create covalent bonds that ensure the occurrence of selective chemical reactions of the attachment of biomolecules (antibodies) [1][2][3][4][5][6][7][8][9][10]. The antibody-antigen immune reaction on the graphene surface changes its electronic state, which can be registered, for example, by a change in the current flowing through a graphene chip. It is obvious that the ultimate sensitivity of such a biosensor is determined by the properties of graphene as well as by physical and chemical processes that occur on the graphene surface at all stages of the biosensor production. Moreover, it is also affected by the degree of homogeneity of these processes over the entire area of the graphene chip. In [1,2,[7][8][9][10], the main attention was paid to both the functionalization of graphene and the design of the contact pads that ensure the detection of viruses on the graphene surface of a chip.
Numerous techniques, such as vapor deposition, epitaxial growth, mechanical and chemical exfoliation, and the thermal destruction of the silicon carbide surface, have been explored for achieving desired properties in graphene [15]. A short review of these techniques and an analysis of the quality of the obtained graphene films can be found elsewhere [1,2,7,[15][16][17]. It should be noted that the exfoliation method used by K. Novoselov and A. Geim in their first work on the preparation and study of graphene [7] amounts to the separation of a one-atom-thick flake from a graphite crystal. Until now, graphene samples obtained by this technology have had the best structural perfection. However, their small size and irregular, unpredictable geometrical shape do not allow the exfoliation method to be used in industry. Graphene films obtained by thermal destruction of the surface of silicon carbide (SiC) come second in terms of structural perfection [2,17], and it is possible to obtain structures of industrially important dimensions, limited only by the initial SiC substrate, i.e., up to 6 inches (150 mm) in diameter [2,17]. However, this technique also cannot fully avoid the issues of inhomogeneity of graphene quality. Apart from perfect monolayer graphene, there are typical inclusions of bilayer and multilayer graphene [18,19]. Typically, the grown samples are composed of 85% monolayer graphene and 15% bilayer graphene, the latter represented by small bilayer patches (inclusions) of various sizes. It is reasonable to assume that the unsaturated edges of these inclusions may create extra nucleation sites [18]. More information about the intrinsic properties of the epitaxial graphene on SiC can be extracted from the analysis of Raman mapping data [19]. Meanwhile, the structural properties of graphene films fabricated by different methods vary significantly. Even for films fabricated by the same method, the nanostructural arrangement of graphene depends on the technological conditions and the properties of the substrate material. There are few publications on the influence of the graphene nanostructural arrangement on the properties of the chips and the biosensors based on them, and its influence on the properties of graphene inclusions is also not discussed enough. The functionalization of graphene is discussed in various publications [1,7,10,20]. Fewer studies have been dedicated to the investigation of changes in the properties of graphene inclusions during functionalization and to the influence of these changes on antibody and virus binding.
In this paper, we used graphene films obtained by thermal destruction of the SiC substrates surface for biosensor chips used to detect the influenza viruses. We show the changes in the resistance, low-frequency noise amplitude, and graphene nano-arrangement reflected in surface morphology during the main stages of biosensor chip development (functionalization of the graphene surface, immobilization of antibodies, detection of influenza viruses). The uniformity of the distribution of these changes over the chip area was also investigated by probe methods of analysis, as well as by a low-frequency noise technique.
Materials and Methods
The main stages of graphene-based biosensors are presented in Figure 1 and explained below.
The Production of Graphene on a SiC Surface Using the Sublimation Method
In our experiments we used 4H-SiC substrates with a minimum misorientation angle (α ~ 0), and the growth was carried out on the (0001) ± 0.25° orientation (Si face). We used semi-insulating substrates. For the successful development of graphene growth technology, a necessary condition is the high-quality preparation of the SiC surface, which reduces the effect of contamination and surface inhomogeneity on the sublimation process. Pre-growth etching in a hydrogen atmosphere was used for preliminary cleaning of the SiC substrate surface. The essence of the technology lies in the high-temperature heating of the SiC substrate in a hydrogen atmosphere. At high temperatures, free carbon formed on the SiC surface binds with hydrogen to form volatile chemical compounds. We used a gas mixture containing argon (volume fraction 95%) and hydrogen (volume fraction 5%). Then, the growth of graphene on the SiC surface was performed at a temperature of 1700-1800 °C in an argon atmosphere (720-750 torr). The growth process was carried out in a graphite crucible with induction heating.
After growth of the graphene films, a conventional photolithography process was used to pattern the graphene/SiC chips. The chips were processed from several samples of graphene films formed by thermal decomposition of semi-insulating 4H-SiC. Details on graphene film processing and the mounting of chips on holders can be found elsewhere [14]. This study was carried out on chips with two contact pads (graphene resistors) assembled on a convenient printed circuit board (PCB) holder. The size of the sensor area (the active graphene surface of the chip) was about 1 × 1.5 mm 2.
The Functionalization of the Graphene Surface
To provide sensing ability, graphene functionalization is usually accomplished by using various covalent and noncovalent approaches [5,8]. Graphene functionalization modifies the surface chemistry of graphene and creates covalent bonds on its surface which are used to attach a specialized immune protein, an antibody. We used covalent graphene functionalization, as it is the most simple, reliable, and affordable method [2].
In this work, the process of the functionalization of the graphene surface in chips was carried out in two stages: (1) the formation of covalent bonds during the deposition of nitrophenyl groups (nitrobenzene, C 6 H 5 NO 2 ) and (2), the subsequent reduction of the nitrophenyl groups to phenylamine groups (aminobenzene, C 6 H 5 NH 2 ) by the method of cyclic voltammetry (CV). All CV experiments were performed in a conventional threeelectrode cell with an Ag/Ag+ (or Ag/AgCl) reference electrode, a platinum wire counter electrode, and a graphene/SiC chip as the working electrode. The three-electrode cell had a hermetic lid allowing the electrolyte and the space above it to be purged by dry Ar to remove traces of the moisture from the cell and the electrolyte.
At the first CV stage, the nitrophenyl groups were attached to the graphene surface. For this, a graphene chip assembled on a holder was immersed for 1-2 min in a nonaqueous electrolyte based on a mixture of 2 µM 4-nitrobenzenediazonium tetrafluoroborate (4NDT) and 0.1 M tetrabutylammonium tetrafluoroborate (TBATF) in acetonitrile (CH 3 CN).
In the second CV process, the graphene/SiC die was immersed in a 0.1 M KCl water/ethanol (9:1) solution in order to reduce the nitrophenyl groups to the phenylamine groups on the graphene die surface. Details on the graphene functionalization process can be found in [20].
Antibody Immobilization and Influenza Virus Detection
After surface functionalization, all chips were incubated in a solution containing influenza A (or B) antibodies for 3 h at 37 • C, followed by a single wash in PBS. The detection of influenza antigens (viruses) in PBS was then carried out. Concentrations of influenza virus in PBS solutions ranged from 10 −16 g/mL to 10 −9 g/mL.
For antibody immobilization on the functionalized graphene surface, we used the same concentration of antibodies diluted in the buffer solution in all experiments. Anti-NP monoclonal antibodies were dissolved at 200 µg/mL in PBS. The concentration of antibodies exceeded the number of covalent binding sites available for antibody attachment. We did not observe changes in conductivity after the antibody conjugation.
The following strains of influenza viruses used in the experiments were obtained from the collection at the museum of viruses of the Smorodintsev Research Institute of Influenza, Russia: influenza virus A/California/ 07/09 (H1N1pdm09) and influenza virus B/Brisbane/46/15. All experiments with viruses were carried out in a BSL-2 facility at the Institute of Influenza by its employees, who are co-authors of the work. All permits were in place.
Lysates of purified virus concentrates were used as an analyte. The lysates were prepared by diluting viruses in a lysis buffer (200 mM DTT, 0.05% Tween 20 in PBS), followed by a freeze-thaw step. Such viral lysates mainly contain destroyed virions. Therefore, the concentration of viruses in the lysates was assessed by measuring the total viral protein with the modified Lowry method, using the RC DC Protein Assay Kit (Bio-Rad, Hercules, CA, USA). Analyte solutions were prepared by tenfold dilution in PBS. The analytes were incubated at room temperature.
The biosensor concept in our studies is based on the antigen-antibody immunoreaction on the graphene surface. The selectivity or specificity of the sensing performance is mainly due to the nature of the immunoreaction: only related (matched) antigens and antibodies participate in the interaction and can change the state of the graphene. However, other viruses can influence the biosensor response via mechanisms other than the immunoreaction.
In this study, we did not use a special passivation of the graphene surface. The response of biosensors with immobilized antibodies was investigated under the conditions of diluted solutions of related antigens in order to determine the influence of the graphene surface on the detection process.
During the detection process, a direct-current voltage (20-80 mV) was applied to the antibody-coated graphene chip, and the chip was immersed in a PBS-diluted solution of influenza virus (antigen) for 30 s. The influenza antigen chemically attaches to the influenza antibody, which results in a change in the resistance of the graphene channel that can be promptly detected by passing a current through the graphene/SiC chip. Thereafter, the chips were pulled out, washed in pure PBS solution, dried, and immersed again in another PBS solution with a different influenza virus concentration.
Methods
Current-voltage (I-U) characteristics and low-frequency noise spectra containing information on the defective system state, which depict the quality of the material, were studied in the chips after each stage of the biosensor fabrication. The surface morphology and the surface potential distribution were monitored by atomic force microscopy (AFM) and Kelvin probe force microscopy (KPFM). In addition, scanning electron microscopy (SEM) was used to visualize the attachment of antibodies and influenza viruses to the graphene surface.
AFM and KPFM measurements were carried out on an Ntegra AURA setup (NT-MDT, Russia). AFM studies were carried out using an HA_FM cantilever (www.tipsnano.com, accessed on 22 December 2021) in a resonant (tapping) mode of operation, in which the AFM probe taps on the surface. The scanning frequency was 0.6 Hz and the scanning speed was approximately 1.3 µm/s. The stiffness coefficient of the cantilever is 3.5 N/m, the tip radius of curvature is less than 10 nm, and the scanning field size is 256 × 256 points.
The I-U characteristics were measured using a KEITHLEY 6487 power source. The power spectral density of voltage fluctuations was measured for the frequency range of 1 Hz to 50 kHz. The studied samples were connected in series with a low-noise load resistor R L, the resistance of which varied from 100 Ω to 13 kΩ depending on the current passing through the chip. The voltage fluctuations S U across the resistor R L were amplified by a low-noise preamplifier SR 560 (Stanford Research Systems, Sunnyvale, CA, USA) and subsequently measured by an SR 770 FFT Network Analyzer (Stanford Research Systems, Sunnyvale, CA, USA). The background noise of the preamplifier did not exceed 4 nV/√Hz at 1 kHz, which is approximately equivalent to the Johnson-Nyquist noise of a 1000-Ω resistance. SEM analysis of the chip surface was carried out with a JSM 7001F microscope (Jeol, Tokyo, Japan) in the secondary-electron mode at an accelerating voltage of 5 keV and a beam current of 12 pA.
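As a quick back-of-the-envelope check on that noise floor (not part of the measurement procedure), the Johnson-Nyquist voltage noise density sqrt(4 k_B T R) of a 1000-Ω resistor at room temperature can be evaluated directly:

```r
# Thermal (Johnson-Nyquist) voltage noise density of a resistor.
kB  <- 1.380649e-23          # Boltzmann constant, J/K
T_K <- 295                   # room temperature, K
R   <- 1000                  # resistance, Ohm
sqrt(4 * kB * T_K * R)       # ~4.0e-9 V/sqrt(Hz), i.e., ~4 nV/sqrt(Hz)
```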
Raman spectroscopy measurements were carried out at room temperature in the backscattering geometry using a T64000 spectrometer (Horiba Jobin-Yvon, Palaiseau, France) equipped with a confocal microscope. A YAG:Nd solid-state laser with a wavelength of 532 nm was used as the excitation source; the laser power was limited to 1.0 mW in a spot 1 µm in diameter to prevent damage to and modification of the graphene films. Along with local measurements, sample areas of 10 × 10 µm 2 were analyzed, with subsequent plotting of Raman maps of the spectral line parameters.
Investigation of the Properties of Graphene Chips before and after Functionalization
The I-U characteristics of all chips under study remained linear at each stage of biosensor development. Table 1 shows the typical resistance of chips obtained from graphene/SiC plates of different series and the low-frequency noise S U before and after functionalization. Chips from several plates were investigated. The first five characters in the chip notation in Table 1 (e.g., EG319) indicate the plate on which the graphene film was grown; characters after the dash indicate the number of the chip processed from this plate (e.g., EG319-3). For each plate, the parameters of a specific chip typical of that plate are given. Chips from different plates are combined into two groups that differ in the percentage of bilayer graphene inclusions in the graphene monolayer. The presence of bilayer inclusions of different sizes is a typical feature of graphene films obtained by thermal decomposition of silicon carbide [16].

The presence of graphene films on the SiC surface was confirmed by Raman spectroscopy, as shown in Figure 2. Before and after functionalization, the Raman spectra of the chips in the region of 1300-3000 cm −1 were dominated by sharp G and 2D lines characteristic of monolayer graphene [21] and wide asymmetric bands centered at approximately 1380 and 1550 cm −1 corresponding to the buffer layer [22]. After functionalization, a new D line at ~1350 cm −1 appeared in the spectra, which is attributed to the appearance of defects in the graphene crystal lattice. We did not observe any significant shift or broadening of the G and 2D lines after the functionalization of the chips.

Analysis of the distribution of the full width at half-maximum of the 2D line (FWHM2D) before and after functionalization allowed us to map the mono- and bilayer graphene areas on the surface of the chips. Figure 2b,c demonstrates the difference in graphene film thickness between samples from Groups 1 and 2 before functionalization. In the areas with FWHM2D > 40 cm −1, this line has an asymmetric contour corresponding to the envelope of four Lorentzians, which is a fingerprint of bilayer graphene [23]. One can see that the samples from Group 1 have a relatively low share of bilayer inclusions (~5%), while for samples from Group 2 the share of bilayer inclusions is significantly higher (~30%). After functionalization, the shape and size of the bilayer graphene inclusions did not change. In the surface potential maps (Figure 2d), the bilayer inclusions appear as regions of higher potential values.
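The bilayer shares quoted above can be estimated from a Raman map by simple thresholding; a minimal sketch, assuming the per-pixel FWHM(2D) values are available as a numeric vector:

```r
# Classify Raman map pixels as bilayer when FWHM(2D) exceeds ~40 cm^-1 and
# report the bilayer share. 'fwhm_2d' is a hypothetical per-pixel vector.
bilayer_share <- function(fwhm_2d, threshold = 40) {
  mean(fwhm_2d > threshold)   # e.g., ~0.05 for Group 1, ~0.30 for Group 2
}
```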
The low-frequency noise spectra of the graphene chips are shown in Figure 3. For all chips, a spectral dependence close to S U ~ 1/f was observed before and after functionalization. This type of dependence is typical of graphene and indicates that the noise is determined not by uniformly distributed single defects in the material but by a system of defects [24,25]. A higher noise level indicates a greater level of defectiveness in the material [26].
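The 1/f character of a measured spectrum can be verified by fitting a power law on log-log axes; a minimal sketch with synthetic data (the variable names are hypothetical):

```r
# Synthetic ~1/f spectrum over roughly the measured frequency range.
f   <- 10^seq(0, 4.7, length.out = 60)        # 1 Hz to ~50 kHz
s_u <- 1e-12 / f * 10^rnorm(60, sd = 0.05)    # synthetic PSD with scatter

fit <- lm(log10(s_u) ~ log10(f))
coef(fit)[2]                                  # slope near -1 indicates S_U ~ 1/f
```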
The resistance of the chips in Group 2 is noticeably lower than that of Group 1 before functionalization (Table 1). An opposite trend was observed in the changes after functionalization: the resistance of the chips in Group 1 decreased significantly while S U grew, whereas the chips from Group 2 showed no significant changes in these parameters. The changes in the Raman spectra for the chips in Group 1 are similar to those presented in [7], while there is no noticeable change in the spectra for the chips in Group 2. These results allow us to assume that one of the reasons behind the observed phenomenon is the significant difference in the amount of bilayer graphene inclusions in the chips of these two groups.
We employed AFM to clarify the behavior of the bilayer graphene inclusions. Studying the surface morphology of the graphene chips of these two groups revealed significant differences in the nature of their nano-arrangement both before and after functionalization, as illustrated in Figures 4-8. It should be noted that AFM images of the Analysis of the full width at half-maximum of the 2D line (FWHM2D) distribution before and after functionalization allowed us to analyze the distribution of mono-and bilayer graphene areas on the surface of the chips. Figure 2b,c demonstrates the difference in graphene film thickness between samples from Group 1 and 2 before functionalization. In the areas with FWHM2D > 40 cm −1 , this line has an asymmetric contour corresponding to the envelope of four Lorentzians, which is a fingerprint of bilayer graphene [23]. One can see that the samples from Group 1 have relatively low share of bilayer inclusions (~5%), while in case of samples from Group 2, the share of bilayer inclusions is significantly higher (~30%). After functionalization, the shape and size of bilayer graphene inclusions did not change. In the surface potential maps (Figure 2d), the bilayer inclusions appear as regions of higher potential values.
The low-frequency noise spectra of graphene chips are shown in Figure 3. For all chips, a spectral dependence close to S U~1 /f was observed before and after functionalization. This type of dependence is typical in graphene. The dependence indicates that the noise is determined not by uniformly distributed single defects in the material but by a system of defects [24,25]. A higher noise indicates a greater level of defectiveness in the material [26]. All the chips exhibited a honeycomb like structure typical of graphene films formed on the Si-face of silicon carbide [27]. Local bright areas of small lateral sizes and heights distributed non-uniformly over the graphene surface were observed on all chips ( Figures 4 and 5). These are similar to large inclusions identified as a bilayer graphene in Figure 2 and are only just visible in small AFM scans 2 × 2 μm 2 . The Group 2 chips had a rather The resistance of chips in Group 2 is noticeably lower than that in Group 1 before functionalization (Table 1). An opposite trend in the properties of the chips from Groups 1 and 2 was observed in changes after functionalization. The resistance of the chips in Group 1 decreased significantly. However, S U grew. Meanwhile, the chips from Group 2 showed no significant changes in these parameters. The changes in Raman spectra for the chips in Group 1 are similar to those presented in [7], while there is no noticeable change in the spectra for the chips in Group 2. These results allow us to assume that one of the reasons behind the observed phenomenon is the significant difference in the amount of bilayer graphene inclusions in the chips of these two groups.
We employed AFM to clarify the behavior of the bilayer graphene inclusions. Studying the surface morphology of the graphene chips of these two groups revealed significant differences in the nature of their nano-arrangement both before and after functionalization, as illustrated in Figures 4-8. All the chips exhibited a honeycomb-like structure typical of graphene films formed on the Si-face of silicon carbide [27]. Local bright areas of small lateral sizes and heights, distributed non-uniformly over the graphene surface, were observed on all chips (Figures 4 and 5). These are similar to the large inclusions identified as bilayer graphene in Figure 2 and are only just visible in small AFM scans of 2 × 2 µm 2. The Group 2 chips had a rather higher AFM profile, up to 10-15 nm, and the inclusions occupied a larger area (Figure 5) than in the Group 1 chips (Figure 6). The height of the AFM profile in the bright areas in Figure 6 is less than 6 nm.

There is a significant decrease in the sizes of the bright areas in the chips of Group 2 (Figure 6) and Group 1 (Figure 7) after functionalization. Meanwhile, the sizes of the dark regions increase up to 500 nm. The dark regions resemble shallow in-plane pores or large honeycomb-like defects, non-uniformly distributed over the chip area. Thus, we can conclude that functionalization changed the nano-arrangement of the graphene, making it less uniform.
AFM profiles depicting the surface of the chips before and after functionalization in Figure 8 visualize these changes and the differences between the features of the pristine graphene nano-arrangements in the Group 1 and 2 chips. It can be seen that the pristine graphene is more defective in the chips from Group 2 than in the chips from Group 1 (Figure 8a). The density of honeycomb-like defects in the graphene of the Group 2 chips is higher, which correlates with their higher low-frequency noise (Table 1).

Functionalization results in the occurrence of honeycomb-like defects with nano-steps (Figure 8b-d), which are deeper in the chips from Group 2 (Figure 8b). We assume that these changes in the graphene nano-arrangement may lead to an increase in conductivity, similar to the case when conductivity increases in the process of porous graphene creation [28]. Moreover, in this work we used a two-stage functionalization process: the first stage was the formation of covalent bonds during the deposition of nitrophenyl groups, and the second stage was the subsequent reduction of the nitrophenyl groups to phenylamine groups by cyclic voltammetry. Details on the graphene functionalization can be found elsewhere [20].

The covalent binding of nitrophenyls to graphene films is known to lead to a remarkable decrease in conductivity. This happens because of a reduction in graphene aromaticity due to the transformation of the hybridization of carbon atoms from sp 2 to sp 3. Nitrophenyl groups are acceptors that reduce the electronic density in graphene. The subsequent attachment of phenylamine groups (aminobenzene, C 6 H 5 NH 2) by cyclic voltammetry leads to decreasing resistance values, since aminophenyl groups have weaker acceptor properties than nitrophenyl groups.
Thus, graphene nano-arrangement, in addition to its functionalization, can contribute to a decrease in graphene resistance. All the obtained results confirm that functionalization is accompanied by an increase in graphene defectiveness due to the formation of large honeycomb-like defects up to 500 nm in plane. At the same time, small inclusions of bilayer graphene disappear or decrease noticeably. This phenomenon might be related to the reduction properties of phenylamine groups. These results allow us to suggest that shallow bilayer inclusions contain nonequilibrium phases of weakly oxidized graphene.
SEM images of the surface morphology of a graphene chip after functionalization, immobilization of influenza B antibodies, and the antibody-virus B antigen immune reaction are presented in Figure 9. The aggregation of antibodies and antigens and their non-uniform distribution over the graphene surface are observed (Figure 9). These phenomena are discussed later using the results of the AFM studies.

Figure 9 (caption fragment). Inset 1 (left, top) shows a 5 µm × 5 µm section with boundaries of functionalized graphene (dark areas) and non-functionalized graphene (light area in the middle). Inset 2 (right, bottom) shows a 2.7 µm × 4.2 µm region with virus aggregates, located in a recess with a facet close to a hexagonal structure.
Inset 1 in Figure 9 shows that functionalization changes the emission properties of the graphene films. This can be identified by the change in contrast between functionalized (dark areas) and unfunctionalized (gray areas) regions of the graphene.

The further stages of biosensor fabrication (immobilization of antibodies of influenza A and B viruses, the antibody-influenza immune reaction, and detection of influenza viruses) were studied mostly on the chips of Group 1. Significant changes in resistance and low-frequency noise comparable to the changes after functionalization were not observed after these stages. As a result, we chose probe methods to study the chips after each of these stages.
The further stages of biosensor fabrication (immobilization of antibodies of influenza A and B viruses, antibody-influenza immune reaction, and detection of influenza viruses) were studied mostly on the chips of Group 1. Significant changes in resistance and lowfrequency noise, which are comparable to changes after functionalization, were not Functionalization results in the occurrence of honeycomb-like defects with nano-steps (Figure 8b-d), which are deeper in the chips from Group 2 (Figure 8b). We assume that these changes in the graphene nano-arrangement may lead to an increase in conductivity similar to the case when conductivity increases in the process of porous graphene creation [28]. Moreover, in this work, we used a two-stage functionalization process. The first stage was the formation of covalent bonds during the deposition of nitrophenyl groups. The second stage was the subsequent reduction of the nitrophenyl groups to phenylamine groups by a method of cyclic voltammetry. Details on graphene functionalization can be found elsewhere [20].
The covalent binding of nitrophenyls to graphene films is known to lead to a remarkable decrease in conductivity. This happens because of a reduction in graphene aromaticity due to the transformation in hybridization of carbon atoms from sp 2 to sp 3 . Nitrophenyl groups are acceptors that reduce electronic density in graphene. The attachment of phenylamine groups (aminobenzene, C 6 H 5 NH 2 ) by cyclic voltammetry leads to decreasing resistance values, since aminophenyl groups have weaker acceptor properties than nitrophenyl groups.
Thus, the graphene nano-arrangement, in addition to its functionalization, can contribute to a decrease in graphene resistance. All the obtained results confirm that functionalization is accompanied by an increase in graphene defectiveness due to the formation of large honeycomb-like defects up to 500 nm in plane. At the same time, small inclusions of bilayer graphene disappear or decrease noticeably. This phenomenon might be related to the reducing properties of phenylamine groups. These results allow us to suggest that the shallow bilayer inclusions contain nonequilibrium phases of weakly oxidized graphene.
SEM images of the surface morphology of a graphene chip after functionalization, immobilization of influenza B antibodies, and the antibody-virus B antigen immune reaction are presented in Figure 9. The aggregation of antibodies and antigens and their non-uniform distribution over the graphene surface are observed (Figure 9). These phenomena are discussed later using the results of the AFM studies.
Figure 9 caption: Inset 1 (left, top) shows a 5 µm × 5 µm section with boundaries of functionalized graphene (dark areas) and non-functionalized graphene (light area in the middle). Inset 2 (right, bottom) shows a 2.7 µm × 4.2 µm region with virus aggregates, located in a recess with a facet close to a hexagonal structure.
Inset 1 in Figure 9 shows that functionalization changes the emission properties of graphene films. This can be identified by the change in contrast between functionalized (dark areas) and unfunctionalized (gray areas) regions of the graphene.
The further stages of biosensor fabrication (immobilization of antibodies of influenza A and B viruses, the antibody-influenza immune reaction, and the detection of influenza viruses) were studied mostly on the chips of Group 1. Significant changes in resistance and low-frequency noise, comparable to those observed after functionalization, were not observed after these stages. As a result, we chose probe methods to study the chips after each of these stages.
Study of Immobilization of Antibodies of Influenza A and B Viruses, Antibody-Influenza Immune Reaction, and Detection of Influenza Viruses by Biosensors Based on Graphene Chips
The conventionally accepted method described in Section 2.3 was used to detect the influenza viruses (antigens) [2]. Influenza virus antigens were diluted in PBS. In the experiment, the current passing through the chip was measured versus the concentration of antigens of influenza A viruses in PBS over the concentration range of 10⁻¹⁶ g/mL to 10⁻¹⁰ g/mL. Figure 10 shows an almost monotonic increase in the magnitude of the response, which is approximated by a logarithmic function with the parameter R² close to 1 (0.96) in chips from Group 1, which have a lower concentration of honeycomb-like defects. A similar dependence was observed when the response of a graphene-based biosensor in contact with solutions of egg albumin in PBS was studied [20,29]. It should be noted that the weak logarithmic dependence of the detected signal versus analyte concentration presented in Figure 10 is closer to linear than those observed in other studies concerning viral detection [1,7,12]. The reasons behind the weak concentration dependence have yet to be clarified. The biosensor's response versus the virus concentration in chips from Group 2 is strongly nonlinear.
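This logarithmic calibration can be illustrated numerically. The following minimal sketch (not the authors' analysis code; the response values are hypothetical) fits a model R(C) = a + b·log10(C) over the stated concentration range and reports the coefficient of determination, the quantity quoted as R² ≈ 0.96 for the Group 1 chips:

# Minimal sketch (hypothetical data, not the measured responses):
# least-squares fit of a logarithmic calibration curve R(C) = a + b*log10(C)
# and its coefficient of determination R^2.
import numpy as np

conc = np.array([1e-16, 1e-15, 1e-14, 1e-13, 1e-12, 1e-11, 1e-10])  # g/mL
resp = np.array([0.05, 0.14, 0.22, 0.33, 0.40, 0.51, 0.58])         # a.u.

x = np.log10(conc)
b, a = np.polyfit(x, resp, 1)        # slope b, intercept a
pred = a + b * x

ss_res = np.sum((resp - pred) ** 2)                # residual sum of squares
ss_tot = np.sum((resp - resp.mean()) ** 2)         # total sum of squares
r2 = 1.0 - ss_res / ss_tot

print(f"R(C) = {a:.3f} + {b:.3f}*log10(C), R^2 = {r2:.3f}")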
We assumed that aggregation and the graphene nano-arrangement might influence the trend of the concentration dependence of the detected signal. Therefore, we studied AFM profiles of functionalized graphene surfaces after the immobilization of influenza virus antibodies and after antibody-antigen influenza virus immune reactions. Figure 11 shows alterations in the AFM profile of the chips in accordance with the graphene surface treatment stages. The maximum magnitude of the AFM profile of the Group 1 chips did not exceed 6 nm both before and after functionalization. The profile changes after all stages of graphene surface treatment are shown in Figure 11a (curves 1, 2, and 3). The features of the surface relief after graphene functionalization (curve 1) and after the antibody-influenza virus immune reaction (curve 3) are indistinguishable on the same axis scales. Therefore, the AFM profiles after these two stages are shown with a larger scale for the y-axis in Figure 11b and with larger scales for both the x- and y-axes in Figure 11c.

Probe methods revealed the aggregation of antibodies and their non-uniform distribution over the graphene surface of the chips (Figure 12); the same was revealed for the antigens (Figures 9 and 13a). For the antibodies, the maximum magnitude of the AFM profile was observed in clusters, which means that aggregation occurred both laterally and vertically.
Because of this aggregation, it was difficult to estimate the lateral dimensions of a single antibody. Its vertical dimension was about 15-20 nm, while in clusters it was more than 20-25 nm, as can be seen in Figure 12. A wide variety of sizes of virus aggregates and their inhomogeneous distribution over the chip area can be observed in the SEM images in Figures 9 and 13a, and in the AFM image in Figure 13b. It should be noted that the maximum sizes of the aggregates obtained by these two methods correlate well.
The maximum lateral dimension of the antigen aggregates reached 5 µm and their height reached up to 250-300 nm. An enlarged image of one of the aggregates with a lateral dimension up to 5 µm is shown in inset 2 in Figure 9. The appearance of this aggregate is similar to the SEM images of swine flu aggregates found elsewhere [30]. It was difficult to determine the exact lateral and vertical size of a single viral cell under aggregation conditions. However, it can be estimated as 50-100 nm.
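The magnitudes quoted here and above (the 6 nm pristine-surface profiles, the 15-25 nm antibody clusters, and the 250-300 nm antigen aggregates) can be read as peak-to-valley values of AFM line profiles. The following minimal sketch (hypothetical profile, not the measured data) shows how such a magnitude can be extracted after a standard linear flattening of a scan line:

# Minimal sketch (hypothetical data): peak-to-valley magnitude of an
# AFM line profile after removing the linear background tilt.
import numpy as np

x_um = np.linspace(0.0, 2.0, 400)   # scan position, micrometers
rng = np.random.default_rng(0)
# Hypothetical height profile in nm: gentle tilt + a nano-step + noise.
z_nm = 0.8 * x_um + 2.0 * (x_um > 1.0) + 0.3 * rng.standard_normal(x_um.size)

coeffs = np.polyfit(x_um, z_nm, 1)            # linear flattening
z_flat = z_nm - np.polyval(coeffs, x_um)

print(f"peak-to-valley magnitude: {z_flat.max() - z_flat.min():.2f} nm")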
The aggregation of antibodies and viruses seems to be one of the reasons for the weak dependence of the current passing through the chip (chip response) on the concentration of viruses in PBS solutions, as shown in Figure 10. The mechanism leading to the formation of aggregates requires additional research. Understanding this mechanism is especially important for the reliable detection of low concentrations of viruses at early stages of infection.
When the viral concentration is high and it is necessary to register the presence of the virus, aggregation becomes a positive factor. Recently, a hexagonal honeycomb-like structure was created artificially by etching or laser engraving [28]. Studies of the aggregates by probe methods have shown that their formation is often associated with the peculiarities of the honeycomb-like defects. The 2000 × 2000 nm² AFM scan images in Figure 9 reveal the features of this structure, which become apparent in the alternation of honeycomb-like defects, whose sizes vary from 100 nm to 500 nm, and light regions whose sizes are slightly larger than those of the honeycomb-like defects.
The shape of defects can be hexagonal. At the same time, light ball-like aggregates of antibodies are located either in the dark areas or at the borders of the light regions, i.e., on the nano-step (Figure 12a).
Inset 2 in Figure 9 shows a contrast SEM image of virus aggregates. It can be seen that the viruses are located in a nearly hexagonal-shaped deepening. Figure 14 shows SEM images of virus aggregates smaller than those in Figure 9 but also located in hexagonal-shaped deepenings. Thus, the features of the graphene nano-arrangement may be one of the reasons for the aggregation of viruses and antibodies.
Conclusions
The study has shown that a graphene monolayer with 5% bilayer inclusions, grown on SiC substrates, could be the basis for creating influenza virus biosensors. A set of diagnostic methods that includes SEM, AFM, Raman spectroscopy, and low-frequency noise measurements allows one to characterize the properties of graphene chips during the main stages of development. We found significant increases in the number of defects, as well as a growth in conductivity, in graphene chips after functionalization. We associate the observed phenomena with both the attachment of aminophenyl groups and changes in the nano-arrangement of the graphene. The changes in the graphene nano-arrangement are reflected, in particular, in the features of the surface morphology. Functionalization is accompanied by the formation of large honeycomb-like defects up to 500 nm in plane. At the same time, small inclusions of bilayer graphene disappear or decrease noticeably. An increase in the amount of bilayer or multilayer inclusions in graphene leads to a noticeable growth in the number of honeycomb-like defects. SEM and AFM measurements revealed that these defects facilitate the aggregation of antibodies and influenza viruses and result in a non-uniform distribution of the aggregated antibodies and influenza viruses over the functionalized graphene films.
The features of the graphene nano-arrangement affect the reliability of detecting extremely low concentrations of viruses during the early stages of disease. They may also be the cause of the observed weak nonlinear logarithmic dependence of the biosensor response versus virus concentration. A decrease in the concentration of honeycomb-like defects in graphene made it possible to achieve an almost linear-logarithmic dependence of the response of the graphene biosensor versus the virus concentration in the range from 10⁻¹⁶ g/mL to 10⁻¹⁰ g/mL in PBS.
Thus, the control of graphene nano-arrangement is important to reduce the effect of viral aggregation and to create extremely sensitive biosensors for influenza viruses.
Generalized geometries and kinematics for Quantum Gravity
Our proposal here is to set up the conceptual framework for an eventual Theory of Everything. We formulate the arena, the language, in which to build up any QG. In particular, we show how the objects of fundamental theories, such as p-branes (strings, loops and others), could be posed in this language.
Introduction
What are the quantum states of Quantum Gravity (QG)? The main purpose of this paper is to find them in the most natural way, in order to obtain a framework general enough to embody a complete theory of Nature, such that the states of the conjectured fundamental theories [1] be particular cases. Rephrasing: we wish to establish a "kinematics for QG".
Usually, the basis manifold in Quantum Field Theory (QFT) is assumed to be fixed: d-dimensional, differentiable, endowed with a metric tensor and a compatible covariant derivative.
On the other hand, there are various approaches to background-independent theories, which are more or less genuinely background independent (loops, strings, branes, posets, and so on [1]). Here, starting from very fundamental considerations, we present a unifying perspective on these, which actually generalizes to a general scheme for background independence.
In these usual approaches to QG, the manifold is thought of as the large-scale picture of more fundamental geometrical structures, which are some type of lower-dimensional manifold embedded in a given "ambient space"; they are located somewhere (thus, they appear not to be completely background-independent formulations).
Thus, the natural idea is to adopt the most general point of view; to assume that "the states of the spacetime are themselves manifolds, or collections of manifolds (multi-manifolds, M-M) 2 ", whose topology and dimension are in principle unrestricted. In a M-M state, for instance, each component could have different dimension and topological structure.
Furthermore, we shall assume that the physical fields live on (are defined on) these generalized geometries. This is in accordance with a conception of the spacetime where it is defined in terms of the phenomenology [2].
So to speak, in this work we propose a generalization of the concept of the background of a full field theory; in our treatment, the backgrounds are the spacetime states. This generalization is motivated by the need for a well-defined quantum theory of gravity. This clearly constitutes a formulation that is background independent by construction. Moreover, it is in agreement with other conceptual requirements [3].
This paper is organized according to the following outline: In Section 2, the main assumptions are established and the most general Hilbert space for QG is built; next, in Section 3, "embedding-type" structures, which are backgrounds embedded in another one, are described, and p-branes (including strings) and loops are proposed as examples. Questions related to dynamics for QG are also commented on.
Finally, our concluding remarks are collected in Section 4.
QFT and bases of backgrounds: main assumptions.
In a Hamiltonian Field Theory (FT), the set of degrees of freedom is given by a complete set of commuting observables (CSCO) (the fields). For example, in a Klein-Gordon Field Theory, the CSCO is the scalar field ρ(x), where x is an element of a spatial Cauchy surface Σ (∼ IR³); thus, the CSCO may be expressed by (Σ; ρ(x)).
In a classical theory, the most general CSCO is given by the background, which consists (in classical physics) of a differentiable basis manifold M with a metric g ab 3 , plus a collection of smooth fields (the metric is usually thought as other one) φ. The field dynamics is currently governed by the approach of gauge theories, while General Relativity (GR) describes the background.
If we restrict ourselves to globally hyperbolic space-times, the background structure is fully characterized by the geometry of a Cauchy's surface Σ 4 . Then, let us express by B the set of variables of the spatial geometry which characterizes its degrees of freedom (CSCO); for instance, topology, dimension, metric, connection, and so on, which a priori will be arbitrary 5 .
A certain ambiguity is unavoidable at this point until a classical description of GR is chosen.
Notice that, in the current formulations of GR, it is not possible to promote a background with defined metric and extrinsic curvature to be a QG state, since they are conjugate variables [4] and so they do not commute.
Recently [5], a Yang-Mills-type formulation of GR has been proposed for which this is different; the canonical degrees of freedom are SO(5) connections which contain information about the metric (vierbein) and its derivatives. The set of possible spatial backgrounds, characterized by the configuration set of these variables, can be promoted to be a "basis of the state space of QG" [6].
Remark: the most general CSCO in classical physics has the form (B; φ) (1). The set of CSCOs is the space of degrees of freedom, referred to as the SDF. When a field theory is quantized (QFT), the background structure is assumed fixed, and the theory is determined by a functional ψ[φ], the wave function. The set of fields φ(x), the SDF, constitutes a basis for the Hilbert space.
We shall follow the same path to quantize spacetime. Our main statement is that this space of backgrounds or generalized geometries constitutes a basis for the state space of QG.
Let β be the set of generalized backgrounds B 6, the SDF of the spatial geometry; then, we promote this to be a basis for the Hilbert space of Quantum Gravity (H_QG).
Definition 2.1: Let us motivate this assumption from another point of view, following the same strategy to quantize QFT from QM. The structure underneath our construction shall become clear.
Formal derivation:
Let us consider a set of N (spatial) points, B_N; B_N × IR is thought of as the basis manifold, and the field φ : B_N → IR^m describes the degrees of freedom at each point of B_N.
This system has N·m degrees of freedom; the quantization rules yield a Hilbert space of the form

H_{B_N} ≅ ⊗_{x∈B_N} L²(IR^m) ≅ L²(IR^{N·m}).  (2)

Now, consider another set, B′_{N′}. We shall have another Hilbert space with the same structure (2), H_{B′_{N′}}. Notice that, if a bijective map is possible between B_N and B′_{N′} (in the discrete case, if N = N′), then they are identified and characterized by N. So, we are able to build a total space,

H ∼ ⊕_N H_{B_N}.  (3)

Now, we follow the same procedure as in the QFT formulation of N quantum-mechanical systems; we may consider the continuum limit N → ∞, B_N → B: this structure is preserved, and H_B is the current Hilbert space for the field φ on a background B. Notice that the isomorphism condition above must be replaced by the corresponding equivalence relation: for example, if the metric is one of the "background variables", H_B is defined modulo isometries 7. Thus, we recover structure (1). For a multi-manifold state, the structure of the Hilbert space is such that H_B ∼ ⊗_i H_{B_i}. We define this special subset of quantum states, of Fock type, as

F_QG ∼ ⊕_n (⊗_{i=1..n} H_{B_i}),

where n = 0 is included and describes the vacuum state, which means a no-background state 8. Finally, let us remark that two inequivalent background states are said to be orthogonal, and then the scalar product in H_QG is naturally defined from the above considerations.
Finally, we take useful working assumptions in order to have well-defined states and operators in QFT. Let us denote by φ the CSCO of a FT. Then: Assumption I: every local 9 operator A of the theory (FT) can be written, on a fixed background, in terms of the corresponding background operator A_B. Assumption II: the QFT wave function is, on a fixed background, the background wave functional ψ_B[φ_B]. The subscript B denotes, in both cases, the object corresponding to the FT found on the fixed background B.
Up to now, we have established an important starting point for the fundamental kinematical structure of QG. Remarkably enough, this is background independent.
There are some questions in QG related to the particular dynamics chosen for GR. But their answers are absolutely independent of the ideas exposed above.
They are mainly: 1. What are the B-variables of the theory? 2. Can a full (spatial) background geometry (topological structure, metric, connection/covariant derivative) be a state of QG?
Finally, 3. What are the quantum equations for GR? Remarkably enough, notice that requirements for these answers may be deduced in order to have a well-defined QFT for φ.
Some quantities related to the extrinsic geometry of set(B) (extrinsic variables 10, but valued on this surface) could also be required in order to have a well-defined QFT.
p-brane states.
An essential property of the membrane fashion is the "ambient space" or target. From our point of view (background independent), we do not start off with this: the fundamental objects are the "generalized backgrounds". The main question of this section is: How can the notion of ambient space arise in this framework?
Now, we discuss the structure of H_B more deeply in order to show that this language can serve to describe membrane and string states. Let us denote by a/b the set of functions from the set b into the set a, and redefine the standard notation as follows: L²[C/K] := L²[K] 11. If the degrees of freedom of a full QFT (including gravity) are summarized by (B, φ), then we can write H_B as a product of a purely geometrical factor and H_φ, where H_φ denotes the usual Hilbert space in QFT for the fields φ. If φ : B → Φ 12, then its structure is L²[C/(Φ/B)] 13. Actually, as we have argued in the first section, if B is a one-component manifold 14, once the atlas is given (i.e., the set of the points of B is specified), we shall denote this by set(B); then, all the local B-variables together with the rest of the fields are fields on set(B), valued in some manifold F (the fiber). Thus, the full structure of the Hilbert space for a background can be written

H_B = L²[C/(F/set(B))].  (10)

Then, we define a sub-background or p-brane state (p is the spatial dimension) |B⟩ if there exists a decomposition F = A × Φ (A is called the ambient space or target) such that: (I) A ∈ β 15. (II) The full local structure of B (loc(B)) can be induced from that of A via an element x ∈ A/set(B); that is, loc(B) is the pull-back by the embedding field x of the local structure of A. In general, we can decompose the B-variables into set(B) and the induced local data; thus, |B⟩ = |set(B)⟩ |A, x⟩.
(III) For the structure (10), x is a physical degree of freedom; it is a dynamical field 16.
Let us observe that, for non-scalar embedding fields x, the target A is a so-called non-commutative geometry.
12 When there are no fields φ, this is a one-dimensional space, and the theory is an entirely geometrical one.
13 Recall that we build up the Hilbert space using the canonical rule: H = L 2 [C/(SDF )], where SDF is the space of degrees of freedom.
14 It is not a multi-manifold. 15 Then, A has geometry variables too. Besides that, there could be fields defined on F, in addition to the geometrical ones. 16 In particular, this must be quantized.
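For convenience, the definition above can be collected in display form; this is only a LaTeX restatement of expression (10) and of the state decomposition already given in the text, not new content:

% Full Hilbert space of a background B: all local B-variables and fields
% are fields on set(B) valued in the fiber F, with L^2[C/K] := L^2[K].
\mathcal{H}_B \;=\; L^2\!\left[\, C / \bigl( F/\mathrm{set}(B) \bigr) \right] \tag{10}
% Sub-background (p-brane) state, for a decomposition F = A \times \Phi
% with embedding field x \in A/\mathrm{set}(B):
\lvert B \rangle \;=\; \lvert \mathrm{set}(B) \rangle \, \lvert A,\, x \rangle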
An important remark: notice that the embedding in a "major" manifold is generically possible; but the main point which characterizes a "sub-background physical state" is that the embedding field x is a physical degree of freedom, and thus, in particular, it must be quantized.
This definition can be heuristically derived in a similar form to those of the first section, supposing subsets B N in a "major" continuum background M.
Notice that this definition contains, in some sense, the intuitive idea of these objects as "distributional" ones. Since A ∈ β, in principle we could formally write an alternative to expression (10), in which (F/A) is substituted by a more general set, the set of "generalized" functions on A (distributions), and L² is replaced by some corresponding Sobolev space. Then, it is clear that the "sub-branes" can be described distributionally, as intuitively expected; however, it seems more convenient to adopt structure (10), which is unique.
Intersecting sub-backgrounds: we say that B₁ and B₂ are intersecting if and only if the functions x₁ and x₂ coincide at some point. Then, a configuration |A; x₁, x₂⟩ gives the probability that B₁ and B₂ intersect at one point p := p₁ = p₂. As a precise example of this discussion, we write down the Hilbert space of the states of a string theory, where T is a Riemannian 10-dimensional manifold known as the target space (we are concerned with the bosonic sector). In the notation above, the Hilbert space of the states of string theory is H_string ∼ ⊕_n (⊗_{i=1..n} L²[C/(T/S¹)]). Notice that this has the same Fock-type structure (F_QG) defined in the first section.
Closed-string space is spanned by the basis elements |S⟩ = |S¹⟩ |T, x(s)⟩. Then, the wave functions are ⟨ψ|S⟩ = ψ[x(s)].
Loops:
Another non-perturbative approach to quantum gravity recently developed [8] is based on geometrical one-dimensional structures embedded in a three-dimensional one, namely loops. They can also be described in this language: the ambient space is a three-dimensional (compact) manifold, Σ, and the "backgrounds" correspond to S¹, or a collection of circles. Finally, the B-variables are induced by those of Σ via the embedding γ_µ : S¹ → Σ; these variables, first introduced by Ashtekar [7], are the 3-dimensional space vierbeins E or their canonical conjugate, the SU(2) connection A. Then, a loop state is |λ⟩ = |S¹⟩ |A; γ_µ⟩, which resembles (13). According to our construction, these are the elements which we need to express the Hilbert space; the loop-QG Hilbert space is built from them following the canonical rule above. A state in the A-basis then takes the form ψ_{γµ}[A].
Concluding remarks.
Our claim is that the present work gives the most general fashion for the space (or spacetime) at the observable level, directly or indirectly. That is to say: if there are more fundamental structures, they must make contact with observable ones, which we believe to be the ones proposed in this work. In a future work following up this one, we exploit the dynamics due to the new formulation of GR [5] and construct a particular, interesting full model for QG, which complements the main ideas of this work. The possibility of describing, as has been shown in the final examples, the kinematical objects of the more promising recent approaches to a fundamental theory, such as p-branes, strings, loops, and others, in terms of the concepts formulated in this work allows us to argue that they are simply subspaces of our H_QG.
Finally, we hope that a diagrammatics for evolving B-geometries will arise when intersections/interactions are considered, in a "Feynman-rule" picture where the time development of B would agree with a Feynman diagram [2]. The recent spin foam models seem to be examples of this; see, for instance, [9].
Nurses knowledge, attitudes, practices and familiarity regarding disaster and emergency preparedness – Saudi Arabia
The number of reported natural and human-made disasters continues to rise worldwide. Nurses comprise the highest percentage of the health and medical workforce, and must understand the national disaster management cycle. The present study aimed to examine nurses' knowledge, attitudes, practices and familiarity regarding disaster and emergency preparedness in Saudi Arabia. A cross-sectional descriptive study was conducted using five tools to obtain data from 252 registered bridging nursing students in two batches. The five tools collected demographic data; knowledge, attitude and practice measures of disaster preparedness; and an emergency preparedness information questionnaire to measure nurses' familiarity. The study findings revealed a mean age of 26.36 ± 1.82 years and a mean knowledge score of 21.2 ± 6.0. A highly significant difference was found for attitude and practice regarding disaster preparedness, as well as for familiarity with emergency preparedness (P ≤ .000). Based on the present study results, a lack of knowledge and practice, an acceptable level of attitude regarding disaster preparedness, and neutral familiarity with emergency preparedness were concluded. Thus, the integration of clearly titled theory and practice teaching courses about disaster and emergency preparedness into nursing curricula is crucially needed, provided with respect to nurses' learning/training preferences. Further follow-up research is necessary for maximizing nursing education and nursing quality in these critical areas as applied to healthcare and community settings.
Developing nations are particularly vulnerable due to the lack of funding for disaster preparedness and the impact of disasters on the health care, economic and social infrastructure of the affected region and, subsequently, the country. Disasters can change the face of a developing nation in seconds, wiping out years of development. Nations with greater resources are usually able to move more quickly to restore the infrastructure and economy. However, no matter where the disaster happens, the impact on the population and community can be devastating, leaving no nation, region or community immune. The number of reported natural and human-made disasters continues to rise worldwide [2].
Disaster preparedness, including risk assessment and multidisciplinary management strategies at all system levels, is critical to the delivery of effective responses to the short-, medium-, and long-term health needs of a disaster-stricken population. Meanwhile, emergency preparedness refers to the preparedness pyramid, which identifies planning, infrastructure, knowledge and capabilities, and training as the major components of maintaining a high level of preparedness [3,4]. Disasters can be divided into three categories: natural events, such as storms, drought, earthquakes and disease epidemics; technological events, such as explosions, structure collapse and radiological accidents; and civil/political events, such as strikes, terrorism and biological warfare [5].
A major concern facing public health nurses, especially in third-world communities, is the increase in vector-borne illnesses as a result of climatic changes. Malaria continues to be prevalent among communities in Africa and claims 1 in 5 children in Sub-Saharan Africa [6]. West Nile virus may occur in drought conditions, and natural predators of mosquitoes are greatly reduced during drought. Dengue and malaria thrive in wet conditions such as flooding and tropical rainy seasons [7,8].
Nursing interventions and management of vector-borne illnesses are also important in the aftermath of disasters, when waters become stagnant or gastrointestinal disease becomes prevalent due to unsanitary or over-crowded conditions that result from lack of electricity and/or plumbing. Advanced planning and mitigation are crucial for all countries and at all levels of government. It is especially imperative for healthcare providers to have a thorough knowledge of what lies ahead to take decisive action for training and mock drills [1,2,7].
The ICN Framework of Disaster Nursing Competencies recognized an accelerated and present need to build the capacities of nurses at all levels in order to "safeguard populations, limit injuries and deaths, and maintain health system functioning and community well-being, in the midst of continued health threats and disasters" [2].
The PAHO and WHO have issued a call for countries to undertake six core actions to make their health facilities safe during emergencies: assess the safety of hospitals, protect and train health workers for emergencies, plan for emergency response, design and build resilient hospitals, adopt national policies and programs for safe hospitals, and protect equipment, medicines and supplies. Nurses will be intimately involved with all of these goals [9,10].
Nurses comprise the highest percentage of the health and medical workforce, and must understand the national disaster management cycle. Without nursing integration at every phase, communities and clients lose a critical part of the prevention network, and the multidisciplinary response team loses a first-rate partner. Eleven million nurses worldwide form the backbone of the health care system; as the frontline health care workers in direct contact with the public, they contribute to the health of individuals, families, communities, and the globe [11].
Schools of nursing offer little or no information on disaster nursing, and there is a shortage of trained instructors/faculty [11]. Although training and education have long been accepted as integral, they have not been evidence-based or standardized; the need for effective, evidence-based disaster training of healthcare staff at all levels, including the development of standards and guidelines for training in the multidisciplinary health responses to major events, has been designated by the disaster response community as a high priority. The role of nurses during disasters has expanded from simply caring for the sick and injured to developing the ability to react to a disaster in terms of preparedness, mitigation, response, recovery and evaluation. Nurses need to have the knowledge and skills to employ an effective approach to respond to critical situations [12]. Thus, the present study aimed to examine nurses' knowledge, attitudes, practices and familiarity regarding disaster and emergency preparedness in Saudi Arabia.
Design
A cross-sectional descriptive design was utilized to conduct the present study.
Target Population
A total of 252 bridging nursing students from two registered batches were recruited throughout the academic period between March 2012 and 2014. The study participants were technical nurses who had worked in different healthcare settings for up to ten years and then returned to the nursing college to study four to five complementary semesters to obtain a bachelor's degree in nursing.
Tools of Data Collection
Five self-administered tools were used to collect data: 1. Demographic information, such as age, department, years of experience, and place of residence.
2. A knowledge questionnaire on disaster preparedness consisting of 47 objective questions (multiple-choice and true/false) derived from [13,14], covering disaster management and preparedness. Each question was scored as correct = 1 and incorrect = 0. 3. An attitudes checklist about disaster planning consisting of eleven items categorized as agree, disagree and unsure, adopted from [14] with some modifications to suit the present study.
4. A questionnaire on practices currently taking place, including questions about whether disaster drills are done at the healthcare setting, what type of drills are done, whether there is ongoing training and how often, and whether the disaster plan is updated and how often, developed by [14].
5. The Emergency Preparedness Information Questionnaire (EPIQ) was used. It was developed by Wisniewski et al. in 2004 [15] and has been employed in many studies to measure familiarity with emergency preparedness; the Wisconsin Nurses Association (WNA) gave permission to use the EPIQ tool. The questionnaire is composed of two sections. Section one concerns overall familiarity with emergency preparedness and includes 45 familiarity items in 11 subsets: familiarity with emergency preparedness terms and activities (7 questions), the incident command system (ICS) (8 questions), ethical issues in triage (4 questions), epidemiology and surveillance (4 questions), isolation/quarantine (2 questions), decontamination (3 questions), communication/connectivity (7 questions), psychological issues (4 questions), special populations (2 questions), accessing critical resources (3 questions), and overall familiarity. Items were rated on a Likert scale as "not at all familiar", "slightly familiar", "somewhat familiar", "moderately familiar", or "extremely familiar", with the scale ranging from extremely familiar = 1 to not at all familiar = 5. Section two covers learning/training preferences regarding training format and course length, and nurses' access to electronic training/educational information.
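As an illustration of how such Likert responses can be summarized per subset, the following minimal sketch (hypothetical response matrix and item layout, not the study data) computes mean familiarity scores using the stated coding of extremely familiar = 1 to not at all familiar = 5:

# Minimal sketch (hypothetical data): per-subset mean familiarity from
# EPIQ-style Likert responses coded 1 (extremely familiar) .. 5 (not at all).
import numpy as np

n_nurses, n_items = 252, 45
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(n_nurses, n_items))   # hypothetical

# Hypothetical item-index layout of three of the eleven subsets.
subsets = {
    "terms_and_activities": list(range(0, 7)),    # 7 questions
    "incident_command":     list(range(7, 15)),   # 8 questions
    "ethics_in_triage":     list(range(15, 19)),  # 4 questions
}

for name, idx in subsets.items():
    print(f"{name:22s} mean familiarity = {responses[:, idx].mean():.2f}")

print(f"overall familiarity    = {responses.mean():.2f}")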
Ethical Consideration
The purpose and nature of the present study were explained to the study participants, and their participation was appreciated. The target population was also informed that participants may not have received prior training or previous exposure to many of these activities; that the goal of this study was to assess information gaps and training needs and to identify any and all areas that need to be addressed; and that it was not a test and in no way reflected on them personally, so they should not worry if they lacked information or were unfamiliar with certain areas. Every study unit was informed that participation in the research was voluntary. Oral consent was obtained from participants. Confidentiality of the information was preserved through anonymity: names of the study units were not recorded on the questionnaires, and the questionnaires were not distributed by the researcher. The completed questionnaires were coded and no names were put on them. Name links to the codes were kept in a locked drawer. Information was kept confidential and only pooled data were to be presented.
Procedure
Official permission was obtained from the dean of the nursing faculty to conduct the present study. Two weeks before data collection, announcement posters were placed on the advertisement board with orientation details such as the purpose, nature, ethical considerations and procedure of the present study. After the study participants had finished their first academic semester (2013/2014), a number was assigned to each student and used during administration of the data collection tools. The tools were administered in two sessions, separated by a 30-minute break, in the college lecture rooms. In session one, the demographic data, knowledge, attitude and practice tools were distributed and completed. In the second session, the students were given the EPIQ. Each session took an average of 20-30 minutes to complete.
Statistical Design
The data were statistically analyzed using descriptive statistics such as frequency, percentage, mean and standard deviation, as well as inferential statistics (t-test), using SPSS 21; the significance level was set at 0.05.
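For illustration, the kind of descriptive and inferential analysis described can be reproduced in a few lines. The sketch below uses simulated scores (not the study data; the study used SPSS 21); note that a sample of n = 252 with mean 21.2 and SD 6.0, tested against a null mean of zero, gives t ≈ 56, of the same order as the t = 55.82 reported in the Results:

# Minimal sketch (simulated data): descriptive statistics and a
# one-sample t-test, mirroring the SPSS analysis described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = rng.normal(loc=21.2, scale=6.0, size=252).clip(0, 47)  # 0-47 scale

print(f"mean +/- SD: {scores.mean():.1f} +/- {scores.std(ddof=1):.1f}")

t, p = stats.ttest_1samp(scores, popmean=0.0)
print(f"t = {t:.2f}, P = {p:.3g}  (significant if P <= 0.05)")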
Section 1: Demographic Characteristics
Almost two thirds (67.1%) of the study participants were aged >25-30 years, with a mean age of 26.36 ± 1.82. The period of experience for three quarters of them (76.2%) ranged between <3 and 6 years. Nearly half of them (49.2%) were working in the critical care units and medical-surgical departments (25.8% and 23.4%, respectively) (Table 1).
Section 2: Knowledge
Table 2 shows that the knowledge level regarding disaster preparedness of two thirds of the study sample fell in the category of 13-25 points out of 47, with a mean score of 21.2 ± 6.0 and a highly significant difference (t = 55.82, P ≤ .000).
Section 3: Attitudes
Table 3 illustrates the percentages of agreement regarding disaster preparedness attitudes among the study participants, which were as follows: 69.8% agreed with the need for knowledge about disaster plans; 73% that management should be adequately prepared should a disaster occur; 57.9% that disaster planning is for all people in the healthcare setting; 77.8% that potential hazards likely to cause disaster should be identified and dealt with; 83.7% that training is necessary for all healthcare management; 82.2% that having a disaster plan is necessary; 73.4% that disaster plans need to be regularly updated; 47.6% that disasters are likely to happen in any healthcare setting; 60.3% that disaster management is for all of the healthcare team; 73.4% that disaster simulations should occur frequently in the hospital; and 79.8% that drills should be conducted in the hospital, with a highly significant difference (t = 78.979, P ≤ .000).
Section 4: Practices
Table 4 indicates that 48.8% of the study participants knew that disaster drills are done at their healthcare setting; 33.7% said they are not done and 17.5% did not know. 23.4% did not know the type of drills done; of the remaining 41.7%, they mentioned code blue and fire evacuation. 38.9% believed there is ongoing training at their healthcare setting; 31.7% stated that training was done yearly, 34.5% that it was not done, and 33.8% did not know. 31% said disaster plans are regularly updated, 29.8% said they are not regularly updated, and 39.2% did not know. 45% believed updating is done yearly and 55% did not know; 3% said it was done every 6 months, 1% said it is done every 3-6 months, and 10% said when the need arises. A highly significant difference in the practice level was found (t = 46.41, P ≤ .000).
Discussion
Increasingly frequent global disasters are posing threats to human health and life. The World Health Organization has called for countries to have detailed plans at all levels in order to be prepared for disasters that may arise. Nurses and midwives are frontline workers under stable conditions, but even more so during situations of emergencies and crises, working both in pre-hospital and in hospital settings. In order to contribute to saving lives and promoting health under such difficult conditions, they need to have the right competencies [2].
The present study aimed to investigate nurses' knowledge, attitude, practice and familiarity regarding disaster and emergency preparedness in Saudi Arabia. In an overall view of the present study findings, the mean age of the study participants was 26 years, the period of experience for three quarters of them ranged between <3 and 6 years, and more than one quarter of them were working in critical care units (Table 1). Regarding knowledge, the study participants showed a low level of knowledge of disaster preparedness (mean score 21.2), with highly significant differences (P ≤ .000) (Table 2), while their attitudes regarding disaster preparedness were acceptable, except for agreement that disasters are likely to happen in any healthcare setting, with a highly significant difference (P ≤ .000) (Table 3). Concerning practices, the study findings revealed that the practice of disaster preparedness was below average, with highly significant differences (P ≤ .000). Regarding familiarity with emergency preparedness, the overall familiarity score (2.87) was closest to neutral familiarity, with a highly significant difference (P ≤ 0.000) (Table 5). In Part II, concerning learning/training preferences, more than half of the study sample (55.2%) preferred face-to-face training (Figure 1), while the preferences for the amount of time spent in training were close across all options (Figure 2). In relation to electronic access to training/educational information, more support may be needed, especially for internet downloads at work and educational needs related to work (Figure 3). With respect to knowledge, attitude and practice (KAP), nurses' preparedness in emergency situations reflects the features and characteristics of critical situations; comprehensive knowledge, skills, proficiencies, and the necessary measures to respond to cases such as natural disasters, man-made events, and chemical, nuclear, biological and explosive incidents are included [14]. Nevertheless, there is no specified degree in crisis nursing, and there are only short-term educational courses [15].
Many KAP studies of nurses' disaster and emergency preparedness have been conducted. A study undertaken among nurses in Hong Kong concluded that nurses are not adequately prepared for disasters but are aware of the need for such preparation, and that disaster management training should be included in the basic education of nurses [16]. Disaster drills are a valuable means of training healthcare providers to respond to mass casualty incidents from acts of terrorism or public health crises [17]. Meanwhile, a cross-sectional descriptive study was conducted using a self-developed questionnaire to obtain data from 607 nurses working in four tertiary hospitals and two secondary hospitals in Fujian, China, in November 2011. Their findings showed that the nurses' average percentage scores on their responses to questions in the domains of knowledge, attitudes and practice were 66.33%, 68.87% and 67.60%, respectively. The results indicate that strategies need to be developed for nurses to improve their knowledge, attitudes and practice [18].
Another study was done to determine the KAP of emergency nurses and community health nurses towards disaster management. The researchers found that adequacy of knowledge and practice, and a positive attitude, were driven by involvement in disaster response and attendance at disaster-related education. They recommended that it is paramount for health administrators to conduct disaster-related education/training for front-liners, such as emergency and community health nurses, to improve their knowledge and practice towards disaster management [19].
Jordanian RNs' perceptions of their knowledge, skills, and preparedness for disaster management were also examined. The research findings indicated that knowledge, skills, and disaster preparedness need continual reinforcement to improve self-efficacy for disaster management, and that there was a need for a consistent national nursing curriculum for disaster preparedness and nationwide drills to increase disaster knowledge, skills, preparedness, and confidence [20]. A comparative study of four-year undergraduate nursing students was done to assess educational needs concerning disaster preparedness and response in Istanbul and Miyazaki. The study reported that most student nurses had no expectations of the skills that could be gained from a disaster preparedness and response course or culture-of-disaster lecture. Nursing students in both cities seem likely to participate in disaster preparedness and response courses/lectures. The researchers addressed the need to incorporate mass casualty care and disaster management skills into undergraduate curricula; core content for nursing curricula in both cities needs to be continued, and outcome competencies must be identified and validated through further research [21]. Most nurses receive little, if any, disaster preparedness education in nursing school. A 2003 survey of 2013 schools of nursing (348 responding) revealed that only 53% offered content in disaster preparedness, and a mean of 4 hours was devoted to this content. In general, nursing school faculties were inadequately prepared to teach disaster preparedness content [22].
Concerning the nurses' familiarity with emergency preparedness, the need for emergency preparedness training is well documented in the literature. A crucial first step toward designing well-written, comprehensive emergency preparedness curricula is to assess training needs. Additional studies using the revised EPIQ should provide data to assist nurse educators in the development of competency-based, relevant emergency preparedness curricula [23-27].
In a publication of the Joint Commission on Accreditation of Healthcare Organizations entitled "Emergency Management in Healthcare: An All-Hazards Approach," the JCAHO mandates that hospitals have an all-hazards emergency operations plan. Many national plans are based on the Hospital Incident Command System [28,29].
A study also utilized the EPIQ to assess emergency department staff knowledge of emergency preparedness. The results of the survey were found to be similar to the results of the first Wisconsin survey. Staff scored better in more general areas, such as triage and basic first aid, while scores were lower on specific questions, such as antidotes to biological agents. The study showed that there is a need for more educational programs in the area of emergency preparedness, and its results may be utilized to develop educational programs to further staff knowledge and better prepare them for disastrous events [13].
Identifying an effective means of teaching hospital disaster preparedness to hospital-based employees is an important task. However, the optimal strategy for implementing such education is still under debate. Efforts were also undertaken to determine the types of educational offerings and class-scheduling options most preferred by nurses [30].
Conclusion and Recommendation
Based on the present study results, it can be concluded that the levels of knowledge and practice were below average, with an acceptable level of attitude regarding disaster preparedness and neutral familiarity with emergency preparedness. Thus, the integration of clearly titled theory and practice teaching courses about disaster and emergency preparedness into nursing curricula is crucially needed, provided with respect to nurses' learning/training preferences. Further follow-up research is necessary for maximizing nursing education and nursing quality in these critical areas as applied to healthcare and community settings.
Figure 1. Percentages of nurses' learning/training format preferences.
Figure 2. Percentage agreement on the amount of time nurses prefer to spend in training. The study findings revealed the preferences, in descending order, as: participate in a 2-hour lecture or web-based training, attend an evening workshop, attend a one-day weekend workshop, attend a 2-3 day workshop/conference, and take a course for an academic quarter/semester (75%, 67.5%, 63.5%, 60.7%, and 59.1%, respectively).
Figure 3. Percentages of nurses' access to electronic training/educational information.
Table 3. Attitudes regarding disaster preparedness among the study participants (n = 252).
Table 4. Percentages of the study participants' practices regarding disaster preparedness (n = 252).
Table 5. Description of nurses' familiarity response rates for emergency preparedness (n = 252).
Congenital dysfibrinogenemia caused by γAla327Val mutation: structural abnormality of D region
ABSTRACT
Background: Congenital dysfibrinogenemia (CD) is a coagulation disorder caused by mutations in the fibrinogen genes, which result in abnormal fibrinogen function. However, the precise pathogenesis underlying it remains unclear. Methods: In this study, we identified a novel heterozygous mutation, γAla327Val, in an asymptomatic patient with CD. To investigate the pathogenesis, functional studies of fibrinogen isolated from the proband and her family members were performed, such as coagulation function, a fibrinogen aggregation test, and a fibrin clot lysis test. Coagulation was monitored using a thromboelastometer, and the fibrin clot network structure was observed by scanning electron microscopy. The effect of the mutation on fibrinogen structure and function was predicted by molecular modeling. Results: The fibrinogen activity concentration in patients with CD was significantly lower than that in healthy individuals, indicating that fibrinogen activity was low. The proband's fibrinogen activity concentration was 0.75 g/L (Clauss method) and the antigen concentration (immunoturbidimetric method) was 1.59 g/L (normal reference range for both parameters: 2.0-4.0 g/L). Thromboelastography showed that the K value of patients with CD was higher than that of healthy individuals, and the Angle values were decreased, indicating that the mutation impaired fibrinogen function. Compared to fibrinogen from healthy individuals, the fiber network structure of the proband was loose, the pore size was increased, and the number of fiber branch nodes was increased. Conclusions: The γAla327Val heterozygous missense mutation leads to changes in the structure of the fibrinogen D region and impairs the aggregation function of fibrinogen. This mutation is reported here for the first time.
Introduction
Fibrinogen (Fg), also known as coagulation factor I, is mainly produced by liver cells and is secreted into peripheral blood; its plasma concentration is 2-4 g/L [1]. Fibrinogen is involved in the formation of fibrin clots and in platelet aggregation, and plays an important role in hemostasis. The relative molecular weight of fibrinogen is 340 kD; it is composed of two identical subunits, each of which comprises three different polypeptide chains (Aα, Bβ, and γ), and the two subunits are linked by disulfide bonds [2]. Fibrinogen molecules have three major functional structures: the central nodule (E region), which contains the N-terminus of each chain, and two symmetric distal globular D regions, which contain the C-termini of the Bβ and γ chains. The three polypeptide chains Aα, Bβ, and γ are encoded by the FGA, FGB, and FGG genes (on chromosome 4), respectively [3].
Mutations in FGA, FGB, or FGG may lead to the development of congenital dysfibrinogenemia (CD).
CD is a congenital blood disease caused by defects in fibrinogen genes, which lead to abnormal structure and function of fibrinogen molecules, and may affect coagulation. The clinical manifestations of CD are diverse; most patients with CD are asymptomatic [4], but a small number of patients with CD have thrombosis [5,6] and bleeding events, and some patients have pulmonary hypertension and other symptoms [7]. According to the latest published study, bleeding was present in 42% of patients with dysfibrinogenemia [8].
In this study, we identified a novel heterozygous mutation, resulting in the amino acid substitution γAla327Val, in an asymptomatic patient with congenital dysfibrinogenemia. After approval by the hospital ethics committee and with the informed consent of the patients, functional studies of fibrinogen isolated from the proband and her family members were performed to study the molecular pathogenesis of CD caused by the γAla327Val heterozygous missense mutation.
Basic data of patients and routine examination of coagulation function
The proband was a 45-year-old female patient. When she came to the hospital for a routine physical examination during pregnancy, her coagulation function test result was abnormal: the fibrinogen activity concentration was 0.75 g/L (Clauss method) and the antigen concentration was 1.59 g/L (immunoturbidimetry; normal reference range for both parameters: 2.0-4.0 g/L). The fibrinogen activity concentration was significantly lower than the fibrinogen antigen concentration. Liver function, kidney function, and routine blood tests were normal, so the patient was initially diagnosed with dysfibrinogenemia. There had been no bleeding or thrombotic events in her daily life. The proband had been pregnant in 2006 and delivered by caesarean section; her fibrinogen level was already low at that time, but the caesarean section was uneventful. Four members of her family were examined for coagulation function and other parameters.
DNA was extracted from the peripheral blood samples of the proband and four members of her family. DNA sequencing was performed by Beijing Liuhe Huada Gene Technology Co., Ltd, Beijing, China.
Fibrinogen aggregation test
Venous blood (5 ml, in sodium citrate anticoagulant) was drawn from the patients and healthy individuals, and centrifuged at 4°C and 3000 rpm for 15 min to separate the plasma. Then, 1 ml sodium citrate (0.09 mol/L) and 666 μl saturated ammonium sulfate (pH 5.5) were added to 1 ml plasma, and the mixture was incubated at 25°C for 70 min. After centrifugation, the supernatant was discarded and fibrinogen was obtained. Fibrinogen was purified by precipitation with 20% saturated ammonium sulfate, and the pellet was washed 3 times with 25% saturated ammonium sulfate [9]. The reaction system of the fibrinogen aggregation test was as follows: fibrinogen at 0.5 mg/ml (final concentration), 5 ml of 2 mol/L NaCl, 5 ml of 2 mmol/L CaCl2, and 20 mmol/L HEPES buffer to make the volume up to 90 ml. Then, 10 μl of 10 U/ml thrombin was added and the optical density (OD) of the sample at 365 nm was continuously monitored (one reading every 20 s for 30 min) using a Multiskan GO multimode plate reader (Thermo Fisher). The aggregation curve was drawn from the time points and the corresponding OD values.
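As an illustration only (not the authors' analysis pipeline), the aggregation endpoint used below, the maximum OD, could be extracted from such a turbidity time series as sketched here; the sigmoid trace and all numbers are hypothetical stand-ins for real plate-reader readings.

```python
import numpy as np

interval_s = 20                                    # one reading every 20 s, as above
t = np.arange(0, 30 * 60 + interval_s, interval_s)
od = 0.42 / (1 + np.exp(-(t - 300) / 80))          # hypothetical sigmoid aggregation curve

max_od = od.max()                                  # endpoint compared across subjects
t_half = t[np.argmax(od >= 0.5 * max_od)]          # time to half-maximal turbidity
print(f"max OD = {max_od:.3f}, time to half-max = {t_half} s")
```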
Fibrin clot dissolution test
The fibrin clot dissolution test was performed by adding fibrinolytic enzyme and tPA to the fibrinogen aggregation test system to activate fibrin clot dissolution, and the OD of the reaction system was continuously measured to reflect the rate of fibrin clot dissolution. The reaction system was as follows: the final concentrations of fibrinogen, thrombin, plasminogen, tPA, CaCl2, and NaCl were 0.5 mg/ml, 0.5 U/ml, 0.12 U/ml, 0.1 mg/ml, 8 mmol/L, 0.12 mmol/L, and 20 mmol/L, respectively. Finally, 20 mmol/L HEPES was used to make up the volume of the reaction system to 200 ml. The OD values of the samples at 365 nm were continuously monitored with the Multiskan GO (Thermo Fisher, U.S.A.) for 30 min at 20 s intervals. The fibrin clot dissolution curve was drawn from the time points and the corresponding OD values.
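Similarly, here is a minimal sketch (my own illustration, not the study's code) of how the onset of clot dissolution, the time at which turbidity begins to fall, might be located in such an OD trace; the trace and the 95%-of-peak threshold are assumptions.

```python
import numpy as np

interval_s = 20
t = np.arange(0, 1801, interval_s)
# Hypothetical trace: the clot forms, then lyses from about 500 s onward
od = 0.4 / (1 + np.exp(-(t - 150) / 40)) * np.exp(-np.clip(t - 500, 0, None) / 400)

peak = np.argmax(od)
# Onset of lysis: first post-peak sample below 95% of the peak turbidity
onset = peak + np.argmax(od[peak:] < 0.95 * od[peak])
print(f"turbidity begins to fall at about {t[onset]} s")
```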
Thromboelastography
Whole blood samples (1 ml) from the study subjects were added to the reagent bottle (Shaanxi Yuze Yi Medical Technology Co., Ltd, Shaanxi, China). Then, 340 μl of the sample with 20 μl of 0.2 mol/L CaCl2 was added into the detection cup of the thromboelastography (TEG) instrument (Shaanxi Yuze Yi Medical Technology Co., Ltd, Shaanxi, China). As fibrinogen began to aggregate, the probe placed in the detection cup was subjected to the shear stress produced during the formation and dissolution of the blood clot. The generated signal was transmitted to the processor in the form of a current, and a TEG curve was obtained.
Scanning electron microscopy
For the proband and the healthy individuals, 33 μl of fibrinogen was used per sample, and thrombin was added at a final concentration of 2 U/ml. After incubation for 3 h at 37°C, the samples from the proband and healthy individuals were rinsed with PBS buffer (pH 7.4, 0.1 mol/L) and incubated with 3% glutaraldehyde for 2 h. The glutaraldehyde was discarded, and the samples were rinsed with PBS solution and dehydrated with alcohol. The ultrastructure of the fibrin clot was observed with a VEGA3 LMU scanning electron microscope (TESCAN, Czech Republic).
Modeling and analysis of amino acid mutation
The amino acid sequence of fibrinogen was obtained from the NCBI database, and a model of the fibrinogen structure was constructed using the SWISS-MODEL website (https://swissmodel.expasy.org/). Swiss-PdbViewer software was used to analyze the effect of the γAla327Val mutation on the function of fibrinogen.
Routine coagulation function tests
The laboratory examination results for the proband, including electrolytes, liver and kidney function, and routine blood tests, were normal. Examination of these parameters in the other family members showed that the coagulation results of the proband's sister, two brothers, and daughter were similar to those of the proband. The results of the coagulation function tests are given in Table 1. A single-base heterozygous missense mutation, FGG c.1058C>T, p.Ala327Val (Ala353Val in the mature protein chain), in exon 8 of the fibrinogen gene was found in the proband and her family members (Figure 1).
Fibrinogen aggregation test
Compared with the healthy individuals, the aggregation curves of fibrinogen isolated from the proband and her family members differed only slightly in shape, but the maximum OD values of the 5 patients (average 0.283) were lower than that of the healthy individual (0.422) (Figure 2).
Fibrin clot dissolution test
The clot turbidity began to decrease after 496 s on average (healthy individuals: after 660 s). There was no fibrinolysis delay or resistance during clot dissolution (Figure 3).
Thromboelastography
The K values (the time for the strength of the blood clot to reach 20 mm) of patients with CD were increased; the average was 3.7 min for the patients, whereas it was 2.3 min for healthy individuals. Further, the Angle values (the angle between the horizontal line and the tangent drawn from the point of clot formation to the maximum curvature of the trace) were decreased: the average was 52.8° for the patients, whereas it was 61.5° for the healthy individuals. The thromboelastography results are shown in Figure 4 and Table 2.
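As an illustration of these two parameters (not the TEG instrument's own algorithm), K and Angle could be computed from a digitized amplitude trace roughly as follows; the trace, the 2 mm clot-initiation threshold, and the mm-per-minute slope convention are my assumptions.

```python
import numpy as np

t = np.arange(0, 1200)                                     # seconds; hypothetical sampling
amp = 60 * (1 - np.exp(-np.clip(t - 240, 0, None) / 300))  # hypothetical amplitude (mm)

start = np.argmax(amp >= 2)                    # assumed clot-initiation point (2 mm)
k_min = (np.argmax(amp >= 20) - start) / 60    # K: minutes until strength reaches 20 mm

slope = amp[start + 60] - amp[start]           # slope in mm per minute over the first minute
angle = np.degrees(np.arctan(slope))           # Angle from the tangent at clot formation
print(f"K = {k_min:.1f} min, Angle = {angle:.1f} deg")
```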
Ultrastructure of fibrin clot
Figure 5(A) shows the fibrin clot structure of the healthy individual under the scanning electron microscope. The fiber filaments had a uniform diameter and few nodes, were arranged neatly and compactly, and interwove and overlapped to form a fiber network with a relatively dense spatial structure and small pores. Figure 5(B) shows the fibrin clot network structure of the proband: the fiber filaments were of varying thickness and arranged irregularly, and their ends curled into masses. The spatial structure of the fiber network was loose, the network pore size was increased, and there were more fiber branch nodes than in the healthy individual.
Modeling and analysis of amino acid mutations
In the molecular model of wild-type fibrinogen, Ala327 is located in an α-helix in the D region of the fibrinogen γ chain; the α-helix is crucial to maintaining protein stability. The γAla327 backbone and the γSer332 backbone form a hydrogen bond 3.19 Å long. When Ala is replaced by Val, the hydrogen bond between γVal327 and γSer332 does not change, but the side chain becomes longer and bulkier, which affects the spatial structure and alters the surrounding electrostatic forces, leading to a change in the structure of the α-helix and a weakening of protein stability (Figure 6).
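For readers who wish to check such distances themselves, here is a minimal sketch using Biopython (an illustration only, not the authors' Swiss-PdbViewer workflow); the PDB file name, the chain identifier, and the residue numbering are all hypothetical and would need to be verified against the actual structure used.

```python
from Bio.PDB import PDBParser

parser = PDBParser(QUIET=True)
# Hypothetical local file of a fibrinogen crystal structure
structure = parser.get_structure("fibrinogen", "fibrinogen.pdb")
gamma = structure[0]["C"]             # assumed chain ID for the gamma chain

# Backbone carbonyl O of residue 327 to backbone amide N of residue 332;
# Bio.PDB overloads '-' on Atom objects to return the distance in angstroms
d = gamma[327]["O"] - gamma[332]["N"]
print(f"O(327)-N(332) distance: {d:.2f} A")
```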
Discussion
Congenital dysfibrinogenemia (CD) is a hereditary blood disease caused by defects in the fibrinogen genes that lead to an abnormal structure and function of fibrinogen. Mutations in the three fibrinogen genes include missense mutations caused by single-base substitutions, frameshift mutations caused by single-base insertions or deletions, and mutations in regulatory regions. So far, over 450 mutations in the fibrinogen genes have been registered (www.geht.org).
Single amino acid mutations have been reported at 212 sites, of which mutations in the FGA gene are the most common, followed by mutations in FGG [10]. In fact, dysfibrinogenemia is almost always caused by heterozygous missense mutations (99.3%) in FGA and FGG [11]. In this study, genetic testing of the proband and her family members revealed a heterozygous missense mutation (c.1058C>T, p.Ala327Val) in exon 8 of the FGG gene, which resulted in the substitution of alanine with valine at position 327 of the γ chain. Structural modeling showed that this changed the spatial position and electrostatic forces within fibrinogen, resulting in an altered α-helix structure and affecting its stability.
Although the interaction between the D:E regions is the main driving force of fibrinogen aggregation, the D:D interaction is important for fibrin oligomer formation [12-14]. The role of the D:D interaction is to guide the connection of the D regions of two fibrinogen monomers. This D:D interaction assists γ chain polymerization, which is necessary for end-to-end fibrin binding. By projecting the D:D region and analyzing its crystal structure, it was found that the D:D polymerization interface is composed of hydrogen bonds between two adjacent fibrinogen molecules and that the D:D interaction is critical for the horizontal and linear aggregation of fibrin monomers [12,15]. The γC region (residues 143-411) forms a single globular domain (the D region) and plays a critical role in fibrinogen assembly and secretion [16]. The mutation site γAla327 investigated in this study is located in this region. In γAla327Val, alanine is replaced by valine, which affects the secondary structure and changes the spatial arrangement; this in turn impairs D:D binding between the γ chains of fibrinogen and hinders the formation of fibrils, thereby affecting the aggregation of fibrinogen. Our aggregation test and thromboelastography results confirmed that the proband and her family members had impaired fibrin aggregation. We investigated the ultrastructure of the proband's fibrin clots using scanning electron microscopy. The filaments differed in size from those seen in regular clot formation, their arrangement was irregular, and their ends were coiled. Furthermore, the fiber network was loose, its pores were enlarged, and there were many fiber nodes. Impaired D:D interaction between adjacent fibrinogen molecules frequently leads to lateral extension of the filaments, increasing the number of branch points and forming thinner filaments than in normal clot formation [17]. The substitution of γAla327 causes incorrect end-to-end orientation of fibrin monomers during aggregation, resulting in increased fiber branching. Moreover, normal fibrin monomers can combine with mutant monomers during polymerization, resulting in variable fiber diameters, an irregular arrangement, and a loose, large mesh-like structure of the fiber network. An analysis of mutant fibrin clot networks showed that fibrin clots with impaired D:D interaction (γArg275His, γArg275Cys, and γArg375Gly) had more pores in the network structure, and that the network comprised many conical fibers and pores [18]. Those results are similar to the findings of this study. Sugo et al. [18] suggested that the structure of fibrin clots might be related to the clinical CD phenotype and that the phenotype could be predicted by studying the ultrastructure of the fibrin clot. Studies have shown that the dissolution rate of fibrin clots depends on the density of their structure rather than the diameter of the fibrin fibers, and that loose networks composed of crude fibers decompose more easily than dense networks composed of fine fibers [19]. Patients with high fiber density in their fibrin network have a higher risk of thrombotic events than patients with normal density [20].
In patients with thrombotic CD, such as fibrinogen Paris V [21] (FGA c.1717C>T) and fibrinogen Perth [22] (FGA c.1541delC), fibrinolysis is prolonged. Fibrinogen Tokyo V was shown to have the amino acid substitution γAla327Thr and extra glycosylation at γAsn325, accounting for the recurrent embolic episodes in the patient. In the crystal structure of the D region of fibrinogen, γAla327 resides near the calcium-ion binding site and the 'a' polymerization pocket. Thus, the substitution γAla327Thr would likely interfere with calcium binding to this region and also affect the polymerization pocket. Furthermore, the extra glycosylation of γAsn325 may interfere with the D:D association indirectly. Thromboembolism in these patients may result from fibrinolysis-resistant fragile clots and the formation of a large amount of soluble fibrin [23]. Fibrinogen Melbourne (c.1055G>T, Cys326Phe) is a novel congenital hypodysfibrinogenemia caused by the γ326Cys→Phe substitution in the fibrinogen γ chain, presenting as massive splanchnic venous thrombosis. γ326Cys is located in the centre of an important tPA binding site, which possibly explains why the conformational change is associated with thrombosis [24]. In γAla327Val, alanine is replaced by valine, affecting the D:D association and hindering the formation of fibrils, thereby affecting the aggregation of fibrinogen.
The aggregation test and thromboelastography results in our proband and her family members revealed impaired fibrin aggregation. We did not find prolonged fibrinolysis or lysis-resistant clots in our patients with CD compared with healthy individuals, suggesting that the γAla327Val mutation does not affect fibrin clot dissolution.
In this study, the proband's fibrinogen activity concentration was significantly lower than her fibrinogen antigen concentration (0.75 g/L vs. 1.59 g/L; reference range: 2-4 g/L). Immunoturbidimetry is based on the specific binding of fibrinogen antibody to fibrinogen. As fibrinogen antigenic determinants are present in mutant fibrinogen molecules, the fibrinogen antigen concentration in the plasma of patients with CD is not reduced. Combining multiple detection methods for fibrinogen can reduce the rates of misdiagnosis and missed diagnosis of CD; our previous study showed that the combination of a PT-derived method and the Clauss method is helpful in screening for CD [25]. Although no bleeding or thrombotic events have occurred in the family investigated in this study, the possibility that thrombotic or hemorrhagic events may occur in the future, especially in the case of surgery, delivery, or trauma, cannot be excluded. Alving et al. [26] reported a case of CD with a homozygous αArg16His mutation (fibrinogen Giessen I); the patient had no bleeding or thrombotic events in daily life, but massive bleeding occurred during delivery. Therefore, the proband and her family still need follow-up to prevent thrombosis, bleeding, or other events.
A Comprehensive Study on Pedestrians' Evacuation
Human beings face threats from unexpected events, whose negative outcomes, injury and death, can be avoided through an adequate crisis evacuation plan. Consequently, various pedestrian evacuation models have been created. Moreover, applied research has examined these models across different applications, simulations, and conditions in order to arrive at operational models, and new models have been developed to support evacuation systems in residential places in case of unexpected events. This research undertakes a comprehensive, systematic survey of pedestrian evacuation models and methods, focusing on the applications' features, techniques, and implications, and then groups them under various types, for example, classical models, hybridized models, and generic models. The analysis assists scholars in this field of study in writing their forthcoming papers and can suggest a novel structure for intelligent simulation models with novel features.
Introduction
Some emergency situations cannot be controlled easily and obstruct the evacuation process because of the perplexity, dread, uncertainty, and unease of the affected population [1]. Many factors affect evacuation processes, such as the surroundings, how people react with each other, and various environmental conditions; this is why several evacuation approaches exist, such as protective, preventive, rescue, and constructive evacuation [2]. Evacuation involves the movement of crowds and is affected by the physical and social environment, such as a high degree of danger, pressure, and lack of information, i.e., a mixture of environmental hazards, population demographics, and attendee conduct [3]. A crowd is a gathering of a group of people [4] with many features; during simulation a number of potential behaviors are anticipated [5], and simulation is a way of predicting behavior by answering "what-if" questions [6]. Hence, crowd evacuation modeling is a way of mimicking the behavior of participants in such situations [4]. In the last 20 years much research has been done, and evacuation has been studied to lower damage, death, and injury in emergency situations involving pedestrians [7-13]. The purpose of these investigations is to improve the management of emergency situations, which is why many models have been developed to see how people react in different emergency scenarios [14-20]. One way of finding solutions is modeling; after careful examination, such a representation of the procedure is called a model [4]. The models can be divided into three groups, classical, hybridized, and generic, each of which is subdivided. These models have been used to explore crowd evacuation in normal and emergency situations.
This research has been conducted with three aims. The first is to gather a large number of papers on different applications of pedestrian evacuation. The second is to examine the characteristics, techniques, and implications of these various applications. The third is to draw on the first two points in order to design a novel, smart, and dependable model that simulates attendees' emergent emergency behaviors and evacuation efficacy when an area requires emergency evacuation. Thus, the current research sheds light on pedestrian evacuation both specifically and generally, and offers scholars a chance to gain relevant information with ease and to decide how their forthcoming papers should be directed.
The current paper contributes to the literature by drawing on existing studies so that scholars can utilize them in different cases. First, it can guide them in mastering their forthcoming studies. Second, it can assist researchers and design specialists, relying on results obtained through the investigation of past studies, in making better decisions when designing and implementing a novel smart simulation model comprising recent capabilities.
The following is the structure of this research: In the second section, 'evacuation models for the crowd' is presented. In the third section, the previous models and the relevant methods in various applications are demonstrated. The final section contains the conclusion and suggestions for further research.
Evacuation Models for Crowd
A crowd is a group of people gathered together [4]. It is the only condition in which reverse could be the alternative of panic rather than being touched [21]. The publication of La Psychologie des Foules by Le Bon in 1895 marked the beginning of researchers' concern with crowd dynamics [22]. Helbing, in 1991, presented an influential work, one of the early attempts to display the motion of pedestrians [23]. By 2001, scientists had created related models aimed at alleviating congestion and clogging phenomena based on empirical data [24]. Meanwhile, different fields studied crowd dynamics once pedestrian dynamics had been introduced [25]. Irregular movement, the effect of congestion, and the occurrence of self-organization were identified [26]. A number of applications have been simulated via such crowd evacuation models [25-30]. For modeling crowd evacuation from buildings, various methods have been developed, such as the cellular automata method, the social force method, the lattice gas method, and the agent-based method [31-33]. These methods, according to the level of detail with which individuals in the crowd are represented, are covered by three different models: the macroscopic, mesoscopic, and microscopic models [34,35]. In addition, different hybridized methods have been developed, such as zone-based, layer-based, and sequentially based methods [36]. A generic framework was then derived from the aforementioned hybridized models [34]. Figure 1 shows an overview of the developed models for crowd evacuation. The crowd models are categorized into three main types: the classical model, the hybridized model, and the generic model. Each model has its own approaches to investigating the flow of people and their behaviors during the evacuation process.
Models and Their Approaches in Different Applications
In this section, the models and approaches are described, with emphasis on the features, techniques, and implications of current simulation models. Table 1 summarizes this information for future use.
Classical Models
The classical model can be divided into three sub-models: macroscopic, mesoscopic, and microscopic. Each was designed with different approaches to capture how humans move and behave while moving from one place to another in the specified area. These models are described in the following subsections. Macroscopic Model: The macroscopic model is one of the classical models; with it, the flow of people is observed and individual features are neglected, since the crowd is treated as homogeneous. Figure 2 illustrates the macroscopic model, in which the crowd is described using fluid dynamics.
Fig. 2. Macroscopic model
In previous decades, pedestrian crowds were represented as having fluid-like characteristics. There are a number of connections between fluids and pedestrians; for instance, movement around obstructions follows "streamlines", so it is not surprising that early models of pedestrian dynamics, like those of vehicular dynamics, took their motivation from hydrodynamics or gas-kinetic theory [37-40]. Henderson argued that pedestrian crowds behave comparably to gases or fluids [41]. Bradley estimated that the Navier-Stokes equations governing fluid motion could be used to describe movement in crowds at high densities [42]. Helbing et al. summarized that at medium and high densities, the movement of pedestrian crowds shows some striking analogies with the motion of fluids; for example, the footprints of pedestrians in snow look like streamlines of fluids, and the streams of walkers through standing crowds are analogous to riverbeds [43]. Fluid-dynamic models describe how density and speed change over time through the use of partial differential equations [44].
In 2002, Hughes designed a continuum theory for the flow of pedestrians. The theory is intended to develop general methods for understanding the motion of large crowds, but it is also useful as a predictive tool; the behavior predicted by its equations of motion was compared with the observed response at the Jamarat Bridge near Mecca, Saudi Arabia [45]. In 2003, Hughes built a continuum model distinct from a classical fluid in light of the property that a crowd has the ability to think; interesting new physical ideas are associated with its investigation, and this property made many intriguing applications mathematically tractable. As examples, the theory was used to provide plausible assistance with the annual Muslim Hajj, to understand the Battle of Agincourt, and, surprisingly, to find barriers that actually increase the flow of pedestrians above the level attained when no barriers are present [46]. Moreover, in 2004, Colombo and Rosini presented a continuum model for pedestrian flow to represent typical features of this kind of flow, namely some effects of panic. In particular, this model describes the possible overcompressions in a crowd and the fall in outflow through a door when a panicking crowd jams. They considered the situation where a group of people needs to leave through a door: if the maximal outflow permitted by the door is low, the transition to panic in the crowd approaching the door may cause a dramatic reduction in the actual outflow, reducing the outflow even more [47].
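To make the continuum idea concrete, here is a minimal one-dimensional sketch of my own (not Hughes' two-dimensional theory): density evolves under the conservation law rho_t + (rho * v(rho))_x = 0 with a Greenshields-type speed-density relation, integrated with a Lax-Friedrichs step; all parameter values are illustrative assumptions.

```python
import numpy as np

nx, dx, dt = 200, 0.5, 0.05                    # cells, cell size (m), time step (s)
v_free, rho_max = 1.4, 5.0                     # free walking speed (m/s), jam density (1/m)
rho = np.where(np.arange(nx) < 50, 3.0, 0.0)   # crowd initially on the left

def flux(r):
    return r * v_free * (1 - r / rho_max)      # flow = density * speed(density)

for _ in range(400):                           # Lax-Friedrichs update of the conservation law
    f = flux(rho)
    rho[1:-1] = 0.5 * (rho[2:] + rho[:-2]) - dt / (2 * dx) * (f[2:] - f[:-2])

front = np.max(np.nonzero(rho > 0.05)) * dx
print(f"crowd front has advanced to about x = {front:.1f} m")
```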
Microscopic Model: The microscopic model is another of the classical models. Within it, everything is represented accurately, with full information about individuals and individual behavior; nevertheless, it is not ideal for examining huge numbers of attendees. Figure 3 illustrates the microscopic model. Several approaches are designed within the microscopic model, such as cellular automata, lattice gas, social force, agent-based, game-theoretic, and experimental approaches. Details about cellular automata and its applications are demonstrated below:
Cellular Automata
Cellular automata are mathematical idealizations of physical systems in which time and space are discrete and physical quantities take on a finite set of discrete values. A cellular automaton consists of a regular uniform lattice, usually infinite in extent, with a discrete variable at each position (cell). The state of a cellular automaton is completely specified by the values of the variables at all sites. A cellular automaton evolves in discrete time steps, with the variable at each site being affected by the values of the variables at the sites in its neighborhood at the previous time step; the neighborhood of a site (cell) is typically taken to comprise the site itself and its adjacent sites. The variables at all sites are updated together at the same time, based on the values of the variables in their neighborhood at the start of the previous step and according to a definite set of local rules. Cellular automata have been applied and reintroduced for a wide variety of purposes and referred to by a variety of names, including tessellation automata, homogeneous structures, cellular structures, and iterative arrays [48]. Von Neumann and Ulam introduced cellular automata first, under the name of cellular spaces, as possible idealizations of biological systems (Von Neumann, 1963, 1966), with a particular interest in modeling biological self-reproduction. In the last two decades, cellular automata models have been created to consider evacuating groups of individuals under different circumstances. These models can be categorized into two groups, the first of which depends on the interactions between situations and walkers. For example, in 2002, Perez et al. illustrated a cellular automata model to study the exit dynamics of pedestrians distributed within a single room and intent on leaving through the exit at the earliest possible time. The possible directions of pedestrian movement were the cardinal directions (forward, backward, left, and right), depending on which neighboring cells were empty, with constraints set by the pedestrians' physical relation to their neighbors and movement in conformity with ordinary rules. This investigation reproduced the arching behavior caused by the jamming effect at the exits. Moreover, various features, such as bursts of flow and disorderly interference, were observed in the simulated exit throughput. Furthermore, exit widths that allow several pedestrians to pass at the same time caused pedestrians to leave the room in bursts of various sizes [49].
In 2002, Kirchner and Schadschneider demonstrated a simulation of the evacuation process using a newly introduced cellular automaton model for pedestrian dynamics. The model employed the idea of chemotaxis, a bionics-inspired approach, to define communication between pedestrians. In this research, some relatively simple situations were examined, for instance, leaving a big room with one or two exits. It was found that changing the model's parameters can shift the behavior from regular to panic in various forms. Furthermore, it was discovered that achieving the best evacuation times requires an appropriate combination of herding behavior and use of exit familiarity [50]. In 2003, Kirchner et al. enhanced this cellular automaton model for pedestrian dynamics by adding a friction parameter, and investigated the effect of pedestrian conflicts. The friction parameter is applied to prevent conflicting participants from moving into the same cell in the same time step. Such conflicts are possible, and handling them is crucial for a precise definition of the dynamics. Besides, because the friction parameter creates local pressure among pedestrians, it gives the model a role in areas of great density. A large room with a single exit door was used for the evacuation simulation experiments. From the results, it was discovered that the friction parameter had both quantitative effects and caused qualitative changes in arching behavior [51].
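A minimal sketch of one such parallel update with a friction parameter mu, in the spirit of the rule just described (my simplification with a Manhattan-distance floor field and made-up positions, not Kirchner et al.'s published code): pedestrians propose the free neighboring cell nearest the exit; when several propose the same cell, with probability mu nobody moves, otherwise one random winner moves.

```python
import random

H = W = 10
exit_pos = (0, 5)
peds = {(7, 3), (7, 4), (8, 5)}                  # occupied cells (hypothetical)
mu = 0.3                                         # friction parameter

def dist(c):                                     # static floor field: Manhattan distance
    return abs(c[0] - exit_pos[0]) + abs(c[1] - exit_pos[1])

def neighbours(c):
    x, y = c
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < H and 0 <= b < W]

proposals = {}                                   # target cell -> pedestrians wanting it
for p in peds:
    free = [n for n in neighbours(p) if n not in peds]
    if free:
        proposals.setdefault(min(free, key=dist), []).append(p)

for target, movers in proposals.items():         # parallel update with conflicts
    if len(movers) > 1 and random.random() < mu:
        continue                                 # friction: the conflict blocks everyone
    winner = random.choice(movers)
    peds.remove(winner)
    if target != exit_pos:                       # a pedestrian reaching the exit leaves
        peds.add(target)

print("occupied cells after one step:", sorted(peds))
```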
In 2005, Yang et al. utilized a two-dimensional cellular automata model to mimic the evacuation process with kin behavior. Several characteristic phenomena seen in real evacuations, such as confusion, congestion, assembly, stepping back, and waiting, were reproduced, arising from differences in the building's construction, the organization of participants, route choice, and the interest in kin behavior. The simulation results showed that walking in a mass can sometimes be safe; with respect to kin behavior there was no difference between evacuation through one exit door and through two; evacuation efficiency was greatly affected by the number and size of the sub-groups; and waiting and stepping back decreased evacuation efficiency [52]. In 2005, Li et al. presented a distinctive procedure based on human behavior to make the rules for pedestrian movement more realistic; by assuming bi-directional walker motion in a corridor, bottom-up and top-down walker motion was illustrated, and the probability that walkers swap locations was identified [53].
In 2006, Zhao et al. offered a two-dimensional cellular automata model to imitate participants leaving through exit dynamics. This study emphasized two features: exit width and door separation. Several practical findings emerged: the width of the exit ought to exceed a critical value, and the door separation ought to be moderate, neither too large nor too small. Moreover, increasing one exit's width reduced the outflow per unit width, although the total outflow increased; the total outflow was a cumulative nonlinear function of the exit width. Furthermore, the exit width did not affect the optimal door separation, and the exit layout had better be balanced. These findings can improve the efficiency of building design [54]. In 2006, Georgoudas et al. utilized a two-dimensional cellular automata model and applied a computational-intelligence technique to examine pedestrian dynamics within a wide space. The recommended model differs from previous ones in treating a heterogeneous crowd: the heterogeneous parts respond to the instructions, behavior in the crowd is artificially arranged, and each pedestrian is made to attain one of the exits. Finally, the characteristics of the pedestrians' actions were used to examine different assumptions, such as pedestrian collisions during the evacuation process, collective effects, suspension issues, and fixed and movable obstructions [55].
In 2007, Varas et al. utilized a two-dimensional cellular automaton model to imitate the process of leaving a single- and a double-door classroom at full capacity. In this study, a floor field was assigned to each grid cell, accounting for the structure of the room and the distribution of obstacles. Moreover, the effect of panic was included as a parameter, a 5% probability of not moving. The model applied random selection to cope with conflicts, and through these characteristics the proposed model became non-deterministic. From the simulation results, the best door locations were clearly identified, and evacuation efficiency was not enhanced by substituting a double door with two separate doors. Finally, a simple scaling law for the evacuation time in terms of the number of persons and the exit width was suggested [56].
In 2007, Yamamoto et al. demonstrated a real-coded cellular automata (RCA) model, based on a real-coded lattice gas, to simulate evacuation from an area with one exit of varying width. A method of updating the pedestrians' positions was presented. In previously developed cellular automata models, movement was somewhat simplistic, whereas the RCA model reproduced straight-line movement and the avoidance of adverse directions; previously, observing the precise duration of an evacuation was difficult. In this developed model, pedestrians were allowed to move in their desired direction, and a realistic evacuation duration was measured. From the simulation results, congestion was observed at the exit of the big room, and the critical number of pedestrians that produces congestion was investigated.
The correlation between the number of people in the room and the total evacuation time was obtained, and two regions were tested. In region 1, although the number of pedestrians increased, the evacuation time remained steady. In contrast, in region 2, the people in the room needed more time to evacuate as the initial number of people increased [57].
In 2011, Alizadeh put forward a CA model to examine the evacuation process in a place furnished with obstructions, covering various configurations of the place, such as the locations of the exit and obstacles, the width of the exit, the lighting of the place, the psychological status of the evacuees, and the distribution of the gathered people; the influence of these factors on the evacuation process was clearly seen. A restaurant and a classroom were taken as cases for this model. The effects of the distribution of the evacuees and of the location and width of the door on the evacuation were discussed, and the model's output was compared with some existing models [58]. In 2014, Guo (Ren-Yong) made a model based on CA, with a finer discretization of space and higher walking speeds, to show the egress of pedestrians from a place with one exit door. Two factors affected the shape of the gathered crowd during the experiments: the walking speed and the discretization of space; the time intervals of people at different places and the efficiency of the evacuees were recorded. Moreover, the relation between the width and the flow of the exit was demonstrated through this model [59].
In 2015, Li and Han proposed a model for simulating pedestrian evacuation based on extended cellular automata to support various behavioral tendencies in people. Understanding and aggressiveness were the two social tendencies examined through this model. In the simulations, social constraints and pedestrian flow orders were confirmed. The results show that evacuation time does not increase with an individual's understanding and does not decrease when the individual's conduct is aggressive; indeed, when individuals avoid aggressiveness in their conduct, the best evacuation outcomes are recorded [60]. In 2018, Kontou et al. made a model of crowd evacuation on a cellular automata (CA) parallel computing tool to simulate and evaluate the behaviors and different features of pedestrians in the evacuation area, including people with disabilities. The simulation was carried out in a school attended by people with disabilities, an education center in Xanthi. During an earthquake drill, the school executed its security procedure and the total evacuation time was noted. Lastly, the suggested model was validated with the experimental data, and its implications suited the particular location [61].
Lattice Gas Models
Lattice gases, a special case of cellular automata, were promoted by Fredkin and Toffoli in 1982 and by Wolfram in 1983 [62-64]. In lattice gas models, the individual on the grid is treated as an active particle; probability and measurement help these models investigate the characteristics of crowds of individuals [44]. Individuals are placed on an L × W lattice, with one individual per site. Individuals move in a preferred direction by executing a biased random walk with no back steps, and only vacant sites may be entered [65].
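A minimal sketch of the biased random walk that drives such models (my reduction of the rule described above, with made-up probabilities): the walker steps toward the exit with probability given by the drift, sidesteps otherwise, and never steps back.

```python
import random

def step(x, y, drift=0.6, width=20):
    r = random.random()
    if r < drift:
        x += 1                        # forward, toward the exit
    elif r < drift + (1 - drift) / 2:
        y = min(y + 1, width - 1)     # sidestep up (walls bound the channel)
    else:
        y = max(y - 1, 0)             # sidestep down
    return x, y                       # note: no backward move is possible

pos = (0, 10)
for _ in range(50):
    pos = step(*pos)
print("walker position after 50 steps:", pos)
```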
In 2001, Tajima et al. used lattice gas models of biased random walkers to simulate pedestrian channel flow at a bottleneck under open boundaries. They observed the transition from free flow to choked flow above a critical entry density, the saturation of the flow rate with increasing density, and a scaling law connecting the flow rate and density [66]. In 2002, Itoh and Nagatani presented a lattice gas model of pedestrians to simulate the movement of a gathering of people between two halls through a door, and they identified an optimal admission time for moving the viewers. The admission time matters at the gate when visitors want to enter the hall: when the admission time drops below the optimal time, a jammed state occurs and the viewers cannot enter the inner hall, because the arriving pedestrians are blocked by those departing [67]. In 2003, Helbing et al. performed experiments and simulations of the evacuation process from a classroom. For the experiments, they used video cameras to record students leaving a classroom, and each student's evacuation time was recorded; for the simulations, they applied a lattice gas model of pedestrian flows and compared it with the experimental results. They found that the empirically observed irregular features were well reproduced in the simulated evacuation process, and that the initial location plays a major role in evacuation time; jamming or queuing at the exit greatly increases evacuation time [68].
In addition, the lattice gas model has been used to study crowd evacuation under various circumstances. For instance, in 2004, Nagai et al. presented experiments and simulations of the evacuation process from a room without visibility with a number of exits. In the experiments, blindfolded students imitated individuals in a room without visibility. A video camera recorded the evacuation of the disoriented students, and each student's path and evacuation time were evaluated. The students' observed behavior was then mimicked via an extended lattice gas model in which the blindfolded students were simulated by biased random walkers. Further, the mean evacuation time and the students' dynamic evacuation patterns were measured and compared with the experimental outputs, with particular emphasis on the distribution of evacuation times [69].
In 2005, Nagai et al. performed experiments and simulations of two types of counter-flow of students crawling on all fours in a channel with open boundaries. In the experiments, a video camera was used for recording, and each student's arrival time was calculated; the features of the counter-flow were elucidated experimentally.
This research compared pedestrian counter-flow with the counter-flow of students on all fours. Lattice gas simulation was applied to imitate the experiment, and biased random walkers were used to represent the crawling students [70]. In 2006, Song et al. built a new lattice gas model, the "multi-grid model", by introducing the force concept of the social force model into a lattice gas model. A finer lattice was used, allowing walkers to occupy more than one grid cell, and interaction rules between walkers and between walkers and structures were constructed. This new model was used to simulate walkers evacuating a big room with one exit door, and the effects of the cooperation force and the drift factor on evacuation time were evaluated; a joint constraint of the two factors on the evacuation process was discovered [71].
In 2007, Fukamachi and Nagatani studied the influence of sidling on pedestrian counter-flow and investigated the behavior of sidling walkers within the crowd. Individuals within a crowd switch to sideways movement in order to avoid congestion and obstacles. The influence of sidling was investigated with an enhanced biased random walk model. Three models were demonstrated: (1) face-to-face normal walking only; (2) sidelong walking only; (3) switching from normal walking to sideways near crowds and obstacles, and back to normal walking once the mass has passed. They observed that the normal gait was slower than the sidelong gait because of rising congestion, and a jamming state emerged at the transition points. In model 3, the jam cluster near the middle of the channel fluctuated strongly near the jamming transition point [72]. Various approaches have also been combined with lattice gas models in evacuation research.
In 2012, Guo et al. created a hybrid lattice gas model utilizing both cellular automata (CA) and the mobile lattice gas model (MLG model) to simulate evacuation processes during an emergency. Within this model, the concept of local population density was introduced and, together with an exit-crowding factor, applied in the update rule. Besides, the drift D, a significant parameter affecting the evacuation process, can be adjusted in light of the presented concept. A nonlinear function of the corresponding distance was used to define interactions, such as friction, attraction, and repulsion, between every two walkers and between walkers and building walls; the repulsion forces increase sharply as the spacing shrinks. Basic features of pedestrian evacuation, such as clogging and arching phenomena, could be obtained from numerical examples [73].
In 2013, Guo et al. offered an agent-based fire and pedestrian interaction (FPI) model to investigate the evacuation process during an emergency. It was assumed that the environmental temperature field affects the probabilities of the movement directions. Besides, the multi-grid method was applied to define the speed reduction caused by low visibility in the fire and pedestrian interaction (FPI). Hence, the authors created an extended heterogeneous lattice gas (E-HLG) model, in which an altitude factor defines the height of the lattice sites. Through the model and experiments, the characteristics of evacuation from a terraced classroom were studied. Outputs from the extended HLG model were close to the experiments. In addition, evacuation was governed by the altitude factor, and differing choices of evacuation paths together with avoidance of the high-temperature field caused local jamming and clogging [74].
In 2016, Song et al. created an evacuation scenario based on cellular automata and a lattice gas model to simulate selfless and selfish pedestrian behaviors and competitive behaviors during evacuation, and to present their influence on pedestrians' strategies. Furthermore, experiments on the width of the building's exit door were performed and analyzed. The simulation outputs demonstrated that selfish behavior caused more inefficiency and increased the evacuation duration; conversely, sympathy decreased the evacuation duration and increased collaboration. Finally, an important factor for the evacuation duration was the exit door width: when it was narrower than six cells on the 50 × 50 grid, evacuation time increased; conversely, the time decreased sharply as the width increased, although the gain became negligible once the exit door was very wide [75].
Social Force Model
In 1995, Helbing and Molnar proposed that pedestrian movements can be described by "social forces". The movement of a pedestrian is governed by the following main effects: first, the pedestrian wants to reach a certain destination; second, the pedestrian keeps a certain distance from other pedestrians; third, the pedestrian also keeps a certain distance from the borders of obstacles, such as walls; and fourth, the pedestrian is sometimes attracted by other persons or objects [76].
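A minimal sketch of these effects in the textbook form of the model (the circular-repulsion variant; the parameter values are made up, and attraction toward other persons, the fourth effect, is omitted for brevity):

```python
import numpy as np

tau, v0 = 0.5, 1.34                  # relaxation time (s), desired speed (m/s)
A, B = 2.0, 0.3                      # repulsion strength (m/s^2) and range (m)

def acceleration(pos, vel, goal, others):
    e = (goal - pos) / np.linalg.norm(goal - pos)   # unit vector toward the destination
    drive = (v0 * e - vel) / tau                    # relax toward the desired velocity
    rep = np.zeros(2)
    for q in others:                                # exponential repulsion from neighbours
        d = pos - q
        r = np.linalg.norm(d)
        rep += A * np.exp(-r / B) * d / r
    return drive + rep

pos, vel = np.array([0.0, 0.0]), np.array([0.5, 0.0])
goal, others = np.array([10.0, 0.0]), [np.array([1.0, 0.2])]
print("acceleration:", acceleration(pos, vel, goal, others))
```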
In 2000, Helbing et al. built an alternative social force model to simulate panic conditions; this model combines physical forces and socio-psychological effects with crowd behavior [77]. In 2002, Zheng et al. combined the social force model with a neural network to simulate the collective behaviors of walkers under different conditions [78]. In 2005, Parisi and Dorso applied the social force model presented by Helbing and colleagues in 2000 to investigate various levels of panic during evacuation from a room with a single exit door. In this research, the "faster is slower" effect contributed to changes in behavior and a rising chance of congestion delays, and this effect was strongly connected with blocking clusters, whereas the effect of exit size was discussed only briefly [79].
In 2006, Seyfried et al. applied an adaptation of the social force model presented by Helbing and colleagues in 1995 to examine how various approaches to the interactions between walkers, self-driven objects moving in a continuous space, affect the resulting velocity-density relation. They found that the usual shape of the fundamental diagram can be reproduced when the required personal space increases with the current speed, and that long-range forces influence the velocity-density relation [80]. Social force models have also been combined with other models to examine crowd evacuation. In 2006, Lin et al. suggested a framework for evacuating crowds during emergencies via a dynamic model, intended as a standard framework for future studies. Carrying out standard functions was the main task, with emphasis on framework stability and extensibility so that newly required tasks can be added in the future; the study also sought framework independence and better system execution. Experiments were executed in a particular building to evaluate the effectiveness of crowd evacuation, and in the results crowd behavior, building construction, and crowd density were the main influences [81]. Later, in 2007, the social force model was used to make the pedestrians dynamic in a study of 200 pedestrians evacuating a room during a panic situation. The parameter υd, denoting the pedestrians' desired speed, was used to control the panic level. The study confirmed the "faster is slower" effect across attempts with various forces: as υd increased, evacuation efficiency began to fall swiftly, the injury rate peaked, and the exponential mass distribution changed into a "U-shape" [82].
In 2008, Guo and Huang proposed a mobile lattice gas model that draws on the benefits of both the social force model and the lattice gas model. The model specifies the interactions between each pair of walkers and between walkers and walls through the direction and size of each step of motion. The outputs of this emergency simulation model demonstrated that it (1) reproduces basic features of walker evacuation, such as arching and clogging behavior, and yields realistic mean evacuation times, and (2) requires less computation than the social force model while estimating the evacuation duration more precisely [83].
In 2011, Okaya and Takahashi utilized a BDI model to simulate the communication behaviors that usually occur in crowds during evacuation. In this model, evacuation behaviors are influenced by interactions among people; Helbing's social force model was adapted to take the pedestrians' intentions into account. The simulation outputs demonstrated that interactions among pedestrians due to congestion made the evacuation take longer; evacuating family members together also increased the evacuation duration; and evacuation behaviors were influenced by the directing of the evacuation process [84]. In 2014, Hou et al. applied a modified social force model to simulate the influence of the number and location of evacuation guides on evacuation dynamics in rooms with partial visibility. In this model, trained guides can identify the exit locations precisely, and the others comply with the guides' locations and instructions. The experimental outputs reveal that, for one exit, one or two guides have a significant impact; conversely, with more than one exit, evacuation slows unless all exits are used adequately. Consequently, to increase the guides' effect on speeding evacuation, the number of evacuation guides should equal the number of exits, and the guides should be properly positioned within the multi-exit room [85].
In 2017, Han and Liu applied a modified social force model incorporating an information-transmission mechanism to simulate walkers' behaviors when most walkers are unfamiliar with the location during a disaster. The improved model considers collision avoidance and the decay of information. The difference from the previous model is that the adapted model defines how walkers find and select the correct direction, whereas the previous model was only applied to eliminate pedestrian collisions. The simulations demonstrated that, owing to the information-transmission mechanism, walkers could determine the right direction of motion, and walkers' real behavior in an emergency could be simulated. Furthermore, several outcomes for enhancing evacuation were obtained from the simulation. First, using all exit doors extensively reduces time and raises evacuation efficiency. Second, fully using the wider exits decreases the evacuation time and enhances evacuation efficiency. Third, at the start of evacuation, walkers should prefer wider, less crowded exits for their evacuation route. Lastly, guidance at the start of the evacuation process is vital [86].
Agent-Based Model
ABMs are computational models that assemble social structures from the "bottom-up" by representing people with virtual agents and letting emergent associations arise from the rules that govern interactions among agents [87]. Bonabeau maintained that crowd panic is an emergent phenomenon that grows out of relatively complex individual-level behavior and interactions among agents; the agent-based model (ABM) therefore appears perfectly suited to give significant insight into the mechanisms and preconditions for panic and crowd accidents caused by incoordination [88]. For nearly two decades, the ABM approach has been used to study crowd evacuation in different circumstances. ABMs are generally more computationally costly than methods such as cellular automata, social force, lattice gas, or fluid-dynamic models; on the other hand, dealing with heterogeneous populations is considerably easier, owing to ABMs' capacity to give every agent distinctive behavior [44].
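A minimal sketch of the bottom-up idea with heterogeneous agents (my own toy rule, not any published model): each agent carries its own speed and exit knowledge, and agents without knowledge herd toward the crowd.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    x: float
    speed: float            # heterogeneous desired speed (m/s)
    knows_exit: bool        # agents without knowledge herd toward the crowd

EXIT_X, DT = 50.0, 0.5
agents = [Agent(random.uniform(0, 20), random.uniform(0.8, 1.6),
                random.random() < 0.5) for _ in range(30)]

for _ in range(100):
    mean_x = sum(a.x for a in agents) / len(agents)
    for a in agents:
        target = EXIT_X if a.knows_exit else mean_x + 1.0   # crude herding rule
        a.x += a.speed * DT * (1 if target > a.x else -1)

print(f"agents within 1 m of the exit: {sum(abs(a.x - EXIT_X) < 1 for a in agents)}")
```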
In 2004, Zarboutis and Marmaras demonstrated a method of modeling and simulating a metro system with a fire in a tunnel. The ability of the simulation method to search for an effective rescue strategy was discussed. The system included various subsystems and formed a complex adaptive system. Hence, they established an agent-based simulation to support a suitably dynamic representation of the difficulties in the designed area, which (1) let the designer capture the critical dependencies and robustness of the system, (2) described the intended rescue plan via the characteristics determined by the designer, and (3) evaluated the plan's proficiency. The experimental outputs demonstrated different arrangements, with different situations for the metro personnel's activities [89]. In 2005, Braun et al., building on the social force model, exhibited an agent-based model to simulate the effects of various floors, walls, and obstructions on agents, and cooperation among agents in emergency conditions. In the model, an XML script was used to define the possibility of reproducing various scenarios. A danger incident was exhibited and visualized, a multifaceted environment was considered, and alarm signals were spread through the environment, which led to a reduction in the number of dead agents [90].
In 2006, Toyama et al. proposed a cellular-automata-based agent model representing various pedestrian attributes, such as knowledge of the room geometry, speed, gender, obstacle avoidance behavior, and herding behavior. They investigated how different layouts, different group sizes, and these attributes affect pedestrian dynamics and the macroscopic behavior of the system [91]. In 2007, Pelechano et al. introduced the High-Density Autonomous Crowds (HiDAC) model, a multi-agent model that combines psychological and geometrical rules with a parameterized social force model. It can be tuned to simulate different kinds of crowds, ranging from a high-density crowd under calm conditions (leaving a cinema after a movie) to extreme panic (fleeing a fire) [92].
In 2012, Simo et al. presented a model based heavily on the social force model, with agent movement governed by Newtonian dynamics, to simulate counter-flow situations in which agents try to avoid colliding with oncoming agents. In this model, agents observe the direction of motion of the agents ahead of them and choose their actions based on that observation [93]. Also in 2012, Ha and Lykotrafitis used a self-propelled particle system, with movement directed by the social force model, to study how conditions such as a complex building layout, the size of room exits, the size of the main exit, the friction coefficient, and the desired speed influence evacuation time and efficiency. The simulations showed that evacuation from a single room with a small exit and a high desired speed takes longer because of congestion; as the door widens, the congestion disappears. Friction had a significant effect on crowding: reducing the friction coefficient made the congestion dissipate quickly.
For a floor with two rooms, one main exit, and a hallway, however, a smaller room door can improve evacuation efficiency: fewer agents flow from the rooms into the hallway at once, so fewer agents converge on the main exit and can leave without serious congestion. Conversely, increasing the room door size and the agents' speed decreased the evacuation time from the rooms but caused serious congestion near the main exit [94]. In 2018, Poulos et al. used an agent-based evacuation model to simulate the staff and nearly 1500 children of a kindergarten-to-12th-grade school within a city-wide evacuation, examining the movements of the various agents. The simulation was validated against video footage of a real event; the error between the real and predicted results was only 7.6%. The authors concluded that mathematical evacuation models are a sound tool for addressing logistical issues in emergency planning [95].
Game Theoretic Models
If the interactive decision process of the evacuees is rational, a game-theoretic approach can be adopted to model the decision situation [96]. In a game, each evacuee assesses all available options and selects the alternative that maximizes his or her utility, and each evacuee's final payoff depends on the actions chosen by all evacuees. A game is specified by a set of players, the possible strategies of each player, and the set of all possible payoffs. For a single exit, the competitive behavior of pedestrians in an emergency evacuation can be interpreted in game-theoretic terms [97]. For multiple exits, Lo et al. developed a non-cooperative game-theoretic model of the dynamic exit-selection process. In a densely crowded space, each evacuee adjusts his or her evacuation plan according to the crowd's movement, the distance to each exit, environmental stimuli, and exit familiarity; evacuees observe the actions of other participants and the environment and respond to this information in choosing an egress path. The model examines how the rational, interacting behavior of the evacuees affects evacuation patterns and egress time in a space with several exits. A mixed strategy is used to represent the probability of choosing each exit, and the mixed-strategy Nash equilibrium of the game describes the equilibrium of the evacuees and the congestion conditions at the exits [96].
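A hedged sketch of the exit-choice game just described: evacuees split between two exits whose cost grows with congestion, and damped best-response updates drive the split toward the mixed-strategy equilibrium at which expected exit costs are equal. The cost function and parameters are illustrative assumptions, not the specific formulation of Lo et al. [96].

```python
# Two exits; the cost of an exit grows with the number of evacuees
# choosing it. At the mixed-strategy equilibrium the expected costs
# of the two exits are equal, so no evacuee benefits from switching.
N = 200                      # number of evacuees
dist = [10.0, 20.0]          # walking distance to exit 0 and exit 1 (m)
cap = [1.5, 3.0]             # exit service rates (persons per second)

def cost(exit_id, n_users):
    # Travel term plus a congestion (queuing) term; the form is assumed.
    return dist[exit_id] + n_users / cap[exit_id]

p = 0.5                      # probability that an evacuee picks exit 0
for _ in range(500):         # damped best-response dynamics
    c0 = cost(0, p * N)
    c1 = cost(1, (1 - p) * N)
    target = 1.0 if c0 < c1 else 0.0
    p += 0.05 * (target - p) # small steps; p settles near the equilibrium

print(f"equilibrium split: {p:.2f} / {1 - p:.2f}")
print(f"expected costs:    {cost(0, p * N):.1f} / {cost(1, (1 - p) * N):.1f}")
```

At convergence the two printed costs are nearly equal, which is exactly the equilibrium condition describing congestion at the exits in the model above.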
Approaches Based on Experiments with Animals
The use of animals is another approach for studying crowd evacuation. Experiments on real escape panic are difficult to conduct, particularly with humans, because of possible ethical and even legal concerns. The dynamics of evacuation under such constraints are not fully understood because studies have largely been confined to numerical simulations [98].
In 2003, Saloma et al. examined the dynamics of escape panic in mice fleeing a water pool to a dry platform through an exit door, showing how the architecture of the enclosure shaped the behavior of the panicking group. The results revealed that, for a basic sampling interval, the escape behavior agreed with the numerically predicted exponential and power-law frequency distributions of exit burst sizes, even over short time spans [98]. In 2005, Altshuler et al. used ants as a model of pedestrians to demonstrate herding behavior. In their experiments, ants were placed in a cell with two symmetrically located exits. Under normal conditions the ants used both exits roughly equally, but when panic was induced with a repellent fluid they strongly preferred one of the exits. The authors therefore modified an earlier theoretical model, in which herding linked to a panic parameter is the main element, to reproduce the observed escape dynamics in detail. The agreement between the experiments and the theoretical model suggests that humans and ants share some common features of social behavior when escaping under panic [99].
Mesoscopic Model: The mesoscopic model is one of the classical models; it studies the movement of large groups of people while still specifying some individual features. Figure 4 illustrates the mesoscopic model. Mesoscopic models combine cellular automata and gas kinetic approaches, which are described below:
Cellular Automata and Gas Kinetic
Combining cellular automata (CA) with the gas kinetic methodology yields a mesoscopic model that is used to observe the motion of individuals while also representing and simulating large groups [100]. A CA model divides space into a number of grid cells; each cell has neighbors and a discrete state [4]. To simulate the evacuation of agents, CA relies on separation and distribution and uses stochastic update rules, and it captures the collective behavior of the agents. A key advantage of the CA model is that it is well suited to representing pedestrian flow because of its simplicity, flexibility, and efficiency [101].
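The grid-and-state description can be made concrete with a minimal floor-field cellular automaton: each pedestrian proposes a move to the free neighboring cell with the smallest distance to the exit, and conflicting proposals are resolved randomly. The grid size, field, and update rule are illustrative assumptions.

```python
import random

W, H = 10, 10
exit_cell = (9, 5)

# Static floor field: Manhattan distance of every cell to the exit.
field = {(x, y): abs(x - exit_cell[0]) + abs(y - exit_cell[1])
         for x in range(W) for y in range(H)}

random.seed(2)
occupied = set(random.sample(sorted(field), 25))   # 25 pedestrians

def step(occupied):
    """Parallel update: each pedestrian proposes its best free neighbor;
    conflicting proposals for the same cell are resolved by a random winner."""
    proposals = {}
    for (x, y) in occupied:
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) in field and (x + dx, y + dy) not in occupied]
        if nbrs:
            best = min(nbrs, key=lambda c: field[c])
            if field[best] < field[(x, y)]:          # only move downhill
                proposals.setdefault(best, []).append((x, y))
    new = set(occupied)
    for cell, movers in proposals.items():
        winner = random.choice(movers)               # friction/conflict rule
        new.discard(winner)
        new.add(cell)
    return {c for c in new if c != exit_cell}        # exit cell means escaped

t = 0
while occupied and t < 500:
    occupied = step(occupied)
    t += 1
print(f"evacuated in {t} steps")
```

The discrete states and local update rules are what make CA models cheap relative to force-based models, at the cost of coarser motion.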
Hybridized Models
By combining the macroscopic and microscopic classical models, hybridized models are obtained; they can be divided into three types: zone based, layer based, and sequential based. These models deal with the evacuation area and the motion of participants during the evacuation, and they are described in the following subsections. Zone-Based Model: In this approach, the simulation area is partitioned into multiple zones. Depending on the application's needs, each zone is simulated with either a microscopic or a macroscopic model. A zone simulated with the macroscopic procedure yields the overall flow of the crowd, whereas a zone simulated with a microscopic model offers individual-level behavioral observation. Typically, the proposed procedures run the two models simultaneously on predefined zones [102][103][104].
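A minimal one-dimensional sketch of the zone-based idea: a macroscopic zone advances an aggregate count with a simple boundary-flow update, a microscopic zone tracks individual positions, and crossing pedestrians are disaggregated at the zone boundary. All rates and geometry are illustrative assumptions.

```python
import random

dt, v = 0.5, 1.2            # time step (s) and walking speed (m/s)
macro_count = 80.0          # aggregate count in the macro zone [0, 50)
micro = []                  # individual positions in the micro zone [50, 100)
carry = 0.0                 # fractional outflow carried between steps

for step in range(1000):
    # Macro zone: aggregate flow across the zone boundary at x = 50.
    outflow = min(macro_count, 2.0 * v * dt)     # assumed boundary flow rate
    macro_count -= outflow
    # Disaggregation: convert the aggregate outflow into discrete agents.
    carry += outflow
    spawned, carry = int(carry), carry - int(carry)
    micro.extend([50.0] * spawned)
    # Micro zone: advance each agent; x >= 100 means it has exited.
    micro = [x + v * dt for x in micro if x + v * dt < 100.0]
    if macro_count < 1e-9 and not micro and carry < 1e-9:
        print(f"corridor empty after {step + 1} steps")
        break
```

The aggregation/disaggregation step at the boundary is the essential ingredient that the zone-based studies below implement in two or three dimensions.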
In 2011, Wei et al. used a Hybrid Grid Simulation Infrastructure to simulate the evacuation of a high-density crowd in an urban area. Three heterogeneous models were built (a computational microscopic crowd model, a pedestrian agent model, and a vehicle agent model) to describe different parts and features of the large, complex simulated situation. The results showed that the proposed infrastructure is a feasible and capable method for large, complex simulation systems [103]. Also in 2011, Sewall et al. presented a different strategy for the joint simulation of large-scale vehicle traffic for virtual worlds and augmented aerial maps, dynamically combining continuum and discrete approaches. Running these two distinct approaches simultaneously in different areas allows for an adaptable simulation system in which the user can easily and naturally trade quality against efficiency at runtime. They applied this method to the simulation of extensive vehicle traffic networks based on synthetic urban environments and real-world data, achieving faster than real-time performance [105].
In 2012, Anh et al. presented a hybrid modeling method for evacuation simulation that speeds up the simulation of pedestrian movement and addresses the coupling problem between micro and macro models. Initial results showed that simulating an evacuation strategy in a road network with the hybrid model is more efficient than with the micro model alone [102]. Also in 2012, Xiong et al. sought to exploit the benefits of macroscopic and microscopic models together. The two models ran concurrently in a simulation, each within its own spatial partition, and each model was self-governing at every simulation step; nevertheless, the crowd was allowed to cross into the opposite partition for each of the models. The models were joined at the partition boundary using aggregation and disaggregation interactions to exchange information. The results showed that this hybrid model was more efficient than the microscopic model and improved quality compared with the macroscopic model [104].
Layer Based Model: Another approach to hybrid crowd simulation applies the micro and macro methods separately in different layers. The methods are applied over the whole simulation area in order to determine the planar crowd movement as well as the motion patterns of the individual agents within the crowd. In this scheme, the two distinct layers handle global path planning, local obstacle avoidance, and other desired crowd behaviors [106][107][108]. The macro method is applied in the first layer to simulate crowd motion according to a set of rules, and the crowd motion from this layer is passed to the second layer as input. In the second layer, the micro method simulates the motion of individuals independently while preserving cost-effectiveness as density rises.
In 2008, Banerjee et al. presented an extension of the layered-intelligence technique, popular in the game industry, for scalable crowd simulation. They noted that several navigation behaviors could be implemented effectively in this framework. The central advantage of the framework is its extensibility: new behaviors can be added as separate layers without affecting the existing ones. The frame rates were empirically shown to be adequate for handling huge crowds in real time, and several aspects were identified in which the simulation framework could be improved [107].
In 2011, Patil et al. presented a natural approach to directing the simulation of virtual crowds using goal-directed navigation functions. The approach was shown to produce diverse macroscopic behaviors and natural-looking motion patterns, to resolve congestion, and to perform goal-directed navigation; it offers a simple yet powerful technique for controlling and directing crowd simulations [108]. In 2012, Tissera et al. presented a hybrid simulation model to examine behavioral patterns during an emergency evacuation. Environmental (EsM) and pedestrian (PsM) sub-models are combined within the hybrid model, and coupling the model with a computational procedure produces a synthetic environment populated with autonomous, cooperating agents. The authors ran a series of investigations: for example, they examined the environment when the individuals' available behavior was to leave through the "nearest door", and then examined the effect of the individuals' familiarity with the environment, using an external stimulus to direct individuals to another possible exit. People responding to this stimulus were expected to "get out the door faster" [106].
Sequential Based Model:
Like the layer-based hybrid models, the sequential hybrid procedure also runs both the large-scale (macro) and the small-scale (micro) model on the entire crowd. It first runs the macro model to direct the motion patterns of the crowd and then applies the micro model to the same crowd to observe individual behavior. The two models are executed sequentially, so a synchronization technique is required to transfer the crowd state between the two modes [109,110].
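A minimal sketch of this sequential scheme: a macroscopic pass first derives a per-cell preferred direction from a distance field, a synchronization step hands that guidance table to the microscopic pass, and individual agents then follow it with some speed noise. The grid and the hand-off format are illustrative assumptions.

```python
import random

CELLS = 20                                   # coarse 1-D cells; exit at cell 19

# Pass 1 (macro): a distance-to-exit field gives each cell a preferred
# direction (+1 means "move right, toward the exit").
distance = [CELLS - 1 - c for c in range(CELLS)]
preferred = [1 if c + 1 < CELLS and distance[c + 1] < distance[c] else 0
             for c in range(CELLS)]

# Synchronization step: the macro result becomes the guidance table that
# is handed over to the micro pass.
guidance = dict(enumerate(preferred))

# Pass 2 (micro): individual agents follow the guidance with speed noise.
random.seed(3)
agents = [random.uniform(0.0, 10.0) for _ in range(50)]
for t in range(300):
    agents = [x + guidance[min(int(x), CELLS - 1)] * random.uniform(0.5, 1.5)
              for x in agents]
    agents = [x for x in agents if x < CELLS - 1]    # reached the exit cell
    if not agents:
        print(f"all agents evacuated by step {t + 1}")
        break
```

The `guidance` dictionary stands in for the crowd-state transfer that the synchronization technique must perform between the two modes.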
In 2011, Park et al. demonstrated a hybrid framework for crowd simulation that applies both continuum-based and agent-based methodologies. The model captures the dynamics of a very large crowd as well as the individual behavior of every agent. Performance results showed that the approach strikes a good balance in large, high-resolution simulated environments. Another promising benefit of the approach is map-field construction: the model can link to any source of map fields, which makes it possible to extend the continuum-based simulation with non-lattice-based paths [36]. In 2013, Xiong et al. proposed a hybrid model that uses both macroscopic and microscopic models to simulate crowds in dynamic environments. The movement tendency of the crowd was simulated with the macroscopic model, while velocity and moving direction were determined by the microscopic model. The simulation results showed good performance in reproducing the features of crowd and human movement [109].
Generic Model
Simulating a given application requires choosing the most suitable software according to the crowd density, the required dimensions of individual behavior (physical, mental, and collective), and the execution time. Simulation software packages depend on underlying models that cannot be changed to match end-user requirements; a generic model is therefore needed that lets the user choose models for capturing different crowd dynamics [111]. The following describes the transition approach. TransiTUM Model: A recent attempt was made to build a generic framework for the multiscale coupling of pedestrian simulation models through transition zones [112]. Coupling different models, such as mesoscopic and microscopic models, requires that those models remain autonomous; essential parameters such as speed, current position, next goal, and maximum speed can then be exchanged between them with the help of an external data file. The presented framework concentrates on the autonomy of the coupled models and can therefore be applied to any combination of mesoscopic and microscopic models. It employs the concept of transition and relaxation zones to move people seamlessly from one model to the other, so pedestrians can enter from any point. Nonetheless, this preliminary step toward generic coupling and multi-point entry to transition zones needs further examination.

Lattice Gas Model Applications (microscopic; homogeneous unless noted)
- [66] LGM; normal: free flow to choking flow; bottleneck width determines the saturated flow rate and the transition density scale.
- [70] LGM; emergency: counterflow with people crawling on all fours; speed, jamming transition, and pattern formation.
- [71] LGM; normal/emergency: sidle effect; counterflow jamming transition and pattern formation.
- [67] LGM; normal: shifting of an audience between two halls; jamming transition.
- [68] LGM; emergency: jamming (queuing) and its effect on increasing escape time.
- [69] LGM: relation between escape times and the exit configuration; characteristics of blind pedestrians.
- [72] LGM and SFM: reproduces basic features of the social force model, such as clogging and arching.
- [73] MLGM and CAM; heterogeneous: impact of local population density on the drift D within the evacuation; friction, attraction, and repulsion.
- [74] E-HLGM: an added altitude factor helps control the evacuation and the choice of evacuation paths; a high-temperature field causes local jamming and clogging.
- [75] LGM and CAM: selfless and selfish pedestrian behavior during evacuation.

Cellular Automata Model Applications: [52] CAM and SFM (remaining entries not recoverable from the source).
Discussion and Conclusion
Although models based on pedestrian evacuation methods have been widely developed and applied, it is still necessary to create models that accurately simulate evacuation times and the behaviors that arise during emergency evacuations. It is therefore necessary to review past studies and to distinguish the different factors that influence both the behavior of participants in emergencies and their evacuation times.

Approaches Based on Experiments with Animals: Applications
- [98] Mice investigation; microscopic; homogeneous; emergency: self-organized queuing, diffusive flow, scale-free behavior in escape panic.
- [99] Ant investigation; emergency: panic conditions and herding behavior.
Hybridized Model Applications
Zone-Based Model Applications (zone-based; microscopic and macroscopic; heterogeneous)
- [105] Normal: leaving strategy in a road network.
- [103] Normal: evacuation of a high-density crowd in an urban area.
- [102] Emergency: increased speed of pedestrian movement.
- [104] Normal: improved efficiency and enhanced quality.
Layer-based Model Applications
- [107] Layer-based; microscopic and macroscopic; heterogeneous; emergency: navigation behaviors; new behaviors can be added as separate layers; handles huge crowds in real time.
- [108] Resolves clogging and performs goal-directed navigation.
- [106] Behavior patterns in an emergency evacuation; effects of environment, familiarity, and external motivation.

Sequential-Based Model Applications
- [36] Sequential; continuum-based and agent-based: dynamics of a very large crowd together with individual agent behavior.
- [109] Macroscopic movement tendency combined with microscopic velocity and direction in dynamic environments.

This paper is a comprehensive and systematic survey of pedestrian evacuation models that sheds light on the characteristics, methods, and implications of the modeling approaches and their applications. These approaches, together with the generic models, fall into several categories. Future studies may benefit from this classification and from the discussion of the models' distinct features under various conditions, using it as a guide for further research. Finally, researchers can use these results to make better decisions about evacuation systems, and they will be of great support in developing novel simulation models with novel capabilities.
Based on this literature review, forthcoming studies will develop a recent smart model. Moreover, the way fire is designed and implemented will be examined, including how fire spreads through residential spaces in relation to its point of origin, and the influence of the fire on the agents' behavior will be explicated.
Finally, the consequences of fire and smoke, that is, the numbers of dead, injured, or suffocated people, will be reported by the proposed model.
Mechanisms by Which Electroacupuncture Alleviates Neurovascular Unit Injury after Ischemic Stroke: A Potential Therapeutic Strategy for Ischemic Brain Injury after Stroke
Stroke is the most common cerebrovascular disease and one of the leading causes of death and disability worldwide. The current conventional treatment for stroke involves increasing cerebral blood flow and reducing neuronal damage; however, there are no particularly effective therapeutic strategies for rehabilitation after neuronal damage. Therefore, there is an urgent need to identify a novel alternative therapy for stroke. Acupuncture has been applied in China for 3000 years and has been widely utilized in the treatment of cerebrovascular diseases. Accumulating evidence has revealed that acupuncture holds promise as a potential therapeutic strategy for stroke. In our present review, we focused on elucidating the possible mechanisms of acupuncture in the treatment of ischemic stroke, including nerve regeneration after brain injury, inhibition of inflammation, increased cerebral blood flow, and subsequent rehabilitation.
Introduction
According to previous studies, approximately 13.7 million people are affected by stroke each year [1,2]. Stroke is currently the second leading cause of death worldwide [3], accounting for 10% of all deaths [4,5]. Its high mortality, disability, and recurrence rates impose a heavy economic burden on global health care systems. Stroke is broadly categorized as ischemic or hemorrhagic; this review focuses on the mechanisms associated with ischemic stroke. Ischemic stroke is the most common type of stroke and is caused by occlusion of cerebral blood vessels or cerebral thrombosis, which blocks cerebral blood flow (CBF) and thereby causes ischemia, hypoxia, and softening or even necrosis of brain tissue.
After ischemic stroke onset, restoring cerebrovascular function to reinitiate the blood supply to the brain is the first priority. Current conventional treatment strategies include angioplasty, stenting, intravenous thrombolytic therapy, and thrombectomy. However, there are obvious limitations to these therapies, including stringent treatment time windows, strict indications for administration, and many contraindications [6,7]. The safety and efficacy of these treatment strategies are also controversial according to previous clinical research [8]. Therefore, there is an urgent need to identify novel effective therapeutic strategies for clinical use.
Acupuncture originated in China, has a history of more than 3000 years, and is an important component of Chinese medicine [9]. It involves the stimulation of specific acupuncture points on the body surface with specially designed metal needles within the theoretical framework of traditional Chinese medicine and utilizes twisting and lifting techniques or electrical impulses to treat diseases [10][11][12]. Acupuncture is rapidly developing; its use has spread to many countries and its efficacy is widely recognized. Scientists have studied the efficacy and underlying mechanisms of acupuncture, and the effectiveness of acupuncture as an alternative therapy for stroke has been described in the literature [13]. However, the potential mechanisms through which acupuncture may aid the treatment of stroke are not yet fully understood. Therefore, the purpose of this study is to investigate the possible mechanisms of acupuncture in the treatment of stroke. Various studies have demonstrated that acupuncture pretreatment can treat ischemic stroke by inducing cerebral ischemic tolerance, regulating oxidative stress, increasing CBF, inhibiting apoptosis, and promoting neural regeneration; thus, it is a promising prevention strategy and alternative therapy for stroke [14].
Ischemic Stroke and Electroacupuncture
Ischemic stroke is caused by a reduction in or interruption of blood flow in the cerebral arteries due to various causes, including atherosclerosis, cardiogenic embolism, vasculitis, hereditary diseases, and hematologic disorders [15,16]. Ischemic stroke directly impairs neurological function in three main ways. First, ischemia and infarction during ischemic stroke cause direct damage to brain tissue. Second, ischemia induces the production of excess reactive oxygen species (ROS), causing oxidative stress, which exacerbates neuronal dysfunction. Finally, the inflammatory cascade caused by ischemic stroke is thought to result in further neuronal damage.
Acupuncture, which originated in China, has been developed as a unique treatment modality during its long history and is an important part of Chinese medicine [17,18]. Several clinical studies have shown that acupuncture improves postural balance, reduces muscle spasms, and increases muscle strength [19][20][21]. In electroacupuncture (EA) therapy, a fine needle is inserted into the selected acupoint. After tactile confirmation that the needle is correctly placed, an EA machine is connected to the acupuncture needle, and a low-frequency pulse current similar to that of human bioelectric currents is delivered with different stimulation parameters to treat different diseases [22]. A systematic review and meta-analysis published in 2015 evaluated the clinical efficacy and safety of EA in the treatment of ischemic stroke [23]. The mechanism of EA in the treatment of ischemic stroke has also received much attention recently; scientists conducted a systematic review and analysis of recent clinical studies and found that EA can play a role in the treatment of ischemic stroke through the following five mechanisms: (1) EA can promote neuronal proliferation and differentiation and induce poststroke neurogenesis and neuroprotection, (2) EA can effectively ameliorate the damage caused by cerebral ischemia-reperfusion, (3) EA can increase CBF and alleviate blood-brain barrier dysfunction after stroke, (4) EA can inhibit apoptosis, and (5) EA pretreatment can induce cerebral ischemic tolerance [24][25][26][27][28]. Herein, we review the main potential mechanisms of acupuncture in the treatment of stroke.
Method
Studies on the mechanism of acupuncture in treating ischemic stroke models were obtained by a PubMed literature search, which was limited to full-text studies published in English between January 1, 2000, and December 31, 2022. The following search string was used for the literature search: ("electroacupuncture" OR "acupoint") AND ("ischemic stroke" OR "neurogenesis" OR "cerebral
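For reproducibility, a search of this kind can also be run programmatically. The sketch below uses Biopython's Entrez module; the email address is a placeholder, and because the search string above is truncated in the source, the final term ("cerebral ischemia") is a hypothetical completion rather than the authors' exact query.

```python
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires an email

# Only the fragment quoted above is certain; the final term is a
# hypothetical completion of the truncated search string.
query = ('("electroacupuncture" OR "acupoint") AND '
         '("ischemic stroke" OR "neurogenesis" OR "cerebral ischemia")')

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2000/01/01", maxdate="2022/12/31",
                        retmax=200)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "matching records;",
      len(record["IdList"]), "PMIDs retrieved")
```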
Mechanism of Action Underlying the Effect of EA on Neurogenesis and Cell Proliferation
The beneficial effect of EA on neurogenesis after brain injury may be related to neurotrophic factors. Neurogenesis in adult mammals is accomplished primarily through the division of stem and progenitor cells [29]. Adult neurogenesis, i.e., neuronal growth and development, mainly occurs in two brain regions: the subventricular zone (SVZ) of the lateral ventricle and the subgranular zone (SGZ) of the dentate gyrus (DG) of the hippocampus [30,31]. Reports indicate the presence of neural stem cells (NSCs) in the neocortex, amygdala, striatum, and substantia nigra [32]. These NSCs have the ability to differentiate into different types of neurons, astrocytes, and oligodendrocytes at the time of brain injury [33,34]. Neural progenitor cells (NPCs) are present in the SVZ and migrate to the site of the injury, where they form new functional synapses with the remaining neurons and connect new neural circuits [35,36]. Brain injury activates endogenous neural repair systems, and the proliferation and survival of these neural cells are thought to play a role in neural repair in the brain, which is a potential therapeutic target [37][38][39]. Neurotrophic factors can prevent ischemic injury and exert neuroprotective effects. Glial cell-derived neurotrophic factor (GDNF) belongs to the transforming growth factor beta (TGF-β) family and was originally found to enhance the function of midbrain dopaminergic neurons by promoting survival and differentiation during the development of the central and peripheral nervous systems [40,41]. Moreover, it was shown that GDNF ligands can promote the neurogenic differentiation of NPCs [42]. Brain-derived neurotrophic factor (BDNF) is a protein that regulates neuronal survival and synaptic plasticity by binding to tyrosine kinase receptor B (TrkB) and the p75 neurotrophin receptor (p75 NTR) and activating intracellular signaling pathways [43,44]. Lu et al. [45] reported in a meta-analysis of 34 studies (1617 animals) that acupuncture can promote the proliferation, differentiation, and migration of nerve cells after stroke. Brain injury can activate the neural repair system, but spontaneous regeneration cannot meet the requirements of brain function recovery [46]. Nerve growth factor (NGF) can promote the growth and differentiation of central and peripheral neurons and accelerate the repair of the nervous system after injury, and EA can induce the expression of NGF; the number of BrdU-positive cells was found to be significantly increased by the combination of NGF and EA, indicating increased proliferation and survival of neuronal cells [47,48]. In addition, EA pretreatment elevates stroke-induced striatal neurogenesis and improves neurological recovery through modulation of the retinoic acid (RA) pathway [49,50]. Acupuncture alters the expression level of growth-associated protein-43 (GAP-43), a protein specific to the nervous system, and promotes nerve regeneration in the marginal zone of the ischemic lesion [51]. In some MCAO animal models, the expression of BDNF, GDNF, and vascular endothelial growth factor (VEGF) was found to be increased significantly after EA stimulation of Baihui GV20, Dazhui GV14, and Quchi LI11; this resulted in increased proliferation and differentiation of NSCs in the DG and SVZ and promoted the differentiation of proliferating NSCs into neurons and glial cells [24,[52][53][54][55].
Notch receptors are highly conserved, single-pass transmembrane proteins that are involved in various cytogenesis-related processes, including cell differentiation, apoptosis, and proliferation, and play an important role in the regulation of the self-renewal and differentiation of NSCs after ischemic injury. In the MCAO animal model, EA at LI11 and Zusanli ST36 was found to activate the Notch signaling pathway, significantly decreasing cerebral infarct size, improving cerebral nerve function, and promoting the proliferation of NSCs in rats [56][57][58][59]. Basic fibroblast growth factor (bFGF) promotes the regeneration and repair of mesodermal and ectodermal cells, which is essential for the development and differentiation of the central nervous system (CNS). In both cerebral ischemia/reperfusion (CI/R) and MCAO models, EA treatment was found to significantly increase the expression level of bFGF in the striatum and cortex and thus exert neurogenic and protective effects [60][61][62]. One of the most abundant neuropeptides in the nervous system, neuropeptide Y (NPY), is a key regulator of homeostasis inside and outside the CNS, and NPY can also promote neurogenesis in the SVZ and DG regions. Furthermore, acupuncture can upregulate the expression of NPY in the CNS [63][64][65][66]. In addition to affecting neurogenesis and proliferation, ischemic stroke also severely impairs brain function through cerebral ischemia/reperfusion injury.
Oxidative Stress and EA
CI/R leads to the generation of large amounts of ROS, resulting in an imbalance between oxidative and antioxidative status in the body and thus oxidative stress, which in turn causes cellular damage and necrosis, a key factor in ischemic brain injury [67,68]. Mitochondria are the main site of aerobic cellular respiration, supplying energy to cells and participating in cell differentiation, apoptosis, and information transfer [69]. Under normal physiological conditions, the respiratory chain of mitochondria is the main source of ROS, but CI/R increases the leakage of electrons generated in the respiratory chain, resulting in the production of large amounts of ROS [70]. Furthermore, CI/R causes a large decrease in the level of superoxide dismutase (SOD), a key factor in maintaining oxidative and antioxidative balance, impairing its function as a free radical scavenger and the redox balance of mitochondria, and causing the excessive production of ROS and free radicals with impaired antioxidant capacity, ultimately leading to mitochondrial dysfunction [71,72]. Experiments have shown that EA at Fengchi GB20 can activate the antioxidant enzyme system and increase the ability of SOD and glutathione peroxidase (GSH-Px) to scavenge excessive ROS; the increase in SOD activity involves the nuclear factor erythroid 2-related factor 2 (Nrf2)/heme oxygenase-1 (HO-1) signaling pathway, and the increase in GSH-Px activity may be associated with increased antioxidant activity of glutathione (GSH) [73,74]. CI/R causes lipid peroxidation, producing malondialdehyde (MDA) and 4-hydroxynonenal (4-HNE), and EA at GB20 and ST36 can reduce the degree of lipid peroxidation and MDA production during MCAO [75]. A systematic review and meta-analysis published in 2014 showed that EA reduced the infarct size and improved neural function in animal models of cerebral ischemia [76]. The respiratory chain function of mitochondria was found to be significantly improved after EA at Shuigou GV26 and GV20, and the activities of succinate dehydrogenase (SDH), cytochrome c oxidoreductase, and NADH dehydrogenase (NADH-Q reductase) were found to be increased, resulting in an elevated antioxidant capacity and inhibition of ROS production [77]. It was also found that CI/R impaired the phosphoinositide 3-kinase (PI3K)-protein kinase B (AKT)-mammalian target of rapamycin (mTOR)-mediated autophagy-lysosome pathway (ALP), increasing the percentage of dysfunctional mitochondria while impairing mitochondrial autophagy, an important change associated with CI/R, and that EA restored mitochondrial autophagy through the Pink1/Parkin signaling pathway to ameliorate the impairment of the ALP [78,79].
Anti-Inflammatory Effect of EA
The inflammatory cascade after ischemic stroke results in the activation of a series of inflammatory cells and the release of inflammatory signals. Acupuncture exerts anti-inflammatory effects by regulating multiple immune cell populations and inflammatory transmission, which involves glial cells, vagal cholinergic anti-inflammatory pathways, and leukocytes (Fig. 1).
Glial Cells
Microglia are the most numerous immune cells in the brain, accounting for 5-10% of all cells in the brain [80]. Damaged neurons release damage-associated molecular patterns (DAMPs) after CI/R, and DAMPs rapidly activate microglia [56]. On the one hand, the anti-inflammatory factors (interleukin (IL)-4, IL-10, IL-13, and TGF-β) and neuroprotective factors secreted by activated microglia can remove harmful substances from damaged neurons in the CNS and promote the repair of neurological functions [81,82]. On the other hand, excessively activated microglia secrete high levels of proinflammatory factors (e.g., tumor necrosis factor alpha (TNF-α), IL-1β, IL-6, and matrix metalloproteinases), causing neurotoxicity, inhibiting the repair of the nervous system, and aggravating damage [83][84][85]. EA effectively reduces microglial activation, which is associated with inhibition of the p38 mitogen-activated protein kinase (MAPK) and extracellular signal-regulated kinase (ERK) pathways, reduces the levels of proinflammatory factors, and controls inflammatory responses [86]. In an animal model of MCAO, acupuncture at Neiguan PC6 and LI11 inhibited activation of the Toll-like receptor 4/nuclear factor kappa-light-chain-enhancer of activated B cells (TLR4/NF-κB) pathway; decreased the expression of IL-1β, TLR4, TNF-α, inhibitor of nuclear factor kappa-B kinase subunit beta (IKKβ), NF-κB, RelA (p65), and tumor necrosis factor receptor-associated factor 6 (TRAF6); and alleviated neuronal injury [87]. Microglia can be polarized toward two phenotypes, the proinflammatory M1 phenotype and the anti-inflammatory M2 phenotype, which contribute to neurological damage and neuroprotection, respectively [87]. Acupuncture at ST36 was shown to induce a shift from M1 polarization to M2 polarization, achieving a balance between the two polarization states by suppressing proinflammatory factor expression and increasing anti-inflammatory and repair factor expression [74]. At the site of CI/R injury, many reactive astrocytes and microglia proliferate and differentiate, forming a glial scar [88]. Increased secretion of chondroitin sulfate proteoglycans (CSPGs) by the cells forming the glial scar is an important hindrance to the regeneration and functional recovery of axons [89,90]. Astrocytes have a nutritional and protective role in neuronal development and are involved in the formation of the blood-brain barrier and in synaptic signaling [91]. In an animal model of stroke, the expression of the astrocyte activation marker GFAP was found to increase substantially after EA at GV20 and GV14 but to decrease after a period of time, suggesting that EA can activate astrocytes in the area of injury while preventing excessive glial scar production [92]. In addition, EA increases synaptic density and thickness and plays an active role in synaptic reorganization [93].
Vagus Nerve
Stimulation of the vagus nerve may be a potential therapeutic strategy to effectively reduce the inflammatory response and promote neurological recovery [94]. The vagal cholinergic anti-inflammatory pathway can inhibit the release of TNF-α and proinflammatory factors such as IL-1β, IL-6, and IL-18 by macrophages through electrical stimulation [95,96]. The α7 nicotinic acetylcholine receptor (α7nAChR) is the key to the function of the cholinergic anti-inflammatory pathway [97]. EA can exert anti-inflammatory effects by suppressing the function of the reflex center of the innate immune system [96]. In an experimental model of stroke, EA activates α7nAChR to inhibit high mobility group box 1 (HMGB1), a nuclear protein with proinflammatory effects [98]. Acetylcholine secreted by the vagus nerve inhibits peripheral inflammation in the brain, and EA of the rat forepaw increases acetylcholine release, which may be related to the targeting of acetylcholinesterase (AChE) by miR-132, an inflammatory regulator [99,100]. In addition, EA at GV20 and GV14 exerts neuroprotective effects by reducing the expression of five subtypes of the muscarinic cholinergic receptor, thereby ameliorating damage to the central cholinergic system [101].
Leukocytes
After ischemic injury, the production of adhesion molecules and leukocyte chemotactic mediators increases, resulting in the recruitment of large numbers of leukocytes to the injured area and their adhesion to endothelial cells, as well as an increase in neutrophil infiltration, which exacerbates the release of proinflammatory factors and the inflammatory response [102,103]. Intercellular adhesion molecule-1 (ICAM-1) is an important receptor that mediates the leukocyte adhesion response, and its expression level is increased in response to inflammatory stimuli in ischemic injury, thereby exacerbating leukocyte adhesion and infiltration [104,105]. The expression of platelet P-selectin, an adhesion molecule, is upregulated after ischemic injury, causing leukocytes to roll on stimulated endothelial cells, resulting in leukocyte extravasation and aggravating the inflammatory response, while also causing platelets to adhere, aggregate, and lose their stability, thus forming a thrombus [106]. EA was found to inhibit the expression of ICAM-1 and P-selectin, reduce the adhesion and infiltration of leukocytes, and exert an anti-inflammatory effect [107].
Other Transmitters and Receptors Involved in Inflammation
In addition to its analgesic and sedative effects, activation of the delta-opioid receptor (DOR) has been shown to effectively protect against ischemic-hypoxic injury during CI/R by reducing excitatory neurotransmitter expression through ion homeostasis and by reducing impaired neurotransmission, which may be related to the BDNF-tyrosine kinase receptor B (TrkB) signaling pathway. EA at GV26 and PC6 was shown to upregulate DOR expression by mediating the BDNF-TrkB signaling pathway and to significantly reduce ischemic infarction and functional impairment [108,109]. Glutamate is an important excitatory neurotransmitter in the CNS. Transporter dysfunction in the ischemic brain after stroke leads to excessive release of glutamate and overactivation of the N-methyl-D-aspartate (NMDA) receptor (NMDAR), which induces excitotoxicity leading to neuronal cell injury and death [110]. In an animal model of ischemia induced by vascular occlusion, glutamate levels were found to be significantly higher in the control group (135.19 ± 23.76 µM) than in the needle stimulation group (72.20 ± 27.15 µM) after acupuncture at Yanglingquan GB34 and Xuanzhong GB39, which may be related to the reversal by EA of the high expression of the NMDAR1 subunit [111,112]. Calcium overload due to abnormal release of glutamate causes an imbalance in calcium ion homeostasis [113]. Calcium overload blocks ATP synthesis and contributes to excessive ROS production, leading to mitochondrial dysfunction while inducing NLRP3 inflammasome activation and exacerbating the inflammatory cascade [114,115]. Calcium/calmodulin-dependent protein kinase II (CaMKII) is an important mediator of calcium signaling, and inhibition of CaMKII-dependent phosphorylation of AMPA receptor subunit 1 (GluA1) exerts anti-inflammatory effects in a complete Freund's adjuvant (CFA)-induced mouse model of inflammation [116]. The analgesic and anti-inflammatory effects of EA in a CFA-induced inflammation model are closely related to the cannabinoid type 1 (CB1) and cannabinoid type 2 (CB2) receptors [117]. Studies have shown that endogenous cannabinoids can regulate various ion channels, including T-type calcium channels, and that CB1 and CB2 receptors are key regulators that can help maintain calcium homeostasis and reduce subsequent inflammatory damage [118,119]. NLRP3 is involved in the inflammatory response by inducing the secretion and maturation of IL-18 and IL-1β. As a conserved anti-inflammatory miRNA, miR-223 can negatively regulate NLRP3 expression to inhibit the inflammatory response [120,121]. In an MCAO rat model, EA at Waiguan TE5 and ST36 was shown to upregulate the expression of miR-223 in the peri-infarct cortex and, through the miR-223/NLRP3 pathway, to reduce the expression of NLRP3, caspase-1, IL-1β, and IL-18, thereby alleviating neuroinflammation [122].
Microbiota-Gut-Brain Axis
The gut is the most important immune organ of the human body, accounting for approximately 70% of the immune function of the whole body. The microbiota-gut-brain axis (MGBA) is a bidirectional regulatory network between the brain and the gut [123]. After stroke, the homeostasis of the intestinal microbiota is disrupted, dysregulation of the autonomic nervous system (ANS) weakens intestinal motility and barrier function, and proinflammatory factors from the intestine enter the brain through the damaged blood-brain barrier to aggravate injury [124]. Studies have shown that T cells play a key role in tissue damage secondary to ischemic stroke [125]. The initial inflammatory cascade causes T cells to migrate, and γδ T cells, a T-cell subpopulation, are transported to the pia mater of the brain, where they secrete the proinflammatory factor IL-17 and aggravate the inflammatory response [123,126]. CD4+CD25+ regulatory T (Treg) cells play an important role in peripheral immunity. After stroke, Treg cells migrate to the intestine with the help of mesenteric lymph node dendritic cells, and the expression of the anti-inflammatory factor IL-10 is upregulated to inhibit IL-17-mediated inflammation and the proliferation and differentiation of γδ T cells [123,127]. Studies have shown that EA at GV20, GV14, Shenshu BL23, and ST36 can alleviate the disruption of the intestinal microbiota, inhibit the inflammatory response, and promote the recovery of neurological function [128]. In an MCAO model, the proportion of CD3+TCRγδ+carboxyfluorescein diacetate succinimidyl ester (CFSE)+ cells was found to decrease from 12.06% to 6.52% after EA at GV20, and this change was related to an increase in the number of Treg cells in the brain and small intestine and the inhibition of γδ T cell function [129].
Blood-Brain Barrier
The blood-brain barrier (BBB) consists of the brain capillary wall and glial cells and is a barrier between the blood circulation and brain tissue. Because of its low permeability, the BBB limits the free exchange of substances between blood and brain tissue, maintains the homeostasis of the brain's internal environment, and protects neural tissue from damage by toxins and pathogens [130][131][132]. After ischemic injury, the integrity of the BBB is disrupted and its permeability increases, resulting in vasogenic edema and hemorrhagic transformation [133]. In a CI/R rat model, EA at GV20 and GV14 can reduce ischemic damage to cortical neurons and the BBB [134]. In addition, EA downregulates the expression of Nogo-A, an inhibitor of neural regeneration, to alleviate BBB damage [135]. EA at GV20, GV26, and ST36 improved the vascular ultrastructure of brain tissue in CI/R rats, promoting capillary generation and the restoration of vascular function, which was closely related to the upregulation of VEGF mRNA expression [136]. The mRNA and protein expression of tissue inhibitor of metalloproteinase-1 (TIMP-1) was found to be modulated after EA at GV20, Hegu LI4, and Taichong LR3 in an MCAO animal model [137]. The protein and mRNA expression levels of matrix metalloproteinase 9 (MMP-9) were shown to be significantly reduced in the BBB of CI/R rats after EA at GV20 and GV26 [138]. Reduced inflammatory cell infiltration and upregulation of matrix metalloproteinase 2/aquaporin water channel (MMP2/AQP) expression occurred after EA at GV20 and ST36 [139]. These experimental results all indicate that acupuncture can ameliorate BBB injury and exert neuroprotective effects on ischemic brain tissue.
Angiogenesis
VEGF promotes the proliferation and division of endothelial cells, increases vascular permeability, and promotes neoangiogenesis [132,140]. EA was shown to activate the HIF-1α/VEGF/Notch1 signaling pathway and promote angiogenesis after ischemic injury via exosomal miR-210 [141]. Furthermore, in an MCAO animal model, EA can effectively promote angiogenesis and neurological recovery through the EphB4/EphB2-mediated Src/PI3K signaling pathway [142]. The PI3K-AKT pathway increases the secretion of VEGF through hypoxia-inducible factor 1 (HIF-1) and regulates the expression of angiogenic factors such as nitric oxide and angiopoietin, which play a major role in the process of angiogenesis, while activation of the PI3K-AKT pathway promotes the neuroprotective effect of the opioid receptor agonist (D-Ala2, D-Leu5)-enkephalin (DADLE) [143,144]. In a CI/R animal model, the relative protein expression of PI3K p85, PI3K p110, and p-AKT was found to be upregulated in the acupuncture group, and the expression of VEGF, GAP-43, and synaptophysin (SYN) was shown to be significantly increased, which indicates that acupuncture exerts its angiogenic and protective effects through the PI3K-AKT signaling pathway [145].
Cerebral Blood Flow
A >100% increase in blood flow at the ischemic foci and significant alleviation of ischemic infarction and nerve injury were observed in MCAO model animals treated with EA at GV26 (1.0-1.2 mA and 5-20 Hz) compared with those treated with EA at GV20 [146]. Data from another experiment showed that CBF was elevated in all brain regions of cerebral ischemia model rats after two applications of EA at bilateral ST36 or after 15 Hz EA [147]. After EA at ST36 and LI11, cerebrovascular resistance (CVR) was reduced, cerebral blood flow was increased, and the meningeal microcirculation was improved [148].
The Mechanism by Which EA Inhibits Apoptosis and Autophagy
Apoptosis is a genetically controlled, programmed process of autonomous cell death and can be induced by the excess free radicals generated in acute cerebral ischemia, by a surge in the Ca2+ concentration, or by excitotoxicity; however, apoptosis in the ischemic penumbra appears to be reversible [149,150]. NGF, a nerve growth factor involved in neuroprotection and functional repair, acts through the ERK pathway and the PI3K pathway mediated by TrkA, a high-affinity receptor for NGF. EA was found to decrease NR1 subunit expression while upregulating TrkA expression and to act through the TrkA-PI3K pathway to inhibit the ischemia-induced increase in transient receptor potential melastatin subfamily member 7 (TRPM7) expression [110,112,151]. In a rat model of I/R injury, EA at LI11 and ST36 was found to activate the PI3K-AKT signaling pathway, increase the expression of the PI3K activators BDNF and GDNF, upregulate the expression of the antiapoptotic protein Bcl-2, and decrease the expression of the proapoptotic protein Bax, inducing the formation of a stable heterodimeric structure and thus exerting an inhibitory and neuroprotective effect against cell apoptosis [152]. Pretreatment by EA at GV20, BL23, and Sanyinjiao SP6 was found to reduce the Bax/Bcl-2 ratio and inhibit the expression of cleaved caspase-3, which attenuated neuronal apoptosis and was associated with EA-mediated inhibition of transient receptor potential vanilloid 1 (TRPV1) [153,154].
Autophagy is a process by which cells degrade their own components using lysosomes; autophagy and apoptosis jointly maintain cellular homeostasis under normal physiological conditions [155,156]. Cellular autophagy is regulated by Unc-51-like autophagy-activating kinase 1 (ULK1) and FUN14 domain-containing protein 1 (FUNDC1) [157]. Pretreatment by EA at GV20 and GV26 was shown to suppress p-ULK1 and FUNDC1 expression and upregulate p-mTORC1 and microtubule-associated protein light chain 3 (LC3-I) expression, thereby improving neurological function and reducing the infarct volume [158]. The silent information regulator 1 (SIRT1)-forkhead box protein O1 (FOXO1) signaling pathway is an important factor in autophagy regulation. After EA pretreatment, the LC3-II/LC3-I ratio is decreased; the complexing of acetylated FOXO1 (Ac-FOXO1) with Atg7 is reduced; the levels of p62, SIRT1, and FOXO1 are elevated; and the number of autophagosomes in CI/R rats is significantly reduced. The neuroprotective effect of EA pretreatment may be related to activation of the SIRT1-FOXO1 signaling pathway [159].
EA Pretreatment for Cerebral Ischemic Tolerance
As early as the pre-Qin period in ancient China, the concept of prevention before disease was followed in Chinese medicine [28]. Preventive acupuncture can open the meridians, regulate the organs, balance yin and yang, strengthen the body's resistance, and eliminate pathogenic factors [160]. Acupuncture is widely used in the prevention and treatment of ischemic stroke because of its few side effects and its high safety and efficacy [28]. EA pretreatment was found to confer tolerance to cerebral ischemic injury in rats; neurological function was significantly improved and the infarct volume (38.3 ± 25.4 mm³) was significantly reduced in MCAO model animals subjected to repeated pretreatment with EA at GV20 compared with control MCAO model animals (220.5 ± 66.0 mm³) and animals in the isoflurane anesthesia group (168.6 ± 57.6 mm³) [161]. In addition, the cerebral ischemic tolerance induced by EA preconditioning is closely related to the endocannabinoid system, and the CB1 receptor-mediated PI3K/Akt/GSK-3β signaling pathway plays an important role in cerebral ischemic injury [162]. In MCAO rat models, EA preconditioning enhances the activation of signal transducer and activator of transcription 3 (STAT3) and protein kinase Cϵ (PKCϵ) by upregulating the expression of the CB1 receptor and increases the levels of the endocannabinoids 2-arachidonoylglycerol and N-arachidonoylethanolamine, which significantly reduces the infarct volume after reperfusion, improves nerve function, and inhibits neuronal apoptosis [163][164][165]. In addition, monocyte chemotactic protein-induced protein 1 (MCPIP1) is involved in EA preconditioning-induced cerebral ischemic tolerance, and the neuroprotective effects of EA preconditioning were found to be significantly decreased in MCPIP1-deficient MCAO model animals [166].
Rehabilitation Phase
The main sequelae of ischemic stroke are numbness, hemiparesis, motor dysfunction, cognitive dysfunction, and memory loss, which are largely responsible for the high disability rate and poor outcomes. Zhan et al. [167] reported in a meta-analysis of 14 randomized controlled trials (896 patients with poststroke cognitive impairment (PSCI)) that EA improved cognitive function and motor function in patients with PSCI. In a stroke rehabilitation experiment, the control group was given standard physiotherapy and the intervention group received acupuncture. Analysis of the experimental data showed that the immediate and long-term outcomes of the intervention group were better than those of the control group, with EA significantly ameliorating the spasticity and motor dysfunction of the limbs caused by stroke and restoring the patients' ability to perform activities of daily living [168]. Another study on stroke hemiplegia found that acupuncture significantly improved muscle spasms and mobility in hemiplegic limbs [169]. Ischemic stroke induces vascular cognitive impairment (VCI), and the learning and memory abilities of VCI model rats improved after EA at GV20 and Shenting GV24, which may be related to the fact that EA increases the postsynaptic current frequency in neurons of the hippocampal CA3-CA1 regions, promoting connectivity and plasticity [170]. Furthermore, acupuncture at ST36 was shown to alleviate cognitive dysfunction and to normalize cAMP concentrations, protein kinase A (PKA) activity, and the phosphorylation of cAMP response element binding protein (pCREB) and pERK in patients with vascular dementia, while blocking the PKA signaling pathway was found to reverse the beneficial effect of acupuncture, indicating that acupuncture improves hippocampal function through regulation of the cAMP/PKA/CREB signaling pathway [171]. Long-term potentiation (LTP) in the hippocampus underlies memory formation and learning, and the induction of LTP is dependent on NMDAR activation [172]. EA is able to reverse LTP impairment in a rat depression model, possibly by upregulating the expression of the NMDAR subunit GluN2B (NR2B) in the hippocampus [173]. EA at ST36 and SP6 was shown to reduce local circuit inhibition and enhance LTP, possibly by promoting synaptic transmission via inhibition of gamma-aminobutyric acid (GABA) release from interneurons, thereby increasing the excitability of granule cells [174]. Acupuncture can also activate language-related brain areas and rebuild the neural network responsible for language to relieve language impairment [175] (Table 1, Ref. [52,75,77,87,101,111,137,152,153,170,174,176]).
Issues Related to Acupuncture
Acupuncture is a promising alternative treatment option for ischemic stroke, with high efficacy, safety, and convenience. However, the efficacy of acupuncture has been challenged and questioned, fundamentally because some of its mechanisms of action remain unclear. To date there have been few international reports or studies on acupuncture treatment; high-quality acupuncture research is lacking, and the theoretical basis for the benefits of acupuncture has not yet been adequately described. Furthermore, most acupuncture research has been performed in China; clinical research results from China are not widely accepted by international mainstream medicine, and the acupuncture methods used in this research are not aligned with international standards. Research methods for studying acupuncture are often inconsistent with modern scientific methods, and no established method yet exists to evaluate the clinical efficacy of acupuncture. Randomized controlled trials (RCTs) of acupuncture are difficult to perform; the use of blinding and placebos in acupuncture research is controversial, and it can be difficult to define criteria for meta-analyses of acupuncture studies. The qualifications of the physician are another important consideration: the teaching models of major universities and training institutions vary greatly, so the theoretical knowledge and operational skills of acupuncturists worldwide are not necessarily consistent, meaning that treatment effects vary from practitioner to practitioner. In addition, owing to interindividual variability, the location of acupuncture points differs among individuals, which can bias clinical conclusions, and the optimal frequency, timing, and methods of acupuncture have yet to be determined.
Conclusions
This study reviewed the evidence for the beneficial effects of EA on ischemic stroke in animal studies.EA can promote nerve cell regeneration after ischemic stroke and alleviate CI/R injury by reducing oxidative stress and inhibiting the inflammatory response.EA can also improve cerebrovascular function, affect angiogenesis, and reduce apoptosis and autophagy.Furthermore, EA preconditioning can increase ischemic tolerance and contribute substantially to subsequent rehabilitation.
Effects of sodium fluoride and Ocimum sanctum extract on the lifespan and climbing ability of Drosophila melanogaster
Background: Fluoride may induce oxidative stress and apoptosis. It may also lead to neurobehavioural defects, including neuromuscular damage. The present study aimed to explore the effects of sublethal concentrations of sodium fluoride (NaF) on the lifespan and climbing ability of Drosophila melanogaster. Concentrations of 0.6 mg/L and 0.8 mg/L NaF were selected as sublethal for the study. Lifespan was measured and a climbing activity assay was performed. Results: The study showed a significant decrease in the lifespan of flies treated with fluoride. With increasing age, a significant reduction in climbing activity was observed in flies treated with sodium fluoride as compared to normal (control) flies. Flies treated with tulsi (Ocimum sanctum) and NaF showed increased lifespan and climbing activity as compared to those treated with NaF only. A lipid peroxidation assay showed a significant increase in malondialdehyde (MDA) values in the flies treated with NaF as compared to control. The MDA values decreased significantly in flies treated with tulsi mixed with NaF. Conclusions: The results indicate that exposure to sublethal concentrations of NaF may cause oxidative stress and affect the lifespan and climbing activity of D. melanogaster. Tulsi extract may help in reducing the impact of the oxidative stress and toxicity caused by NaF.
Background
Fluoride may enter the human body through (i) drinking water, (ii) food and food products (e.g. contaminated with pesticides) and (iii) industrial emission of fluoride dust and fumes (Susheela, 2013). Excessive ingestion of fluoride results in development of dental, skeletal and non-skeletal fluorosis in humans. Acute pesticide poisoning during childhood may lead to neurobehavioural deficits (Berger, Friedman, Jaffar, Kofman, & Massarwa, 2006;Beseler, Bouchard, & London, 2012). Toxic effects of fluoride have been documented in other animals also. Chronic fluoride exposure can alter kidney structure, renal function and induce apoptosis in pigs (Zhan, Wang, Xu, & Li, 2006). It can cause severe health problems in rat, mice and fish due to oxidative stress, DNA damage and apoptosis (He & Chen, 2006;Mukhopadhyay & Chattopadhyay, 2014).
The basic biological, physiological, and neurological properties are conserved between mammals and D. melanogaster, and nearly 75% of human disease-causing genes are believed to have a functional homologue in the fly (Lloyd & Taylor, 2010;Reiter, Potocki, Chien, Gribskov, & Bier, 2001). Therefore, D. melanogaster is used as a model system for investigating the roots of human diseases such as neurological diseases, including neuromuscular disease.
O. sanctum (Linn), commonly called 'tulsi', belongs to the family Labiatae. Several medicinal properties have been attributed to the plant, not only in Ayurveda and Siddha but also in the Greek, Roman and Unani systems of medicine. O. sanctum has been reported to possess antimicrobial, anti-stress, antidiabetic, hepatoprotective, anti-inflammatory, anti-carcinogenic, immunomodulatory, radioprotective, neuroprotective and cardioprotective properties. The leaves of O. sanctum contain alkaloids, flavonoids, glycosides, saponins, tannins, ascorbic acid and carotene (Mondal, Mirdha, & Mahapatra, 2009).
This paper reports the effects of fluoride toxicity on the lifespan and climbing ability of D. melanogaster. Impaired climbing ability may be linked to premature ageing caused by fluoride-induced oxidative stress, because fluoride inhibits bioenergetic reactions, in particular oxidative phosphorylation, reducing the physical activity of muscles (Machoy-Mokrzyńska, 2004). An attempt was also made to study the effect of O. sanctum on the lifespan and climbing activity of D. melanogaster exposed to NaF.
Methods
Native D. melanogaster was cultured in the laboratory at 25°C in standard cornmeal medium. The standard cornmeal medium consisted of maize powder, sucrose, dextrose, yeast extract and agar. Single line culture (stock) of D. melanogaster was maintained to obtain flies of the same age and strain. For determining the sublethal concentration of NaF for D. melanogaster, 0.6 mg/L, 0.8 mg/L and 1 mg/L of NaF were tested following Singh et al. (2020) and Mishra, Kumari, Ranjan and Yasmin (2020). Four sets of cornmeal media were prepared. Each set consisted of three bottles. The media were changed after every 4 days.
The four sets of bottles were as follows: 1) Control set: fruit flies were cultured in standard cornmeal media. 2) NaF-treated sets (three sets): three sets of standard cornmeal media were prepared, and 1.0 mg/L, 0.8 mg/L and 0.6 mg/L of NaF were added respectively to these three sets.
Five flies from the stock were transferred into each set and monitored.
Preparation of O. sanctum extract
O. sanctum (tulsi) leaf extract was prepared following Mitra et al. (2014). The tulsi leaves were dried in a hot air oven and powdered using a mortar and pestle. The dried O. sanctum leaf dust was soaked overnight in distilled water (15 g leaf dust per 100 ml distilled water) and filtered through a fine muslin cloth. The filtrate was centrifuged at 5000 rpm for 10 min. The supernatant thus obtained was filtered again using a fine muslin cloth, and the filtrate was collected in sterile polypropylene tubes and frozen at −20°C.
Published reports of clinical trials conducted on humans to date suggest that tulsi is a safe herbal intervention. Tulsi dosage and frequency in such studies varied from 300 mg to 3000 mg of tulsi leaf aqueous extract given 1-3 times per day to human subjects (Jamshidi & Cohen, 2017). In the present study, 10% v/v tulsi leaf extract (TLE) was used, which was roughly 50% of the above-mentioned dosage. However, higher doses of TLE may also be tried.
For performing the experiments, five sets of cornmeal media were prepared. Each set had three bottles of cornmeal media.
1) Control set with standard cornmeal media
2) NaF (0.6 mg/L) treated cornmeal media
3) NaF (0.8 mg/L) treated cornmeal media
4) O. sanctum cornmeal media (with 10% v/v tulsi leaf extract in standard cornmeal media)
5) O. sanctum + NaF (0.8 mg/L) containing media (with 10% v/v tulsi leaf extract and 0.8 mg/L of NaF in standard cornmeal media)

For studying the lifespan of native D. melanogaster in the different media mentioned above, newly eclosed flies were collected from the stock and raised in the respective media at 25°C. Twenty flies were placed into each bottle and were transferred to bottles with fresh media after every 4 days. The numbers of dead flies were counted every day.
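As an illustration of how these daily mortality counts can be processed, the following minimal Python sketch (not part of the original study; the death counts are hypothetical placeholders) turns per-bottle daily death counts into a survival curve and a median-lifespan estimate.

```python
# Minimal sketch (not part of the original study): turn daily death
# counts from one bottle into a survival curve and a median lifespan.
# The counts below are hypothetical placeholders.
import numpy as np

def survival_curve(daily_deaths, n_start=20):
    """Fraction of flies alive at the end of each day."""
    alive = n_start - np.cumsum(np.asarray(daily_deaths, dtype=float))
    return alive / n_start

def median_lifespan(daily_deaths, n_start=20):
    """First day on which 50% or more of the flies are dead."""
    surv = survival_curve(daily_deaths, n_start)
    below = np.nonzero(surv <= 0.5)[0]
    return int(below[0]) + 1 if below.size else None  # days are 1-indexed

control_deaths = [0, 0, 1, 0, 2, 1, 3, 4, 5, 2, 1, 1]  # placeholder data
print(survival_curve(control_deaths))
print("median lifespan (days):", median_lifespan(control_deaths))
```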
The climbing activity assay was performed following Manjila and Hasan (2018) in a 50 ml glass cylinder. Batches of 20 flies from each experimental set were used to perform the assay. A timer of 10 s was set and the number of flies crossing the 50 ml mark was counted. Each assay was repeated thrice and the average climbing ability of each batch of flies was calculated. The climbing activity of each batch was monitored at intervals of 3 days.
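A short sketch makes the scoring of this assay explicit (hypothetical counts, not the study's data):

```python
# Sketch of the climbing-assay scoring described above (placeholder
# counts, not the study's data): average over three repeated trials.
import numpy as np

def climbing_ability(counts_per_trial, batch_size=20):
    """Mean fraction of flies crossing the 50 ml mark within 10 s."""
    return np.mean(counts_per_trial) / batch_size

control_trials = [18, 17, 19]   # flies crossing the mark, 3 trials
naf_trials = [9, 7, 8]
print("control:", climbing_ability(control_trials))
print("0.8 mg/L NaF:", climbing_ability(naf_trials))
```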
Lipid peroxidation assay was performed following Ohkawa, Ohishi, and Yagi (1979) on third generation flies (flies exposed to NaF for three generations). In total, 0.3 g of the flies was taken and homogenised by adding 1 ml of 0.1% trichloroacetic acid (TCA) in a glass homogeniser. The homogenate was centrifuged at 5000 rpm for 15 min at room temperature. One millilitre of the supernatant was transferred into a clean and dry test tube and 2 ml of freshly prepared 0.5% thiobarbituric acid (TBA) in 20% TCA was added into it. This sample was incubated at 90°C for 30 min and subsequently cooled at room temperature. Absorbance was measured by dual beam spectrophotometer at 532 and 600 nm. All the readings were taken in triplicates.
The MDA level was calculated by the following formula:

MDA (nmol/g dry weight) = (OD × TV) / (1.56 × 10^5 × dw)

where OD = optical density (absorbance at 532 nm minus absorbance at 600 nm), TV = total volume of the sample, dw = dry weight of the sample, and 1.56 × 10^5 M^-1 cm^-1 is the molar extinction coefficient of the MDA-TBA adduct (Ohkawa et al., 1979). Statistical analysis was performed using ANOVA, and p < 0.05 was considered significant. One-way ANOVA was used to analyse the climbing assay data and two-way ANOVA was used to analyse the lifespan data.
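A worked example of the MDA calculation and of a one-way ANOVA may help; the following Python sketch assumes a 1 cm optical path and uses illustrative absorbance values and climbing scores, not the study's measurements.

```python
# Sketch of the MDA calculation (Ohkawa et al., 1979) and a one-way
# ANOVA; the absorbance values and climbing scores are illustrative
# placeholders, and a 1 cm optical path is assumed.
import numpy as np
from scipy.stats import f_oneway

EXT_COEFF = 1.56e5  # M^-1 cm^-1, extinction coefficient of MDA-TBA

def mda_nmol_per_g(a532, a600, total_volume_ml, dry_weight_g):
    """MDA in nmol per g dry weight from dual-wavelength absorbance."""
    od = a532 - a600                         # corrected optical density
    conc_mol_per_l = od / EXT_COEFF          # Beer-Lambert law
    moles = conc_mol_per_l * total_volume_ml / 1000.0
    return moles * 1e9 / dry_weight_g        # mol -> nmol, per g dw

print(mda_nmol_per_g(a532=0.42, a600=0.05,
                     total_volume_ml=3.0, dry_weight_g=0.3))

# One-way ANOVA across treatment groups (placeholder climbing scores):
control, naf, tulsi_naf = [18, 17, 19], [9, 7, 8], [14, 13, 15]
print(f_oneway(control, naf, tulsi_naf))  # significant if p < 0.05
```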
Results
Flies cultured in media with 1.0 mg/L NaF survived for 1 week only. Some eggs but no larvae were found in these media after 1 week. Flies cultured in media with 0.6 mg/L and 0.8 mg/L NaF continued to survive and produce eggs and larvae. Therefore, 0.6 mg/L and 0.8 mg/L of NaF were considered sublethal concentrations for the present study. Singh et al. (2020) and Mishra et al. (2020) also reported 0.8 mg/L as the sublethal concentration of NaF for D. melanogaster and Zaprionus indianus, respectively. Survival of D. melanogaster was better in control media (Table 1) than in the media containing 0.6 mg/L NaF (Table 2), where only a single fly survived for 25 days. In the media containing 0.8 mg/L NaF (Table 3), a single fly survived for 22 days. Survival of flies in the media containing O. sanctum extract (Table 4) was similar to that in the control media, whereas survival of flies in media containing NaF + O. sanctum extract was better than in media with NaF alone (Table 5). Fifty percent of the flies were dead by the end of the 4th week in control media (Table 1), by the 2nd week in media with NaF (Tables 2 and 3), by the 5th week in media with tulsi extract (Table 4) and by the end of the 3rd week in media with tulsi extract + 0.8 mg/L NaF (Table 5). The lifespan of D. melanogaster was found to be ~75 days in the control set, ~76 days in media with tulsi extract, ~40 days in media with 0.6 mg/L NaF, ~37 days in media with 0.8 mg/L NaF and ~57 days in media with tulsi extract + 0.8 mg/L NaF. A comparative picture of the survival of flies in the different media is shown in Fig. 1a and b.
Climbing activity of D. melanogaster was maximum after 3 days of eclosion in control media (Fig. 2). On the other hand, the climbing activity of flies reared in media with 0.6 mg/L and 0.8 mg/L NaF was suppressed markedly after 3 days of eclosion. A progressive decline in climbing activity was seen in the flies exposed to NaF after 6, 9 and 12 days of eclosion (Figs. 3, 4 and 5). These flies could not show any climbing activity after 15 days of eclosion (Fig. 6). D. melanogaster cultured in media with tulsi extract showed improved climbing activity after 15 days of eclosion (Figs. 2, 3, 4, 5 and 6).
Discussion
The significant reduction in lifespan of fluoride treated flies was similar to the findings of Khatun et al. (2018), where NaF exposure to D. melanogaster in the parental generation led to an increase in adult mortality. Significant reduction in the climbing activity was also seen in the flies treated with NaF. Similarly, Khatun et al. (2018) and Sarkar, Roy, and Roy (2018) also observed alteration in climbing behaviour in flies exposed to sub lethal concentrations of fluoride. Fluoride is thought to inhibit the activity of antioxidant enzymes, such as superoxide dismutase, glutathione peroxidase and catalase. Depletion of glutathione results in excessive production of reactive oxygen species at the mitochondrial level, leading to the damage of cellular components. Abolaji et al. (2019) treated D. melanogaster with NaF and found altered levels of oxidative stress markers (Glutathione-S-transferase (GST), catalase and acetylcholinesterase (AchE) activities, total thiol (T-SH), nitrites/nitrates and hydrogen peroxide (H 2 O 2 ) levels). These parameters could be balanced by resveratrol (a natural polyphenol with antioxidant and anti-inflammatory properties).
In the present study, when tulsi extract was mixed in the media with NaF, the flies survived and maintained their climbing activity in comparison to the flies cultured in media with NaF only. Similar results were found by Siddique, Faisal, Naz, Jyoti, and Rahul (2014), where the flies exposed to various doses of O. sanctum extract showed a dose-dependent significant delay in the loss of climbing ability.
Higher MDA values in the flies treated with NaF indicated high oxidative stress, which is similar to the results obtained by Patel and Chinoy (1998), Wang et al. (2004) and Dutta et al. (2017). Treatment with tulsi extract possibly reduced the oxidative stress. Siddique et al. (2014) also found that treatment with tulsi extract caused a reduction in oxidative stress in the brain of Parkinson's disease model flies.

(Figure captions recovered from the text: Fig. 2, climbing ability of Drosophila melanogaster after 3 days of eclosion; Fig. 3, after 6 days; Fig. 4, after 9 days; Fig. 5, after 12 days. Asterisks in each figure mark significant differences among the control, NaF, tulsi and tulsi + NaF groups.)
Oxidative stress plays a major role in ageing and is associated with several neurodegenerative diseases. O. sanctum (tulsi) leaf extract possesses antioxidative properties (Mitra et al., 2014). O. sanctum leaves are rich in polyphenolic flavonoids which act as antioxidants (Hakkim, Gowri Shankar, & Girija, 2007) and are helpful in preventing lipid peroxidation (Geetha & Vasudevan, 2004). Aqueous extract of O. sanctum leaves may function simply by quenching the free radicals generated during oxidative stress, or it may improve the antioxidant enzyme status of the tissue in the face of oxidative stress (Mitra et al., 2014). Tulsi is a chief source of many biologically active compounds, such as ursolic acid, eugenol, rosmarinic acid, linalool, carvacrol and β-caryophyllene, and these compounds play a significant role in the treatment and prevention of many diseases (Almatroodi, Alsahli, Almatroudi, & Rahmani, 2020). The antioxidant nature of O. sanctum leaf extract might have reduced the oxidative stress caused by NaF in the D. melanogaster used in this study. Shivananjappa and Joshi (2012) also found that aqueous extract of tulsi had putative potency to enhance the endogenous antioxidant defences in a human hepatocyte cell line (HepG2), which can potentially effect faster dissipation of ROS. Free radical scavenging is a chief mechanism through which Ocimum sanctum products protect against cellular damage, and this property underlies its strong antioxidant activity (Ganasoundari et al., 1998; Keshari, Srivastava, Verma, & Srivastava, 2016).
In the present study, tulsi extract alone was not found to significantly increase the lifespan of flies as compared to control, but the lifespan of flies treated with fluoride + tulsi was significantly longer than that of flies treated with fluoride alone. The results of this study should be considered preliminary, and further investigations are required to establish the value of tulsi in increasing lifespan. Research has shown that tulsi reduces stress, enhances stamina, relieves inflammation, lowers cholesterol, eliminates toxins, protects against radiation, prevents gastric ulcers, lowers fevers, improves digestion and provides a rich supply of antioxidants and other nutrients. Nutritional analysis of Ocimum sanctum has shown high levels of ascorbic acid, N, P, K, total phenol, carbohydrates and proteins in the leaves, which may contribute to its health benefits and help to enhance lifespan (Patel, 2020). Further, tulsi has been found to mediate a significant reduction in tumour cell size and an increase in the lifespan of mice bearing sarcoma-180 solid tumours (Nakamura et al., 2004). Fluoridated insecticides may be helpful in targeting pests (Metcalf, 2015), but the toxic effects of fluoride on non-target animals should not be neglected (Dhar & Bhatnagar, 2009; Sauerheber, 2013). The results of the present study indicate that aqueous tulsi leaf extract acts as an antioxidant, possibly by scavenging oxygen free radicals and other reactive oxygen intermediates. Thus, O. sanctum has the potential to reduce fluoride toxicity in D. melanogaster. The study suggests that O. sanctum (tulsi) may be of future therapeutic relevance, particularly in areas where humans are chronically exposed to fluoride either occupationally or through the food chain.
Conclusion
The present study concluded that exposure of D. melanogaster to sublethal concentrations of NaF caused oxidative stress-induced damage in its body, leading to reduced lifespan and climbing activity. It was also concluded that O. sanctum extract may reduce oxidative stress and fluoride toxicity. Therefore, O. sanctum can be of therapeutic relevance.
Electron cooling with graphene-insulator-superconductor tunnel junctions and applications to fast bolometry
Electronic cooling in hybrid normal metal-insulator-superconductor junctions is a promising technology for the manipulation of thermal loads in solid state nanosystems. One of the main bottlenecks for efficient electronic cooling is the electron-phonon coupling, as it represents a thermal leakage channel to the phonon bath. Graphene is a two-dimensional material that exhibits a weaker electron-phonon coupling compared to standard metals. For this reason, we study the electron cooling in graphene-based systems consisting of a graphene sheet contacted by two insulator/superconductor junctions. We show that, by properly biasing the graphene, its electronic temperature can reach base values lower than those achieved in similar systems based on metallic ultra-thin films. Moreover, the lower electron-phonon coupling is mirrored in a lower heat power pumped into the superconducting leads, thus avoiding their overheating and preserving the cooling mechanisms. Finally, we analyze the possible application of cooled graphene as a bolometric radiation sensor. We study its main figures of merit, i.e., responsivity, noise equivalent power and response time. In particular, we show that the built-in electron refrigeration allows reaching a responsivity of the order of 50 nA/pW and a noise equivalent power of the order of 10^-18 W Hz^-1/2, while the response speed is about 10 ns, corresponding to a thermal bandwidth of the order of 20 MHz.
A cornerstone in this field is the electron refrigeration in voltage-biased Normal metal-Insulator-Superconductor (NIS) tunnel junctions [41,42]. In such a system, the gap of the superconductor acts as an energy filter for the electrons of the N metal: under an appropriate voltage bias, only the most energetic electrons, i.e., the hottest ones, are able to tunnel into the superconductor, resulting in a decrease of temperature in the N metal [23,24,41,42]. The performance of this system is adversely affected by two main phenomena. The first is an intrinsic thermal leakage owing to the electron-phonon coupling [23]. The phonons of the metal can be considered as a thermal bath, whose temperature is set by the substrate temperature. Phonons interact with electrons over the metal volume, consequently supplying heat. The second is that the heat extracted from the N metal warms up the superconducting leads, with the consequent decrease of the superconducting gap and deterioration of the energy filtering of the electrons [43][44][45].
In this paper, we study graphene refrigeration based on two Graphene-Insulator-Superconductor (GIS) tunnel junctions forming a SIGIS system. Graphene has several interesting properties compared to metals, for example a charge-carrier-concentration-dependent density of states [46] and a weaker, gate-tunable electron-phonon coupling [47,48]. The weak electron-phonon coupling arises from the graphene dimensionality [49], as tested in other low-dimensional materials [50,51]. As a consequence, for a given cooling power, a SIGIS can reach lower temperatures with respect to a SINIS system. Moreover, the lower heat current pumped into the leads decreases their adverse heating, making electron cooling more accessible for concrete applications.
A natural application of electron cooling in SIGIS systems concerns the detection of electromagnetic radiation via bolometric effect. It is known that SINIS systems can be used as bolometers, where the built-in refrigeration enhances the responsivity and decreases the Noise Equivalent Power (NEP) [52][53][54][55][56][57]. A SIGIS-based bolometer inherits the advantages of built-in refrigeration from a SINIS system, combining them with graphene optoelectronic properties [13], such as wide energy absorption spectrum, ultra-fast carrier dynamics [58][59][60][61][62], and tunable optical properties via electrostatic doping [63,64]. In particular, the lower operating temperature and the weaker electron-phonon coupling allow further decreasing the NEP, while the graphene low heat capacity allows a faster response time compared to a SINIS bolometer.
From the industrial point of view, SIGIS systems may also have high potential for wafer-scale integration thanks to the high quality currently reached in large-area graphene production [65]. Moreover, the tunnel junction can be realized with hexagonal Boron Nitride (hBN), an insulating material extremely suitable to be combined with graphene due to the crystal similarities. Tunnel barriers based on hBN represent a valuable alternative to standard metal-oxide insulators, simplifying the fabrication into standard steps [66]. The paper is organized as follows. Section II introduces the device model, the GIS tunneling, and the thermal model. Section III studies the graphene base temperature in a biased SIGIS, also giving a comparison with a standard SINIS system. Section IV investigates the system response to perturbations and the related dynamical response time. In section V, we study the bolometric properties by focusing on the responsivity and the NEP. Section VI discusses the impact of the junction quality on the studied properties and yields a quantitative threshold for experiments. Section VII compares our findings with similar bolometric architectures. Finally, section VIII summarizes our main findings.
II. MODEL
We consider the system sketched in Fig. 1. It consists of a graphene sheet contacted by two superconducting leads through tunnel junctions of resistance R_t each. The superconductors are assumed to be made of aluminum, with superconducting gap Δ_0 = 200 µeV and critical temperature T_c ~ 1.3 K. The graphene can be deposited directly on SiO₂ or hBN. The graphene sheet has a rectangular area, with geometrical dimensions A = W × L. The two leads, with dimensions W × W_S, are placed at distance L (see Fig. 1b) and connected to a voltage generator V_ext. The electric current I is determined by R_t and by the graphene sheet resistance R_G = Lρ/W, where the sheet resistivity ρ = 1/(enµ) depends on the carrier density n and on the electron mobility µ, e being the modulus of the electron charge. The graphene is gated with a back-gate placed under the substrate and connected to an external generator V_G.
The proposed setup has many geometrical/fabrication parameters; as a consequence, we fix some of them to reasonable experimental values. By choosing proper geometrical dimensions for the graphene sheet, we consider a sheet resistance negligible compared to the tunnel resistance (R_G ≪ R_t). This assumption allows neglecting the voltage partition between the junctions and the sheet, so that the Joule heating of graphene is negligible. To this aim, we set the aspect ratio to L = W/5, corresponding to R_G ≈ 250 Ω for graphene with µ ≈ 5000 cm²/Vs and residual carrier density n_0 ≈ 1 × 10^12 cm^-2, typical for graphene on SiO₂ [67][68][69]. A similar value of resistance can be considered for graphene encapsulated in a hBN/G/hBN heterostructure, where mobilities are commonly over µ ≈ 50 000 cm²/Vs but the residual charge densities are lower than n_0 ≈ 1 × 10^11 cm^-2 [70][71][72][73]. An advantage of encapsulated graphene is that the top layer of ultra-thin hBN can be exploited as a high-quality tunnel junction [66].
We consider a large graphene area A = 100 µm 2 . Large area samples are preferred for bolometric applications since they keep the device in linear response regime and extend the dynamical range of the detector [14,74]. Moreover, a greater area reduces the temperature fluctuations, since the thermal inertia due to the heat capacity scales with the area.
Finally, we fix the tunnel resistance to R_t = 10 kΩ. This value is compatible with a tunnel junction made of 2-layer hBN [47,66] and makes the assumption R_t ≫ R_G valid. We also observe that the tunnel barriers suppress the superconducting proximity effect in graphene. Table I summarizes the parameters adopted in the numerical simulations. Some of them will be introduced in the following.
A. GIS tunneling and cooling
Here, we introduce the main equations and discuss the electron tunneling through a GIS junction. The tunneling rate is proportional to the Density of States (DoS) of graphene and superconductor [75]. The graphene DoS ν_G reads [46]

ν_G(ε) = ρ_G0 ρ_G(ε),  with ρ_G(ε) = |ε + E_F| / E_F,     (1)

where ε is the energy measured from the Fermi level, ρ_G0 is the DoS at the Fermi level, ρ_G(ε) is the normalized graphene DoS and E_F is the Fermi energy. The DoS at the Fermi level is related to the carrier density by

ρ_G0 = 2E_F / (π ℏ² v_F²),  with E_F = ℏ v_F √(π n),     (2)

where v_F ≈ 10^6 m/s is the Fermi velocity [46] and ℏ ≈ 6.6 × 10^-16 eV·s is the reduced Planck constant.
The superconductor DoS is

ν_S(ε) = ρ_S0 ρ_S(ε),  with ρ_S(ε) = |Re[(ε + iΓ_D) / √((ε + iΓ_D)² − Δ(T)²)]|,     (3)

where ε is the energy, ρ_S0 is the DoS at the Fermi level of normal-state aluminum, ρ_S(ε) is the normalized superconductor DoS, Δ(T) is the temperature-dependent superconducting gap of the Bardeen-Cooper-Schrieffer (BCS) theory and Γ_D is the Dynes parameter, which phenomenologically takes into account the subgap tunneling and the smearing of the superconducting peaks, both related to the quality of the junction. In this paper, we fix Γ_D = 10^-4 Δ_0 for simplicity; in section VI, we show the dependence of the results on higher values of Γ_D. The charge current in a GIS tunnel junction can be expressed as [47,75]

I(V, T_G, T_S) = (1/eR_t) ∫ dε ρ_G(ε − eV) ρ_S(ε) [f(ε − eV, T_G) − f(ε, T_S)],     (4)

where V is the voltage drop across the tunnel junction, and T_G and T_S are the graphene and superconductor electronic temperatures, respectively. Finally, f(ε, T) is the Fermi distribution. In the following we assume that the two junctions are identical, so that the voltage drop across each junction is V = V_ext/2 (see Sec. V). Similarly, the heat current from G to S is

P_GIS(V, T_G, T_S) = (1/e²R_t) ∫ dε (ε − eV) ρ_G(ε − eV) ρ_S(ε) [f(ε − eV, T_G) − f(ε, T_S)].     (5)

We set the sign convention such that P_GIS > 0 means that heat is extracted from graphene towards the superconductors. It is important to note that when the graphene Fermi energy E_F is much greater than the superconducting gap Δ_0, the graphene DoS dependence on energy can be disregarded in the tunneling integrals, i.e., ρ_G(ε − eV) ≈ 1: the tunneling integrals select an energy bandwidth of a few Δ_0 around the Fermi level, and in this energy window the graphene DoS has a variation of the order of Δ_0/E_F, which can be neglected when E_F ≫ Δ_0. This condition generally holds experimentally, as indicated by the presence of a residual charge density n_0 [72]. The lowest values of residual charge density are obtained in high-quality hBN/G/hBN heterostructures and unlikely go below n_0 ≈ 5 × 10^10 cm^-2 [76]; this value corresponds to a Fermi energy at least 100 times the value of Δ_0 = 0.2 meV, confirming Δ_0 ≪ E_F. We remark that the BCS theory provides Δ(T) < Δ_0, implying E_F ≫ Δ_0 > Δ(T), i.e., ensuring that the superconducting gap is lower than the Fermi energy at every temperature.
Therefore, the tunneling integrals in Eqs. (4), (5) take the standard functional form of the NIS tunneling expressions [23,24,77,78]. We point out that this approximation does not completely drop the dependence of the tunnel integrals on the Fermi level/carrier density: it is still contained in R_t. We will discuss this point at the end of this subsection. Figure 2a displays the behavior of P_GIS versus V when T_G is equal to the bath temperature T_B. In the regions where P_GIS > 0, heat is extracted from graphene, implying electron cooling. This corresponds to the yellow-green area delimited by the white curve (P_GIS = 0). The cooling power is maximized, for a given value of T_B, at the optimal voltage bias V_opt(T_B) (see the red curve in Fig. 2a). The value of the cooling power along the V_opt curve is reported in Fig. 2b as a function of T_B. The maximum is about P_GIS ≈ 0.06 Δ_0²/(e²R_t) for T_B ≈ 0.6 K ≈ T_c/2 and V ≈ 0.82 Δ_0/e (~170 µV for aluminum). For R_t = 10 kΩ, the maximum cooling power corresponds to about P_GIS ≈ 0.24 pW.
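To make Eqs. (3)-(5) concrete, the following Python sketch (not the authors' code) numerically evaluates the cooling power of a single junction under the stated assumptions: ρ_G ≈ 1 (E_F ≫ Δ_0), a Dynes-broadened BCS DoS, and Δ(T) ≈ Δ_0, which is adequate well below T_c.

```python
# Minimal numerical sketch (not the authors' code) of the cooling power
# of one GIS junction, Eq. (5), assuming rho_G ~ 1 (E_F >> Delta_0),
# a Dynes-broadened BCS DoS, and Delta(T) ~ Delta_0 (well below T_c).
import numpy as np
from scipy.integrate import quad

KB = 1.380649e-23           # J/K
E = 1.602176634e-19         # C
DELTA0 = 200e-6 * E         # aluminum gap (J)
GAMMA_D = 1e-4 * DELTA0     # Dynes parameter (J)
RT = 10e3                   # tunnel resistance (ohm)

def rho_s(eps):
    """Normalized superconductor DoS with Dynes broadening, Eq. (3)."""
    z = (eps + 1j * GAMMA_D) / np.sqrt((eps + 1j * GAMMA_D)**2 - DELTA0**2)
    return abs(z.real)

def fermi(eps, temp):
    return 1.0 / (1.0 + np.exp(eps / (KB * temp)))

def p_gis(volt, t_g, t_s):
    """Heat extracted from graphene by one junction (W), Eq. (5)."""
    def integrand(eps):
        return ((eps - E * volt) * rho_s(eps)
                * (fermi(eps - E * volt, t_g) - fermi(eps, t_s)))
    lim = 30 * KB * max(t_g, t_s) + 5 * DELTA0   # integration window
    val, _ = quad(integrand, -lim, lim, limit=400,
                  points=[-DELTA0, DELTA0])
    return val / (E**2 * RT)

t_bath = 0.6                                   # K, near T_c/2
v_opt = (DELTA0 - 0.66 * KB * t_bath) / E      # optimal bias
print("P_GIS at optimal bias:", p_gis(v_opt, t_bath, t_bath), "W")
# Expected order ~0.2 pW, consistent with 0.06*Delta0^2/(e^2*Rt).
```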
Low-temperature (T_S, T_G ≪ Δ_0/k_B) approximated expressions of Eqs. (4) and (5) are reported in Refs. [23,24,77,78]. In this approximation, the optimal cooling bias is eV_opt ≈ Δ_0 − 0.66 k_B T_S (see the dotted black curve in Fig. 2a), corresponding to an electric current

I(V_opt) ≈ (Δ_0/eR_t) √(π k_B T_G / 2Δ_0) exp[(eV_opt − Δ_0)/(k_B T_G)],     (6)

and a related cooling power

P_GIS(V_opt) ≈ (Δ_0²/e²R_t) [0.59 (k_B T_G/Δ_0)^{3/2} − √(2π k_B T_S/Δ_0) exp(−Δ_0/k_B T_S)].     (7)

Before concluding this section, we wish to discuss the dependence of Eqs. (4) and (5) on the carrier density n and how this can affect the electronic and thermal transport. The carrier density n is tuned via field effect by the gate voltage V_G (see Fig. 1a). The electric and thermal currents depend on n through the tunnel resistance R_t. The latter is inversely proportional to the DoS of both graphene and superconductor and to the modulus square of the tunneling amplitude |U_0|², i.e., R_t ∝ 1/(ρ_G0 ρ_S0 |U_0|²) [75,79]. Since ρ_G0 ∝ √n, the GIS tunnel resistance depends on the carrier density as

R_t(n) = R_t(n_0) √(n_0/n),     (8)

where n_0 is the residual carrier density. This equation implies

P_GIS(V, T_G, T_S, n) = P_GIS(V, T_G, T_S, n = n_0) √(n/n_0).     (10)
This simple scaling with n is valid under the approximation ρ_G(ε − eV) ≈ 1, i.e., when E_F ≫ Δ_0. This condition is experimentally respected since the charge density n can typically be tuned in a range from 5 × 10^10 cm^-2 to 5 × 10^13 cm^-2 when using standard solid gating. This range is experimentally limited from below by the presence of charge puddles [68] and from above by the occurrence of gate dielectric breakdown caused by high voltage.
B. SIGIS Thermal model
In this section, we describe the thermal model that includes all the thermal channels to graphene, as sketched in Fig. 2c. We consider the graphene sheet homogeneously at the same temperature, neglecting the spatial dependence of T_G, thanks to the high heat diffusivity in graphene [58][59][60]. Moreover, we treat the graphene phonon bath as a reservoir at a fixed temperature T_B. This assumption is physically reasonable owing to the negligible Kapitza thermal resistance between the graphene and the substrate [80,81]. Finally, we consider the superconductor electrons as a thermal reservoir well thermalized with the substrate, by imposing T_S = T_B. This assumption can be violated in real experiments, where the heat pumped into the superconductor heats up its quasi-particles and the weak electron/phonon (e/ph) coupling provides poor cooling to the bath [23]. This effect is detrimental for the superconducting state and, as a consequence, for cooling. In general, it can be weakened by contacting the superconductor with hot quasi-particle traps or coolers in cascade [43-45, 82, 83], making our assumption physically reasonable. Moreover, in a SIGIS system the amount of heat transferred into the superconductor is lower than in a SINIS system, because of the lower heat leakage from the phonon bath to the graphene electrons.
Thus, in our thermal model (see Fig. 2c) the only variable is the graphene temperature T_G, which is determined by the solution of the following heat balance equation:

C(T_G) ∂_t T_G = −2P_GIS − P_e/ph + P_J + P_in.     (11)

This equation takes into account the heat current across the two junctions 2P_GIS, the electron-phonon coupling in graphene P_e/ph, the Joule heating P_J and a possible external power input P_in (for example, a radiation power) that we consider in order to investigate the bolometric response. We also take into account the time dependence of T_G by introducing the electron heat capacity C, which plays the role of the thermal inertia of the system when the dynamic response is investigated.
Let us consider the electron-phonon heat current P_e/ph. Below the Bloch-Grüneisen temperature (~50 K), P_e/ph is characterized by two different regimes, depending on whether the wavelength of thermal phonons is longer or shorter than the electron mean free path l_mfp [47,48,[84][85][86]]. In the clean regime (or short-wavelength regime) the e/ph coupling reads

P_e/ph = A Σ_C (T_G⁴ − T_B⁴),

while in the dirty regime (or long-wavelength regime) it takes the form

P_e/ph = A Σ_D (T_G³ − T_B³),

where Σ_C, Σ_D are the electron-phonon coupling constants, which depend on the sound speed s ≈ 2 × 10⁴ m/s, the mass density ρ_M ≈ 7.6 × 10^-7 kg/m², the deformation potential D_p ≈ 13 eV, the mean free path l_mfp ≈ 60 nm and the Riemann zeta value ζ(3) ≈ 1.2. The resulting coupling constants are Σ_C ≈ 0.024 pW µm^-2 K^-4 and Σ_D ≈ 0.023 pW µm^-2 K^-3 [46-48, 74, 87-89].
In the following we consider both graphene regimes, writing the generic coupling as

P_e/ph = A Σ_δ (T_G^δ − T_B^δ),     (15)

where δ can be 3 or 4 according to the dirty or clean regime, respectively, and Σ_δ is Σ_D or Σ_C coherently. In the temperature range between 0.1 K and 1 K, graphene on SiO₂ shows a dirty regime, while hBN-encapsulated graphene is in a clean regime [47,74]. The reason is the different mobility (and therefore different electron mean free path) due to the presence of the hBN encapsulation [47,48,74].
The effect of the two regimes can be evaluated through the electron-phonon thermal conductance G_e/ph in a system where T_G is perturbed from equilibrium. G_e/ph is calculated from the linear expansion P_e/ph ≈ G_e/ph (T_G − T_B), giving

G_e/ph = δ A Σ_δ T_B^{δ−1}.     (16)

The G_e/ph in the two regimes are of the same order of magnitude at T_B = 1 K, but the different temperature scaling makes the clean regime weaker compared to the dirty one when T_B is below 1 K. The Joule heating is due to the electron current flow in the resistive graphene sheet. It is given by P_J = R_G I²(T_G, T_B, V) and is a component that spoils cooling. In this system, the current-voltage characteristic is non-linear, and the current is suppressed by the presence of the superconducting gap. The Joule heating scales as ~Δ_0² R_G/(eR_t)², while the cooling power scales as ~Δ_0²/(e²R_t). The ratio between the Joule heating and the cooling power then scales as ~R_G/R_t, implying that the cooling performance is not affected by the Joule effect when R_G ≪ R_t. Indeed, we found in our simulations that Joule heating weakly affects the thermal equilibrium, which is instead dominated by the competition between P_GIS, P_e/ph, and P_in. For this reason, we neglect the Joule heating in the analytic results, while we keep it in the numerical ones.
We remark that, in our thermal model, we do not include the photonic and the phenomenological back-tunneling channels [24,87,[90][91][92]]. These two contributions depend on fabrication parameters, such as the device design and the junction quality. For this reason, they are often treated as empirical parameters to fit experimental data. Moreover, in the range of temperatures studied in this paper (above 0.1 K), the photonic thermal conductance in our device is negligible compared to the phononic thermal conductance [87]. Finally, the quasi-particle back-tunneling can be managed by adjusting the tunnel resistance of the junction.
The heat capacity for k_B T_G ≪ E_F is given by the standard Fermi-liquid result [87,88,93]

C(T_G) = A γ T_G,

where γ = (π²/3) k_B² ρ_G0 is the Sommerfeld coefficient. We notice that the linear behavior of C in temperature owes to the fact that k_B T_G ≪ E_F, yielding the same behavior as in a metal. The dependence of C on the Fermi energy (and hence on the residual charge density) enters through ρ_G0 ∝ √n, so that C ∝ √n.

Finally, we comment on the dependence of the heat current contributions on the carrier density. For simplicity, we assume a homogeneous charge density n over the whole graphene area, even though the screening under the metallic contacts may slightly affect this assumption. Anyway, since cooling requires very small potential differences (≈1 mV) between the contacts and the graphene, the electron density under the electrodes can be considered constant. Hence, the carrier density of the whole graphene sheet can be tuned mainly by the backgate, with negligible charge inhomogeneities due to the specific electrostatic problem. We recall that the sheet resistivity is given by ρ ∝ 1/n, implying

R_G(n) = R_G(n_0) n_0/n.

This equation and R_t(n) in Eq. (8) imply that P_J ∝ R_G/R_t² does not depend on n. Moreover, considering Eq. (10) and P_e/ph ∝ √n, the heat balance equation can be written as

√(n/n_0) [C(n_0) ∂_t T_G + 2P_GIS(n_0) + P_e/ph(n_0)] = P_J + P_in.

The dominant terms P_GIS and P_e/ph scale as √n, while the Joule heating and the external power input P_in are constant in n. Hence, the thermal properties are weakly affected by the graphene carrier density when the Joule heating is negligible and P_in = 0. The heat balance equation in the presence of an external source (P_in ≠ 0) will be discussed in section V.
III. BASE TEMPERATURE
In this section, we investigate the stationary (∂ t T G = 0) quasi-equilibrium case of the heat balance equation (11) in the absence of external input power (P in = 0). Solving the balance equation for T G , we can calculate the base temperature T G,b reached by cooled graphene. Fig. 3a reports a color map of T G,b /T B versus (V, T B ) for the case of clean graphene regime. The black line for T G /T B = 1 separates the region of cooling and heating of graphene. Figure 3b reports T G versus V for chosen values of bath temperature T B . When V → 0, the graphene temperature tends to the equilibrium with the bath temperature T B . The minimum temperature is reached when the voltage bias is set closely below ∆(T )/e. In the dirty regime, the cooling behavior is qualitatively similar but lower in performance compared to that in the clean graphene regime, due to the stronger e/ph thermal conductance (see Eq. (16)), implying higher base temperatures.
When the Joule effect is negligible, the base temperature is given by the equilibrium between the electron-phonon heating power and the junction cooling power. The former scales with the area A, while the latter scales as P_GIS ∝ R_t^-1. As a consequence, the base temperature is lowered by decreasing the product A R_t. The junction resistance cannot be decreased at will, since the R_G ≪ R_t condition must be satisfied; otherwise, the detrimental Joule heating contribution is no longer negligible and the voltage partition between sheet and junctions must be properly considered.
The heat balance equation can be solved analytically at optimal bias and low temperatures if the Joule heating is negligible and if the graphene is in the dirty regime. With these assumptions, Eq. (7) can be used for P_GIS and the heat balance equation takes a polynomial form that can be solved exactly. On the contrary, the T_G⁴ form of the e/ph coupling in the clean regime yields a balance equation that cannot be solved analytically. The analytic solution is obtained by substituting P_GIS with Eq. (7) and P_e/ph with Eq. (15) in the thermal balance equation, which becomes a second-order equation in T_G^{3/2} whose physical root gives the base temperature [Eq. (21)]. Fig. 3c reports the dependence of T_G,b on R_t calculated numerically for the dirty and clean regimes. The analytical result of Eq. (21) for T_G,b in the dirty regime is represented by the red dashed line. We can notice that decreasing R_t further reduces the achievable base temperature. The agreement between the numeric and analytic results for T_G,b in the dirty regime is generally good; when k_B T_B/Δ_0 approaches 1, the solution depends on the accuracy of the P_GIS approximation, with the consequence that the leading-order approximation of P_GIS in Eq. (7) is no longer sufficient.
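The following sketch illustrates the analytic route just described: it solves 2P_GIS = P_e/ph for T_G,b in the dirty regime by root finding, with the low-temperature optimal-bias approximation of Eq. (7) standing in for the full tunnel integral (an assumption; the complete integrals refine these numbers).

```python
# Sketch of the analytic route above: base temperature T_G,b from
# 2*P_GIS = P_eph in the dirty regime, with the low-temperature
# optimal-bias approximation of Eq. (7) for the cooling power.
import numpy as np
from scipy.optimize import brentq

KB = 1.380649e-23
E = 1.602176634e-19
DELTA0 = 200e-6 * E        # J
RT = 10e3                  # ohm
AREA = 100e-12             # m^2 (100 um^2)
SIGMA_D = 0.023            # W m^-2 K^-3, dirty-regime coupling

def p_gis_opt(t_g, t_s):
    """Approximate cooling power per junction at optimal bias, Eq. (7)."""
    pref = DELTA0**2 / (E**2 * RT)
    backflow = (np.sqrt(2 * np.pi * KB * t_s / DELTA0)
                * np.exp(-DELTA0 / (KB * t_s)))
    return pref * (0.59 * (KB * t_g / DELTA0)**1.5 - backflow)

def p_eph(t_g, t_b):
    """Dirty-regime e/ph heat current (W), positive heats the graphene."""
    return AREA * SIGMA_D * (t_b**3 - t_g**3)

def balance(t_g, t_b):
    return 2 * p_gis_opt(t_g, t_b) - p_eph(t_g, t_b)

for t_b in (0.1, 0.3, 0.5):
    t_base = brentq(balance, 1e-3, t_b, args=(t_b,))
    print(f"T_B = {t_b:.1f} K  ->  T_G,b = {t_base:.3f} K")
```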
In order to investigate the advantage of the graphene e/ph coupling, we compare the graphene base temperature in a SIGIS with the base temperature of a tunnel-cooled system based on a metallic thin film and on a two-dimensional electron gas (2DEG). To this aim, we solve the balance equation 2P_GIS + P̃_e/ph = 0 for the different systems, where P_GIS is the same but P̃_e/ph is the electron-phonon heat current in a metallic thin film or in a conventional 2DEG with parabolic band dispersion [94]. For simplicity, we neglect the resistances of the metal and of the 2DEG and the related Joule heating. For the sake of comparison, we consider the same A and R_t. For a metallic thin film, it is P̃_e/ph = A w Σ_N (T_e⁵ − T_B⁵) with Σ_N = 1 nW µm^-3 K^-5, where T_e is the electron temperature. We consider a low thickness w = 1 nm, for which the coupling per unit area is wΣ_N ≈ 1 pW µm^-2 K^-5. For a 2DEG in In₀.₇₅Ga₀.₂₅As, we have P̃_e/ph = A Σ_2DEG (T_e⁵ − T_B⁵) and a coupling per unit area Σ_2DEG ≈ 0.073 pW µm^-2 K^-5 [94][95][96]. At a temperature of the order of 1 K, the coupling per unit area of the metal is about 40 times larger than that of graphene, while the coupling per unit area of the 2DEG is about 3 times larger. It can thus be expected that graphene and the 2DEG reach lower temperatures compared to the metallic thin film. This is shown in Fig. 3d, reporting the base temperatures of the different systems.
Deeper insight can be reached by comparing the e/ph thermal conductance per unit area of the different systems. We have G_N/A = 5wΣ_N T_B⁴ in a metal, G_2DEG/A = 5Σ_2DEG T_B⁴ in a 2DEG and G_e/ph/A = δΣ_δ T_B^{δ−1} in graphene, with δ indicating the e/ph regime. It can be noticed that the former two have a better scaling behavior compared to graphene. However, in metals the coupling constant is so large that this advantage is effective only below T_B = 0.1 K, i.e., below the typical temperature range for tunnel cooling. This can be seen in Fig. 3d, where the metal curve reaches the graphene curves (dirty and clean) at about 0.1 K. We remark that a 1 nm thick metallic film is very challenging to produce. A different conclusion holds for the 2DEG, where the coupling constant Σ_2DEG is low enough that the T⁵ scaling of P̃_e/ph can allow for a lower e/ph heat current in the temperature interval of interest. This can be seen in Fig. 3d, where the 2DEG reaches the base temperature of graphene at T ≈ 0.5 K for the dirty regime and at T ≈ 0.3 K for the clean regime. This indicates that the cooling performances of a 2DEG and of a SIGIS are comparable. In this case, the main (and non-trivial) advantage of graphene lies in fabrication: the growth of III-V materials for 2DEGs requires molecular beam epitaxy, which is an expensive technique, and the use of 2DEGs implies several steps of lithography, etching, and metal evaporation. On the opposite side, Chemical Vapor Deposition is nowadays an established and cheaper technique for growing graphene or hBN/graphene/hBN heterostructures [73], allowing easier scalability to industrial standards.
IV. THERMAL RESPONSE DYNAMICS
In this section, we study the dynamics of the SIGIS with thermal perturbations from the base temperature, focusing on its response time. The latter is an important parameter for any time-dependent application since it affects the thermal bandwidth of the system.
The response time enters the thermal transfer functions of the system, such as the power-to-temperature transfer function and the bolometric responsivity. Both these quantities are studied below.
As an example of thermal response, we report in Fig. 4 the numerical solution of the heat balance equation (11) at bath temperature T_B = 0.5 K, optimal voltage bias eV_opt(T_B) ≈ 0.87 Δ_0 and dirty graphene regime. Figure 4a shows the evolution of temperature over time. At t < 0, the graphene is at base temperature T_G,b ≈ 0.37 K. The input power is null for the whole process, except at t = 0, where a power pulse drives the graphene temperature from T_G,b to T_G = 0.7 K. After this pulse, the graphene thermalizes back in about 50 ns. The associated evolution of the heat currents is plotted in Fig. 4b. In the whole process, 2P_GIS + P_e/ph + C(T_G) ∂_t T_G = 0 holds. At t < 0, the graphene is in a stationary state, where ∂_t T_G = 0 and the equilibrium is given by 2P_GIS + P_e/ph = 0. From Fig. 4b it can be noticed that the numerical calculations yield an always negligible Joule heating. Important physical insight into the dynamics can be obtained by studying small perturbations from the base temperature, linearizing the heat balance equation. Therefore, we expand the left-hand side of Eq. (11) in a series around T_G = T_G,b and we assume a constant heat capacity for small perturbations: C(T) ≈ C(T_G,b). Moreover, we neglect Joule heating. In this way, we have the linearized thermal equation

C(T_G,b) ∂_t ∆T_G = −(2G_GIS + G_e/ph) ∆T_G,     (22)

where ∆T_G = T_G − T_G,b, and G_GIS and G_e/ph are the thermal conductances related to the junction and to the e/ph coupling, respectively. The first term is

G_GIS = ∂_{T_G} P_GIS ≈ (3/2) 0.59 (k_B Δ_0/e²R_t) (k_B T_G,b/Δ_0)^{1/2},     (23)

where the approximation in the last passage follows from differentiating Eq. (7) and is valid at V_opt and T_B, T_G ≪ Δ_0/k_B. The e/ph channel G_e/ph is given by Eq. (16) evaluated at the equilibrium point, G_e/ph = δAΣ_δ T_G,b^{δ−1}. The solutions of the linearized thermal balance equation (22) have the exponential form ∆T_G ∝ e^{−t/τ_th}, where τ_th is the response time at V_opt, given by

τ_th = C(T_G,b) / (2G_GIS + G_e/ph).     (24)

The denominator in Eq. (24) is the sum of the junction and e/ph thermal conductances. The different temperature scaling of G_GIS and G_e/ph implies two regimes, defined by the dominance of one of the two channels. The two regimes are separated by a crossover temperature T_G,cr that can be estimated from the condition

G_GIS(T_G,cr) = G_e/ph(T_G,cr).     (25)

We obtain T_G,cr = 0.39 K for the dirty graphene regime and T_G,cr = 0.53 K for the clean graphene regime. When T_G,b ≪ T_G,cr the junction conductance dominates over the e/ph conductance and

τ_th ≈ C(T_G,b) / (2G_GIS).     (26)

For T_G,b ≫ T_G,cr, there is a regime dominated by the e/ph coupling, yielding

τ_th ≈ C(T_G,b) / G_e/ph = γ / (δ Σ_δ T_G,b^{δ−2}),     (27)

which depends only on the graphene properties and not on the geometrical parameters of the SIGIS. Figure 5a reports τ_th versus R_t. The response time increases with R_t, since the thermal conductance of the junction is lowered. In particular, at low T_B, the curves of Fig. 5a indicate that τ_th ∝ R_t, as given by Eq. (26).
The results in Eq. (24) and Fig. 5a are obtained for V = V opt . τ th has a dependence also on the bias voltage, since the latter tunes the transport properties of the junction. Figure 5b reports τ th versus V calculated for different bath temperatures in the case of dirty graphene regime. We notice that the response time τ th decreases from 95 ns at V = 0 to 5 ns at V = V opt when T B = 0.1 K, because when the cooling operates, the junction thermal conductance is enhanced.
This point can be investigated analytically. To evaluate the voltage dependence of the thermal response at small bias, we need the thermal conductance of the junction G_GIS(T_G = T_B, V = 0) = ∂_{T_G} P_GIS(T_G = T_B, V = 0). It can be approximated from the tunnel-integral expression in Eq. (23) at k_B T_G, k_B T_B ≪ Δ_0. At leading order, G_GIS(V = 0) is exponentially suppressed as exp(−Δ_0/k_B T_B), yielding a zero-bias response time

τ_th(V = 0) ≈ C(T_B) / [2G_GIS(V = 0) + G_e/ph(T_B)].     (28)
The difference between τ_th(V = 0) [Eq. (28)] and τ_th(V = V_opt) [Eq. (24)] is strong. In particular, at low temperatures the junction conductance is exponentially suppressed at zero bias, while G_GIS gives a large contribution in the optimally biased case. The difference in τ_th between the biased and unbiased cases is highlighted in Fig. 5c. Dashed curves show τ_th in an unbiased system at (T_G = T_B = T, V = 0), while solid curves show τ_th for T_G,b = T, with T_B and V_opt(T_B) set accordingly. For completeness, we show both the dirty (blue curves) and clean (red curves) graphene regimes. The difference in response time between V = 0 and V = V_opt can reach one or two orders of magnitude, depending on the value of T_G and on the graphene regime. Furthermore, at V = 0 there is no maximum in τ_th, since both G_e/ph and G_GIS are increasing functions of T_G.
It is worth noting that the response time does not depend on the carrier density n. Indeed, both C and G_tot are proportional to √n. As a consequence, the gating does not affect τ_th.
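A back-of-the-envelope evaluation of Eq. (27) reproduces the order of magnitude of the quoted response times; the carrier density n = 10^12 cm^-2 below is an assumed illustrative value.

```python
# Back-of-the-envelope check of Eq. (27): e/ph-limited response time,
# tau_th = gamma/(delta*Sigma_delta*T^(delta-2)); the carrier density
# n = 1e12 cm^-2 is an assumed illustrative value (the area cancels).
import numpy as np

KB = 1.380649e-23
HBAR = 1.054571817e-34
VF = 1e6                          # m/s, graphene Fermi velocity
N = 1e16                          # m^-2 (1e12 cm^-2)
SIGMA_D, DELTA_EXP = 0.023, 3     # dirty regime: W m^-2 K^-3, delta = 3

e_f = HBAR * VF * np.sqrt(np.pi * N)           # Fermi energy, Eq. (2)
rho_g0 = 2 * e_f / (np.pi * HBAR**2 * VF**2)   # DoS at E_F (J^-1 m^-2)
gamma = (np.pi**2 / 3) * KB**2 * rho_g0        # Sommerfeld coefficient

for t in (0.3, 0.5, 1.0):
    tau = gamma / (DELTA_EXP * SIGMA_D * t**(DELTA_EXP - 2))
    print(f"T = {t:.1f} K: tau_th ~ {tau*1e9:.0f} ns, "
          f"f0 = 1/(2*pi*tau) ~ {1/(2*np.pi*tau)/1e6:.0f} MHz")
# ~10 ns near 1 K, matching the quoted ~20 MHz thermal bandwidth.
```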
Finally, we evaluate the temperature response to a finite external power signal P_in ≠ 0. This quantity will be exploited to investigate the bolometric response of the device. It is useful to write the linear heat balance equation (22) in the frequency domain, including the signal P_in(ω). We remark that the frequency ω of P_in refers to the Fourier component of the power and not to the electromagnetic frequency. The resulting equation takes the form

∆T_G(ω) = T_TP(ω) P_in(ω),     (30)

where T_TP = 1/[G_tot (1 + iωτ_th)] is the power-to-temperature transfer function, with G_tot = 2G_GIS + G_e/ph. This equation shows that the SIGIS responds as a low-pass filter with cut-off frequency ω_0 = 1/τ_th. Considering the values of τ_th reported in Fig. 5a, the corresponding frequency is in the range 10 MHz - 60 MHz. In the following section, this transfer function will be used to evaluate the responsivity, a figure of merit which quantifies the performance of the SIGIS as a bolometer.
V. BIASED SIGIS AS A BOLOMETER
In this section, we study the cooled SIGIS as a bolometer. An input power P_in is converted into a variation of current when the SIGIS is kept at a constant voltage bias. In detail, we characterize two bolometric figures of merit, the responsivity and the NEP.
The bolometric properties of a SINIS system with electron cooling have been studied in the literature [54,55,57,97]. The main result is that the built-in refrigeration enhances the responsivity and decreases the NEP. Here, we essentially follow a similar analysis for a SIGIS.
We point out that SIGIS systems have already been investigated in the literature at V → 0, where the cooling is negligible [14,74,98]. The purpose of these low-V schemes is to decrease the thermal conductance across the junction in order to use the device at lower input power regimes [14,74,98].
Our bolometer scheme consists of a SIGIS system connected to an external voltage generator V_ext = 2V, where V is the voltage drop across a single junction (see the sketch in Fig. 6a). The graphene is also connected to a superconducting antenna by means of a clean superconductor/graphene junction. The superconducting antenna carries the power P_in and traps it in the graphene, since the superconducting leads work as Andreev mirrors [52,54], reducing the thermal leakage to the antenna. It is important to remark that the distance between the antenna electrodes must be large enough to make the Josephson coupling through the proximity effect negligible [99]. The electric current I in the circuit is measured by means of an inductance coupled to a superconducting interferometer read-out [9,11,12,100,101].
A. Responsivity
We start our investigation with the responsivity, defined as the power-to-current transfer function

R(ω) = I(ω)/P_in(ω),     (31)

where I(ω) and P_in(ω) are the electric current and the input power signal in the frequency domain, respectively. We calculate the responsivity as the product of the power-to-temperature transfer function T_TP in Eq. (30) with the temperature-to-current transfer function T_IT = ∂_{∆T_G} I. The product of the two transfer functions is equivalent to calculating the derivative R = ∂_{P_in} I through the factorization R = ∂_{T_G} I × (∂_{T_G} P_in)^{-1}, since T_TP = ∂_{P_in} ∆T [54]. We obtain

R(ω) = ∂_{T_G} I / [G_tot (1 + iωτ_th)].     (32)

The responsivity has a cut-off at the frequency ω_0 = 1/τ_th. We focus on the low-frequency limit, which is valid when the band of the input signal is sufficiently below the cut-off frequency. Fig. 6b reports a color map of R versus V and T_B, obtained from Eq. (32) using the numerical derivative of Eqs. (4), (5). Cuts of Fig. 6b versus V are reported in Fig. 6c. The responsivity shows a peak on the red dashed curve V_opt^R(T_B). The latter does not coincide with V_opt (dotted black in Fig. 6b), which maximizes the cooling performance. Indeed, V_opt^R(T_B) and V_opt are different by definition, since the former is obtained by maximizing ∂_{T_G} I/∂_{T_G} P_in and the latter by maximizing P_GIS. V_opt^R(T_B) is located closely below ∆(T_B)/e. Above this voltage, the current characteristics I(V, T_G, T_B) lose sensitivity to temperature since they converge to the ohmic behavior I = V/R_t. On the other hand, for V well below the gap, the current is suppressed.
Other physical features of the responsivity are represented in Fig. 6d. Here, the solid curves are calculated by considering the graphene cooling, while the dashed curves are obtained by imposing T_G,b = T_B, i.e., disregarding the cooling effect. This treatment corresponds to a physical situation where a spurious heating source completely spoils the cooling power of the junction. Let us investigate how the difference of graphene regime affects the responsivity. We first consider the dashed curves in Fig. 6d, representing the absence of cooling, where we can notice that the clean case is slightly more responsive. The reason is the enhanced power-to-temperature transfer function T_TP. Indeed, in both dashed results (T_G,b = T_B), the temperature-to-current transfer function T_IT in Eq. (32) is the same, since it is a property of the junction depending only on V, T_G, and T_B. But the transfer function T_TP changes between the clean and dirty graphene regimes, since the phonon thermal conductance is lower in the clean case. This means that, for a given power input, the temperature rise ΔT_G is larger in the clean case, resulting in a greater current response.
The comparison between the dashed and solid curves in Fig. 6d shows that the presence of active cooling enhances the responsivity. The graphene base temperature is lower in the clean graphene regime (see Sec. III), resulting in a stronger enhancement of the responsivity compared to the dirty graphene case.
A physical insight into this argument can be obtained by using the low-temperature approximations studied above. We underline that these expressions hold for V_opt and not V_opt^R, but they give enough information for a physical picture. The responsivity at low temperatures is

R ≈ ∂_{T_G} I / (G_{e/ph} + G_GIS).   (33)

As in the previous section, the denominator shows the presence of two regimes separated by the crossover temperature T_{G,cr} in Eq. (25). The regime at T_{G,b} ≫ T_{G,cr} is dominated by the e/ph thermal channel, with responsivity

R ≈ ∂_{T_G} I / G_{e/ph}.   (34)

The regime at T_{G,b} ≪ T_{G,cr} is dominated by the junction thermal channel, with responsivity at V_opt

R ≈ ∂_{T_G} I / G_GIS.   (35)

This last expression does not involve any graphene property, but is obtained as the ratio of the two junction properties ∂_{T_G} I and G_GIS = ∂_{T_G} P_GIS. In particular, both terms scale as 1/R_t, so the tunnel resistance does not directly affect the responsivity at low temperatures. Finally, we stress that the responsivity increases with decreasing graphene temperature. This is also confirmed by Fig. 6b,c.
B. Noise equivalent power
We now focus on the noise equivalent power (NEP), which is defined as the signal power necessary to have a signal-to-noise ratio equal to 1 within a bandwidth of 1 Hz [102].
The total NEP of the SIGIS is given by different contributions [54],

N_tot² = 2 N_GIS² + N_{e/ph}² + N_amp²,   (36)

where the three terms are related to the junction, the e/ph coupling and the amplifier read-out, respectively.
The factor 2 in front of N_GIS² takes into account the two junctions, assuming their noises to be uncorrelated [103]; this is justified by the fact that temperature fluctuations, such as those induced by heat noise, are small in comparison to the stationary value of T_G [20].
The contribution of each junction to the NEP is given by fluctuations in both the electric and heat currents,

N_GIS² = ⟨P²⟩ − 2⟨IP⟩/R + ⟨I²⟩/R²,   (37)

where the quantities in angled brackets are the low-frequency spectral densities of the fluctuations [54]. ⟨I²⟩ is the current fluctuation, given by [54]

⟨I²⟩ = (2/R_t) ∫ dE ñ_S(E) [f_G(E − eV)(1 − f_S(E)) + f_S(E)(1 − f_G(E − eV))].   (38)

The fluctuation of the tunneling rate is mirrored in a fluctuation ⟨P²⟩ of the tunneled heat,

⟨P²⟩ = (2/(e²R_t)) ∫ dE (E − eV)² ñ_S(E) [f_G(E − eV)(1 − f_S(E)) + f_S(E)(1 − f_G(E − eV))].   (39)

Since the two fluctuations ⟨I²⟩ and ⟨P²⟩ are given by the tunneling of the same carriers, a non-null correlation exists [54]:

⟨IP⟩ = (2/(eR_t)) ∫ dE (E − eV) ñ_S(E) [f_G(E − eV)(1 − f_S(E)) + f_S(E)(1 − f_G(E − eV))].   (40)

In these integrals, the energy dependence of the graphene DoS has been neglected, according to the approximation made in Sec. II. Figure 7 reports the NEP components for T_S = T_B = 0.3 K. Panel (a) shows the contributions to N_GIS in Eq. (37). For completeness, the NEP calculated by neglecting the cross-correlation between ⟨I²⟩ and ⟨P²⟩ is also reported,

N_unc² = ⟨P²⟩ + ⟨I²⟩/R².   (41)

By comparing N_unc and N_GIS we can notice that the ⟨IP⟩ term brings a correction that reduces the total NEP. The cross-correlation is positive except in the region above the gap voltage, Δ/e + 0.6 k_B T_G/e < V < Δ/e + 1.3 k_B T_G/e. Outside this region, the cross-correlation partially cancels the shot noise and the heat noise [54]. The NEP due to the junction noise is smaller in a SIGIS bolometer compared to a SINIS bolometer. Indeed, N_GIS scales as R_t^{−1/2}, and good cooling characteristics can be reached in a SIGIS with a tunnel resistance one order of magnitude greater than in a SINIS. As a consequence, N_GIS is lower by a factor of ∼3.
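A minimal numerical sketch of these spectral densities, assuming the standard tunnel-limit expressions reconstructed above and reusing the toy model functions (dos, fermi, responsivity) and parameters from the earlier snippet:

```python
def spectral_densities(V, TG, TB):
    """Zero-frequency shot-noise spectral densities of one junction:
    <I^2>, <P^2> and the cross term <IP> (e = 1 units, as above)."""
    occ = lambda E: (fermi(E - V, TG) * (1 - fermi(E, TB))
                     + fermi(E, TB) * (1 - fermi(E - V, TG)))
    kw = dict(points=(-Delta0, Delta0), limit=400)
    SI  = quad(lambda E: 2 * dos(E) * occ(E),
               -20 * Delta0, 20 * Delta0, **kw)[0] / Rt
    SP  = quad(lambda E: 2 * (E - V)**2 * dos(E) * occ(E),
               -20 * Delta0, 20 * Delta0, **kw)[0] / Rt
    SIP = quad(lambda E: 2 * (E - V) * dos(E) * occ(E),
               -20 * Delta0, 20 * Delta0, **kw)[0] / Rt
    return SI, SP, SIP

def nep_junction(V, TG, TB):
    """Single-junction NEP, Eq. (37), including the I-P cross-correlation."""
    R = responsivity(V, TG, TB)
    SI, SP, SIP = spectral_densities(V, TG, TB)
    return np.sqrt(SP - 2 * SIP / R + SI / R**2)
```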
Let us consider the other NEP contributions. The contribution related to the noise in the e/ph channel can be roughly estimated by a generalization of the expression in Ref. [54],

N_{e/ph}² = 2δ k_B A Σ_δ (T_G^{δ+1} + T_B^{δ+1}).   (42)

At equilibrium, T_G = T_B = T, the NEP takes the standard form N_{e/ph}² = 4 k_B G_{e/ph} T² [14, 74]. We notice that this term is smaller in a SIGIS than in a SINIS, due to the lower e/ph coupling constant (see discussion in Sec. III). In the temperature range 0.1 K-1 K, the e/ph thermal conductance is one order of magnitude lower, yielding a decrease of N_{e/ph} by a factor of ∼3.
Finally, we consider the read-out NEP due to the amplifier current noise ⟨I_amp²⟩, for which we assume √⟨I_amp²⟩ ≈ 0.05 pA/√Hz [54], giving

N_amp = √⟨I_amp²⟩ / R.   (43)

Fig. 7 shows the different contributions to the total NEP at T_B = 0.3 K versus V. Panels (a) and (b) show the same N_GIS. We notice that N_tot has a minimum close to the optimal bias. Here, the three contributions are of the same order of magnitude and yield N_tot = 1.6 × 10^{−18} W/√Hz. Away from the optimal point, the read-out N_amp dominates. Hence, in order to optimize the total NEP, it is important to reduce the noise of the measurement circuitry.
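The three contributions can then be combined as in Eq. (36); a sketch continuing the snippets above, with an assumed amplifier noise floor:

```python
def nep_total(V, TG, TB, i_amp=0.05e-12):
    """Total NEP, Eq. (36): two uncorrelated junctions + e/ph + read-out.
    i_amp is an assumed amplifier noise floor [A/sqrt(Hz)]."""
    R = responsivity(V, TG, TB)
    nep_eph2 = 2 * delta * kB * A * Sigma * (TG**(delta + 1) + TB**(delta + 1))
    nep_amp  = i_amp / abs(R)
    return np.sqrt(2 * nep_junction(V, TG, TB)**2 + nep_eph2 + nep_amp**2)
```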
The electronic cooling influences the NEP in two ways: on the one hand, it decreases the thermal fluctuations of the electrons in graphene; on the other, it enhances the responsivity (see Fig. 7b). The former effect is quantified by the low-temperature expressions at V_opt [54]. The latter effect involves all the contributions that have R in the denominator. This is highlighted by the total NEP versus (V, T_B) shown in Fig. 7c,d, which resembles the inverse of the responsivity in Fig. 6b,c. In particular, the NEP improves by about two orders of magnitude moving from the zero-bias to the optimal-bias configuration.
We now investigate the effects of the carrier density n on the bolometric properties. The responsivity is not affected by n, since T_TP ∝ G_tot^{−1} ∝ n^{−1/2} and T_IT ∝ R_t^{−1} ∝ n^{1/2}. The term N_GIS ∝ R_t^{−1/2} ∝ n^{1/4} and, similarly, N_{e/ph} ∝ Σ_δ^{1/2} ∝ n^{1/4}. The read-out term instead does not depend on n. Hence, the NEP is a weakly increasing function of n. Considering that gating can vary n from the residual charge n_0 by a factor of 100 at most, the NEP can vary by a factor of ∼3. Therefore, the bolometric properties can be considered stable under charge variations or fluctuations.
VI. DEPENDENCE ON DYNES PARAMETER
Let us discuss here the role of the Dynes parameter, introduced in Eq. (3). This phenomenological parameter takes into account the finite height of the superconducting coherence peaks and the subgap tunneling [104]. The latter strongly depends on several factors, e.g., the fabrication quality of the junction [105] and, more generally, on environmental effects [106]. For this reason, Γ_D is frequently used as a parameter to quantify the quality of a tunnel junction with a superconductor. Indeed, the realization of high-quality tunnel junctions is an important requirement to avoid effective sub-gap conduction channels. The value of Γ_D can be extracted experimentally from a fit of the measured electrical differential conductance G_e(V) at low bias. The sub-gap density of states is

ñ_S(E ≈ 0) ≈ Γ_D/Δ_0,

which implies that for eV, k_B T_G, k_B T_B ≪ Δ_0 the junction behaves as a NIN with effective resistance R̃_t = R_t Δ_0/Γ_D, with current I ≈ V/R̃_t and Joule heating V²/R̃_t.
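A quick numerical check of this sub-gap limit, for the Γ_D values quoted below (a sketch, with Δ_0 as in the earlier toy-model snippet):

```python
def dynes_dos(E, Gamma):
    """BCS-Dynes DoS with an explicit Dynes parameter argument."""
    z = E + 1j * Gamma
    return abs((z / np.sqrt(z**2 - Delta0**2)).real)

for g in (1e-4, 1e-3, 1e-2, 7e-2):      # Gamma_D / Delta_0 values from the text
    n0 = dynes_dos(0.0, g * Delta0)      # sub-gap DoS at E = 0, ~ Gamma_D/Delta_0
    print(f"Gamma_D/Delta_0 = {g:.0e}: n(0) = {n0:.1e}, R_t_eff/R_t = {1/n0:.1e}")
```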
In the previous sections, we assumed good-quality junctions with Γ_D = 10^{−4}Δ_0. Such a value of Γ_D has been experimentally realized in metallic NIS junctions, while it has not been reached in graphene junctions yet. The quality of GIS junctions has improved over time, and can nowadays be expressed by Γ_D on the order of 10^{−1}Δ_0. State-of-the-art experiments hint that Γ_D ≈ 7 × 10^{−2}Δ_0 [47].
In this section, we show how the Dynes parameter affects cooling and bolometric characteristics.
Effects on cooling. The cooling power is reduced as the Dynes parameter increases, since the smearing of the peaks in the BCS-Dynes DoS does not allow sharp filtering of the hot electrons [23, 24, 107]. Moreover, the sub-gap conduction implies a Joule heating V²/R̃_t, half of which flows into the graphene. Figure 8a shows the cooling power P_GIS versus the bias for different values of Γ_D, at the temperature T_G = T_B = 0.5 K. Up to Γ_D = 10^{−2}Δ_0, the cooling power is only slightly affected by Γ_D. From Γ_D = 10^{−2}Δ_0 to Γ_D = 10^{−1}Δ_0, the cooling power is strongly decreased. This is mirrored in the graphene base temperature T_{G,b}. Panels (b,c,d) of Fig. 8 show T_{G,b}/T_B versus the bias V and the bath temperature T_B, for Γ_D/Δ_0 = 0.05, 10^{−2}, 10^{−3}, respectively. In particular, the region of (V, T_B) where the temperature is decreased depends on Γ_D. Nevertheless, the simulations suggest that cooling can still be observed for Γ_D = 0.05Δ_0, where T_{G,b}/T_B can reach a value of ∼0.8. For Γ_D = 10^{−2}Δ_0, the cooling operates well. For Γ_D = 10^{−3}Δ_0, the T_{G,b}/T_B plot resembles the one in Fig. 3a.
Effects on the response time. The value of τ_th is weakly affected by Γ_D at V_opt. Indeed, when the junction is biased, the sub-gap contribution to the thermal conductance plays a marginal role compared to the contribution of the states above the gap. In Fig. 8e, we report what happens at finite bias, plotting τ_th at T_B = 0.1 K versus V for different values of Γ_D. At eV ∼ Δ_0, the response time is weakly affected by Γ_D, remaining on the order of 10 ns. The response time is affected by Γ_D only around V ∼ 0 and at low temperatures T_G, T_B ≲ 0.2 K, where the contributions of the sub-gap conduction and the electron-phonon coupling are comparable. We remark that for T_G, T_B ≳ 0.2 K, the dependence on Γ_D is negligible, independently of the bias V.
Effects on responsivity. The value of R is affected by Γ_D through the increase of T_{G,b} and, at the same time, through the reduction of ∂I/∂T, since the smeared DoS peaks translate into less sharp temperature features of the I(V) characteristics. Figures 9a,b show R for different values of Γ_D.

Effects on NEP. The dependence of R on Γ_D is reflected in the NEP characteristics. Indeed, R is present in the denominators of the NEP components in Eqs. (37) and (43), while the numerators are weakly affected by Γ_D at eV ∼ Δ_0. Panels (c) and (d) of Fig. 9 report the single-junction N_GIS and the total NEP at T_B = 0.3 K, calculated in the same manner as in Sec. V. Like the responsivity, the NEP worsens by one order of magnitude when going to Γ_D = 0.05Δ_0.
In summary, in this section we have shown that the quality of the GIS junctions can play a role in the characteristics of the studied device. In particular, the Dynes parameter is detrimental for cooling and bolometric applications only when Γ_D ≳ 10^{−2}Δ_0.
VII. COMPARISON WITH OTHER BOLOMETRIC ARCHITECTURES
Bolometric technology is a very wide topic, stimulated mainly by challenges in astroparticle physics, e.g., the study of the cosmic microwave background [108, 109] or axion detection for dark matter investigation [110-113]. The differences among bolometers concern many experimental features, such as fabrication issues, working temperature, read-out schemes, and figures of merit. Among all the different characteristics, detectors combining low noise with fast response are highly desirable. Nevertheless, in bolometer technology there is a trade-off between NEP and response time. Indeed, a fast response time is associated with fast heat dissipation through thermal channels. However, a large thermal dissipation corresponds to a low responsivity and to a large thermal coupling with external systems, both of which deteriorate the NEP. Hence, in an experimental setup, it is important to choose the right compromise between τ_th and the NEP on the basis of the specific requirements.
A comparison based on the various experimental features of all the different bolometric technologies is beyond the scope of this article. Here, we compare our SIGIS with three bolometric architectures that are similar in working principles or materials. The first architecture concerns SINIS bolometers with built-in electron refrigeration [52-55, 57, 97, 114]. Second, we consider SIGIS bolometers based on power-to-resistance conversion at V = 0 bias [14, 74, 98]. Finally, we also consider bolometers based on the proximity effect in SNS [115-117] and SGS junctions [16, 87].
SINIS bolometers. Similarly to our device, SINIS bolometers exploit the capability of a voltage bias to provide both cooling and extraction of the bolometric current signal. The theoretical work in Ref. [54] predicts τ ∼ 0.2 µs and N ∼ 4 × 10^{−18} W/√Hz at temperature ∼300 mK. Recent experiments have shown a response time τ ∼ 2 µs and N ∼ 3 × 10^{−18} W/√Hz at temperature ∼300 mK, in good agreement with the theoretical predictions. The response time of our device is faster than that of a SINIS due to the much smaller heat capacity of graphene compared to metals. The NEP of our device and of the theoretical device of Ref. [54] are of the same order of magnitude, with a lower value in the SIGIS due to the combined effect of a lower base temperature and lower heat dissipation. Another advantage of our device is the reduced heat leakage from the phonons, which is mirrored in low heat transport into the superconducting leads. This prevents overheating of the leads, a problem present in SINIS systems [43]. On the other hand, SINIS systems take advantage of well-established fabrication techniques that guarantee high-quality junctions, while techniques for GIS junctions are still in development.
Zero-bias SIGIS bolometers. Another similar architecture consists of SIGIS devices biased at very low voltage [74, 98]. In this case, the electronic refrigeration is absent, and bolometry is performed through temperature-to-resistance transduction. Theoretically, these devices are predicted to have τ ∼ 1 µs and N ∼ 2 × 10^{−19} W/√Hz at 100 mK [74]. In comparison with the theoretical device in Ref. [74], our device shows a NEP that is one order of magnitude larger but a faster response time. This is because the voltage bias increases the junction thermal conductance, thus increasing the noise contribution from the junctions but allowing a faster thermalization. Our device and the zero-bias SIGIS bolometers share the same fabrication issues concerning the quality of the tunnel junctions. At the state of the art, the measured NEP reached in 0V-SIGIS is on the order of ∼10^{−17} W/√Hz [98].

SNS and SGS Josephson junction bolometers. Finally, we compare our system with another class of bolometers, based on clean-contacted SNS [115-117] or SGS [16] hybrid Josephson junctions. These systems exploit completely different physical phenomena and share with our V-biased SIGIS only the materials composing the detector. The transduction involves the temperature dependence of the junction kinetic inductance or of the switching current. A recent paper reports a SNS bolometer that, at a bath temperature of 25 mK, shows a very low NEP N ∼ 6 × 10^{−20} W/√Hz and a rather long response time τ = 30 µs; nevertheless, this response time is still more than one order of magnitude faster than typical for this class of low-noise bolometers [117]. Compared to our device, this SNS experiment shows a longer response time but a better NEP.
A recent pre-print [16] reports a very promising bolometer based on an SGS Josephson junction. The experiment is based on the measurement of the statistical distribution of the switching current (Fulton-Dunkleberger technique) versus the input power. The NEP is then estimated from the width of the distribution, since a larger standard deviation is associated with a larger uncertainty on the power signal measurement. In this way, the authors estimate a NEP N ∼ 7 × 10^{−19} W/√Hz, reaching the fundamental limit imposed by the intrinsic thermal fluctuations of the bath temperature at 0.19 K [16]. The SGS-based architecture seems a promising path for further research in the field of low-noise bolometers.
VIII. CONCLUSIONS AND FURTHER DEVELOPMENTS
In this paper, we have investigated electron cooling in graphene when tunnel-contacted to form a SIGIS device and its application as a bolometer.
We have studied electron cooling obtained by voltage biasing the junctions, exploiting the same mechanism as in a SINIS system. The low electron-phonon coupling in graphene allows a sizable temperature decrease even for large-area graphene flakes and a high tunnel resistance (100 µm², 10 kΩ), differently from a SINIS, where a low tunnel resistance is required to absorb the larger phonon heating.
We have then studied the dynamics of the SIGIS cooler. We obtained the dependence of the thermal relaxation time on temperature and voltage bias and estimated its magnitude (τ th ∼ 10 ns).
Finally, we have investigated the possibility of employing the cooled SIGIS system for bolometric applications. We found that electron cooling enhances the responsivity and decreases the noise equivalent power. Moreover, the small electron-phonon coupling and the possibility of using high values of tunnel resistance allow reaching a low noise equivalent power on the order of 10^{−18} W/√Hz. At the same time, the cooling mechanism increases the operation speed of the bolometer by more than one order of magnitude compared to the unbiased case. This makes the cooled SIGIS a suitable detector for THz communication [118-120] and cosmic microwave background [121, 122] applications.
Further developments of our system could be explored. In particular, many known strategies already employed for SINIS coolers/bolometers can be inherited. Among them, suspended graphene can show very interesting cooling characteristics due to the combined refrigeration of electrons and phonons, since in this case the latter are not connected to the substrate thermal bath [123-126].
A rare form of anemia in systemic lupus erythematosus
Mechanisms responsible for anemia in systemic lupus erythematosus (SLE) can be immune or non-immune. A 27-year-old previously healthy woman was admitted with ecchymotic patches over the lower limbs for six months, and multiple joint pain and fatigue for 2 months. She had severe pallor and multiple ecchymotic patches over the lower limbs. She was diagnosed with SLE with pernicious anemia and iron deficiency anemia. The rare association of SLE with pernicious anemia has been reported previously in only a few patients. Treatment of SLE along with B12 supplementation is necessary for such patients. Since the etiology of anemia in SLE can be of various kinds, a detailed workup to identify the underlying mechanism is necessary.
INTRODUCTION
Hematological abnormalities are common in systemic lupus erythematosus (SLE). Anemia can be seen in about 50% of patients, with anemia of chronic disease being the most common form (1). Various immune and non-immune mechanisms are responsible for anemia in SLE and include inflammation, renal insufficiency, blood loss, dietary insufficiency, medications, haemolysis, infection, hypersplenism, myelofibrosis, myelodysplasia, and aplastic anemia (2, 3). Pernicious anemia and SLE are both disorders of autoimmune etiology. Coexistence of SLE with other autoimmune disorders is common, but the association of pernicious anemia and SLE is rare.
CASE REPORT
A 27-year-old previously healthy woman was admitted with ecchymotic patches over the lower limbs for six months, and multiple joint pain and fatigue for 2 months. She had had menorrhagia for 5 years. She denied any history of weight loss, had no sick contacts and had no history of addictions. On examination she had severe pallor and multiple ecchymotic patches over the lower limbs. There were no other bleeding manifestations. Hemoglobin was 6.2 g/dl, total leucocyte count 4600/μl, platelet count 0.20×10⁹/L, erythrocyte sedimentation rate 60 mm in 1 h, and C-reactive protein was normal. The hematocrit-corrected erythrocyte sedimentation rate was 24 mm/h. In the peripheral smear, the RBCs showed moderate hypochromia and marked anisopoikilocytosis with microcytes, macrocytes, pencil-shaped cells, macro-ovalocytes, elliptocytes and tear-drop cells, but no hemolysis (Figure 1); WBCs were normal in number with a few hypersegmented neutrophils, and the platelet count was very low. Urinalysis was normal. Biochemical parameters showed random blood sugar 101 mg%, urea 19 mg/dl, creatinine 0.8 mg/dl, sodium
Prothrombin time and partial thromboplastin time were normal. HIV, hepatitis B and hepatitis C serologies were negative. Bone marrow aspirate showed hypercellular marrow with erythroid hyperplasia (micronormoblastic and megaloblastic maturation) (Figure 2A and B). Bone marrow biopsy was hypercellular, showing erythroid hyperplasia with micronormoblastic and megaloblastic maturation; the myeloid series showed normal maturation and differentiation, with giant metamyelocytes, giant myelocytes and hypersegmented neutrophils; there was a mild increase in megakaryocytes.
Figure 1 - A and B) Peripheral smear showing moderately hypochromic RBCs, marked anisopoikilocytosis with microcytes, a few hypersegmented neutrophils and a severely reduced platelet count.
Figure 2 - A and B) Bone marrow aspirate showing hypercellular marrow with erythroid hyperplasia (micronormoblastic and megaloblastic maturation); C and D) Bone marrow biopsy showing erythroid hyperplasia with micronormoblastic and megaloblastic maturation, hypersegmented neutrophils and a mild increase in megakaryocytes.
The effect and mechanism of YH0618 granule on chemotherapy- induced hair loss in patients with breast cancer: study protocol for a randomized, double-blind, multi-center clinical trial
Background Hair loss is one of the most common side effects of chemotherapy, and can cause persistent negative emotions, further affecting therapeutic effects and reducing the quality of life. However, there are no clinically safe and effective methods to solve the problem at present. Our previous clinical and animal studies showed that a medicinal and edible decoction, YH0618, could significantly promote hair growth in cancer patients after chemotherapy, without interfering with the anti-tumor effects of chemotherapy. Besides, the theory of Chinese Medicine holds that the "Essence of the kidney is reflected on the hair". Therefore, this study will further explore the efficacy of YH0618 granule on chemotherapy-induced hair loss in patients with breast cancer by a randomized, double-blind, multi-center clinical trial and elucidate the potential mechanism from the aspect of kidney deficiency or renal dysfunction. Methods/design Eligible breast cancer patients who will start chemotherapy will be randomly divided into group A (YH0618 granule) and group B (placebo). The chemotherapeutic agents contain taxanes and/or anthracyclines, and the chemotherapy regimen will be for at least six cycles with a cycle every 3 weeks. Subjects assigned to group A will receive YH0618 granules twice a day (6 g each time), 6 days a week, mixed with 300 ml warm water from the first to the fourth chemotherapy cycle. Subjects in group B will receive the placebo granule in the same manner. The primary outcome is the time point of occurrence of hair loss reaching grade II as assessed by the WHO Toxicity Grading Scale, and objective indices of hair quality and hair-follicle growth recorded by a hair and scalp detector before the fifth chemotherapy cycle. Secondary outcomes include changes of facial color and thumbnail color, grading of thumbnail ridging, assessment of quality of life, level of fatigue, routine blood test results, hepatic and renal function, and certain medical indicators which can reflect kidney deficiency in Chinese Medicine. Discussion This research is of great significance for the treatment of cancer and improving the quality of life of cancer patients. The study may provide the most direct evidence for meeting clinical needs and lay a solid scientific foundation for later product development. Trial registration Chinese Clinical Trial Registry, ID: ChiCTR1800020107. Registered on 14 December 2018.
Keywords: Medicinal and edible compound prescription, YH0618 granule, Chemotherapy-induced hair loss, Taxanes, Anthracyclines, Kidney deficiency and renal dysfunction, Quality of life

Background

Chemotherapy is a major type of cancer treatment that uses chemical medications to affect cancer cell growth, division and reproduction. Regardless of the route of administration, chemotherapy drugs are introduced into the blood stream, so chemotherapy causes varying degrees of damage to normal organs and tissues while killing cancer cells, further causing a series of serious adverse effects and toxicity.
Hair loss is an obvious side effect of chemotherapy. The incidence of chemotherapy-induced hair loss is as high as 65% in patients receiving chemotherapy, and is even up to 80-100% in patients receiving specific agents, such as doxorubicin and docetaxel [1-3]. Although hair loss itself does not damage the body or threaten life, it can induce persistent negative emotions such as anxiety, depression and a negative evaluation of self-image, which in turn reduce quality of life [4]. Hair loss caused by chemotherapy is usually reversible; however, in most cases, the color of the new hair is grayish or different from the previous color, and the hair texture also shows changes, such as being rougher, slower growing and sparser [5, 6]. Besides, contemporary social media and excessive attention to appearance have put more pressure on patients, with 8% of patients saying that they refuse chemotherapy because of fear of alopecia [7]. Some female patients have even said that having no hair is more difficult to tolerate than mastectomy [8].
The mechanism of chemotherapy-induced hair loss is still unclear, because of differences between animal models and the actual human body and because the human scalp cannot be extracted for research. Currently reported mechanisms of chemotherapy-induced hair loss mainly involve deoxyribonucleic acid (DNA) damage, hair-follicle cell-cycle inhibition, hair-follicle-cell apoptosis, reactive oxygen species, and signal transduction. Accordingly, animal-model studies have found that vasoconstrictors, antioxidants, hair-growth cycle regulators and parathyroid hormone can improve hair loss caused by chemotherapy [9]. In clinical practice, it has been reported that minoxidil, AS101 and vitamin D3 can treat chemotherapy-induced hair loss, but the effect is not significant [10]. Currently, scalp-cooling is the only method approved by the US Food and Drug Administration (FDA) for chemotherapy-induced hair loss; the hypothesized mechanism is that low-temperature-induced rapid contraction of blood vessels reduces blood flow into hair follicles and causes a general reduction in cutaneous-cell metabolism, which makes the hair less affected by the chemotherapy [11]. Unfortunately, the success rate of scalp-cooling is only about 50%, and patients with cold allergy, cold agglutinin disease, or cryoglobulinemia are not suitable for this method [9]. Although some progress has been made in the mechanism, research and management of alopecia, there is still no highly effective way to solve the hair loss caused by chemotherapy. Therefore, it is necessary for clinicians and researchers to pay more attention to chemotherapy-induced alopecia and the series of related psychological problems, to further elucidate the mechanism of hair loss, and to develop safe and effective solutions.
YH0618, a medicinal and edible compound prescription, was developed based on the "homology of medicine and food" theory, ancient prescriptions, and long clinical practice. Our previous animal studies have shown that YH0618 decoction did not interfere with the anti-tumor effects of chemotherapy drugs [12]. Additionally, a randomized clinical trial also showed that YH0618 significantly accelerated hair regrowth and reduced thumbnail pigmentation in cancer patients who had completed chemotherapy (data not shown; the protocol was published in [13]). Therefore, this study will further explore the efficacy of YH0618 granule on chemotherapy-induced hair loss in patients with breast cancer by a randomized, double-blind, multi-center clinical trial. YH0618 consists of five medicinal and edible foods (black soybean and liquorice, etc.) which are recommended by clinicians for cancer patients, and all components have a history of safe use in other foods. Besides, each of the components possesses a distinct pharmacological profile, including removing free radicals in the body, regulating the immune system, preventing cancer, detoxifying and enhancing the sense of taste [14-16]. Black soybean and liquorice, as the main essential ingredients, have been used for detoxification for millennia in China. Based on the theory that the "Essence of the kidney is reflected on the hair" in traditional Chinese Medicine, the color, texture and growth of hair are believed to be closely related to the kidney. However, there is no in-depth research on the relationship between the kidney and hair, and no research has combined a comprehensive evaluation of renal function from the aspects of both Chinese and western medicine to explore the mechanism of chemotherapy-induced hair loss. The kidney is an important detoxification organ of the human body and helps to filter toxins in the blood and other waste out through the urine. Chemotherapy drugs may not cause organic renal lesions, but they "consume" kidney essence and kidney qi, which breaks the balance of the body. Therefore, we believe that chemotherapy agents not only directly produce toxic effects on hair-follicle cells, but also deplete qi, blood and body fluids, especially kidney essence and kidney qi, and break the balance of yin and yang of the human body, which leads to the obstruction of microcirculation and the decline of immune function, further resulting in nutritional disorders of hair follicles and hair loss. Thus, the study will also elucidate the mechanism of YH0618 granule in reducing hair loss from the aspect of kidney deficiency or renal dysfunction. The hypothesis of the study is that YH0618 granules could delay chemotherapy-induced hair loss by improving kidney deficiency and renal dysfunction.
Study design
This is a randomized, double-blind, multi-center controlled trial which aims at exploring the efficacy of YH0618 granule on chemotherapy-induced hair loss in patients with breast cancer and elucidating the potential mechanism from the aspect of kidney deficiency or renal dysfunction. To achieve this goal, a total of 214 breast cancer patients who will receive their first chemotherapy will be recruited for the study. The patients will be randomly divided into group A (YH0618 granule) and group B (placebo) using a 1:1 allocation ratio, adhering to the Consolidated Standards of Reporting Trials (CONSORT) Statement [17] and the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement [18] (Additional file 1). The primary outcome of this study is the time point of occurrence of hair loss reaching grade II as assessed by the World Health Organization (WHO) Toxicity Grading Scale, and objective indices of hair quality and hair-follicle growth recorded by a hair and scalp detector (CBS-603, CBS-Medical Skin Analysis, Taiwan). Secondary outcomes include changes of facial color and thumbnail color, grading of thumbnail ridging, assessment of quality of life, fatigue level, routine blood test results, hepatic and renal function, and some medical indicators which can reflect kidney deficiency in Chinese Medicine. The flow chart of the study is shown in Fig. 1. A Data and Safety Monitoring Board (DSMB) has been established, composed of experts in medical sciences, Chinese medicine and statistics. The DSMB is responsible for quality control of the research data and ensures the integrity of the study. Protocol compliance, safety, and on-schedule study progress are also monitored by the DSMB. An audit of the trial will be conducted every 3 months, independently of the investigators. Study documents (soft and hard copies) will be retained in a secure location for 5 years after trial completion.
Subjects
A total of 214 eligible patients will be recruited at different clinical centers. Inclusion criteria include: (1) women with stage-I or -II breast cancer aged between 18 and 75 years; (2) receiving first chemotherapy; (3) planning to receive chemotherapeutic agents containing taxanes and/or anthracyclines; (4) a chemotherapy regimen lasting at least six cycles with a cycle every 3 weeks; (5) adverse events assessed using WHO toxicity classification criteria < grade II; and (6) a life expectancy of at least 6 months. Exclusion criteria are: (1) a medical history of hair transplantation; (2) psoriasis or severe scalp infection; (3) hair loss induced by alopecia areata, alopecia totalis or scalp injury, etc.; (4) pregnancy, lactation or potential pregnancy; (5) allergy to specific foods, such as black soybean; (6) severe cardiac, hepatic, renal, pulmonary or hematic lesions or other diseases which will affect survival; (7) severe mental or behavioral disorders preventing the patient from being fully informed; (8) suspected of or with a history of alcohol and/or drug abuse; (9) inability to understand or fill in questionnaires because of cognitive disorders or a low level of literacy; and (10) a variety of factors affecting drug taking and absorption, such as inability to swallow, chronic diarrhea, or intestinal obstruction. Eligible patients will be invited to participate in this study after obtaining their written consent. All participants will be closely monitored in the study.
Estimation of sample size
The primary outcome in this study is the time point of occurrence of chemotherapy-induced hair loss reaching grade II, measured by the WHO Toxicity Grading Scale for Determining the Severity of Adverse Events. Our previous results showed that YH0618 could raise the incidence of hair loss grade < II to 50% for patients who have completed chemotherapy, and the difference in the incidence of hair loss grade < II between the YH0618 group and the control group was 15%. Thus, the difference in proportion between the two groups will be tested by a Z test. To achieve a two-sided type-I error alpha = 0.05 and power (1 − beta) = 80%, the minimum number of subjects needed in each group is 85. We estimate a 20% attrition rate at the end of follow-up; hence, a sample size of at least 107 in each group (214 in total) is planned for this study.
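As an illustration, the calculation can be reproduced approximately with statsmodels' normal-approximation power solver (a sketch; the proportions are placeholders read off the description above, and since different two-proportion formulas, e.g., arcsine versus pooled-variance, give somewhat different n, the result need not match the protocol's figure of 85 exactly):

```python
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Placeholder proportions: ~50% with hair loss < grade II under YH0618,
# 15% absolute difference versus control.
p_treat, p_ctrl = 0.50, 0.35
h = proportion_effectsize(p_treat, p_ctrl)       # Cohen's h (arcsine scale)
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                 power=0.80, alternative='two-sided')
n_attrition = int(np.ceil(n / (1 - 0.20)))       # inflate for 20% drop-out
print(f"per group: {int(np.ceil(n))} -> {n_attrition} after attrition")
```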
Randomization and blinding
Each subject will obtain a unique number after completing written consent. A computer-generated blocked random number sequence with a block size of four will be generated centrally by a statistician not involved in this study. As the YH0618 granule and the placebo have the same appearance, a double-blind model will be adopted. The randomization sequence and group assignments will therefore be kept hidden from subjects, practitioners, data collectors and statisticians.
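A minimal sketch of the permuted-block allocation described here (block size four, 1:1 ratio; the seed is arbitrary and, in practice, would be held by the central statistician):

```python
import random

def blocked_allocation(n_subjects, seed=20181214):
    """Permuted-block 1:1 allocation with block size 4, as in the protocol."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = ['A', 'A', 'B', 'B']      # two per arm within each block
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_subjects]

print(blocked_allocation(12))   # e.g. ['B', 'A', 'A', 'B', ...]
```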
Intervention and control condition
Prior to the intervention, baseline data will be collected, including demographics, medical characteristics, assessment of chemotherapy-induced hair loss, facial color, thumbnail color, grading of thumbnail ridging, quality of life, routine blood test results, and hepatic and renal function. After that, subjects assigned to group A will receive YH0618 granules three times a day (6 g each time), 6 days a week, mixed with 300 ml warm water, from the first to the fourth chemotherapy cycle. Subjects in group B will receive the placebo granule in the same manner. All subjects will then be followed up for 1 month after chemotherapy. Other specific methods for reducing hair loss, such as scalp cooling, will be prohibited during the clinical trial. Both YH0618 granules and placebo are produced by Guangzhou Kanghe Pharmaceutical Co., Ltd., which meets national standards.
Outcome evaluation

Primary outcome
The primary outcome is the time point of occurrence of hair loss reaching grade II as assessed by the WHO Toxicity Grading Scale, and objective indices of hair quality and hair-follicle growth recorded by a hair and scalp detector (CBS-603, CBS, Taiwan) before the fifth chemotherapy cycle.
Grading of chemotherapy-induced hair loss
The WHO Toxicity Grading Scale is commonly used to monitor and rate the severity of anticancer drug-induced toxicity [19,20]. The grading criteria for hair loss is shown in Table 2. Alopecia assessments will be conducted by a clinician blinded to treatment assignment, and by the participant.
Objective measurement of hair loss
In order to objectively evaluate hair quality and hair-follicle growth, a hair and scalp detector (CBS-603) will be used. The detector holds patents in the United States, Germany, Japan, mainland China and Taiwan, and several international certifications, including Conformité Européenne (CE), Federal Communications Commission (FCC), and Restriction of Hazardous Substances (RoHS). The detector is composed of a 10X-200X hair and scalp camera and software. The whole top of the head, a wide area of hair loss, and the condition of hair follicles can be clearly filmed at 10X, 50X and 200X, respectively. The software includes a testing function through which hair testing and analysis can be conducted. In this study, identification and classification of the level of hair loss, hair diameter and hair quality will be analyzed.
Secondary outcomes
Secondary outcomes include changes of facial color and thumbnail color, grading of thumbnail ridging, assessment of quality of life, fatigue level, routine blood test results, hepatic and renal function, and certain medical indicators which can reflect kidney deficiency in Chinese Medicine.
Facial color and thumbnail color
The assessment of facial and thumbnail color is performed using the L*a*b system, which is the same as for the clinical trial that we conducted previously [13]. In the fixed surroundings, the skin color of the forehead, right and left cheeks, and jaw, and the thumbnail color will be recorded by the hair and scalp detector at 50X.
Grading of thumbnail ridging
The grading of left and right thumbnail ridging will be measured by the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI CTCAE). The definition of nail ridging is a disorder characterized by vertical or horizontal ridges on the nails. The grading criteria for nail ridging is shown in Table 3.
Quality of life measurement
Quality of life has been regarded as an important index to measure and monitor cancer patients' treatment outcomes [21]. The Chinese version of the Functional Assessment of Cancer Therapy-Breast Cancer (FACT-B) with good reliability and validity will be used to measure breast-cancer-specific quality of life [22]. The tool includes 37 items scored on a 5-point Likert scale, ranging from 0 to 4 with higher scores indicating better quality of life [23,24]. The items are classified into five subscales: Physical Well-Being, Social/Family Well-Being, Emotional Well-Being and Functional Well-Being, which constitute the FACT-General, and the additional concern for breast cancer, which is called the Breast Cancer Subscale. A total score is calculated by summing all subscale scores.
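For illustration, FACT-B total scoring reduces to summing the five subscales; a simplified sketch (it omits the official handling of reverse-coded and missing items, which the validated scoring manual prescribes):

```python
def fact_b_total(subscale_items):
    """Simplified FACT-B total: sum of the five subscale scores.
    Items are rated 0-4; reverse-coding and missing-item proration
    per the official manual are omitted here."""
    return sum(sum(items) for items in subscale_items.values())

# 7 + 7 + 6 + 7 + 10 = 37 items; the values below are made up for illustration.
example = {"PWB": [3] * 7, "SWB": [2] * 7, "EWB": [3] * 6,
           "FWB": [2] * 7, "BCS": [2] * 10}
print(fact_b_total(example))   # higher score = better quality of life
```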
Fatigue measurement
Fatigue will be measured by the Chinese version of the FACIT-Fatigue scale (version 4), a 13-item instrument that can be used for patients with any tumor type [25]. Each item is scored on a 5-point Likert self-report scale ranging from 0 to 4. A total score is obtained by summing all item scores, and a higher score indicates less fatigue.
Clinical objective examination
Routine blood tests and assessment of liver and kidney function are the same as in our previous trial [13]. Based on the evaluation standard of kidney deficiency in Chinese Medicine, kidney deficiency will be divided into deficiency of kidney qi, deficiency of kidney yang and deficiency of kidney yin. Modern studies have also found that kidney deficiency syndrome has a pathophysiological basis, clinically manifested as changes in relevant medical indicators such as the adrenal axis, thyroid axis, gonadal axis, renin-angiotensin system, immune function, liver and kidney function, and hematopoietic function [26]. Therefore, in this study, immune indices include immunoglobulin M (IgM), complement C3, CD4+ helper T cells and CD8+ T cells, together with certain metabolic indices of microelements such as Mg²⁺, Cu²⁺, Zn²⁺, and Fe³⁺. All participants will be assessed within 3 days before every chemotherapy cycle from the first to the fifth cycle. All subjects will then be followed up, and the final assessment will be conducted 1 month after the last chemotherapy cycle. A professional research assistant will dispense the YH0618 granules or placebo and notify the subjects of dosage and timing. Quality of, and compliance with, the intervention will be monitored by checking attendance records and the self-recorded diary kept by each participant.
Adverse events
Adverse events will be recorded spontaneously through self-reports by participants or by asking the participants the open-ended question "How are you feeling?" via phone or face to face. All adverse events will be reported by assessors, regardless of whether they are deemed related to the treatment, and will be sent to the Institutional Review Board of every clinical center.
Statistical analysis
All analyses will be performed based on intention-to-treat principles; any missing data in the follow-up visits will be imputed using multiple imputation. Descriptive statistics (means and standard deviations, SDs) will be used to describe the demographics and clinical characteristics of the participants. The primary efficacy analysis will compare the hair-loss grading between the YH0618 granule and control groups before the fifth chemotherapy cycle using Fisher's exact test. The changes in hair diameter between the two groups after four cycles of chemotherapy will be compared by an independent-samples t test. A multivariable logistic regression model will be used to explore the treatment effect. Potential confounding variables will be identified as those that differ among treatment groups at baseline and are significantly associated with outcomes. Changes from baseline to the final assessment in quality of life assessed by the FACT-B and in objective blood indicators will be compared using Wilcoxon rank-sum tests. Unless otherwise specified, two-sided statistical tests will be used and the significance level will be set at p < 0.05.
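A sketch of the planned comparisons with scipy.stats, run on simulated placeholder data (these are not trial results; counts and distributions are invented purely to show the calls):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 2x2 table: rows = (YH0618, placebo), cols = (< grade II, >= grade II).
table = np.array([[54, 53], [37, 70]])
odds_ratio, p_fisher = stats.fisher_exact(table)

diam_a = rng.normal(68, 8, 107)           # change in hair diameter, group A (um)
diam_b = rng.normal(64, 8, 107)           # change in hair diameter, group B (um)
t_stat, p_t = stats.ttest_ind(diam_a, diam_b)

factb_a = rng.integers(60, 140, 107)      # FACT-B change scores, group A
factb_b = rng.integers(55, 135, 107)      # FACT-B change scores, group B
z_stat, p_rank = stats.ranksums(factb_a, factb_b)

print(p_fisher, p_t, p_rank)
```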
In this trial, an interim analysis will be performed when approximately two thirds of the planned observations are enrolled. The results are analyzed by the statistician and only DSMB members have access to the results to test for futility, safety and efficacy of the trial. A predefined stopping rule will be applied to the data to determine whether it is futile to continue enrollment.
Discussion
Chemotherapy-induced hair loss usually occurs because of the high mitotic rate of hair follicles, through a non-androgenic mechanism, and can manifest as alopecia totalis, telogen effluvium or, less often, alopecia areata [27]. Severe hair loss occurs most often with drugs such as doxorubicin, daunorubicin, docetaxel, paclitaxel, cyclophosphamide and etoposide, which are common chemotherapy agents used for breast cancer patients. Even some standard chemotherapy regimens can induce permanent thinning or hair loss. Although scalp-cooling is a method approved by the FDA for preventing both permanent and temporary hair loss, concerns about this method have been raised [6, 28]. Therefore, this is the first rigorous randomized, double-blind, multi-center controlled trial to evaluate the effect of a medicinal and edible compound prescription on chemotherapy-induced hair loss. The proposed study may provide direct and convincing evidence to support YH0618 as an adjuvant treatment for reducing chemotherapy-induced toxicity, which could be introduced into clinical settings. Our findings may provide a safe and effective way to reduce chemotherapy-induced hair loss and improve patients' quality of life.
Trial status
This is version 1 of the protocol. Recruitment will start in June 2019 and the trial is expected to be completed in December 2020.
Quantitative study of the transverse correlation of soft gluons in high energy QCD
We examine both analytically and numerically the validity of factorization for the double dipole scattering amplitude T^{(2)} which appears on the right hand side of the BK-JIMWLK equation. We demonstrate that, if one uses a dilute object (e.g., a proton in DIS) as the initial condition, the correlation in the transverse plane induced by the leading order BFKL evolution is generally strong, resulting in a violation of the mean field approximation T^{(2)} ≈ TT even at zero impact parameter by a factor ranging from 1.5 to O(10) depending on the relative size of the scatterers and rapidity. This suggests that, within the experimentally accessible energy interval, the transverse correlation can significantly affect the nonlinear evolution of the dipole scattering amplitude. It also suggests that the nonlinear effects may set in earlier, already in the weak scattering regime. In the case of the simulation with a running coupling, the violation of factorization is somewhat milder, but still noticeable.
Introduction
High energy scattering near the unitarity limit is a delicate problem which deserves intense theoretical effort in view of its phenomenological importance at hadron colliders. There is a clear goal of including nonlinear saturation effects, due to the high density of gluons, in the energy evolution of scattering amplitudes, but a precise determination of when and how these effects should be treated is subject to various uncertainties depending on the process of interest. The problem appears to simplify somewhat if one considers scattering of a small object (e.g., a photon at high virtuality in DIS) off a very heavy nucleus, where saturation is important already at relatively low energy. For such a process the Balitsky-Kovchegov (BK) equation [1, 2] is the most commonly studied equation which provides a concrete scenario for an approach towards unitarity,

∂_Y T_Y(x, y) = (ᾱ_s/2π) ∫ d²z (x − y)² / [(x − z)²(z − y)²] [T_Y(x, z) + T_Y(z, y) − T_Y(x, y) − T_Y(x, z) T_Y(z, y)].   (1.1)

Here T_Y(x, y) is the forward amplitude of a dipole of size |x − y| at rapidity Y. The first three terms on the right hand side contain the BFKL physics [3, 4], while the last term ∼ TT ensures that the amplitude saturates the black disc limit T → 1, which is a fixed point of the equation. Being a closed equation, (1.1) is amenable to both analytical and numerical approaches, and the properties of the solution as well as their phenomenological consequences have been discussed extensively over the past several years (see reviews [5, 6] and references therein). However, it is not often emphasized that the BK equation is a mean field approximation to a more general equation, namely, the B-JIMWLK equation [1, 7-10],

∂_Y ⟨T_Y(x, y)⟩ = (ᾱ_s/2π) ∫ d²z (x − y)² / [(x − z)²(z − y)²] [⟨T_Y(x, z)⟩ + ⟨T_Y(z, y)⟩ − ⟨T_Y(x, y)⟩ − ⟨T_Y(x, z) T_Y(z, y)⟩],   (1.2)

nor is the validity of this approximation fully appreciated. Here the brackets ⟨…⟩ denote averaging over the target configurations. The difference between these two equations is usually considered to be minor: although the former obviously discards any kind of existing correlations in the target wavefunction, this would be justified for a large nucleus at low rapidity (see, however, [11]). The subsequent quantum evolution then generates correlations which vanish in the large N_c limit,

⟨T_Y(x, z) T_Y(z, y)⟩ ≈ ⟨T_Y(x, z)⟩ ⟨T_Y(z, y)⟩.   (1.3)

Indeed, the only existing numerical simulation of the B-JIMWLK equation [12], starting from uncorrelated initial conditions, shows little difference from the corresponding solution to the BK equation. The purpose of this work is to demonstrate that the factorization (1.3) is violated when one considers a dilute target consisting of a few partons (e.g., a proton) instead of a heavy nucleus as the initial condition. Of course, there is a priori no reason to expect that factorization should work in this case, but there has not been any quantitative study of the degree of its violation either. For a dilute target, a significant part of the rapidity evolution in realistic experiments is in the linear BFKL regime where the amplitude is rapidly growing but still much less than unity, whereas saturation is considered to be relevant only in the late stages of the evolution.¹ The fluctuations and correlations developed in the linear regime are so strong that the initial condition that should be used for the nonlinear evolution equations is a highly nontrivial system of gluons for which the difference between (1.1) and (1.2) may turn out to be crucial, especially for phenomenology. Specifically, in the framework of the QCD dipole model, ref. [13] found a power-law correlation (1.4) in the double scattering amplitude² under the condition that the distance between the two dipoles is much larger than their sizes, |z − w| ≫ |x − z|, |w − y|.
(γ is a positive, calculable number related to the anomalous dimension.) In the exemplary cases studied in [13], this power-law always leads to a parametrically large ratio

R ≡ ⟨T^{(2)}⟩ / (⟨T⟩⟨T⟩) ≫ 1.   (1.5)

Due to a technical reason, in [13] it was not possible to take the interesting limit w → z to evaluate R for the 'BK configuration', although it was tantalizing to conclude from (1.4) that the correlation would become even larger in this case. Here we circumvent this difficulty and present an analytical insight into the behavior of R as a function of the initial dipole sizes.

¹ However, we have found some evidence that nonlinear effects might set in earlier due to the correlation. See the discussion in section 3.2.
² See also [14], though there seem to be disagreements in the results.
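Before turning to those configurations, it may help to make the mean-field baseline of Eq. (1.1) concrete. The following Python sketch performs one explicit Euler step of the BK equation for an impact-parameter-independent amplitude on a logarithmic grid; the initial condition, coupling and grid parameters are illustrative only.

```python
import numpy as np

abar = 0.2                                 # alpha_bar = alpha_s * Nc / pi
r = np.logspace(-3, 2, 120)                # dipole sizes (arbitrary units)
T = 1.0 - np.exp(-r**2 / 4.0)              # GBW-like toy initial condition

def amp(T, s):
    """Interpolate T at size s on the log grid, clipped to the grid range."""
    s = np.clip(s, r[0], r[-1])
    return np.interp(np.log(s), np.log(r), T)

def bk_rhs(T, nz=60, nphi=24):
    """dT/dY from the BK kernel of Eq. (1.1), b-independent approximation."""
    z = np.logspace(-3, 2, nz)             # |x - z| grid
    dlnz = np.log(z[1] / z[0])
    phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
    dphi = 2 * np.pi / nphi
    rhs = np.zeros_like(T)
    for i, xy in enumerate(r):             # xy = |x - y|, the parent size
        for zi in z:
            zy = np.sqrt(xy**2 + zi**2 - 2 * xy * zi * np.cos(phi))  # |z - y|
            kern = xy**2 / (zi**2 * zy**2 + 1e-30)
            Txz, Tzy = amp(T, zi), amp(T, zy)
            # measure: d^2z = |x-z| d|x-z| dphi = zi^2 dln(zi) dphi on the log grid
            rhs[i] += np.sum(kern * (Txz + Tzy - T[i] - Txz * Tzy)) * zi**2 * dlnz * dphi
    return abar / (2 * np.pi) * rhs

T = T + 0.1 * bk_rhs(T)                    # one Euler step, dY = 0.1
```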
However, analytical calculations are often quite difficult, and one can usually only deal with special configurations which are set by hand. Besides, for our purpose it is important to know the actual numerical values of ⟨T⟩ and ⟨T²⟩ to make sure that one evaluates R in a regime where the nonlinear corrections just start to be important. We will therefore also perform a Monte Carlo (MC) simulation of the QCD dipole model [15], which contains the exact leading order BFKL dynamics. In this framework one generates a cascade of dipoles, keeping track of their sizes and positions in the transverse plane. Calculations of ⟨T^k⟩ for any k, hence of R, are then completely straightforward for arbitrary configurations. We compare the numerical results with analytic expectations and find that they agree satisfactorily. For zero impact parameter we find that R is much larger than 1 when the ratio of the projectile and target sizes is either small or large. The minimum value of R is attained when the projectile and target are of similar size, and in this case the value of R is around 1.5. This suggests that, in the leading logarithmic approximation on which both the BK equation and the dipole model are based, the replacement ⟨TT⟩ → ⟨T⟩⟨T⟩ is not valid for a proton target, especially for a small dipole projectile (or in the high-Q² region of DIS), although it might be safe for a nucleus target. In the former case one should rather use the B-JIMWLK equation with a strongly correlated initial condition, whose asymptotic solution can be different from that of the BK equation.
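A minimal sketch of such a dipole cascade is given below (leading-order splitting kernel, crude rejection sampling, and a fixed UV cutoff ρ on daughter sizes; real implementations use more refined sampling and cutoff handling).

```python
import numpy as np

rng  = np.random.default_rng(42)
abar = 0.2          # alpha_s * Nc / pi
rho  = 0.02         # UV cutoff on daughter dipole sizes, relative to the parent

def rate(size):
    """Total splitting rate per unit rapidity of one dipole: the kernel
    (abar/2pi) x01^2/(x02^2 x12^2) integrated over the emission point,
    ~ abar * ln(1/rho^2); size-independent with a relative cutoff."""
    return abar * np.log(1.0 / rho**2)

def sample_gluon(x0, x1):
    """Rejection-sample the emitted-gluon position with the dipole-kernel
    weight x01^2/(x02^2 x12^2); crude proposal, keeps the angular structure."""
    x01 = np.linalg.norm(x1 - x0)
    wmax = 1.0 / (rho * x01)**2
    while True:
        x2 = 0.5 * (x0 + x1) + 5.0 * x01 * (rng.random(2) - 0.5)
        x02, x12 = np.linalg.norm(x2 - x0), np.linalg.norm(x2 - x1)
        if min(x02, x12) < rho * x01:
            continue                        # below the UV cutoff
        if rng.random() * wmax < x01**2 / (x02**2 * x12**2):
            return x2

# Evolve one cascade: each splitting replaces (x0,x1) by (x0,x2) and (x2,x1).
cascade = [(np.array([0.0, 0.0]), np.array([1.0, 0.0]))]
Y, Ymax = 0.0, 2.0
while True:
    rates = np.array([rate(np.linalg.norm(b - a)) for a, b in cascade])
    Y += rng.exponential(1.0 / rates.sum())
    if Y > Ymax:
        break
    a, b = cascade.pop(int(rng.choice(len(cascade), p=rates / rates.sum())))
    c = sample_gluon(a, b)
    cascade += [(a, c), (c, b)]
print(len(cascade), "dipoles at Y =", Ymax)
```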
The fact that one finds large correlations in the leading order evolution for a dilute system is consistent with the early studies on fluctuations in [16, 17]. In [16] it was found that ⟨T^k⟩ ∼ (k!)² (or rather ⟨T^k⟩ ∼ k! · (k + 3)!) at zero impact parameter. This implies that, for any m ≤ k,

⟨T^k⟩ ≫ ⟨T^m⟩⟨T^{k−m}⟩.   (1.6)

Note, however, that the definition of ⟨T^k⟩ in (1.6) is different from the one considered in this paper, namely, T^{(k)} appearing in the Balitsky hierarchy whose first equation is (1.2). In (1.6), one evolves the target and the projectile up to some energy, and then calculates the sum of all events in which there are k simultaneous interactions. In our case we rather fix k given dipoles in the transverse plane, and then consider their scattering off some target.
Only the latter contains information of the correlation resolved in the transverse plane.
In [18-20] the dipole model has been modified and extended to include various nonleading effects as well as saturation and confinement effects during the evolution. Generally speaking, these effects tend to reduce the correlation. For example, ⟨T^k⟩ as defined in [16] behaves as ⟨T^k⟩/⟨T^{k−1}⟩ ≈ 1.2 · k (for k between 5 and 9) once the nonleading effects are included [20]. This implies ⟨T^k⟩ ∼ 1.2^k k!, and thus the correlation is reduced with respect to (1.6). It should, however, be said that the fluctuations are still very important, and they have, for example, important consequences for the study of elastic and diffractive scattering in DIS and pp collisions [20]. In this paper we only show some preliminary numerical results with the running coupling effect, to see if there is a similar suppression of the correlation, while a detailed study of the various additional effects is postponed to a future publication. The paper is organized as follows. In the next section we present analytical calculations of the double dipole scattering amplitude and the ratio R for the BK configurations mentioned above. In section 3.1 we outline our numerical approach to the calculation of the correlation. The results, including the running coupling case, are then presented in section 3.2, where we also make a comparison with the analytical expectations. Finally, in section 4 we summarize our results and raise some open questions.
The dipole pair density
In the dipole model [15], the degree of the two-body correlation in impact parameter space is encoded in the dipole pair density n^{(2)}_Y(x_{01}, x_{a_0a_1}, x_{b_0b_1}) [21, 22], whose integral representation (2.1), keeping only the zero conformal spin sector, follows Refs. [23, 24]. Here x_{01} = x_0 − x_1 denotes the coordinate of the parent dipole, and x_{a_0a_1} = x_{a_0} − x_{a_1} and x_{b_0b_1} = x_{b_0} − x_{b_1} those of the two child dipoles (see Fig. 1). We shall use the letter x both for two-dimensional real vectors and for their magnitudes. χ is the BFKL eigenvalue,

χ(γ) = 2ψ(1) − ψ(γ) − ψ(1 − γ),

with γ the anomalous dimension, and E is the eigenfunction of the SL(2,C) group. The γ-integrals run along the imaginary axis, with the usual representation γ = 1/2 + iν. In ref. [13], the multi-dimensional integral in (2.1) was carried out in the limit in which the separation between the child dipoles is much larger than all dipole sizes. The result shows a power-law correlation between the two child dipoles. In the case x_{01} ≫ x_{ab}, ref. [13] found the result (2.6), where n is the single dipole density, and γ_a and γ are the saddle point values determined from certain conditions. The breakdown of factorization carries over to that of the two-dipole scattering amplitude, as already noted in the introduction. (From now on we use the notation T^{(2)} in place of T².) On the other hand, the quantity of interest for us is the two-dipole scattering amplitude for contiguous dipoles, namely, the configuration x_{a_1} = x_{b_1}. Although it is not legitimate to extrapolate the result (2.6) to the case x_{ab} → 0, it does suggest that the correlations would become even larger for such 'BK configurations'. (The numerical evaluation of this case is presented in section 3.2.) In this section we attempt an analytical evaluation of n^{(2)} for x_{a_1} = x_{b_1} in certain limits and discuss the behavior of the ratio R defined in (1.5). The result will be confronted with numerical Monte Carlo simulations in the next section.
Calculation of n^(2) for contiguous dipoles
The last line of (2.1) is a known integral whose overall structure is fixed by conformal symmetry. After performing this integral, the last two lines of (2.1) reduce to an expression involving the function f (the 'triple Pomeron vertex'), which can be found in [25,26]; here we have already set x_{a_1} = x_{b_1} ≡ x_c. To make progress we assume that γ_a = γ_b, which is a good approximation when the configuration of the two child dipoles is more or less symmetric. (The saddle points γ_a and γ_b depend only logarithmically on the dipole sizes.) Then the x_γ integral can be done [27], yielding (2.10), where F is the hypergeometric function and ρ is the anharmonic ratio of the four points (x_0, x_1, x_α, x_β) (z is the complex coordinate representation of x), see fig. 2. The remaining integrals are difficult to perform in full generality. As in [13], we shall restrict ourselves to two limiting cases, x_{01} → 0 (small parents) and x_{01} → ∞ (large parents). In both limits, |ρ| ≪ 1, so we may approximate F(..., ρ) ≈ 1. The two terms in (2.10) give equal contributions due to the symmetry γ → 1 − γ. Taking this into account, we can write the result in the form (2.13). The integrand is a product of anharmonic ratios weighted by the conformally invariant measure d²x_α d²x_β / x⁴_{αβ}, so it is invariant under conformal transformations of the external points. However, since there are five of them (x_0, x_1, x_{a_0}, x_{b_0} and x_c), conformal symmetry is not strong enough to constrain the solution, and our assumption x_{01} → ∞ or x_{01} → 0 will be crucial in the following.
Large parents
Suppose the parent dipole is large and the points x_{a_0}, x_{b_0}, x_c are all located near the center of the parent dipole, as illustrated in fig. 3(a). This may be regarded as a situation relevant to DIS on a hadron at high photon virtuality. Without loss of generality, we can set x_c = 0. The integrand vanishes very fast as x_{α,β} → ∞, so that a finite region of x_{α,β} near the origin is important. Therefore we may use the approximation (2.14), under which (2.13) takes the form (2.15). For simplicity, we assume that the two child dipoles have the same size, r ≡ |x_{a_0} − x_c| = |x_c − x_{b_0}|; the result is then proportional to a function g(θ), where θ = θ_a − θ_b is the relative angle between the two child dipoles. We have not been able to determine the function g(θ) for θ ≠ 0 in a closed form (g(0) is a known integral in the conformal field theory literature [28][29][30]). But since this function has no singularity and depends only on the angle, it will not affect the evaluation of the saddle point below.
Neglecting this angular dependence and other prefactors, we can estimate the two dipole scattering amplitude accordingly. After performing the y integral, we get the two contributions (2.18) and (2.19).
The saddle point for the γ_a and γ_b integrals in (2.19) is simply the BFKL one, γ_a = γ_b = 1/2, leading to (2.20), where γ solves (2.21). For the contribution (2.18) we can use the saddle point γ_s for the γ integral, eq. (2.22), and the leading rapidity behavior of this contribution is then given by (2.23). As we discuss in section 2.3, it holds that 2χ(1/2) > χ(γ_s), i.e., γ_s < 0.82 for all configurations we are interested in. (In the limit Y → ∞, γ_s → 1/2.) The contribution which dominates is thus given by (2.20), and we therefore have (2.24). On the other hand, the single dipole scattering amplitude is given by (2.25), where γ̄ is the solution to (2.26). Taking the ratio, we arrive at (2.27). Since 2γ > 1 > γ̄, the first factor is larger than 1 and predicts that the correlation increases as the asymmetry x_{01} ≫ r becomes larger. Since χ(γ̄) > χ(1/2), the second, exponential factor tends to decrease the correlation at high values of rapidity. Comparing this with (2.7), we infer that R monotonically increases and eventually saturates to the expression (2.27) as x_{ab} → 0.
Small parents
Another tractable example is the limit of a small parent dipole, x_{01} → 0. In this case we may approximate x_{1β} ≈ x_{0β}, after which the point x_1 drops out from the integral. Rewriting the expression, we see that, apart from the prefactor, the integrand is conformally invariant, so it can be written in terms of a function h(η), where η is an anharmonic ratio. In order to evaluate the function h, one can set the external points to convenient positions using a conformal transformation, and therefore one arrives at (2.32). Remarkably, the same integral as in (2.15) appears, as a consequence of the symmetry between the limits x_{01} → ∞ and x_{01} → 0 found in [13]. First consider the case of large impact parameters, b ≡ |x_{0a_0}| ≈ |x_{0b_0}| ≈ |x_{0c}| ≫ r (see fig. 3(b) and related calculations in [13,31]). Then η is approximately a phase, η ≈ e^{iθ}, where θ is the relative angle as before. We find the two contributions (2.33) and (2.34). (Under the conformal transformation used above, x_{a_0 c} = r → r′ ≈ b² r, so that r/x_{01} = x′_{01} r′/b², as expected; note also that by definition a conformal transformation does not change the angle θ.) Again, the saddle points are given by γ_a = γ_b = 1/2, and we have the pole at χ(γ) = 2χ(1/2). On the other hand, the single scattering amplitude at large impact parameter is given by (2.36), where γ̄ is the solution to (2.37). Taking the ratio, we find (2.38). So in this case the correlation R decreases as either x_{01} or r (or both) is increased (keeping x_{01}, r ≪ b).
In order to exhibit a symmetry with respect to the large dipole case, let us look at the case of small impact parameters, typically b ∼ r ≫ x_{01}. We find the result (2.42), with γ̄ determined from (2.41). In this regime (x_{01} ≪ r), we see that R is enhanced when the asymmetry (x_{01} vs. r) is large, and it presumably takes a minimum value around x_{01} ∼ r.
Estimates and comments
Regarding the rapidity dependence, we note that γ̄ → 1/2 as Y → ∞. Thus for large Y, the coefficient multiplying Y in the exponent in (2.27) and (2.42) tends to zero. For a fixed Y, this coefficient again tends to zero when x_{01} → r, as can be seen from (2.26) and (2.41). Therefore the results (2.27) and (2.42) predict that the correlation R decreases faster with Y when x_{01}/r ≫ 1 and x_{01}/r ≪ 1, while if we extrapolate our results towards the symmetric limit x_{01} ≈ r, we see that R is almost constant in Y. From (2.26) we can guess that γ̄ is quite close to 1/2. Let us therefore set γ̄ = 1/2 + ǫ and expand the BFKL eigenfunction to linear order in ǫ; one then finds an explicit estimate for ǫ, and hence for the effective power governing R. Thus if, for a fixed Y, we try to fit R as a function of x_{01}/r using a single effective power, ω, we would expect this fit to give a too strong increase close to the minimum, x_{01}/r ∼ 1, whereas it should give a too slow increase further away from the minimum. As 2(2γ − γ̄) varies more strongly for smaller Y, we would expect the fit to work better for higher Y. We would also expect ω to be larger for smaller Y.
In the next section we will see that these analytical estimates are all in quite good agreement with the numerical results. In particular, the numerical analysis will confirm that the minimum of R (for zero impact parameter) occurs at x_{01} ≈ r. Moreover, the estimates for γ̄ given above agree very well with the numerical results, and also the Y dependence turns out to be correct.
Before moving on to the numerical analysis, we would like to address one more point. So far we have been able to make analytic estimates only for specific configurations. In particular, we assumed that the dipoles x_{a_0 c} and x_{b_0 c} are more or less equal in size. In going from (1.2) to (1.1), however, the question is whether the replacement T^(2)(x, z; z, y) → T(x, z) T(z, y) is valid. (We have here returned to the notation used in the introduction, using x, y and z.) What we have shown above is that T^(2)(x, z; z, y) ≫ T(x, z) T(z, y) for some specific regions of z, and also for specific relations between (x, y) and the target, but this is not sufficient to see the integrated effect of the correlation. Although one can use the MC code to do the integration over z, this can be quite time consuming. Leaving the numerical integration for future work, we here crudely identify the configurations which dominate the integral in (1.1). Consider the large parent case where |x − y| ≪ x_{01}, and assume that |x − y| is smaller than the saturation length Q_s^{−1}. This means that we may set T(x, y) = (x − y)² Q_s². (We could also introduce an anomalous dimension γ ≠ 1, but this is not essential.) We then divide the integral into three regions:
• Region A: |x − z|, |y − z| ∼ |x − y| .
In region A we find a contribution of order (x − y)⁴ Q_s⁴. (Note that there is no logarithmic singularity at either z = x or z = y.) In region B we instead find a contribution of order (x − y)² Q_s², where the integral is dominated by the lower limit |x − z| ∼ 1/Q_s. Thus for a small projectile which has not yet reached saturation, |x − y| ≪ Q_s^{−1}, the dominant contribution comes from region B, where we indeed have |x − z| ≈ |z − y|. As |x − y| → 1/Q_s, region B shrinks, and the dominant region is simply |x − z| ∼ |z − y| ∼ |x − y|. Therefore, we expect that the configurations we are using are relevant, and the large correlation found there should survive after integrating over z in the evolution equation.
Outline of the approach
In this section we will perform a numerical analysis to compute the quantities T^(2) and (T)². This can be done rather easily in a Monte Carlo implementation of the dipole model, and we will here use the C++ code developed in [18]. The calculation we will perform is straightforward, no matter which configuration we have. Recall that the definitions of T and T^(2) are

T(x_1, y_1) = ⟨ Σ_{i∈Γ} A_0(x_1, y_1 | x_i, y_i) ⟩ ,   (3.1)

T^(2)(x_1, y_1; x_2, y_2) = ⟨ Σ_{i≠j} A_0(x_1, y_1 | x_i, y_i) A_0(x_2, y_2 | x_j, y_j) ⟩ + ⟨ Σ_i A_0(x_1, y_1 | x_i, y_i) A_0(x_2, y_2 | x_i, y_i) ⟩ ,   (3.2)

where A_0 is the elementary dipole-dipole scattering amplitude, Γ denotes the dipole configuration of the evolved target, and ⟨·⟩ is the average over events. (The second term on the right hand side of (3.2) represents scattering of the two dipoles off the same dipole in the target.) Starting from any initial dipole distribution, the MC code evolves the initial state up to a given value of Y, after which one can calculate all possible scatterings between the dipoles. The Monte Carlo estimate of equation (3.2) is simply given by

T^(2)(x_1, y_1; x_2, y_2) ≈ (1/N) Σ_{n=1}^{N} Σ_{i,j∈Γ_n} A_0(x_1, y_1 | x_i, y_i) A_0(x_2, y_2 | x_j, y_j) ,   (3.3)

where Γ_n is the configuration of the evolved target for the nth event. Writing Σ_{i,j} = Σ_{i≠j} + Σ_i, we see that (3.3) contains both contributions in (3.2). In writing this formula we only evolved the target, but we can obviously do the computation in any given frame. Similarly, the product T(x_1, y_1) · T(x_2, y_2) is calculated as

T(x_1, y_1) · T(x_2, y_2) ≈ [ (1/N) Σ_n Σ_{i∈Γ_n} A_0(x_1, y_1 | x_i, y_i) ] · [ (1/N) Σ_m Σ_{j∈Γ_m} A_0(x_2, y_2 | x_j, y_j) ] .   (3.4)
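To make the counting in (3.3)-(3.4) concrete, here is a minimal Python sketch of the estimator (our illustration only: the elementary amplitude A0 below is a toy stand-in for the true leading-order dipole-dipole amplitude, and the generation of evolved events is left to the evolution code):

import numpy as np

def A0(d1, d2, alpha_s=0.2):
    # Toy elementary dipole-dipole amplitude: depends only on the two
    # dipole sizes (the true LO expression involves logarithms of the
    # four endpoint separations).
    r1 = np.linalg.norm(d1[0] - d1[1])
    r2 = np.linalg.norm(d2[0] - d2[1])
    return alpha_s**2 * min(r1, r2)**2 / max(r1, r2)**2

def mc_amplitudes(events, probe1, probe2):
    # 'events' is a list of evolved target configurations Gamma_n, each a
    # list of dipoles given as (endpoint_1, endpoint_2) pairs of 2d arrays.
    T1 = T2 = T12 = 0.0
    for gamma in events:
        a = sum(A0(d, probe1) for d in gamma)   # sum over target dipoles i
        b = sum(A0(d, probe2) for d in gamma)   # sum over target dipoles j
        T1 += a
        T2 += b
        T12 += a * b   # a*b = double sum over i,j including i == j, cf. (3.3)
    n = len(events)
    return T1 / n, T2 / n, T12 / n   # estimates of T, T and T^(2)

The correlation measure is then R = T^(2) / (T · T), cf. (1.5).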
In the next section we will start by checking the predictions from [13] as stated in equations (1.4) and (2.7). As in the analytical approach, we consider a target which initially consists of a single dipole (x_0, x_1) (for the numerical calculation we could start from any configuration if we so wished). For the configurations in [13], the phenomenologically more relevant case is the one in which the target dipole x_{01} is much larger than the projectile dipoles. We fix the projectile dipoles to have the same size, r = x_{a_0 a_1} = x_{b_0 b_1} (for the above formulas this means we have x_1 = x_{a_0}, y_1 = x_{a_1}, x_2 = x_{b_0}, y_2 = x_{b_1}), while the distance between them, x_{ab}, will be varied.
For the BK configurations, we have x_2 = y_1 = x_c, and again we fix the two projectile dipoles to have the same size, r = |x_{a_0} − x_c| = |x_c − x_{b_0}|. The target dipole (x_0, x_1) is placed at zero impact parameter, as in figure 3(a), while its orientation is chosen randomly for each event. We will always keep x_{a_0}, x_{b_0} and x_c fixed while we vary x_{01} and the impact parameter.
One technical point is that one has to introduce a cutoff, ρ, for the minimal size of dipoles generated during the evolution, since the dipole kernel M(x, y, z) diverges at z = x and z = y. Such a cutoff explicitly breaks conformal symmetry, and one should therefore ideally choose a cutoff which is much smaller than the relevant scales (the initial dipole sizes) involved in the process. On the other hand, simulations with too small values of ρ are very time-consuming. If one is studying symmetric collisions, r ∼ x_{01}, then the choice ρ = 0.01 r = 0.01 x_{01} is good enough. Choosing an even smaller ρ in this case is not useful, since one is then wasting a lot of time generating many very small dipoles which do not interact and do not contribute much to the scattering amplitude. However, here we wish to study the correlation as we vary x_{01}, and then the choice of ρ is more subtle. For example, for a very asymmetric collision, say x_{01} ∼ 100 r, ρ has to be much smaller than 0.01 x_{01} so that we do not suppress important dipoles of size of order r. Besides, in the absence of saturation effects, smallness of ρ is also required for the frame-independence of T^(2), hence that of R. As a compromise between these requirements (reducing simulation time and ensuring frame-independence) we shall choose ρ(x_{01}) = 0.05 r throughout. With this choice we confirmed that the results presented in what follows are reasonably frame-independent, even up to the center-of-mass frame.
Results
As mentioned above, we start by checking the results from [13]. The target will be fixed at the origin, with random orientation, and the projectile dipoles are placed symmetrically along the horizontal axis, one on the positive axis and the other on the negative axis, with random orientations. We choose ᾱ_s = 0.2 throughout, except in the running coupling case to be presented later.
The results for this configuration are shown in figure 4. Here we choose x_{01} = 20 r in the left plot and x_{01} = 30 r in the right plot, keeping x_{01} > x_{ab}. The former case would in DIS correspond to a virtuality of Q² ∼ 60 GeV². In both cases we also show fits of the form R = α/(x_{ab} + β)^γ. We thus confirm the power-like behavior in (1.4), and also see that R converges to a finite value as x_{ab} → 0, in agreement with the analytical prediction (2.27). For the left plot the fit gives the values β = 0.09 and γ = 0.70, while for the right plot we get β = 0.09 and γ = 0.72.
Next we turn to the BK configuration described in the previous section. In figure 5, we plot R as a function of x_{01}/r for Y = 6, 8 and 10, at zero impact parameter. We can see a behavior of R consistent with the analytical formulas, equations (2.27) and (2.42). The minimum of R indeed occurs at x_{01} ≈ r, with the minimal value R ≈ 1.5. From our analysis in the previous section we know that R decreases as Y increases, and the rate of decrease is larger for asymmetric scattering. This tendency can be clearly observed, though the ratio R doggedly stays above 1.5. In the current simulation we cannot go to larger values of Y because the single dipole amplitude T for x_{01} ∼ r reaches order unity around Y = 10. Therefore, in the entire domain of Y values where our approach makes sense, the mean field approximation R = 1 is nowhere valid, even in central collisions. Since this persists up to the onset of the strong scattering regime T ∼ O(1), it is unlikely that saturation effects immediately wash out the correlation. Rather, one has to carefully study the effect of correlations when solving the nonlinear equations.
Another, perhaps more striking consequence of the correlation emerging from our analysis is that it makes the nonlinear term T^(2) comparable to T even when T ≪ 1. These estimates suggest that one might have to include the nonlinear effects in the evolution already in the dilute regime where T ≪ 1. We did not include such a back-reaction into our linear dipole evolution, and in this regard our analysis is not complete. This point certainly deserves further study. So far we have studied only configurations with zero impact parameter, b = 0. At finite impact parameter the correlation becomes larger, as suggested by (2.38). Of course, if we think of x_{01} as representing the proton radius, then one should be careful in interpreting results for b ≫ x_{01}, where confinement effects are certainly important. As a check of the analytical prediction, and also for the sake of demonstration, we nevertheless present some results when b > x_{01}. Figure 6 shows the b dependence of R for x_{01}/r = 10 and x_{01}/r = 20. We see that R is almost constant as long as b is smaller than x_{01}, and that it grows rapidly when b ≳ x_{01}.
Numerical simulation with a running coupling
One of the non-leading effects which we can easily incorporate into the numerical simulation is the running coupling, as has already been done in [18][19][20]. Although in this paper we mainly concentrate on the fixed coupling case, we would here like to briefly mention some preliminary results obtained when the running coupling is used.
Technically, the inclusion of the running coupling is completely straightforward, and we shall use the one-loop expression for α_s,

α_s(Q²) = 12π / [(33 − 2n_f) ln(Q²/Λ²_QCD)] ,

where we fix Λ_QCD = 0.22 GeV. The running coupling enters both in the dipole evolution (as ᾱ_s) and in the individual dipole-dipole scatterings (as α²_s). We will set N_c = 3 and n_f = 3, as in [19,20].
To avoid the IR singularity we shall freeze the coupling below a minimum scale Q_min corresponding to a maximum dipole size r_max = 1/Q_min. As in [20], we choose r_max = 3.5 GeV⁻¹. In [20], α_s was evaluated at the scale 1/Q = min(r, r_1, r_2) for the splitting r → r_1, r_2, and this choice roughly follows from next-to-leading log (NLL) studies of the dipole evolution [32,33].
[See Section VII of [34] for a compact discussion.] Thus we continue to use this scale in the evolution of the dipole cascade. For the dipole-dipole interaction the correct choice of the scale is more subtle, and we here use the option described in [20].
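For orientation, the frozen one-loop coupling described above can be sketched in a few lines (a minimal illustration, assuming the standard one-loop formula with n_f = 3; the actual scale prescription for the dipole-dipole scatterings follows the option in [20]):

import math

def alpha_s(r, n_f=3, lambda_qcd=0.22, r_max=3.5):
    # One-loop coupling evaluated at the dipole scale Q = 1/r (r in GeV^-1),
    # frozen below Q_min = 1/r_max as described in the text.
    r_eff = min(r, r_max)                          # freeze for r > r_max
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(1.0 / (r_eff * lambda_qcd) ** 2))

def alpha_s_split(r, r1, r2):
    # Scale choice for a splitting r -> r1, r2: the smallest dipole involved.
    return alpha_s(min(r, r1, r2))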
In practice, simulations with the running coupling are quite time-consuming, and we have therefore not been able to check as many configurations as in the fixed coupling case. In figure 7 we show the results obtained at Y = 6, both at zero (left plot) and nonzero (right plot) impact parameter, together with the fixed coupling results. We see that R is somewhat reduced, but its minimum value is still around 1.5. We also see that the qualitative behavior of R does not change; the minimum again occurs when x_{01} ≈ r, although it is of course difficult to determine the exact behavior of R since we do not have enough data points. At Y = 8 for b = 0, we find the value R = 1.5 at x_{01} = 2 r, while in the fixed coupling case we found R = 1.6. For b = 5 r, R is reduced from 11.6 in the fixed coupling case to 9.4 in the running coupling case for the same configuration.
Conclusions
In this paper we have studied, both analytically and numerically, the correlations induced by the leading order BFKL dynamics in the high energy evolution of a dilute system (such as a proton). Our main analytical results are given in equations (2.27), (2.38) and (2.42). All these results indicate that one should expect power-like correlations which lead to a strong violation of the factorization T^(2) ≈ T · T. The analytical estimates have been demonstrated to be qualitatively correct by a numerical analysis, with which we have also been able to quantitatively study the behavior of the ratio R = T^(2)/T². We have found that R is always larger than ∼ 1.5, and it can easily reach ∼ O(10) when the asymmetry is large.
Physical consequences of the correlation remain to be explored. The first and obvious intuition is that it opens an intriguing possibility of the 'grey disc' limit, in which a scattering amplitude saturates to a value less than 1. However, since R is not a constant, and the nonlinear equations involve an integration over the transverse plane with a nontrivial weight, a more detailed analysis would be required in order to draw any conclusions. Another interesting problem is the interplay with the gluon number fluctuation, which has attracted considerable attention lately (see [36] and references therein), but which has so far mostly been studied in simple toy models where the transverse dimensions are suppressed. Though it typically requires unrealistically large energies to see the impact of the gluon number fluctuation on the nonlinear evolution of large nuclei, this is probably not the case for a dilute target. The BFKL evolution generates a very strong number fluctuation as well as the transverse correlation in the dilute regime, and they can both affect the subsequent nonlinear evolution in significant ways.
There is plenty of room for improvements in the Monte Carlo simulation itself. In order to make a quantitative prediction for realistic experiments, one should include various NLL corrections and saturation effects into the target evolution. They have been incorporated in the dipole model in [18][19][20]. Among them, we have in this paper included some results with the running coupling effect. Since our simulations have been limited in size, it is difficult to determine the exact behavior of R. What we have clearly observed, however, is that R is somewhat reduced from the fixed coupling case, but is still large. This suggests that the large correlation may not be totally attributed to conformal symmetry of the leading order BFKL, but rather is a robust feature of the QCD evolution in the linear regime.
As mentioned in the introduction, we would expect even larger correlations in the multiple scattering amplitudes T^(p) (p ≥ 3) which enter the Balitsky hierarchy. In the dipole model, these amplitudes are directly related to the corresponding multiple dipole distributions n^(p) [23,24], but analytical results for them are scarce [37]. The numerical evaluation of these amplitudes is straightforward, although the calculation of T^(p) for large p would be time-consuming due to the need for good statistics.
A safe, reliable and regenerative electrical energy supply to power the future
Today, three phenomena are developing into critical global problems, requiring urgent attention from leaders all over the world. The first of these is the increase in carbon dioxide (CO2) emissions due to the escalated use of coal, resulting in the gradual increase of the Earth's average temperature. The second is the continually diminishing fossil fuel resources, e.g. oil and natural gas, which are the primary sources of energy for vital services like transportation and domestic heating. The third is the unforeseen and rapid "world population boom" since the beginning of the 19th century, after the invention of the steam engine by James Watt. The combination of these three factors signals imminent danger. With the increase of the world population, there is a subsequent rise in the usage of both electrical and non-electrical types of energy. At the same time, not only are the sources of these energy forms, i.e. the fossil fuels, depleting because they are non-renewable, but their usage is also severely detrimental to the environment. Therefore, it is of utmost importance to reduce the dependence on fossil fuels and switch to renewable or even nuclear resources as alternatives, in order to prevent an impending climate disaster. Hence, taking the aforementioned problems into account, a method is proposed in this paper to create a safe, reliable and regenerative electrical energy supply system using renewable wind and solar energy as well as hydrogen storage.
Introduction
In fig. 1, a picture developed by two National Aeronautics and Space Administration (NASA) data visualizers is presented, showing the world illuminated by city lights during the night [1]. As can be seen easily, North America, Europe, Japan and South Korea are brightly lit. These regions are the ones with the highest standards of living. In Europe this means that a two-person household consumes approximately 3.5 MWh of electrical energy per year [2][3][4][5]. Described in gasoline equivalents, this is roughly equal to filling a 100-liter car tank 3.5 times per year. Taking Europe's population and its total energy usage into account, an annual consumption of about 37 MWh per capita can be calculated, as displayed in table I [6]. In the United States of America (USA) this value is approximately doubled, i.e. 78 MWh per capita per year. This means that every person in the USA consumes almost twice the energy of a resident of Europe on an annual basis. The next brightly illuminated regions which can be seen on the map are China and India. These so-called "emerging nations" have increased their energy consumption drastically in recent years, leading to annual figures of about 25 MWh and 8 MWh per capita, respectively. Countries such as Russia and Australia also have high annual energy consumption per capita, of roughly 57 MWh and 66 MWh, respectively. However, only certain parts of these areas appear to be brightly lit in fig. 1, owing to their massive size and the nature of their population distribution. On the contrary, regions like Africa are nearly completely dark, despite possessing a high population. This is also true for a number of countries in South America, such as Bolivia and Peru. This confirms the fact that the "Earth Night Lights" are not a reflection of the world's population distribution, but instead represent the quality of life in different areas of the world. Unpopulated regions like deserts and large forests are not taken into account in this case.
The trends from fig. 1 are also reflected in fig. 2, which represents the Gross Domestic Product (GDP) per capita of the different regions of the world in the year 2000. Here, the sizes of the regions are scaled according to their total GDP. From this depiction, it can be observed that North America, Europe, Japan and South Korea have the highest GDP per capita values, of more than US$ 20000 per capita. Today, these values are much higher: greater than US$ 50000 for North America and Australia, around US$ 40000 for Europe and Japan, and close to US$ 30000 for South Korea [12]. This indicates a clear link between the electrical consumption per capita and the distribution of global wealth.
The role of fossil fuels in global overpopulation and climate change
When it comes to the distribution of global wealth, one invention was of paramount importance as the driving force. As shown in fig. 3, this was the steam engine designed by James Watt. The Scotsman received the patent for his machine in 1769 [13]. This contributed to the onset of the first industrial revolution, a significant turning point in human history, roughly within the next 50 years [14]. In fig. 4, the example of threshing is used to describe one of the many benefits of this invention for mankind. In the past, threshing was done manually for weeks on the farmyard; after the rise of the steam engine, the same work could be done in a matter of hours. However, not only were farming techniques revolutionized as a result, but transportation, irrigation, the manufacturing of medical products and, of course, military needs also experienced a dramatic boom. Heavy and daunting manual work was reduced for the working class. Then again, the cost of this, still being paid today, is the rise of two other global nightmares: the rapid increase in population and the overuse of fossil fuels, resulting in the greenhouse effect and the gradual increase of the Earth's average temperature. Figure 5 portrays the change in world population through time. As predicted in the diagram on the left, more than eight billion people will live on the globe in the year 2025. The primary reason for this, as exhibited in the zoomed figure on the right, is the accelerated population growth at the beginning of the 19th century. This was the time when the number of steam engines was not only increasing drastically but the machines were also operating reliably. The red arrow on the figure gives an estimate of the total world population without this invention. This indicates that less than one billion people would live on the planet today! The practice of harvesting energy from fossil fuels, based on their current rate of usage, is only expected to continue for approximately 200 more years [20]. Before the inception of such extensive usage of fossil fuels, all energy originated from humans, animals, as well as wind and water forces. After these non-renewable sources are depleted, only such traditional energy sources, along with the prospect of photovoltaics (PV) and nuclear energy, will be available. So in the end, the all-important question arises: how can energy be provided to the future generations? Surely, the answer to this question will play a crucial role in the continued existence of life as we know it. While searching for a possible answer, one must also anticipate the possible energy supply and demand in the future. Figure 6 displays such data, representing the expected growth of electrical energy consumption in Europe and the entire world until 2040, forecasting an increase of 8% and 61%, respectively. Thus, it cannot be expected that the use of electrical energy will be reduced in the future. Table II lists the different forms of fossil fuel reserves, their consumption rate and their predicted lifetimes. The required data was accumulated from different sources and then used to calculate the lifetime values. As can be seen, only coal is expected to last for nearly the next 150 years. Oil, natural gas and uranium reserves could come to an end within half a century. Only those fossil energy sources whose extraction is economically viable are considered as reserves. In contrast, fossil fuel resources, such as deep-grounded coal, oil sand and shale, are energy sources which cannot be mined easily owing to geological, economic and technological limitations. The exploration of such types of primary energy would not only be very expensive, but also environmentally detrimental. Additionally, bearing in mind the current alarming increase in the atmospheric density of CO2, it would be best to discourage the extraction and subsequent usage of such resources. Figure 7 (left) presents a horizontal bar chart signifying the total and per capita CO2 emissions due to the burning of fossil fuels in different regions. The percentage change in the emissions between the years 1990 and 2014 is also shown for each region.
As can be seen, the largest increase is in China and India, which are home to the largest populations in the world. Only Europe has reduced its CO2 emissions, by 21%, while for the USA the figure is nearly unchanged (4%). In the figure on the right, the production of CO2 from different power plant types is shown. Here, the emission level from lignite plants is of course the maximum, but that from natural gas power plants is also quite high. Thus, in the long run, combined cycle gas plants are the best option among power plants running on non-renewable fossil fuels. In the end though, considering the ultimate effect of continued fossil fuel combustion on the climate, even these plants can only serve as a transitional solution.
Transformation of the electrical energy system
Taking these issues into account, the goals to be met are shown in table III, using Germany as an example. Similar targets also hold true for other countries, since solving the world's energy crisis while simultaneously tackling an impending climatic disaster requires a global effort.
Development of wind and solar resources in Germany
Table IV displays the magnitudes of electrical power and energy generation from wind and PV resources in Germany between the years 2011 and 2025. The maximum electrical load in Germany is 80 GW, and this value is expected to remain constant in the near future. On the other hand, the production of power from the renewables is projected to reach 100 GW in 2020. As a result, a significant part of society believes that further installations to harness renewable power are unnecessary. This often results in strong opposition towards the construction of new PV power stations or wind farms. However, an analysis of the electrical energy requirement of the country leads to completely different conclusions. Germany needs about 600 TWh of electrical energy per year, but the renewable sources can only generate about 200 TWh [30]. It is predicted that the maximum amount of energy which can be harvested from renewable resources in the country will be limited to 1000 TWh due to limited free space. However, the total final consumption of energy is currently 2600 TWh per year [31]. Hence, there is still a significant difference between consumption and production. Nevertheless, basing the operation of the electrical energy supply solely on wind and solar resources is also not possible, since it introduces additional problems in the electrical grid. An overview of this is presented in fig. 8. Here, the projected total consumption and corresponding generation of electrical power is shown for Germany for a duration of four weeks in May 2020 [32]. As can be seen, the power generation from resources consisting of hydropower, wind onshore and offshore, combined heat and power, as well as photovoltaics is completely unrelated to the load demand. This means that storage power plants (SPP) are needed to bridge the gap between consumption and production.
Operation of a storage power plant (SPP)
The SPP consists of converters and storages. The power plant does not contain any flywheels or rotating masses. Similar to wind turbines and solar panels, the SPP is connected to the grid via power electronic converters and hence does not have any inertia. However, in order to function effectively in the current electrical network, the SPP converter systems have to adapt to the rotating masses and the corresponding frequency behavior of the conventional power plants (CPP). This can be done by synthetically generating rotating inertia and primary reserve power. To achieve this, the converters have to measure the instantaneous active power at the connecting node so that they can properly feed their angle-oriented regulating power into the grid. This way, these new converter systems can also function as power plants and can hence be integrated into the grid.
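As a rough illustration of this synthetic inertia idea, the following sketch (all constants illustrative, not taken from the system described here) emulates the power contribution of a rotating mass by reacting to the measured rate of change of frequency:

def synthetic_inertia_power(f_meas, f_prev, dt,
                            s_rated=10e6, h=4.0, f_nom=50.0):
    # Virtual-inertia response of a grid-connected converter: mimics a
    # synchronous machine of rated power s_rated (W) and inertia constant
    # h (s); when the grid frequency drops, the emulated rotating mass
    # releases kinetic energy as extra active power.
    dfdt = (f_meas - f_prev) / dt                # Hz/s
    return -2.0 * h * s_rated * dfdt / f_nom     # W; positive when f falls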
In the future, when the number of CPPs in the electrical power supply is significantly reduced due to the lack of fossil fuels, the need for such power converters to adapt to rotating masses will diminish, and frequency control may become obsolete. A new method of grid control can then be introduced, known as nodal voltage angle control [33,34]. Under this control method, the SPP can function either in grid-forming or grid-supporting mode with a constant grid frequency. All the fundamental principles of electrical energy supply and power system control, which are satisfied today by the CPPs running on fossil fuels, can also be met by the SPP. An overview of this is presented in fig. 9, where the component chains of the two types of power plants are compared. The mode of operation of an SPP operating in grid-forming mode is explained next and compared with the output response of a coal-fired thermal power plant when there is a step increase in the active power requirement in the network.
Conversion/adaptation. The step increase in the active power requirement at the DC/AC converter with a constant nodal voltage angle (grid-forming) leads to an instantaneous increase of three-phase AC current and therefore also an instantaneous increase of direct current on the DC side of the adjacent converter. Storage. The supercapacitor instantaneously accesses its stored electrical energy and supplies this as output power. A capacitor is chosen for this purpose since it can immediately supply large magnitudes of power. As a result, the voltage of the supercapacitor decreases, which signifies the amount of its stored energy. These features are similar to that of the spinning reserve in a thermal power plant, which is provided by the decrease in the speed of the rotating masses in the system.
Conversion/adaptation. The downstream DC/DC converter's governor between the battery and the supercapacitor in fig. 9(b) has to keep the capacitor voltage constant. To this end, it accesses the battery increasing the battery output power within a few seconds. As a result, the supercapacitor charging current increases and this recharges its energy storage. These properties are similar to that of the primary control of thermal power plants where the opening of the steam valve in the boiler is adjusted to increase the flow of live steam, restoring the speed of the turbine prime mover.
Storage. Due to the increase in battery output power there is a decrease in battery voltage resulting in a decline in the amount of stored energy as well.
Conversion/adaptation. The DC/DC converter, on the upper branch between the fuel cell and the battery, adjusts the required voltages enabling the charging current to flow from the fuel cell to the battery. The fuel cell's control unit increases its activity and synthesizes more water from hydrogen and oxygen and as a result produces more energy to replenish the battery storage as well as satisfy the power demand in the network.
Storage. The fuel cell's control unit accesses the hydrogen storage within a few minutes and increases the fuel input mass flux. The amount of hydrogen in the storage decreases.
It may be refilled autonomously by the plant via the electrolyzer. This is similar to secondary control in a coal fired thermal power plant where the fuel governor accesses the coal store to increase the fuel input. However, the coal storage cannot be reloaded automatically by the plant. The capacitor between the DC-DC converter and the fuel cell stores some energy and this is analogous to the heat stored on the pipe walls inside the boiler of a thermal power plant.
During steady state operation, the required power is effectively transferred from the hydrogen storage to the three-phase network. The battery or the capacitor storages only act, when the consumption or production in the network changes suddenly, to instantaneously respond and provide the necessary ancillary services autonomously. Contrary to current power plants, which are only able to reduce their output to a certain minimum, this new type of power plant can actually reverse its output. In case of a production surplus from renewable sources or decrease in load demand, there is a shock-free transition from fuel cell to electrolyzer operation to store the excess energy. The corresponding converters adjust the voltage of each component, while the electrolyzer produces hydrogen of the required pressure, which can also be used later for automobiles [35].
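The timing hierarchy of this cascade can be illustrated with a small simulation sketch (the time constants and the 5 MW step below are assumed purely for illustration): the supercapacitor covers the demand instantaneously, the battery takes over within seconds, and the fuel cell, drawing on the hydrogen storage, carries the load in steady state.

import numpy as np

dt = 0.1                                   # s
t = np.arange(0.0, 600.0, dt)
p_demand = np.where(t >= 10.0, 5.0, 0.0)   # MW, step increase at t = 10 s

tau_batt, tau_fc = 2.0, 60.0               # assumed governor time constants (s)
p_batt = np.zeros_like(t)                  # battery output, tracks in seconds
p_fc = np.zeros_like(t)                    # fuel cell output, tracks in minutes
for k in range(1, len(t)):
    p_batt[k] = p_batt[k-1] + dt / tau_batt * (p_demand[k] - p_batt[k-1])
    p_fc[k] = p_fc[k-1] + dt / tau_fc * (p_demand[k] - p_fc[k-1])

p_cap = p_demand - p_batt                  # supplied instantly by the capacitor
p_batt_net = p_batt - p_fc                 # battery share while fuel cell ramps
# In steady state p_cap and p_batt_net vanish and the whole demand is
# supplied from the hydrogen storage via the fuel cell.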
Angle-regulated operation of a storage power plant
When the power supply system relies mainly on storage power plants, "Watt's speed control" will no longer be required. The three-phase supply can be operated at a constant frequency, for instance at 50 Hz. The tasks of grid control, like spinning reserve and primary control, can be fulfilled using the nodal voltage angle at the storage power plant's connection point. The grid itself, with its admittance matrix and voltage angles, operates as a coordinating unit. All the required information is provided by the given load flow. Storage power plants can operate either in grid-forming mode, as slack power plants (voltage source), or in grid-supporting mode, as PV (constant active power and voltage magnitude output) power plants (current or power source). These features are present in the current conventional power plants, with a certain time delay, from either an integral-acting angle control (slack behavior) or active power control (PV behavior). To that end, all power plants have to know the current voltage angle at their connected terminal with reference to the 50 Hz angle standard of their control area, via an accurate radio-controlled quartz clock. This clock can be synchronized once each day via the time signal transmitter DCF77 of the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig, Germany.
The mode of operation of this new type of grid control is best explained with an example network shown in fig. 10. The grid consists of 25 equidistant nodes, each connected to either a generator or a load. The nodes are interconnected via transmission lines, each 50 km long and at a voltage level of 110 kV. The line impedances are identical and each has a magnitude of 0.3 Ω/km, with the resistance to reactance ratio being 0.1.
There are 11 power plants, of which 5 are slack storage power plants. The other 6 are PV power plants, i.e. generators at terminals where the active power (P) being supplied and the voltage magnitude (|V|) have known values. The remaining 14 nodes are each connected to a PQ consumer, i.e. loads at terminals where the active (P) and reactive power (Q) being consumed are known.
It is assumed that each of the 14 loads consumes 10 MW of active power. The total consumption of 140 MW is equally shared by the 5 slack and 6 PV generators, each producing 12.7 MW to meet this demand. Each load also consumes 3.33 MVAR of reactive power, which is supplied by the generators. Unfortunately, the reactive power results are not discussed here due to space limitations. The network modeling and simulations are carried out in the software DIgSILENT PowerFactory. The slack and PV generators are modeled as AC voltage sources along with the necessary control loops to represent the behavior of the power electronic converters replacing the conventional synchronous or asynchronous generators. Figure 11 shows the resulting voltage phasor diagrams for the slack storage power plants, PV power plants, and PQ consumers, respectively. As shown in the diagrams, the PQ consumers' voltage phasors follow the surrounding voltage phasors of the slack and PV power plants, ensuring the load flow from the generators to the consumers. For the sake of clarity in the voltage angles, the imaginary axis is shown in an overstretched manner in this depiction.
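Although the simulations here are carried out in DIgSILENT PowerFactory, the load-flow-oriented sharing among the slack plants can be reproduced qualitatively with a lossless DC load-flow sketch (the slack node positions and the neglect of resistance below are our assumptions for illustration):

import numpy as np

N, X_LINE = 25, 15.0                      # 50 km at 0.3 ohm/km, X ~ 15 ohm
B = np.zeros((N, N))                      # nodal susceptance matrix (Laplacian)
for i in range(N):
    row, col = divmod(i, 5)
    for j in ((i + 1) if col < 4 else -1, (i + 5) if row < 4 else -1):
        if j >= 0:
            b = 1.0 / X_LINE
            B[i, i] += b; B[j, j] += b
            B[i, j] -= b; B[j, i] -= b

slacks = [2, 9, 14, 19, 22]               # assumed slack positions (0-based)
others = [i for i in range(N) if i not in slacks]

dP = np.zeros(N)                          # incremental study by superposition:
dP[12] = -100.0                           # +100 MW load at node 13 (index 12)

theta = np.zeros(N)                       # slack angles stay pinned at zero
theta[others] = np.linalg.solve(B[np.ix_(others, others)], dP[others])

pickup = B[slacks] @ theta                # MW picked up per slack; sums to 100
for node, p in zip(slacks, pickup):
    print(f"slack at node {node + 1}: {p:+.1f} MW")

Slack plants electrically closer to node 13 pick up the larger share, mirroring the behavior seen in the results discussed next.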
In the first case of investigation with the 25-node network, a ramp is implemented at the central node 13 to represent an increase in the power consumption of the load. The consumer power demand at this node is increased from 10 MW to 110 MW over a duration of 80 s, and the response of the five slack storage power plants is analyzed. Figure 12 (right) shows this power increase of the consumer at node 13 and the corresponding reaction of the constant-voltage slack storage power plants. The depiction shows how each of the slack storage power plants supplies the additional required power according to its electrical proximity to the consumer. These results are further supported by fig. 13 (left), which shows the maximum power increases of the consumer and the storage power plants, depicted as bars. Consumption is shown as a negative value, and generation as positive values. Due to the resistance in the transmission lines, there will be some losses during the power flow, and the total additional power supplied will be slightly greater than the additional demand of 100 MW. Such behavior of storage power plants is analogous to the combined effect of spinning reserve and primary control. This type of primary control is load-flow oriented, since the storage power plants closer to the origin of the disturbance generate more power than the remote ones. Hence, in the event of a disturbance, the load flow mainly emerges at that location, while remote storage power plants contribute little in terms of power supply. Figure 13 (right) exhibits the angular torsion of all the non-slack nodes in the investigated grid. All angular changes are depicted from their initial operating point as reference. The voltage angle of the load at node 13 decreases by a maximum of 2.45° owing to the large increase in power consumption. The resulting angle torsions in the rest of the grid due to this change are required by the slack power plants to provide the necessary additional power.
In the next case of investigation with the 25-node network, a ramp is implemented at node 12, containing a PV generator, in order to increase the generator power output from 10 MW to 110 MW over a duration of 80 s. This depicts a situation that could possibly arise from an increase in the power generation of wind turbines in a particular region due to an increase in wind speed. The response of the five slack storage power plants to this change is then analyzed. Figure 14 depicts how each of the slack storage power plants reduces its power generation or increases its power storing capability due to the generation of additional power in the system. Once again, the responses of the slack storage power plants are according to their electrical proximity to the node with changing power, in this case node 12. This is also supported by fig. 15 (left). These results show that slack storage power plants can not only increase their power generation to respond to increased load demands, but are also able to store active power if there is excess generation from renewable sources in the system. This truly exhibits the possibility of bidirectional active power flow for the slack storage power plant system, as shown in fig. 9(b). Figure 15 (right) depicts all angular changes of the non-slack nodes from their initial operating point. Since the grid is organized in such a way that there are more slacks in its northern and eastern part, the angular torsions have higher values in the zones with fewer slacks, i.e. the southwestern part of the grid. The voltage angle of the generator at node 12 increases by a maximum of 2.26° owing to the large increase in its power generation. The resulting angle changes in the nodes due to the power increase at node 12 are used by the slack power plants to reduce their output and accordingly store the excess energy to counter the presence of additional power in the system.
What could be the consequences of ignoring reason?
From the information provided up to this point, it is clear that the problems mentioned previously have to be solved, not only to ensure a reliable energy supply in the future but also for the overall development of our planet. The use of fossil fuel energy resources has to be reduced and, if possible, eventually stopped. The consumption of energy should not increase significantly any further, and the growth of the world population needs to be limited. To achieve all this, it is necessary that a significant part of the younger generation is not only aware of these issues but is also able to understand them and offer possible solutions to tackle these problems.
What happens when unqualified and, in certain cases, unethical people are in charge of decisions in crisis situations can be exemplified by the sinking of the warship "Vasa" in the year 1628 in Stockholm, Sweden, during the Thirty Years' War. To construct this ship, an amount of money close to the entire annual Gross Domestic Product of the Kingdom of Sweden was used, in order to gain a warfare advantage in the Baltic Sea against Poland. In fig. 16 a model of the ship can be seen, now on display in a museum in Stockholm. Initially, it was decided that the ship should only have one deck of cannons. However, King Gustav Adolf recommended installing a second set on an additional higher deck. Due to this decision, the center of mass of the warship became too high and the ship was unstable. As a result of the death of the initial Dutch constructor during the construction phase, new, unskilled people took over the job. After finalizing the construction of the warship, a stability test was performed with 30 sailors running back and forth from port to starboard five to ten times. During this test, it was observed that the ship was heeling sideways beyond expected limits, and the test was stopped. The two new contractors understood the problem and could conclude that the ship would sink immediately if it was allowed to sail. However, neither of them informed the King, nor did anyone stop the maiden voyage. In this situation, the only solution would have been to inform the responsible authority and dismantle the ship in order to salvage its parts, so that a better one could be constructed later. However, this did not happen, and thus the ship started its voyage from the shipyard, as can be seen in fig. 17 (left). After about 20 minutes, there was a strong gust of wind and the glamorous ship tilted sideways, allowing water to flood into the open lower gundeck and causing the ship to sink rapidly, as shown in fig. 17 (right).
Today, we are in nearly the same situation concerning our energy supply, CO2 emissions, global warming and the increasing global overpopulation. We know the problem, we know that the current system is unstable, and we know the possible alternatives to using fossil fuels. Thus, it is completely in our own hands whether we opt not to make any changes and continue on our present course, or whether we decide to take action against such impending threats and give our future generations a chance to live on this planet with at least the same standard of living as ours.
A Cytopathologist Eye Assistant for Cell Screening
Screening of Pap smear images continues to depend upon cytopathologists' manual scrutiny, and the results are highly influenced by professional experience, leading to varying degrees of cell classification inaccuracies. In order to improve the quality of the Pap smear results, several efforts have been made to create software to automate and standardize the processing of medical images. In this work, we developed the CEA (Cytopathologist Eye Assistant), an easy-to-use tool to aid cytopathologists in performing their daily activities. In addition, the tool was tested by a group of cytopathologists, whose feedback indicates that CEA could be a valuable tool to be integrated into Pap smear image analysis routines. For the construction of the tool, we evaluate different YOLO configurations and classification approaches. The best combination of algorithms uses YOLOv5s as a detection algorithm and an ensemble of EfficientNets as a classification algorithm. This configuration achieved 0.726 precision, 0.906 recall, and 0.805 F1-score when considering individual cells. We also made an analysis to classify the image as a whole, in which case, the best configuration was the YOLOv5s to perform the detection and classification tasks, and it achieved 0.975 precision, 0.992 recall, 0.970 accuracy, and 0.983 F1-score.
Introduction
Pap smear has been valuable for the prevention of cervical cancer for decades [1]. Unfortunately, this neoplasm is still one of the main causes of death from cancer among the female population in developing countries [2]. Manual Pap smear analysis presents several issues, such as a false-negative rate of up to 62%, which is a serious problem because it can delay treatment in a case that could have been detected early [3][4][5][6][7][8][9].
One of the leading causes is the visual inspection of cytopathological lesions under an optical microscope, which comprises two main tasks performed by the cytopathologist: locating and classifying pre-neoplastic cells. At this examination stage, scrutiny and interpretation errors, respectively, may occur [10].
Excellence in the performance of these tasks is closely related to the professional's qualification and experience, and can also be hampered by the extensive daily workload, fatigue, qualitative cytomorphological criteria, and the wide variety of morphological lesions in different samples [11]. Therefore, cytopathological diagnosis is linked to subjective factors that culminate in human errors [12]. Thus, monitoring the quality of the Pap smear has allowed the application of a set of measures aimed at reducing deficiencies in the process [13]. Computer-aided diagnosis (CAD) has gained prominence in this context, as these systems can achieve promising results in minimizing diagnostic errors [14], acting as a supplementary tool for decision-making.
Computational approaches in this context need to deal with a different set of challenges. The main one is the overlapping of elements within images from conventional cytology, which makes classification and detection challenging. This problem can be circumvented by using images collected in liquid-based cytology, since they have fewer overlapping elements than those from conventional cytology. However, this approach for generating images has high costs and is not widely used.
This work proposes a tool to support the cytopathologist's decision through automated analysis of images based on deep learning, which provides feedback to the user about the possible classifications of cervical cells from the Pap smear slide. The cytopathologist uses the CAD system output as a "second opinion" and makes the final decision. Our contributions are summarized as follows: (A) detection and classification of cells in multi-cell images that do not require prior segmentation or hand-crafted features; (B) evaluation of different YOLO architectures and their configurations to detect and classify the nuclei in the CRIC Cervix images; (C) introduction of a computer-aided diagnosis system to detect and classify cervical cells, the Cytopathologist Eye Assistant (CEA).
The methodology proposed in this work is limited to databases with known ground-truth nuclei. Furthermore, supervised machine learning models need to receive data similar to those they have been trained on; otherwise, they may perform poorly. So, in this case, for the presented models to maintain good results, the images must have the same characteristics, such as, but not limited to, resolution, zoom, and focus. Ideally, they should be collected on the same equipment used to construct the databases.
The remainder of this paper is organized as follows. Section 2 presents the related studies, followed by the details of the proposed methodology in Section 3. Section 4 shows our experiments, including the database, metrics, results, and discussions. Section 5 introduces the CEA tool, our proposal for a computer-aided cervical cell screening system. Finally, the conclusions are presented in Section 6.
Background
This section is organized as follows. Section 2.1 reviews methods of cell detection and classification, while Section 2.2 presents an analysis of computational tools that support the cytopathologist's work routine. Finally, Section 2.3 presents a review of the YOLOv5 architecture.
Detection and Classification Review
Recently, several studies have analyzed different approaches to support cytopathologists in their work routine. One of these approaches is automatic cell classification, since this task is an essential part of the professional's routine and is also challenging. Rehman et al. [15] proposed a method to extract deep features from cells and used softmax regression, a support vector machine, and a GentleBoost ensemble of decision trees to perform the classification. Ghoneim et al. [16] also performed feature extraction for classification using Shallow, VGG-16, and CaffeNet architectures, and they used an Extreme Learning Machine and an Autoencoder for classification. Diniz et al. [17] investigated ten deep convolutional neural networks and proposed an ensemble of the three best ones to perform the cell classification, as sketched below.
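A soft-voting ensemble of this kind can be written compactly; the following is a generic sketch (the specific trained networks and any weighting scheme are not taken from [17]):

import torch

def ensemble_predict(models, batch):
    # Average the softmax outputs of several trained CNNs and take the
    # argmax: a plain soft-voting ensemble over the class probabilities.
    with torch.no_grad():
        probs = torch.stack([m(batch).softmax(dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)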
Some studies performed cell classification based on handcrafted features. Diniz et al. [18] investigated eight traditional machine learning methods to perform hierarchical cell classification. Di Ruberto et al. [19] used different descriptors to classify the images using k-nearest neighbors, and Bora et al. [20] classified cells based on their shape, texture, and color using an ensemble method.
However, performing the automatic classification of cervical lesions is a highly challenging task, and it becomes even harder when combined with automatic cell detection.
Matias et al. [21] presented a survey that addresses cytology assisted by computer vision on whole slide images. They reported that all studies employed state-of-the-art object detectors instead of proposing a new Convolutional Neural Network (CNN) architecture. An example of this approach is presented by Xiang et al. [22], who used YOLOv3 followed by an additional task-specific classifier to perform the detection and classification.
The studies presented in this review use single-cell databases and/or private databases, and/or liquid-based cytology databases. Single-cell databases have the disadvantage of not representing reality, since isolated cells do not overlap. Private databases do not allow a good comparison for future work. Moreover, liquid-based cytology databases contain images that are easier to classify automatically, since the collected cells undergo a preparation that improves their spatial distribution. However, this preparation increases the exam cost, making it less used in middle- and low-income countries.
Unlike the studies presented, we investigated convolutional neural networks that perform detection and classification on a newly published multi-cell image database of conventional cytology, the modality most used in underdeveloped countries. Furthermore, we used the cell nuclei as the region of interest for the detection and classification tasks, since Diniz et al. [18] showed that this region contains enough information for the classification process. In addition, we present a decision support tool to complement this work, followed by feedback from cytopathologists who evaluated the tool.
Landscape Analysis
Despite many scientific publications on cervical cancer and artificial intelligence (AI) applied to cell lesion detection, designing and adopting diagnostic software is still a major challenge. Two medical companies stand out with the following software: the BD FocalPoint Slide Profiler (https://www.bd.com/en-us/products-and-solutions/products/productfamilies/bd-focalpoint-gs-imaging-system) (accessed on 1 November 2022) and the ThinPrep Imaging System (TIS) (http://www.hologic.ca/products/clinical-diagnostics-and-bloodscreening/instrument-systems/thinprep-imaging-system) (accessed on 1 November 2022), both on the market since 1980.
The BD FocalPoint Slide Profiler is attached to a slide scanner [23]. The slides are analyzed, and the software returns information about the likelihood that a slide contains abnormalities. This information is sent to the professional for review. The system returns regions of interest to the user instead of the entire slide, facilitating the process and reducing the workload.
The TIS system combines image processing and slide-scanning automation [24]. It allows the analysis of 22 regions of interest and forwards the results to the professional, who can track them intuitively. Each slide is sent to the professional for review, and complex areas can be marked for further analysis.
Recently, Hologic introduced the Genius™ Digital Diagnostics System, a digital cytology platform that combines a new artificial intelligence algorithm with advanced volumetric imaging technology to help pathologists identify pre-cancerous and cancerous lesions in women [25]. The AI algorithm identifies suspicious structures on the slide; the regions of interest are classified, and the most relevant lesions are presented to the pathologist for review. For now, the Genius diagnostic software is only sold in Europe.
We also highlight the company KFBIO (https://www.kfbiopathology.com/) (accessed on 1 November 2022), producer of the KFBIO Pathological Remote Diagnosis Platform, which targets cervical cancer and is coupled to a slide scanner, achieving high efficiency, a sensitivity of 98.2%, and a specificity of 63.5%. Its artificial intelligence, based on deep convolutional neural networks, analyzes liquid-based cervical cytology samples to confirm their feasibility in clinical practice.
All of these approaches are linked to whole slide imaging (WSI) equipment, which scans the complete microscope slide into a high-resolution digital image. Thus, their use is restricted by the acquisition of high-cost equipment. In addition, they are limited to liquid-based slides and ThinPrep tests, so samples obtained by the conventional method cannot be used.
In 2019, a startup (https://datexim.ai/) (accessed on 1 November 2022) presented the CytoProcessor™ software, which uses artificial intelligence methods to differentiate normal cells from lesioned cells and then performs classification [26]. Cells are shown in a table sorted from the most to the least diagnostically relevant, and each cell can be visualized in its context, similar to the microscope view. Compared to TIS, CytoProcessor™ significantly improves diagnostic sensitivity without compromising specificity [27]. Some other computational tools analyze cell structures individually or in collections, which may be helpful during the diagnostic process; however, none are specific to supporting the diagnosis of cervical cancer.
Another free and open-source option is Cytomine (https://cytomine.com/) (accessed on 1 November 2022), a web platform that promotes collaborative image analysis through segmentation, manual cell annotation services and collection generation. In addition, it offers paid AI analysis services for images.
As mentioned, this software allows integration with image analysis algorithms. However, such integration must be implemented separately for each platform, which is not trivial. None of these tools offer AI support that lets a pathologist diagnose a cervical cell lesion quickly and easily without the help of an AI specialist.
YOLOv5 Review
The network architecture of YOLOv5 is composed of three components: backbone, neck, and head. The backbone processes the image to generate feature sets at different levels. In YOLOv5, the backbone is the CSPDarknet [28] architecture, with feature extraction based on an adapted DenseNet [29] process; its architecture is shown in Table 1. The neck combines the features generated by the backbone and passes them to the prediction task. This component uses the PANet [30] architecture to create feature pyramids, which can represent objects at different scales and work with images of different sizes. In addition, the use of PANet improves the information flow.
Finally, the head component, implemented by the YOLO layer, uses the features extracted by the previous components to generate the bounding box, the probability of each class, and the anchors of each detected object in the image. The hidden layers of the network use the ReLU function as the activation function, while the detection layer uses the Sigmoid function.
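To make the head's role concrete, the following minimal sketch shows how a YOLO-style detection layer turns a raw feature map into box and class predictions. It follows YOLOv5's published decoding scheme (sigmoid on the outputs, grid offsets, anchor scaling) but is an illustration written for this text, not the actual YOLOv5 source.

```python
import torch

def decode_head(feats, anchors, nc, stride):
    # feats: raw output of one detection scale, shape (batch, 3*(5+nc), H, W)
    # anchors: tensor of shape (3, 2) with anchor widths/heights in pixels
    b, _, h, w = feats.shape
    x = feats.view(b, 3, 5 + nc, h, w).permute(0, 1, 3, 4, 2)
    x = torch.sigmoid(x)  # Sigmoid activation on the detection layer
    # Grid offsets: each cell predicts boxes relative to its own position.
    gy, gx = torch.meshgrid(torch.arange(h), torch.arange(w))
    grid = torch.stack((gx, gy), dim=-1).float()               # (H, W, 2)
    xy = (x[..., :2] * 2 - 0.5 + grid) * stride                # box centers
    wh = (x[..., 2:4] * 2) ** 2 * anchors.view(1, 3, 1, 1, 2)  # box sizes
    # Concatenate [x, y, w, h, objectness, class probabilities...]
    return torch.cat((xy, wh, x[..., 4:]), dim=-1)
```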
YOLOR Review
The YOLOR architecture is adapted from YOLOv4 and performs multiple tasks in a single execution. It combines the YOLOv4 architecture with the concepts of implicit and explicit knowledge [31]. An overview of its architecture is presented in Figure 1. According to Wang et al. [31]: (i) the backbone component consists of a focus layer (Stem D) followed by Stages B2, B3, B4, B5, and B6; (ii) YOLOR uses a CSP convolution as a down-sampling module in each stage to optimize gradient propagation; (iii) the base channels and repeat numbers of the backbone layers are set to {128, 256, 384, 512, 640} and {3, 7, 7, 3, 3}, respectively; and (iv) one difference between YOLOR and YOLOv4 is the activation function: YOLOv4 uses Mish, while YOLOR uses SiLU.
Methodology
Our proposal applies a CNN to detect and classify regions of interest and presents the results to assist cytopathologists in interpreting cervical cells. We used neural networks based on the YOLO (You Only Look Once) [32,33] architecture, which achieves excellent results with shorter training and inference times. Furthermore, as YOLO considers the image as a whole, it also reduces the occurrence of background errors compared to other detection networks such as R-CNN and Fast R-CNN [32].
This work uses the YOLOv5 [34] and YOLOR [31] implementations, which present better inference times than other detection and classification networks [35]. In addition, they offer a more modular way to run tests and detections. These characteristics were crucial for choosing these networks, as they allow easier integration into a tool.
This modularity of the YOLO architecture also allows other algorithms to classify the detected objects. Based on that, this work also evaluated the combination of YOLO detection with the classification ensemble proposed by Diniz et al. [17], which obtained the best published results for the database used.
Figure 2 presents the general workflow of this proposal, which consists of three steps: (1) model training and evaluation; (2) encapsulation of the model within a software interface; and (3) assessment of the tool and model results with specialists. In Step (1), we use the CRIC Cervix collection to train and evaluate the previously selected base models, presented in Section 4; this step is associated with our contributions A, B, and C. In Step (2), a tool named CEA was built with the best-performing configuration obtained in Step (1), intended to offer professionals the model results in a friendly and accessible interface; this step is associated with our contributions D and E. Finally, in Step (3), specialists evaluated the tool and provided feedback on its use.
Experiments
This section describes how our experiments were constructed and the results achieved.
Database
We tested the proposed method on the cervical cell classification collection available in the CRIC Searchable Image Database (https://database.cric.com.br) (accessed on 1 November 2022), the CRIC Cervix [36]. This collection contains 400 images from conventional smear (Pap smear) slides captured with bright-field conventional microscopy using a 40× objective and a 10× eyepiece, through a Zeiss AxioCam MRc digital camera coupled to a Zeiss AxioImager Z2 microscope running the Zeiss AxioVision software.
The collection images contain a total of 11,534 cell marks represented by bounding boxes of 100 px × 100 px. Four cytopathologists classified and revised each mark according to the Bethesda System nomenclature for cell lesions: Negative for Intraepithelial Lesion or Malignancy (NILM), Atypical Squamous Cells of Undetermined Significance (ASC-US), Low-grade Squamous Intraepithelial Lesion (LSIL), Atypical Squamous Cells, cannot exclude HSIL (ASC-H), High-grade Squamous Intraepithelial Lesion (HSIL), and Squamous Cell Carcinoma (SCC). NILM labels correspond to normal cells, and the others (ASC-US, LSIL, ASC-H, HSIL, and SCC) correspond to lesioned cells of distinct grades. Figure 3 shows an image sample from the collection, where colored boxes represent different lesions. A more detailed database description can be found in [36].
Database Manipulations
For the image configurations, we considered: the original format of the images, images resized to 561 px × 451 px, and the generation of new training images using augmentation strategies and class balancing.
We used an undersampling method to balance the database, removing images that contained many samples of the negative class so that the number of training examples per class was more even.
The augmentation was implemented using Python's Clodsa library [37]. We used horizontal, vertical, and combined vertical-horizontal flips, average blurring (kernel parameter equal to 5), and raised hue (power parameter of 0.9).
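For illustration, the sketch below reproduces the same augmentation set with OpenCV rather than Clodsa; the hue transform here is a simplified stand-in for Clodsa's raised-hue operation, and for detection data the bounding-box annotations would have to be transformed alongside the pixels.

```python
import cv2
import numpy as np

def augment(image):
    """Return augmented variants of a BGR training image."""
    variants = {
        "hflip": cv2.flip(image, 1),          # horizontal flip
        "vflip": cv2.flip(image, 0),          # vertical flip
        "hvflip": cv2.flip(image, -1),        # combined flip
        "avg_blur": cv2.blur(image, (5, 5)),  # average blur, kernel 5
    }
    # Simplified "raised hue": scale the hue channel by 0.9 (OpenCV hue
    # range is 0-179); Clodsa's actual transform is power-based.
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] = np.clip(hsv[..., 0] * 0.9, 0, 179)
    variants["raised_hue"] = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    # Note: for flips, the bounding boxes must be mirrored accordingly.
    return variants
```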
With these image manipulations, the database with resized images kept the same size as the original database (400 images), while the database with resized and augmented images comprised 1800 images, and the database with resized images and balanced classes comprised 332 images.
Metrics
We apply precision, recall, and mAP (mean average precision) metrics to evaluate the proposed solution. Precision (Equation (1)) measures the model's ability to flag altered cells only on lesioned slides; recall (Equation (2)) measures the model's ability to detect the altered cells; and mAP (Equation (3)) measures the model's performance in detecting and retrieving information in the images. These metrics are typically used in object detection and classification scenarios.

Precision = |TP| / (|TP| + |FP|)  (1)

Recall = |TP| / (|TP| + |FN|)  (2)

mAP = (1 / |Classes|) Σ_c AP_c  (3)

In these equations, |TP|, |FP|, |FN|, and |Classes| represent the number of true positives, false positives, false negatives, and the number of classes, respectively, and AP_c is the average precision of class c.
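These definitions translate directly into code; a minimal sketch follows (the AP computation itself, which integrates the precision-recall curve per class, is omitted):

```python
def precision(tp, fp):
    """Equation (1): fraction of detections that are truly lesioned cells."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Equation (2): fraction of lesioned cells that were detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def mean_average_precision(ap_per_class):
    """Equation (3): mean of the per-class average precisions."""
    return sum(ap_per_class) / len(ap_per_class)

# Example: 20 true positives, 5 false positives, 3 false negatives
assert round(precision(20, 5), 2) == 0.80
assert round(recall(20, 3), 2) == 0.87
```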
Development and Test Environment
We used a computer with an Intel Core i7-9700F processor, a GeForce RTX 2080 GPU, and 16 GB of RAM on a 64-bit Windows system to perform the experiments. Algorithms were written in Python (version 3.7.9) using the PyTorch framework (version 1.7).
We split the database into training, testing, and validation sets; the distribution for each database setup is shown in Table 2. We also performed a 10-fold evaluation, as suggested by Dhurandhar et al. [38]. All reported results are averages obtained by cross-validation.
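A sketch of this 10-fold protocol, assuming the 400 database images and hypothetical `train_model`/`evaluate_model` wrappers around the YOLO training and evaluation scripts:

```python
from sklearn.model_selection import KFold
import numpy as np

def train_model(paths):
    # Hypothetical wrapper around the YOLO training script.
    return {"trained_on": len(paths)}

def evaluate_model(model, paths):
    # Hypothetical wrapper returning a scalar metric (e.g., F1-score).
    return 0.75  # placeholder value

image_paths = np.array([f"img_{i:03d}.png" for i in range(400)])  # placeholder names
kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_scores = [
    evaluate_model(train_model(image_paths[tr]), image_paths[te])
    for tr, te in kf.split(image_paths)
]
print("mean metric over the 10 folds:", np.mean(fold_scores))
```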
Configuration and Evaluation of the YOLO Models
In the first four columns of Table 3, we present the combinations of base model, image configuration, and model fitness formula used in our experiments. The models considered were the small (s), medium (m), large (l), and extra-large (x) YOLOv5 configurations and the YOLOR-P6 model. We used pre-trained networks, meaning the networks had been previously trained on a large dataset. The advantage of this pre-training is an improved capability to extract basic features: in CNNs, the features are more generic in the early layers and more specific to the treated problem in the last layers [39]. Therefore, it is possible to fine-tune a generic network on a smaller dataset of the target domain using transfer learning. The YOLOv5 models were pre-trained on the ImageNet [40] database and YOLOR on the MS COCO [41] database.
For the fitness formula, we considered: the default formula ("Default" in Table 3), which combines mAP@0.5 and mAP@0.5:0.95; a variation with equal weights for all metrics ("Equal" in Table 3), i.e., no metric is prioritized; and a variation prioritizing recall ("Recall" in Table 3), since this metric is the most important for the problem addressed, as it reflects how many lesioned cells would otherwise be classified as normal.
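YOLOv5 computes fitness as a weighted sum over [precision, recall, mAP@0.5, mAP@0.5:0.95]; the sketch below shows the three weightings compared here. The "Default" weights are the ones shipped with YOLOv5, while the example metric values are illustrative only.

```python
import numpy as np

def fitness(metrics, weights):
    # metrics = [precision, recall, mAP@0.5, mAP@0.5:0.95]
    return float(np.dot(metrics, weights))

DEFAULT = [0.0, 0.0, 0.1, 0.9]      # YOLOv5's shipped default (mAP-driven)
EQUAL = [0.25, 0.25, 0.25, 0.25]    # equal priority for every metric
RECALL_ONLY = [0.0, 1.0, 0.0, 0.0]  # recall-prioritized variant

m = [0.736, 0.777, 0.806, 0.45]     # illustrative metric values
for name, w in [("default", DEFAULT), ("equal", EQUAL), ("recall", RECALL_ONLY)]:
    print(name, round(fitness(m, w), 3))
```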
The networks were trained with the following hyperparameters: learning rate of 0.01, momentum of 0.937, weight decay of 0.0005, and an IoU threshold (during training) of 0.2, considering rectangular bounding boxes. For evaluation, an IoU threshold of 0.45 and a confidence threshold of 0.25 were used. These values come from the default network configuration.
In addition, the largest possible batch size was used, which is limited by the model size in GPU memory. We also emphasize that not all combinations could be tested due to memory constraints.
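For reference, the evaluation thresholds above map directly onto the public YOLOv5 hub API, as in this hedged sketch; the weights file and input image names are placeholders, not released artifacts of this work.

```python
import torch

# Load custom weights through the public YOLOv5 hub API.
model = torch.hub.load("ultralytics/yolov5", "custom", path="cric_yolov5s.pt")
model.conf = 0.25  # confidence threshold used in the evaluation
model.iou = 0.45   # IoU threshold for non-maximum suppression

results = model("pap_smear_field.png")  # hypothetical input image
detections = results.pandas().xyxy[0]   # DataFrame of boxes and labels
print(detections[["xmin", "ymin", "xmax", "ymax", "confidence", "name"]])
```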
The last four columns of Table 3 summarize our results, ordered by the F1-score metric, with the best result for each metric highlighted in bold. Each model was trained for 200 epochs. The best result by F1-score among the evaluated combinations (see Table 3) was obtained by the YOLOv5s model with the original images and the model's default fitness; under these conditions, the recall was 0.777 on the test set. However, the best recall among the evaluated combinations was 0.884, achieved by the YOLOR model using recall as the only metric in the fitness formula. In contrast, this prioritization caused a sharp loss of precision.
The same observation holds for precision: the model with the best F1-score has a precision of 0.736, while the best precision found was 0.767, but that increase in precision came at the cost of recall. When considering the F1-score, the result of the YOLOv5s model was better than those obtained by the YOLOv5l model. As previously stated, recall is more important for this problem, mainly because, in a real scenario, the tool's detections would be validated by a professional; the false positives that lower the precision value would be easily discarded by the professional.
It is also important to emphasize that resized images allow larger models (due to memory allocation); however, using these larger models did not lead to better results. Looking at Table 3, the m, l, and x configurations of YOLOv5 were better than the s configuration for the resized images, yet the best overall result was obtained with images at the original size. Thus, the benefit of resizing the images is restricted to reducing training time and the required memory.
Comparison between Different Classification Algorithms
Considering only nuclei classification on the CRIC database images, the best result in the literature was achieved by Diniz et al. [17], who proposed an ensemble of EfficientNets to perform the classification. Thus, we analyzed whether using this ensemble as a classifier of the objects detected by YOLO improves the results. We also investigated each EfficientNet that composes the ensemble as a standalone classifier. Table 4 reports the results of this investigation; as in Table 3, the results are ordered by the F1-score metric, and the best ones are highlighted in bold.
As Table 4 shows, performing nuclei detection with YOLOv5s and classification with the ensemble proposed by Diniz et al. [17] improved recall (from 0.777 to 0.906) and F1-score (from 0.756 to 0.805), while precision dropped from 0.736 to 0.726 and AUC dropped from 0.825 to 0.816. We also evaluated whole-image classification: if at least one nucleus in the image is classified as lesioned, the image is classified as lesioned; if all nuclei are classified as normal, the image is considered normal. This evaluation corresponds to classifying the image as a whole, similar to how professionals do their daily work. We therefore performed and evaluated whole-image classification for the best approaches presented in Sections 4.5 and 4.6: YOLOv5s performing both detection and classification of the nuclei, and the combination of YOLOv5s for detection with the EfficientNets ensemble for classification.
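The whole-image decision rule is simple enough to state in a few lines; in this sketch, labels follow the Bethesda classes used in the database, with NILM as the only normal class:

```python
def classify_image(nucleus_labels):
    """Whole-image rule: lesioned if any detected nucleus is lesioned."""
    return "lesioned" if any(lbl != "NILM" for lbl in nucleus_labels) else "negative"

assert classify_image(["NILM", "NILM", "NILM"]) == "negative"
assert classify_image(["NILM", "HSIL"]) == "lesioned"
```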
Table 5 presents the results (ordered by F1-score, with the best highlighted in bold) for this whole-image evaluation. Using YOLOv5s for both detecting and classifying nuclei achieved the best precision and F1-score; its recall was lower than the other approach but remained high. For the AUC metric, both scenarios obtained unsatisfactory results; however, this may have been caused by the class distribution in the test set. When the problem is reformulated as whole-image classification, very few samples are entirely negative images. This limitation comes from a database that was planned for classifying lesioned nuclei within images, so images containing only non-lesioned nuclei are under-represented.
Literature Comparison
A direct comparison with the results obtained by Diniz et al. [17] is not feasible, because those authors only dealt with the classification task, while our results carry the errors of both the detection and classification tasks. At the time of writing, we did not find any work in the literature that detects and classifies nuclei in the CRIC Cervix database.
Xiang et al. [22] presented a similar approach, obtaining an mAP of 0.634 and a recall of 0.975 in whole-image analysis. Our results of 0.806 mAP and 0.992 recall were superior, but it is important to note that the databases used in the two studies differ. It is worth emphasizing that Xiang et al. [22] used liquid-based cytology images, in which the collected cells are pre-processed to improve their spatial distribution, leading to an easier problem than the one we approach in this work.
CEA: Cytopathologist Eye Assistant
We developed the Cytopathologist Eye Assistant (CEA) using Python's Kivy library (https://kivy.org) (accessed on 1 November 2022). The idea of the tool is to assist the cytopathologist in diagnosing cervical cancer using images obtained from the Pap smear. In general, CEA follows the interaction displayed in Figure 4: a specialist inputs an image, a detection algorithm processes it, and the detection results feed a classification algorithm; in the end, both results are combined and shown to the specialist in the CEA interface. The detection and classification models can be changed without impacting the general interaction flow, as long as the models accept an image as input and provide bounding boxes and labels as output. For the implementation, we used the YOLO model with the best configuration obtained in this work. This model was selected because YOLO and the ensemble approach have similar results for whole-image classification (Table 5), while YOLO unifies detection and classification in a single process, improving the tool's processing time.
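To make this interaction flow concrete, here is a heavily simplified Kivy sketch of the upload → process → display loop. The widget layout and the `detect_and_classify` helper are illustrative stand-ins, not the actual CEA source.

```python
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.filechooser import FileChooserIconView
from kivy.uix.image import Image
from kivy.uix.label import Label

def detect_and_classify(path):
    # Placeholder for the YOLOv5s inference step (hypothetical helper):
    # a real implementation would run the model, draw bounding boxes on a
    # copy of the image, and derive the overall whole-image label.
    return path, "negative"

class CEASketchApp(App):
    def build(self):
        root = BoxLayout(orientation="vertical")
        self.chooser = FileChooserIconView()        # "Upload your image"
        self.preview = Image()                      # annotated result view
        self.result = Label(text="No image processed yet", size_hint_y=0.1)
        process_btn = Button(text="Process your image", size_hint_y=0.1)
        process_btn.bind(on_release=self.process)
        for widget in (self.chooser, self.preview, process_btn, self.result):
            root.add_widget(widget)
        return root

    def process(self, _button):
        if not self.chooser.selection:
            return
        annotated, label = detect_and_classify(self.chooser.selection[0])
        self.preview.source = annotated
        self.preview.reload()
        self.result.text = f"Overall classification: {label}"

if __name__ == "__main__":
    CEASketchApp().run()
```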
The CEA interface is presented in Figures 5 and 6, with its functionalities highlighted through numbered red circles. In Figure 5, the user clicks the "Upload your image" button to select the image whose cells will be detected and classified (red circle 1). CEA shows the chosen image below (red circle 2), and the user can send it for processing by clicking the "Process your image" button (red circle 3) or select another image (red circle 1). After the image processing step, the application presents the second screen, shown in Figure 6. CEA displays the resulting image at the position highlighted by red circle 9. To view the original image, the user can click the "no tags" button (red circle 4); the "all" button (red circle 5) displays the submitted image with all tags found. The user can also display only lesioned cells (red circle 6) or only negative cells (red circle 7). At the position highlighted by red circle 8, the user sees the overall automatic classification of the image. Finally, the "Process a new image" button (red circle 10) lets the user choose a new image for classification, returning to the first screen shown in Figure 5.
CEA Validation Procedure
We designed a validation procedure to analyze the use of CEA and gather what specialists think about its applicability, possible improvements, and unnecessary functionalities.
The procedure was performed with three authors with 7, 12, and 24 years of experience in cytopathology, respectively. For this evaluation, they were not aware of the construction of the tool or its characteristics.
Initially, the users (the authors performing the procedure) were asked to classify four images with the tool, without any prior instructions on how to use it. Then, the users were informed of the effect of the "no tags", "all", "lesions", and "negatives" buttons and notified that they could use them to assist their analysis. After that, the users classified another four images using CEA. For each image, the users answered whether they agreed with the image's overall classification and whether the tags helped them in the analysis.
We gathered the test images from the model's test set, which we separated into four groups: only lesioned images, only negative images, images with few markings, and images with many markings and overlaps. We defined these groups because they represent, in a generic way, the primary cases of application use. We then randomly selected two images from each of these four groups for testing with users, totaling eight images.
After interacting with CEA, participants filled out a questionnaire asking whether they thought the buttons helped them and whether the tool contributed to the image analysis. In addition, based on their experience during the test, they rated the tool's ease of use, ease of learning, intuitiveness, and response time on a five-point Likert scale (1 = strongly disagree, 5 = strongly agree). Finally, we conducted semi-structured interviews to gain more insight into the participants' experiences.
Specialists' Feedback
During the test, users agreed with the image's overall classification made by the CEA tool in 83% of the cases. They reported that some cases lack the general slide context (e.g., similar cells elsewhere) needed to be confident in the result.
Specialists described that, in some cases, the color, number, and overlap of tags on the image might have negatively impacted the analysis, since these characteristics may influence the user's ability to capture information; at this point, they reported that the filter buttons helped the investigation. During the interviews, one user suggested that the "negatives" filter and "no tags" buttons are unnecessary, although they do not interfere, while another user noted that the buttons might not be needed in every analysis but can contribute in specific cases.
All users scored 5 (strongly agree) on the Likert scale for the statements analyzed in the test; that is, they agreed that the application was intuitive, easy to learn, and easy to use. During the interviews, users reinforced the application's ease of learning and use as a positive point. Furthermore, they rated the processing time of 8 seconds (on average) as satisfactory. However, during the interviews we realized that this evaluation was based on different reasons, varying from response-time expectations to experience with other algorithms and tools.
During the interviews, some users highlighted that we could improve the flow of interaction with the system, from two distinct viewpoints: (1) remove non-essential steps; (2) change the evaluation structure. For case (1), one user noted that viewing the image between loading and submitting it allows a preliminary visual analysis, although this user understood that it was only used to confirm the chosen image. For case (2), a user highlighted that the process would become exhausting and error-prone for multiple images due to the repeated steps (select, evaluate, repeat). One suggestion from the interviews is that the tool could allow loading and processing multiple images.
Another observation reported by users concerns the number of classes. They confirmed that the two-class classification is useful, but pointed out that it would be better to have more detail on the lesion class by splitting it into three or six classes according to lesion type. Furthermore, they reported that it would be interesting if the application could handle images with different characteristics (such as zoom, resolution, and noise), since even similar equipment still generates slightly different images. However, we did not find a database with these characteristics, so it is not yet possible to train a model capable of dealing with this variability.
Finally, users reported that the application could be beneficial for answering specific questions, such as when there is doubt and no other professional available to assist. Currently, the process adopted consists of taking a photo and sending it to other professionals.
Conclusions
This work presented the CEA (Cytopathologist Eye Assistant) tool, which detects and classifies cells in multicellular images obtained from the Pap smear. Internally, CEA uses a detection algorithm and a classification algorithm to provide results to the specialist; in this work, the application uses YOLOv5s, a deep learning model, for both tasks. The goal is to support the cytopathologist's decision-making by offering a second opinion based on the whole image, without demanding a previous cell segmentation.
This work investigated different configurations for cervical cell detection and classification using the YOLO architecture. The best results were obtained using YOLOv5s with the original dataset: a precision of 0.736, recall of 0.777, and mAP@0.5 of 0.806. In addition, we evaluated the classification process as a separate task, using the ensemble proposed by Diniz et al. [17], and obtained a precision of 0.726 and a recall of 0.906 when classifying the bounding boxes detected by YOLOv5s.
The analysis becomes even more interesting when the image is analyzed as a whole: if the image has at least one lesioned cell, it is classified as lesioned; otherwise, it is classified as negative. For this analysis, YOLOv5s achieved the best results, with 0.975 precision, 0.992 recall, 0.970 accuracy, and 0.983 F1-score on the CRIC Cervix database. These results outperform the literature method proposed by Xiang et al. [22] on the precision, recall, and F1 metrics; however, as they used liquid-based cytology, their images were easier to detect and classify than ours.
From the specialists' assessment of CEA, we can conclude that it contributes to a general analysis of Pap smear images. They also reported that the tool is easy to learn and use, intuitive, and has an adequate response time. The main improvements suggested by users were: (1) allowing the user to select multiple images for processing; and (2) a new version of the tool with classification into three or six classes.
Figure 3. Image example from the cervical cell classification collection.
Table 3. Results of the object detection and classification.
Table 4. Results of the object's classification.
Table 5. Results of the whole image classification.
|
2022-12-07T19:09:33.491Z
|
2022-11-30T00:00:00.000
|
{
"year": 2022,
"sha1": "e14af33372da5e6d81a5b50ed58bc4536a422508",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2673-9909/2/4/38/pdf?version=1669795364",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "650fc432f3bbf93f5d110ce51f434eca0cd9644a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
18623505
|
pes2o/s2orc
|
v3-fos-license
|
Epidemiological Investigation of Equine Piroplasmosis in China by Enzyme-Linked Immunosorbent Assays
ABSTRACT The objective of this study is to investigate the seroprevalence of equine piroplasmosis in China. A total of 1990 sera were collected from clinically healthy horses in various districts located in ten different provinces of China and examined by using indirect enzyme-linked immunosorbent assays (ELISAs) with recombinant Theileria equi (T. equi) merozoite antigen 2 (rEMA-2) and Babesia caballi (B. caballi) 48-kDa rhoptry protein (rBc48), respectively. The results showed that 1,018 (51.16%) and 229 (11.51%) samples were positive for B. caballi and T. equi infection, respectively. The number of samples with mixed infection was 152 (7.64%). These results indicated that equine piroplasmosis was widespread in China.
Equine piroplasmosis is a tick-borne disease of horses, donkeys, mules and zebras. This disease is caused by two hemoprotozoan parasites, Babesia caballi (B. caballi) and Theileria equi (T. equi), both of which are transmitted by Ixodid tick species [5]. Clinically, the acute phase of the disease is characterized by fever, anorexia, weight loss, tachypnea and congestion of mucous membranes, but chronic infection usually presents inconspicuous clinical symptoms. Persistently infected horses that recover from the acute infection usually carry the parasites for life and serve as reservoir hosts for transmission to other susceptible animals [21]. There are no vaccines available for this disease [12]. Control of equine piroplasmosis therefore requires effective diagnostic approaches that can detect carrier or chronically infected animals.
Equine piroplasmosis has a worldwide distribution. In recent years, surveillance studies of equine piroplasmosis have been reported from many countries, such as Korea, Mongolia, Venezuela, Tunisia, Sudan, Italy, Hungary, Saudi Arabia, Mexico and Texas, U.S.A. [1,3,7,10,13-18]. In China, B. caballi and T. equi were documented in Heilongjiang Province as early as 1943 [25]. Although both B. caballi and T. equi infections have been identified in China, a comprehensive survey of the infections has never been conducted [4,22,24].
Equine piroplasmosis causes serious health problems in horses and has notable agricultural impacts, including reduced working capacity, the high cost of control measures and effects on the transport of goods and international trade [12]. Therefore, there is an urgent need to determine the prevalence of the disease in China to facilitate control of the infection.
The immunodominant merozoite surface protein of T. equi (EMA-2) is expressed throughout the life cycle of the parasite, both in the vector tick stage and in the mammalian host stage [20]. The 48-kDa merozoite rhoptry protein of B. caballi (Bc48) is recognized earliest by the host immune system and throughout infection [2]. Indirect enzyme-linked immunosorbent assays (ELISAs) using recombinant EMA-2 and Bc48 (rEMA-2 and rBc48) are highly sensitive and specific for detecting antibodies in infected horses and are widely used in serological surveillance of equine piroplasmosis [6,11,12,19].
The rEMA-2 and rBc48 were expressed in Escherichia coli (E. coli) and purified as Glutathione S-transferase (GST) fusion proteins as described previously; recombinant GST was also expressed and purified as a control [9]. The ELISA was performed as described previously [23]. Briefly, 96-well plates were coated overnight at 4°C with 2 µg/ml of the purified recombinant proteins in coating buffer (carbonate-bicarbonate buffer, pH 9.6). The plates were washed once with washing buffer (phosphate-buffered saline with Tween-20) and then blocked with blocking buffer (3% skimmed milk dissolved in phosphate-buffered saline) for 2 hr at 37°C. After washing once with the washing buffer, 100 µl of horse sera diluted in blocking buffer were added to each well [8]. Samples with an OD value greater than 0.2 were considered positive for B. caballi infection in the ELISA with rBc48 [24]. In this study, a total of 1,990 serum samples were collected from ten provinces or areas of China, including Hebei, Beijing, Shaanxi, Guangdong, Xinjiang, Ningxia, Yunnan, Guizhou, Gansu and Jiangsu, some of which are vital localities for the horse industry in China.
The positive rates in the ELISAs using the two antigens are shown in Table 1, and the distribution of OD values is shown in Fig. 1. In total, 1,018 (51.16%) and 229 (11.51%) equine samples were positive for B. caballi and T. equi infection, respectively, and mixed infections were detected in 152 (7.64%) serum samples. Although no samples were collected in northeast China in this study, a study conducted in 2003 reported that, out of 111 equine samples, 38 (34%) and 36 (32%) were positive for T. equi and B. caballi infection, respectively [22]. These results indicate that equine piroplasmosis is widespread in China, and therefore, the impact of this disease on the horse industry should be considered. The seroprevalence of B. caballi and T. equi infection in different locations is shown in Fig. 2. The highest prevalence of B. caballi was found in Ningxia Province (77.61%), and the lowest infection rate was noted in Guizhou Province (20%). Likewise, the highest infection rate of T. equi was found in Yunnan Province (37.58%), and the lowest prevalence was noted in Shaanxi Province (1.04%). Notably, the seroprevalence of B. caballi was significantly higher than that of T. equi in nine provinces, the exception being Hebei. These differences in positive rates might be caused by differences in the distribution of tick vectors or in the use of horses.
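As a hedged illustration of these figures, the snippet below recomputes the overall seroprevalences and adds normal-approximation 95% confidence intervals; the intervals are our own additions, not values reported in the study.

```python
import math

def prevalence_ci(positives, total, z=1.96):
    """Point prevalence with a normal-approximation 95% CI."""
    p = positives / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - z * se, p + z * se)

print(prevalence_ci(1018, 1990))  # B. caballi: ~51.2% (about 49.0-53.4%)
print(prevalence_ci(229, 1990))   # T. equi:   ~11.5% (about 10.1-12.9%)
```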
In 2002, a surveillance study of equine piroplasmosis in Xinjiang Province reported that, of 70 samples collected from three farms, 28 (40.0%) and 17 (24.3%) were positive for T. equi and B. caballi infection, respectively [24]. In the present study, of all 350 Xinjiang sera, 34 (9.71%) samples were positive for T. equi, while 271 (77.43%) were positive for B. caballi; the positive rate of B. caballi infection was thus predominant over T. equi. This difference might be caused by differences in sampling time and sample number. A dominant positive rate of B. caballi infection was also reported in a study conducted in Mongolia (adjacent to Xinjiang Province), in which the positive rates for T. equi and B. caballi infection out of 250 samples were 49 (19.6%) and 129 (51.6%), respectively.
Serological surveillance of B. caballi and T. equi infections has been conducted in many countries. However, an extensive survey of the prevalence of this disease in China had never been performed; the few existing reports focused mostly on northern rather than southern China. Our data demonstrate the successful application of ELISAs with rBc48 and rEMA-2 as antigens for investigating the epidemiology of equine piroplasmosis caused by B. caballi and T. equi infections in horses in China. To our knowledge, this study is the first comprehensive survey of equine piroplasmosis in China. Our data may be regarded as important information that would contribute to establishing prevention and control measures against equine piroplasmosis in China.
|
2016-05-15T09:01:32.034Z
|
2013-11-29T00:00:00.000
|
{
"year": 2013,
"sha1": "b75bd3d36ba94cbaeadf527aab34d1019064edca",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/jvms/76/4/76_13-0477/_pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b75bd3d36ba94cbaeadf527aab34d1019064edca",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
233717271
|
pes2o/s2orc
|
v3-fos-license
|
Magnitude, risk factors and antimicrobial susceptibility pattern of Shigella and Salmonella, among children with diarrhea in Southern Ethiopia: A Cross-sectional Study
Objective: This study aimed to identify Shigella and Salmonella infection, their antibiotic susceptibility patterns and associated risk factors among children with diarrhea who attended Alamura Health Center. Method: A facility-based cross-sectional study was conducted on 263 children aged below 14 years with diarrhea. A structured questionnaire was used to collect socio-demographic and clinical data after obtaining the necessary consent from their parents or caretakers. The culture and sensitivity tests were performed using the standard operating procedure of the microbiology laboratory. Results: Accordingly, 20/263 (7.6%), 95% confidence interval: 4.4%–11.4% Shigella and 1/263 (0.38%), 95% confidence interval: 0.0%–1.1% Salmonella were isolated. Shigella dysenteriae was dominant, 11 (4.2%), followed by Shigella spp., 9 (3.42%), and Salmonella typhi, 1 (0.38%). The isolates showed 71.4% overall resistance to ampicillin and 61.9% to augmentin and tetracycline, whereas 95.2% of the isolates were sensitive to ciprofloxacin, 85.9% to ceftriaxone and ceftazidime, 81% to gentamycin, 76.2% to chloramphenicol, 66.7% to cefuroxime and 52.4% to cotrimoxazole. Washing hands after toilet use only some of the time (adjusted odds ratio: 235.1, 95% confidence interval: 20.9–2643.3, p < 0.000) and storing cooked food in an open container for later use (adjusted odds ratio: 36.44, 95% confidence interval: 5.82–228.06, p < 0.000) showed statistically significant associations. Conclusion: A high rate of Shigella and a single Salmonella were isolated. Isolates were resistant to ampicillin, augmentin and tetracycline, and relatively sensitive to ciprofloxacin, ceftriaxone, ceftazidime, gentamycin, chloramphenicol, cefuroxime and cotrimoxazole. Hand-washing after defecation only some of the time and storing food for later use in an open container were statistically associated with infection. Therefore, to alleviate this infection, the concerned bodies should focus on health education promoting hand-washing after defecation and storing food in a closed container for later use.
Introduction
Diseases caused by enteric pathogens are a common public health concern in many parts of the world, including Ethiopia. 1,2 Salmonella and Shigella are associated with a high burden of illness among children in the developing world. 3 Children are among the main victims of these infections, which accounted for approximately 8% of all deaths among children under age 5 worldwide in 2017. This implies that over 1,300 young children died each day, 480,000 children a year, despite the availability of simple effective treatment. Most of these deaths from diarrhea occur in South Asia and sub-Saharan Africa. 4 The rates of Shigella and Salmonella in Ethiopia reported by different studies are in the range of 4.3%–45% 5–8 and 1%–12.6%, 7,9,10 respectively.
They are species of particular concern as they cause enteric fevers, food poisoning and gastroenteritis. 9 They are Gram-negative rods that commonly inhabit the intestinal tracts of humans and many animals. 10 It was estimated that 1.8 million children died from diarrheal illness worldwide, a large proportion of which was attributed to infection by Shigella and Salmonella spp. 11 Different studies have reported that Shigella spp. were associated with the majority of cases of bacillary dysentery, which is prevalent mainly in developing nations, 12,13 whereas Salmonella spp. were the most common cause of food-borne infection outbreaks almost all over the world. 14 In recent years, the emergence and global dissemination of Salmonella and Shigella species resistant to ampicillin, chloramphenicol, tetracycline and co-trimoxazole have been increasingly documented in developing countries. 15 Infections with Shigella and Salmonella can be asymptomatic and can be treated with rehydration solutions unless the infection is by invasive strains. 16 Prescribing antibiotics might shorten the duration of diarrhea and control the organisms, which otherwise might continue to spread among people and in the environment, posing a public health concern. 17 Children are at high risk of these infections due to their weaker immune status and ease of contamination. 18 In developing countries, these infections have increased due to poor sanitation, poor personal hygiene and the lack of an appropriate food supply, which expose children to contamination. 19 Therefore, this study aimed to identify Shigella and Salmonella infections, antibiotic susceptibility and associated risk factors among children with diarrhea who visited Alamura Health Center in Southern Ethiopia.
Study area and period
The study was conducted in the Southern Nations, Nationalities and Peoples Region (SNNPR) at Hawassa Alamura Health Center from 1 April 2019 to 30 August 2019. Hawassa is the capital city of the SNNPR, located in the southern part of Ethiopia on the shores of Lake Hawassa, one of the Great Rift Valley lakes, around 270 km from Addis Ababa, the capital city of Ethiopia. The mean annual rainfall is about 950 mm, the temperature about 20°C and the humidity 70%-80%. The rainy season generally extends from June to October. The human population of Hawassa for 2015 was estimated at 351,469, with an annual growth rate of just over 4%. 20 Hawassa city has 7 sub-cities with 5 private hospitals, 1 general hospital, 1 comprehensive specialized hospital and 10 health centers. Alamura Health Center is located in the Tabor sub-city on the borderline between Fara and Hitata Kebele, near Alamura Mountain.
Study design and population
A facility-based cross-sectional study was conducted among children with diarrhea at Alamura Health Center. A convenience sampling technique was employed, in which diarrheic pediatric patients below the age of 14 years were included. They were considered for the study only after their parents or guardians gave the necessary consent and signed the document; participants were excluded if their parents were unwilling or refused to sign. All diarrheic pediatric patients who visited Alamura Health Center for diarrheal illness constituted the source population.
Sample size determination
The sample size was calculated using the single population proportion formula n = z²p(1−p)/d², where n = sample size, z = the standard value for a 95% confidence level (1.96), d = margin of error (5%) and p = the estimated prevalence of Shigella and Salmonella from a previous study (22.2%). 21 Therefore, the calculated sample size for this study was 263.
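A quick worked check of this formula is shown below; plugging in the stated values gives roughly 265, so the study's final figure of 263 presumably reflects rounding or a practical adjustment not detailed in the text.

```python
import math

z = 1.96   # standard value for a 95% confidence level
p = 0.222  # estimated prevalence from the previous study
d = 0.05   # margin of error

n = z**2 * p * (1 - p) / d**2
print(round(n, 1), math.ceil(n))  # ~265.4 -> 266 before any adjustment
```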
Variables of the study
The dependent variables were the presence of Salmonella and Shigella. The independent variables were socio-demographic factors, namely age, sex, place of residence, educational status of the mothers, marital status, family size, monthly income and occupation of the family. Clinical variables collected included history and type of diarrhea, malnutrition and vaccination status of the children. Behavioral factors were also considered, including drinking water source, hand-washing after toilet use, food/drink consumption before illness, storage of cooked food for later use, the habit of hand-washing before and after a meal, the washing habit for food containers and history of contact with domestic animals. These were assessed with a structured questionnaire.
Data collection
The socio-demographic and clinical data were collected after informing the parents/caregivers about the aim of the study. A face-to-face interview with a structured questionnaire was conducted with the parents or caretakers of the children who complained of diarrhea, after the parents signed the consent and the child gave assent.
Laboratory diagnosis
Stool was collected using a screw-cap container. The parents/caregivers were instructed to bring a fresh stool sample, free of contamination, within 30 min of collection. All stool specimens were placed into Cary-Blair transport medium and transported to the Microbiology Laboratory of Hawassa University Comprehensive Specialized Hospital (HUCSH). The stool was inoculated on prepared culture media, namely MacConkey agar, xylose lysine deoxycholate (XLD) agar and selenite F broth (Abtek, UK). The culture plates were incubated aerobically at 37°C for 24 h.
Bacterial identification
The colonies were examined morphologically for size, shape and ability to ferment lactose. Non-lactose-fermenting colonies, with H2S production for Salmonella and without H2S for Shigella, were picked for biochemical identification. The indole test, urease production, mannitol fermentation, hydrogen sulfide and gas production tests, citrate utilization test, motility test, carbohydrate fermentation test, lysine decarboxylase (LDC) test and oxidase test were used to identify the bacteria to the genus/species level. 22
Quality control
A pre-test was conducted on 5% of the questionnaires before the study. The validity and completeness of the data were verified daily. The sterility of culture media and biochemical tests was checked by overnight incubation of uninoculated media from each batch of preparation. Standard strains of Escherichia coli ATCC 25922 and Pseudomonas aeruginosa ATCC 27853 were used as internal quality controls for culture and antibiotic susceptibility testing.
Data analysis
Data were entered into the Statistical Package for the Social Sciences (SPSS) version 20 and analyzed to make inferences on the frequency of enteric pathogens associated with diarrhea and to show bacterial resistance patterns to locally prescribed antibiotics. Descriptive statistics were performed to obtain the frequencies of dependent and independent variables. Binary logistic regression analysis was conducted to identify real predictors of Shigella and Salmonella infection. The strength of association was presented as odds ratios with 95% confidence intervals (CIs), and a p value of ⩽0.05 was considered statistically significant.
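A minimal sketch of this analysis step in Python follows; the data are simulated, statsmodels stands in for SPSS, and the variable names are hypothetical codings of the two risk factors that turned out to be significant.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the study dataset (263 children).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "handwash_sometimes": rng.integers(0, 2, 263),
    "open_food_storage": rng.integers(0, 2, 263),
})
logit_p = -3 + 2.5 * df["handwash_sometimes"] + 2.0 * df["open_food_storage"]
df["infected"] = (rng.random(263) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Binary logistic regression; exponentiated coefficients give AORs.
X = sm.add_constant(df[["handwash_sometimes", "open_food_storage"]])
fit = sm.Logit(df["infected"], X).fit(disp=0)
print(np.exp(fit.params))      # adjusted odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```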
Ethical consideration
The study was conducted after obtaining formal permission from the Southern Nations, Nationalities and Peoples Regional Health Office, the Hawassa City Administration Health Office, and the Alamura Health Center manager and laboratory head. Patients were included in the study only after the parents or caretakers signed the consent letter. The culture and antimicrobial susceptibility results were communicated to the concerned bodies in the health center within 72 h.
Socio-demographic characteristics of the study subjects
A total of 263 diarrheic pediatric patients from Alamura Health Center were enrolled, with a mean age of 6.8 ± 3.7 years. The frequencies of the pediatric age ranges enrolled were 0–4 years, 88 (33.5%); 5–9 years, 103 (39.2%); and 10–14 years, 72 (27.4%). An almost equal ratio of males to females was enrolled (130:133). Regarding residence, most study subjects, 155 (58.9%), were from an urban area and 108 (41.1%) were from a rural area. Concerning the educational status of the mothers, most (81%) were educated, ranging from reading and writing to university graduate level, and the remaining 19% were illiterate. As to marital status, 178 (67.7%) of the mothers were married, 43 (16.3%) divorced and 41 (15.6%) widowed. The mean family size was 5.6 ± 1.9 persons, and the average family income was 3,743.3 ± 2,568.1 Ethiopian birr. Most participants had a large family size with a relatively low income of <1,500 birr per month, and among these, 12 (57.1%) of the positive cases were found (Table 2).
Salmonella typhi
A single Salmonella typhi was isolated from the patient and it was sensitive to ciprofloxacin, gentamicin, ceftazidime, chloramphenicol, cefuroxime, ceftriaxone and co-trimoxazole and resistant to ampicillin and tetracycline.
Other Shigella species

The Shigella spp. isolated were 100.0% sensitive to both ceftriaxone and ciprofloxacin, 77.8% to both ceftazidime and chloramphenicol, 66.7% to cefuroxime and 55.6% to gentamycin. Resistance was 81.8% for ampicillin, 72.7% for tetracycline and 55.6% for both co-trimoxazole and augmentin.
Associated risk factors
Among the study participants, 162 (61.6%) had a history of diarrhea; of the culture-positive children, 17 (81.0%) were in this group. Of all diarrheic children, the diarrhea was watery in 111 (42.2%), mucoid in 103 (39.2%) and bloody in 49 (18.6%); children with mucoid diarrhea accounted for more of the positive cases, 18 (85.7%), than the rest of the patients. Most children, 170 (64.6%), had diarrhea once a day, and most of the isolates, 11 (52.4%), came from these patients. Most study subjects, 159 (60.5%), used piped water for drinking, and 17 (81.0%) of the infected children were in this category.
The bivariate analyses indicated an association for families with a monthly income >1,500 birr (Table 2).
However, in the multivariate analysis, after adjustment, washing the children's hands after toilet use only some of the time (adjusted odds ratio (AOR) = 235.1, 95% CI: 20.9–2643.3, p = 0.000) and storing cooked food in an open container (AOR = 36.44, 95% CI: 5.82–228.06, p = 0.000) showed statistically significant associations with Shigella and Salmonella infection (p ⩽ 0.05). In contrast, factors such as the type of diarrhea, history of contact with domestic animals, the habit of hand-washing before and after a meal and the washing of food containers were not statistically significant (Table 2).
Discussion
The overall magnitude of Shigella and Salmonella isolated in this study was 8.0% (4.6%–11.4%), which is lower than in studies conducted in Tanzania, 42.7%, 24 Mozambique, 27.2%, 25 and Ethiopia, 22.3%, 26 22.2% 27 and 18.1%. 28 It is comparable with studies reported from Nekemte, Ethiopia, 9.2%, 28 and Southern Ethiopia, 8.3%. 21 Possible reasons for such differences may be the sample size, the method adopted and age variation. 8,21 In this study, Shigella spp. were isolated from 7.6% (4.6%–11.0%) of children, which is comparable to studies conducted in Burkina Faso, 5.8%, 29 Kenya, 7.4%, 30 Nigeria, 8%, 31 and Ethiopia, 8.3% 32 and 9.1%. 33 In contrast to our findings, lower rates of Shigella infection were reported from China, 1.4%, 34 Nekemte, 2.1%, 28 Ambo, 2.5%, 35 and Goba, 4.3%, 8 while higher prevalences were reported from Mekelle, 13.3%, 36 and Botswana, 21%. 37 This study distinguished Shigella dysenteriae from other Shigella spp. with the available biochemical tests; accordingly, 11 children (4.2%, 95% CI: 1.9%–6.8%) were infected with Shigella dysenteriae. This rate is lower than the report from Nepal, 14.5%, 38 but comparable with findings from Central Africa, 3%. 39 The other nine isolates (3.42%, 95% CI: 1.5%–5.7%) were other Shigella species, a value higher than results reported from China, 1.4%. 34 The single S. typhi (0.4%, 95% CI: 0%–1.1%) isolated in this study was in line with findings reported from Addis Ababa, 0% 41 and 1.1%. 50 In contrast, higher rates were reported from Sudan, 4.0%, 47 China, 4.3%, 34 Addis Ababa, 3.95%, 32 Kenya, 3.4%, 53 Turkey, 3%, 54 Gondar, 1.6%, 55 and Hawassa, 1.5%. 21 This difference might be due to sample size, climatic conditions and age differences. 8,21,41,56 Our study revealed that the highest rate of antibiotic resistance of Shigella spp. was against ampicillin, 81.8%, which is comparable with studies from different areas of Ethiopia: 70.1% from Jimma, 43 79.9% from Gondar, 57 86.7%, 28 and 88.9% from Mekelle. 58 Our study also showed relatively low resistance compared to findings from Nigeria, 90.5%, 59 Harar, 100%, 48 Jimma, 100%, 17 and Hawassa, 93%. 60 This may be due to widespread resistant strains in those countries. Resistance of Shigella spp. was also seen against tetracycline, 71.4%, which is comparable with findings reported from Harar, 70.6%, 48 Jimma, 63.6%, 43 and Mekelle, 77.8%. 49 This result was slightly lower than studies reported from Butajira, 82.4%, 52 Gondar, 86% 61 and 86%, 57 and Hawassa, 90%. 60 This may be due to the nature of strain susceptibility to tetracycline. Our results also indicated that 52.4% of isolates were resistant to co-trimoxazole, which is comparable with studies done in Hawassa, 56.0%, 50 Addis Ababa, 45.7%, 62 and Mekelle, 55.6%. 49 In contrast to our finding, higher results were reported from Gondar, 73.4%. 57 Several factors may contribute to resistance, which may be related to the potency and quality of antimicrobials and the distribution of resistant strains. 62 Studies have shown that Shigella is a global problem, especially in developing countries. 63,64 It is common in areas where living standards are very low and access to safe and adequate drinking water and proper waste disposal systems is often very limited or even absent. 17,32,57,61,65
Deprived access to a good latrine, poor sanitation and hygiene, poor hand-washing habits before and after meals and/or latrine use, and the absence of a proper sewage disposal system are responsible for typhoidal Salmonella infections. 17,21,32 Our study assessed risk factors for acquiring Shigella/Salmonella infection. Socio-demographic factors such as age group (5–9 years), sex, educational and marital status of the mothers, family size (4–5 persons) and monthly income (>1,500 birr) accounted for a higher percentage of infection; however, none of these variables was statistically associated. In agreement with our study, statistically insignificant associations with socio-demographic characteristics were reported from Addis Ababa 32 and Bahir Dar, 33 Ethiopia. In contrast to our study, age range 1–3 years from Bale, 8 Burkina Faso 66 and Mekelle, Ethiopia, 36 the educational status of the family (illiterate) from Gondar, 6 and family income in Thailand 67 were statistically associated, with p < 0.05.
The clinical variables showed high rates of infection associated with mucoid diarrhea, a history of diarrhea and absence of malnutrition; however, none of these variables was statistically associated. Contrasting with our finding, watery diarrhea was reported from Bahir Dar with high rates and a statistically significant association. 33 Similarly, a study from Ambo showed that mucoid diarrhea carried higher rates of infection. 35 Our study also showed that vaccinated children were highly affected, which disagrees with the study reported from Ambo, where children who were not vaccinated were at higher risk, with a significant association. 35 Host factors associated with malnutrition, such as a compromised immune system, environmental enteric dysfunction and the enteric microbiome, may predispose malnourished children to more severe disease. 68-71 Children with malnutrition may also be more likely to live in households of low socioeconomic status, where poor access to clean water, 72,73 sanitation and hygiene may expose them to greater fecal microbial loads and a higher risk of pathogens associated with mortality, such as Shigella species. 3 However, in our study, even though it was not statistically associated, high rates of infection were evident in malnourished children.
Behavioral factors such as the source of water (pipeline), washing hands after defecation only some of the time, consumption of food before illness, storing food in open containers for later use, not washing hands before and after meals, and cleaning cooking containers only some of the time were associated with infection. In the multivariate analysis, those who washed their hands only some of the time were at higher risk of infection compared with those who always practiced hand-washing. This is plausible because regular hand-washing with detergent is important for preventing Shigella and Salmonella transmission. Similarly, those who stored cooked food in an open container for later use were at 34.44 times higher risk of infection compared with those who kept the container closed (p ⩽ 0.05), in agreement with studies conducted in Arba Minch, Southern Ethiopia. 7,74 This can be explained by the transmission of these infections by flies, cockroaches and rodents in the kitchen; exposing food can therefore lead to diarrhea in children through bacterial contamination. [75][76][77][78][79]

Limitation of study

• Our study does not indicate the total magnitude of Salmonella and Shigella infection in Hawassa town.
• It does not identify bacteria at the species level due to a lack of anti-sera in the local market.
Conclusion
Our study indicated a high rate of shigellosis and a single Salmonella isolate among children with diarrhea. Isolates were resistant to ampicillin, augmentin and tetracycline, while ciprofloxacin, ceftriaxone, ceftazidime, gentamycin, chloramphenicol, cefuroxime and co-trimoxazole remained relatively effective. It was also found that those who washed their hands after defecation only some of the time, and those who stored food for later use in an open container, were at higher risk of infection. Therefore, to alleviate this infection, the concerned bodies should provide health education on hand-washing after defecation and on storing food in closed containers for later use.
Protective effects and potential mechanisms of fermented egg-milk peptides on the damaged intestinal barrier
Introduction Fermented egg-milk peptides (FEMPs) can enhance the colonic intestinal barrier and upregulate the expression of zonula occludens-1 and mucin 2. In addition, the underlying biological mechanism and the targets FEMPs may regulate were analyzed in our study. Methods Herein, immunofluorescence and western blotting were used to evaluate repair of the intestinal barrier. Network pharmacology analysis and bioinformatics methods were performed to investigate the targets and pathways affected by FEMPs. Results and discussion Animal experiments showed that FEMPs could repair intestinal damage and enhance the expression of the two key proteins. The pharmacological results revealed that FEMPs could regulate targets related to kinase activity, such as AKT, CASP, RAF, and GSK, and that these targets can interact with each other. GO analysis indicated that the regulated targets participate in kinase activity and metabolic processes. KEGG enrichment revealed that the core targets were enriched in pathways related to cell apoptosis and other important processes. Molecular docking demonstrated that FEMPs could bind to the key target AKT via hydrogen bond interactions. Our study combined an in vivo experiment with in silico methods and investigated the interaction between peptides and targets in a multi-target, multi-pathway pattern, offering a new perspective on the functional validation and potential application of bioactive peptides.
Introduction
The intestinal tract, one of the body's largest luminal interaction areas, makes a major contribution to protecting the body from the environment. The intestinal barrier is vital in regulating the immune system, improving nutrient absorption, and maintaining intestinal health (1). Among the treatments for intestinal diseases, many potential therapies aim to develop new drugs that protect the intestinal barrier and repair damage to the intestinal mucosa. It is essential to keep the intestinal barrier intact and healthy. Weakening of the intestinal barrier has been linked to fat, bile acids, emulsifiers, and gliadin (2). Damage to the intestinal barrier can cause abdominal pain, diarrhea, and indigestion. As previously reviewed, the intestinal barrier is broadly associated with other chronic metabolic diseases (3), such as upper gastrointestinal diseases, inflammatory bowel disease (IBD), celiac disease, and non-alcoholic fatty liver disease. In patients with IBD, the damaged intestinal barrier upregulates the expression of pro-inflammatory cytokines, such as tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β), which exacerbates the inflammatory response (4,5). Thus, the intestinal barrier is a crucial point linked to IBD. Research on celiac disease indicated that increased intestinal permeability might cause negative secondary effects, such as vicious cycles of intestinal cell damage (6). In addition to the diseases listed above, the intestinal barrier is also related to obesity and diabetes. Changes in the intestinal barrier have been proposed as a pathological cause of obesity. Short-chain fatty acids, a class of intestinal microbial metabolites, were found to regulate intestinal dysfunction and ameliorate obesity (7). Another study reported that disruption of the intestinal biological barrier could aggravate the symptoms of diabetes (8). Given the association between the intestinal barrier and other metabolic diseases, maintaining the stability of, and repairing, the intestinal barrier is of great significance. Tight junctions, mucus, and the mucosal epithelium, which together constitute the intestinal barrier (3), underpin its powerful protective function. Zonula occludens-1 (ZO-1), one of the vital tight junction (TJ) proteins, was reported to be integral to intestinal barrier function and mucosal permeability (9). Mucin 2 (MUC-2), a vital component of mucus, is secreted by the intestinal goblet cells (10). These two proteins have therefore become indicators of whether the intestinal structure is complete and the barrier function is sound.
Given the consequences of the intestinal barrier for the health of the body, interest in functional and effective food ingredients has grown. Protein derived from egg white has been reported to repair the damaged intestinal barrier (11). Bioactive peptides, obtained through hydrolysis and fermentation (12), have the potential to serve as functional ingredients that enhance the intestinal barrier and treat intestinal diseases. Peptides derived from eggs have been widely reported to have biological functions and could be functional ingredients for enhancing body health (13). As expected, egg white peptides have been reported to act on the intestinal wall, namely through intestinal barrier repair and gut microbiota regulation. Ge demonstrated the alleviative effect of egg white peptides on the colon in colitis mice, confirming that egg white peptides could repair the mucosa and intestinal structure of the damaged tissue (14). Our previous research also investigated the protective function of fermented egg milk against colitis induced by dextran sulfate sodium (DSS) in mice (15). However, the underlying mechanism and potential targets of the action of fermented egg-milk peptides (FEMPs) on the damaged intestinal barrier require further research.
In extensive research on the mechanisms by which bioactive peptides regulate the intestinal barrier, researchers have mostly examined a single target and pathway. Unlike this traditional approach, network pharmacology, a novel and powerful tool, provides evidence for possible mechanisms of action based on multi-target and multi-pathway disease regulation (16). It has been utilized in research on colonic diseases (17,18). Molecular docking is a justified and proven instrument for analyzing the interaction between receptors and ligands, and it has recently been widely utilized in computer-aided prediction of peptides with bioactive functions (19). Herein, we used the ideas and methods of network pharmacology to analyze the potential targets FEMPs could affect, and the interactions and relationships between these targets were further investigated. Gene ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis, central methods of network pharmacology, were used to illustrate the function of the core targets.
In our study, the DSS-induced colitis mouse model was applied to evaluate repair of the intestinal barrier, with ZO-1 and MUC-2 as the evaluation markers. In addition, the potential pharmacological mechanism underlying the effect of FEMPs on the intestinal barrier was investigated via network pharmacology and molecular docking. This study combines in vivo assessment with in silico evaluation, which could provide a new perspective for intestinal barrier enhancement and colonic disease treatment and broaden the horizons of functional applications of egg products.
Animal experiment design
A total of 60 Balb/c mice (male, 8 weeks old, SPF grade) were obtained from Beijing Charles River Co., Ltd (Beijing, China) and housed in the Animal Model Laboratory Building at Jilin University. The housing conditions were 20-23 °C, 40-70% humidity, and a 12-h light-dark cycle. Before the start of the experiment, the mice were allowed to acclimatize to the environment with free access to food and water. All animal procedures were implemented in accordance with the guidelines for laboratory animal care and use at Jilin University, and the experiments were reviewed and approved by the Jilin University animal ethics committee (Approval No. 20200483). All procedures requiring anesthesia were performed under isoflurane.
The 60 mice (22-24 g) were allocated to five groups: CK (control check), CK + FDP (control check, drinking FEMPs freely), DSS + FDW (dextran sulfate sodium, drinking water freely), DSS + GP (DSS, gavaged with FEMPs), and DSS + FDP (DSS, drinking FEMPs freely). The overall experiment was divided into an intestinal damage-induction period (period 1, days 0-7) and a treatment period (period 2, days 7-21). During period 1, the DSS + FDW, DSS + GP, and DSS + FDP groups drank 3% DSS solution instead of water to establish the intestinal-damage colitis model, while the CK and CK + FDP groups had free access to food and water. During period 2, the DSS + GP group received 200 mg/kg/day FEMPs by gavage; the CK + FDP and DSS + FDP groups drank FEMPs freely for 4 h and water for the other 20 h per day; and the DSS + FDW group drank water. All groups were given free access to mouse food, and the body condition of all mice was recorded throughout the experiment. The mice were euthanized according to animal procedures and guidelines. The colons were harvested and weighed, then cryopreserved at −80 °C.
Immunohistochemistry staining
Sections were deparaffinized and rehydrated with xylene and absolute ethanol before being placed in an EDTA antigen-retrieval buffer (pH 8.0) and maintained at a sub-boiling temperature for 8 min; the sections were then washed three times with PBS (pH 7.4). Next, 3% BSA was added to cover the marked tissue and block non-specific binding for 30 min. After that, the primary antibody and the secondary antibody were applied. DAPI counterstaining of the nuclei was carried out, the liquid was carefully discarded, and the slides were covered with anti-fade mounting medium. The incubated sections were examined by fluorescence microscopy.
Western blot analysis
Western blot analysis followed the protocol of a previous study with some modifications (20). The key proteins (ZO-1 and MUC-2) in colon tissue lysates were resolved on an SDS-polyacrylamide gel and transferred electrophoretically to a polyvinylidene difluoride (PVDF) membrane. The membrane was rinsed briefly in TBST and then blocked for 30 min with blocking buffer (5% milk) at room temperature. The membrane was incubated with appropriate dilutions of primary antibodies, followed by incubation with a 1:5000 dilution of conjugated secondary antibody in blocking buffer at room temperature for 30 min. The membrane was then washed three times (5 min each) with TBST. Chemiluminescent images were acquired using darkroom development techniques. ECL was performed according to the manufacturer's instructions, adding ECL reagents for 1-2 min at room temperature, and the western blot images were captured at various exposure times.
FEMPs information preparation and target gene prediction
The sequence information for FEMPs was based on our laboratory's database and in silico digestion. The ExPASy PeptideCutter program (https://web.expasy.org/peptide_cutter/) was used to generate the FEMPs' sequence information. All proteins and peptides in fermented egg milk, together with the sequences from the LC-MS/MS results of our previous study (15), were uploaded to the PeptideCutter database, with pepsin and trypsin selected as the digestive enzymes. All FEMP sequences were then uploaded to the PharmMapper platform (http://www.lilab-ecust.cn/pharmmapper/index.php) to screen the targets that FEMPs could regulate (21). Targets were recorded according to the Fit score, and the UniProt platform (http://www.uniprot.org/) was used to filter for human targets. Intestinal barrier-related targets were obtained from the GeneCards database (https://www.genecards.org), and targets reported in references on intestinal health were used to supplement the target set.
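As a rough illustration of the in-silico digestion step, the sketch below applies heavily simplified cleavage rules in Python; ExPASy PeptideCutter implements far more nuanced, position-dependent rules, and the parent sequence here is invented purely for demonstration.

```python
import re

# Heavily simplified cleavage rules (assumptions for illustration only);
# PeptideCutter's actual rules are more nuanced and position-dependent.
CLEAVAGE_RULES = {
    "trypsin": r"(?<=[KR])(?!P)",  # cut after K or R, but not before P
    "pepsin": r"(?<=[FL])",        # pepsin (pH > 2), crudely: cut after F or L
}

def digest(sequence: str, enzyme: str) -> list:
    """Split a protein sequence at the enzyme's cleavage sites."""
    return [frag for frag in re.split(CLEAVAGE_RULES[enzyme], sequence) if frag]

# Hypothetical egg-derived parent sequence, not from the study's dataset
parent = "MKLVFAIRSLGK"
print(digest(parent, "trypsin"))  # ['MK', 'LVFAIR', 'SLGK']
```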
Construction of the protein-protein interaction (PPI) network
To clarify whether FEMPs could regulate the relationships between targets, we established and analyzed a PPI network. The STRING database (http://string-db.org/) (22), a powerful platform for analyzing known and predicted protein-protein interactions, was used to establish functional protein association networks based on computational prediction, knowledge transfer between organisms, and interactions aggregated from other related databases. The top 53 targets were input to the STRING database to analyze potential co-interactions, and the STRING database produced the PPI network figure.
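A minimal sketch of the downstream degree analysis, assuming a toy edge list loosely modeled on the interactions described below (the edges are illustrative, not the full STRING export):

```python
import networkx as nx

# Toy subset of PPI edges, loosely modeled on interactions named in the text
edges = [("AKT1", "GSK3B"), ("AKT1", "PIK3R1"), ("AKT1", "SRC"),
         ("CASP3", "XIAP"), ("PIK3R1", "SRC"), ("INSR", "PIK3R1"),
         ("INSR", "PTPN1"), ("PTPN1", "SRC"), ("MMP9", "SRC")]
g = nx.Graph(edges)

# Rank nodes by degree; hubs such as AKT1 emerge as candidate core targets
for node, degree in sorted(g.degree, key=lambda kv: kv[1], reverse=True):
    print(node, degree)
```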
GO and KEGG enrichment analysis
To understand the underlying effects and mechanisms of the core targets in intestinal barrier function, we performed GO and KEGG enrichment analyses with the Metascape software (https://metascape.org) (23). In the custom analysis process, Homo sapiens was set as the analysis species, and GO biological processes (BP), GO cellular components (CC), and GO molecular functions (MF) were selected. The analysis parameters were: min overlap = 3, p-value cutoff = 0.01, and min enrichment = 1.5. All results were visualized with a free online bioinformatics tool (http://www.bioinformatics.com.cn/).
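Under the hood, enrichment tools of this kind typically rely on a one-sided hypergeometric test. A minimal sketch, with all counts assumed purely for illustration:

```python
from scipy.stats import hypergeom

def term_enrichment(hits: int, n_targets: int, term_size: int, background: int):
    """One-sided hypergeometric test plus fold enrichment, as in most GO/KEGG tools."""
    # P(X >= hits) when drawing n_targets genes from the background
    p = hypergeom.sf(hits - 1, background, term_size, n_targets)
    fold = (hits / n_targets) / (term_size / background)
    return p, fold

# Assumed counts: 10 of 52 targets fall in a 350-gene pathway,
# against a 20,000-gene background (not the paper's actual numbers)
p, fold = term_enrichment(hits=10, n_targets=52, term_size=350, background=20000)
print(f"p = {p:.2e}, fold enrichment = {fold:.1f}")
```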
Pathway regulation
FEMPs' regulation of the pathway was analyzed with the Metascape software, and the related genes were marked on the KEGG pathway website (https://www.genome.jp/kegg/pathway.html). The targets FEMPs could influence in the pathway were marked in a different color, and the figure was prepared with Adobe Illustrator software.
Molecular docking
Molecular docking, a computational tool widely used to reveal the relationships and interactions between receptors and ligands, was used here to analyze the action of FEMPs on the targets; the calculation has been extensively utilized and validated for predicting the interaction energy between molecules. In the current research, molecular docking was performed with the AutoDock Tools and AutoDock Vina software. Based on the node degrees from the PPI network analysis (Supplementary Table S1), AKT showed the highest degree and was therefore selected as the receptor, with the FEMPs as ligands. All receptor and ligand files in PDBQT format were set as input for the docking experiments. The crystal structure of AKT (1UNQ, PDB doi: 10.2210/pdb1UNQ/pdb) was downloaded from the RCSB Protein Data Bank (http://www.rcsb.org/pdb), with the organism set to "Homo". All water molecules were removed from the crystal structure of AKT (1UNQ) in preparation for docking. Binding site detection followed the literature. A grid box of 25 × 25 × 25 Å was placed around the binding site location, and the grid was generated to suit peptide docking. Peptides were docked as flexible ligands. The hydrogen bonds and cooperative interactions between the AKT (1UNQ) residues and the ligands were visualized and analyzed.
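A minimal sketch of how such a Vina run can be driven from Python; the file names and box center are assumptions for illustration, while the 25 Å box size follows the text. Binding scores can afterwards be read from the "REMARK VINA RESULT" lines of the output PDBQT file.

```python
import subprocess

# Hypothetical file names; receptor and ligand must already be in PDBQT
# format (prepared with AutoDock Tools, waters removed, as described above).
cmd = [
    "vina",
    "--receptor", "akt_1unq.pdbqt",
    "--ligand", "esqnk.pdbqt",
    "--center_x", "7.5", "--center_y", "3.0", "--center_z", "12.0",  # assumed box center
    "--size_x", "25", "--size_y", "25", "--size_z", "25",            # 25 A grid, as in the text
    "--out", "esqnk_docked.pdbqt",
]
subprocess.run(cmd, check=True)
```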
Results
FEMPs enhance the expression of ZO-1 and MUC-2

Previous references have confirmed the significance and effect of TJs in the occurrence and persistence of ulcerative colitis (UC) (24,25). ZO-1, a vital peripheral membrane protein in the colon, maintains the tight junction network and can enhance the colonic barrier by linking claudins, occludin, and other proteins. MUC-2 is the vital component of mucus, forming an indispensable barrier against pathogens in intestinal tissue, and has been reported to preserve the mucus layers of colonic tissue. The content of MUC-2 is also related to the number of intestinal goblet cells (26), which may benefit the immune function of the colon. Our study investigated the expression of these two vital proteins by immunohistochemistry staining and western blotting.
The immunohistochemistry staining revealed the localization of the two vital proteins. As Figure 1 depicts, the nuclei of intestinal epithelial cells are blue, ZO-1 is red, and MUC-2 is green; the intensity and area of immunofluorescence staining represent the expression levels of the two proteins. In the CK and CK + FDP groups, ZO-1 and MUC-2 were distributed around the nuclei, indicating normal expression of the two proteins; the intestinal structure was complete and sound, and intestinal barrier function could be normal and robust. By contrast, DSS treatment induced an obvious loss of ZO-1 and a considerable decrease in MUC-2 expression, showing that the gut structure was damaged and weakened. Without the protection of the intestinal barrier, gut function might be impaired. FEMP treatment repaired the colonic damage induced by DSS and enhanced the expression of the two vital proteins: the fluorescence intensity of ZO-1 and MUC-2 was increased, indicating that the localization and expression of these two significant proteins were restored.
To further determine the relationship between the observed proteins and FEMP treatment in experimental colitis, western blotting was performed on proteins extracted from the colons of the mice. As shown in Figure 2, both ZO-1 and MUC-2 displayed a decreasing trend in the DSS group, indicating that these two proteins were weakened in the colitic colon. The protein levels in the DSS + GP and DSS + FDP groups were restored, indicating that FEMPs might promote increased levels of ZO-1 and MUC-2. Both the immunohistochemistry staining and the western blotting revealed the positive effect of FEMPs on the damaged intestinal barrier.
Analysis of the PPI network
To acquire information on the targets that FEMPs could regulate, we performed target matching on the PharmMapper website. Following the network pharmacology method, a total of 52 targets (the top 52) were included in the FEMPs target database. The STRING website was used to build the PPI network, yielding a network of 52 nodes and 223 edges with an enrichment p-value of 1.0e−16 (Figure 3). In the PPI network, nodes represent targets and edges represent interactions; the more edges a node has, the more important its role in the network. Notably, the targets AKT1, CASP3, SRC, HPGDS, and MMP9 were highly connected with other targets, indicating that they could generate more interactions and play a linking role. The node degree analysis showed the same result: the above-mentioned targets have the potential to affect other targets (Supplementary Table S1).
To further characterize the interactions in the PPI network, the edges were analyzed with the STRING database. The edges with a combined score greater than 0.90 are listed in Table 1. The interactions between CASP3 and XIAP, GLO1 and HAGH, INSR and PTPN1, and PIK3R1 and SRC were the strongest, each with a combined score of 0.999. In addition, the co-interactions between PTPN1 and SRC, AKT1 and GSK3B, AKT1 and PIK3R1, and INSR and PIK3R1 revealed an intensive influence in the PPI network. All of these results demonstrated the potential interactions among the targets.
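A minimal sketch of the score-filtering step, assuming a small toy table in place of the full STRING export (note that raw STRING downloads report combined scores on a 0-1000 scale, which would first be divided by 1000):

```python
import pandas as pd

# Toy edge table modeled on Table 1 (values illustrative)
edges = pd.DataFrame({
    "protein1": ["CASP3", "GLO1", "INSR", "PIK3R1", "AKT1"],
    "protein2": ["XIAP", "HAGH", "PTPN1", "SRC", "GSK3B"],
    "combined_score": [0.999, 0.999, 0.999, 0.999, 0.982],
})

# Keep only strongly supported interactions, as in the text
strong = edges[edges["combined_score"] > 0.90].sort_values(
    "combined_score", ascending=False)
print(strong.to_string(index=False))
```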
GO and KEGG enrichment analysis
The GO and KEGG enrichment analyses of the above targets were carried out with the Metascape software to identify their potential biological functions across BP, CC, and MF. The top 35 results (14 highly enriched in BP, 10 in CC, and 11 in MF) are shown in Figure 4. For BP, the core targets participated in endopeptidase activity involved in the apoptotic process (enrichment score 52.8), the glutathione metabolic process (50.7), and the regulation of cysteine-type endopeptidase activity (47.8). The results also highlighted the response to peptides and the peptide metabolic process, which corresponds to the FEMPs. For CC, the tertiary granule lumen, ficolin-1-rich granule lumen, and ruffle membrane were included, with enrichment scores of 31.7, 23.4, and 18.0, respectively, indicating that the targets could participate in the regulation of cell structures. The MF enrichment results showed that the core targets could affect regulatory activity involved in the apoptotic process (enrichment score 43.5), metallopeptidase activity (22.0), and protein serine/threonine/tyrosine kinase activity (10.4). For the KEGG pathway enrichment analysis, various signaling pathways related to cell proliferation, differentiation, morphogenesis, and apoptosis were identified; 23 pathways met the enrichment criteria, as listed in Figure 5. These pathways might influence the intestinal epithelial cells and intestinal barrier functions examined in this study. The PI3K-Akt signaling pathway, with a p-value of 4.41e−10, was enriched with the largest count of 10, indicating that the targets participate most in this pathway. The VEGF signaling pathway and EGFR tyrosine kinase inhibitor resistance were also listed, with enrichment scores of 59.06 and 51.46, respectively. Meanwhile, many pathways related to intestinal health, such as the TLR signaling pathway (27), the MAPK signaling pathway (28), and the Rap1 signaling pathway, were enriched with high enrichment scores.
Regulation of the PI3K-Akt signaling pathway
As a central signal transduction pathway in many biological and physiological processes, the PI3K-Akt signaling pathway is involved in cell proliferation, morphology, apoptosis, migration, and synthesis (29). It has also previously been linked to intestinal health and the colonic mucosa (30,31). Our research confirmed the connection between intestinal barrier function and the PI3K-Akt signaling pathway: the core targets of FEMPs could regulate and participate extensively in this pathway. As Figure 6 depicts, cytokines, GF, RTK, FAK, PI3K, AKT, RAF1, GSK3, 4EBPs, and EIF4E are included in the PI3K-Akt signaling pathway. The targets that FEMPs could regulate were distributed both upstream and downstream in the pathway, potentially influencing focal adhesion, protein synthesis, and cell cycle progression.
Molecular docking
According to the PPI network analysis, the target AKT had the highest degree, indicating that it might play a significant role in the network and readily affect other targets. Thus, AKT was chosen as the receptor for molecular docking, a widely known and powerful method used here to demonstrate the interaction between the target and the ligands; many studies have utilized molecular docking to define the interactions between targets and peptides (19,32,33). The crystal structure of AKT (1UNQ, PDB doi: 10.2210/pdb1UNQ/pdb) was downloaded from the RCSB Protein Data Bank (http://www.rcsb.org/pdb) for the "Homo" organism (Figure 7A), and water molecules were removed during the docking process. After completion of the calculation, the binding site was constructed and the ligands were set to bind at the site (Figure 7B). As the figure shows, the peptide was embedded in the active cavity at the docking site of the protein receptor and was fixed there by chemical forces, forming a stable complex. The docking energy numerically reflects how tightly the ligand binds to the receptor. The docking energies between the peptides and the target were recorded: FEMPs could bind to the receptor in a stable manner, and the top 15 FEMPs with their interaction information are listed in Table 2. The docking energies ranged from −8.576 to −6.048 kcal/mol. The peptide with the sequence ESQNK showed a docking energy of −8.576 kcal/mol, forming nine hydrogen bonds and two salt bridges in the complex.

FIGURE. Gene ontology (GO) enrichment of key targets. Biological processes, cellular components, and molecular functions are shown, with enrichment scores marked in the picture.

In the interaction analysis of the complex, many residues contributed to the chemical bonds. The interactions between the peptide ESQNK and the target are visualized in Figure 7C: GLU 114 with the NH3+ group and LYS 111 with the O− group each formed one salt bridge, and six residues (GLU 114, LEU 111, ALA 58, GLN 59, CYS 60, and ARG 76) formed hydrogen bonds with other atoms.
Discussion
In recent years, the intestine's role as a protective organ has brought it to the forefront of medical research, and intestinal immune capacity has become a prominent topic (34,35). As a crucial part of the intestinal tract, the intestinal mucosal barrier contributes greatly to digestion, absorption, and barrier protection. Increasing evidence has confirmed that the intestinal barrier not only acts as a medium for absorbing and exchanging nutrients and other substances but also plays an essential role in keeping external antigens and harmful agents from entering the body (1). The intestine is also closely related to the immune system: previous studies have reported that the intestinal mucosal barrier is associated with the toll-like receptor (TLR) signaling pathway, which represents significant processes of the immune system and its functions (36). Tight junctions, made up of zonula occludens-1 (ZO-1), claudin, and occludin, have been reported to contribute beneficially to the structure and function of the intestinal barrier. In the current study, ZO-1 and MUC-2 were used as the evaluation indicators to assess damage to the intestinal barrier, and DSS-induced colitis was employed as the model. Bioactive peptides derived from raw materials or other food ingredients have been reported to have intestinal protective functions. Our previous research found that fermented egg-milk peptides could alleviate intestinal inflammation symptoms and re-establish the gut microbiota in colitis mice (15).

FIGURE. The Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis showed that various targets of the PI3K-Akt signaling pathway were tightly associated with the peptides' pharmaceutical actions. The red nodes represent the genes that could be regulated or affected by fermented egg-milk peptides (FEMPs).

However, the protective function on the damaged intestinal tissue barrier and the underlying biological mechanism were not yet clear. In the current study, the core idea of network pharmacology was used to analyze the potential intestinal barrier-related targets of FEMPs. Network pharmacology, a widely used method built on the idea of "multi-targets and multi-pathways", has naturally become an unbiased strategy for uncovering the underlying modes of action of natural foods and their functional ingredients (40). We mapped the possible interactions between the targets not only to investigate the application potential of the functional ingredients but also to characterize their possible interactions, thereby accelerating the development process (41). FEMPs, with their characteristic multi-component composition, were well suited to network pharmacology. Herein, the colitis mouse model and bioinformatics methods, namely network pharmacology and molecular docking, were co-utilized to uncover the pharmacological mechanism that FEMPs could exert on the targets. This could provide a new view of intestinal barrier restoration and colonic disease treatment, and broadens the horizons of functional applications of egg products. The expression and localization of key proteins such as ZO-1 and MUC-2 could influence the function and condition of the colon. To further demonstrate the effect of FEMPs on the reconstruction of the two key proteins, immunofluorescence analysis, a commonly used method of evaluating intestinal damage, was performed here. Many studies have used immunofluorescence to analyze the localization, distribution, and function of such proteins in the intestine. Bian and colleagues used immunofluorescence analysis to clarify the effect of Akkermansia muciniphila on two important markers of the intestinal barrier, ZO-1 and occludin (42). Kim performed immunofluorescence to analyze the influence of short-chain fatty acids (SCFAs) on ZO-1 in the colon tissue of inflammatory mice (43). In addition to immunofluorescence analysis, we also employed western blotting to detect the expression of these two vital proteins. The results showed that FEMPs could enhance the expression of ZO-1 and MUC-2, which benefited the construction of the intestinal barrier.
Many studies have shown that bioactive peptides can improve the intestinal barrier and regulate the expression of ZO-1 and MUC-2. Zou investigated the protective function of tissue factor-related peptides on the intestinal tight junction (44). The effect of shrimp peptide on the intestinal barrier of cyclophosphamide-treated mice was examined, and the results revealed that the peptide could increase tight junction-associated proteins (45). Our study likewise demonstrated the beneficial influence of FEMPs on the repair and reconstruction of the damaged intestinal barrier in DSS-induced colitis mice, making the study of bioactive peptides and the intestinal barrier more complete and holistic. Based on the idea of multi-targets and multi-pathways, network pharmacology, a novel method for analyzing the pharmacological effects of multi-ingredient compounds, was performed here to further define the regulatory role of FEMPs. PPI network, GO, and KEGG analyses were developed in this research to investigate the underlying biological mechanisms. The PPI network enrichment results revealed that AKT, CASP3, SRC, HPGDS, and MMP9 play vital roles, and many references have confirmed the significance of such targets. Akt acts as a central regulator of cellular anti-apoptotic effects in a majority of diseases and might influence the proliferation and metabolism of intestinal epithelial cells (46). CASP3 is a key enzyme regulating apoptosis. SRC may regulate cell growth, differentiation, and survival through signal transduction, affecting cell adhesion, migration, and invasion. The enrichment of such significant targets in the PPI network indicated that the FEMPs target set and its interactions were complete and comprehensive. GO analysis (BP) revealed that the targets could regulate endopeptidase activity associated with the apoptotic process, the glutathione metabolic process, and the regulation of cysteine-type endopeptidase activity; a previous paper demonstrated the relationship between these biological processes and intestinal epithelial cells (47). For CC, the biological functions included the tertiary granule lumen and ruffle membrane. The MF results highlighted several kinds of kinase activity. These results suggest that FEMPs can be a functional ingredient applied to intestinal damage by targeting kinases, consistent with Yan's paper (48). KEGG analysis revealed that the core targets mostly participated in the PI3K-Akt pathway, focal adhesion, the TLR signaling pathway, the MAPK signaling pathway, EGFR tyrosine kinase inhibitor resistance, and the VEGF signaling pathway. These pathways are related to the intestinal barrier and colonic health and could regulate vital biological processes of intestinal epithelial cells.
Previous references have reported that the PI3K/Akt pathway is related to the proliferation of transformed intestinal epithelial cells (49,50). Bian and colleagues also demonstrated that the PI3K/Akt signaling pathway could affect the distribution of ZO-1 in epithelial cells (51); our results are consistent with these papers. TLR, as earlier research described, could mediate intestinal barrier breakdown (52), and our KEGG results likewise demonstrated the relationship between TLR and the intestinal barrier. The MAPK signaling pathway, according to Zheng's paper, is linked to intestinal barrier disruption and colonic inflammation (53). In brief, we speculate that such pathways could act on the key proteins ZO-1 and MUC-2 in the colon and thereby play important roles in the regulation of intestinal barrier function.
To further examine the interaction between FEMPs and the key protein, molecular docking, a powerful and widely used method, was employed in the current study; it is a well-established, computation-based strategy for predicting how peptides interact with a core target (19). Akt, with the highest degree in our study, was used as the receptor in the molecular docking. Akt can affect the transcriptional regulation of cellular genetic information as well as cell survival, metabolism, differentiation, growth, migration, and angiogenesis. It also has a close relationship with the intestinal barrier, and regulation of its activity is seen as an effective way to enhance barrier function (54). A reliable paper demonstrated the regulatory style of "Akt-in" peptides and described their binding sites (46). As expected, the FEMPs bound to the receptor with low docking scores and low energies in our study, indicating that FEMPs can form stable complexes with the receptor.
Moreover, hydrogen bonds and salt bridges play vital roles in maintaining the ligand-receptor structure. According to the results, we infer that FEMPs may bind closely to Akt, occupying its active site and affecting its enzymatic activity. This may enhance the distribution and expression of tight junction proteins in intestinal epithelial cells and result in repair of the damaged intestinal barrier.
Conclusion
In this study, the protective function and underlying targets of FEMPs on the damaged intestinal barrier were investigated using immunofluorescence analysis, western blotting, and network pharmacology. The potential mechanism appears to involve peptidase activity in the apoptotic process, serine/tyrosine kinase activity, and cellular metabolism-related pathways such as the PI3K-Akt, TLR, and MAPK signaling pathways. In addition, FEMPs could bind the key protein Akt via hydrogen bonds and salt bridges. Based on this multi-target, multi-pathway analysis, this study offers evidence for the regulation of the damaged intestinal barrier by peptides and provides a new perspective for research on bioactive peptides and their biological functions.
Data availability statement
The original contributions presented in this study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by Animal Ethics Committee of Jilin University (Approval No.20200483).
Author contributions
SL: writing-original draft preparation. QY, XD, SL, and XL: data curation and formal analysis. ZD, TZ, and XS: writing-reviewing and editing. MX: sample collection. JL and FP: supervision. TZ, JL, and FP: conceptualization. All authors contributed to the article and approved the submitted version.
Identification of a Cluster of Extended-spectrum Beta-Lactamase–Producing Klebsiella pneumoniae Sequence Type 101 Isolated From Food and Humans
Abstract We report a cluster of extended-spectrum beta-lactamase (ESBL)-producing Klebsiella pneumoniae sequence type 101, derived from 1 poultry and 2 clinical samples collected within the setting of a prospective study designed to determine the diversity and migration of ESBL-producing Enterobacterales between humans, foodstuffs, and wastewater.
Multiple studies and outbreak reports point to health-care settings as the most important reservoir for ongoing transmission of multidrug-resistant Klebsiella pneumoniae, with only a few reports pointing to the food chain as another source [1,2]. To further investigate whether such strains and their resistance genes are predominantly acquired in the community or within health-care settings, we designed a prospective study to determine the genetic relatedness of extended-spectrum beta-lactamase (ESBL)-producing Enterobacterales and their mobile genetic elements recovered from clinical samples, foodstuffs, and wastewater [3]. Within this setting, we identified a mixed cluster of ESBL-producing K. pneumoniae isolates of sequence type (ST) 101, derived from 1 poultry and 2 clinical samples. This high-risk ST is associated with numerous hospital outbreaks [4][5][6][7], but has not previously been reported from food samples. We here report the epidemiological context and the detailed genetic analyses of this cluster; compare it to international, publicly available chromosomal and plasmid sequences; and discuss its contribution to our understanding of the epidemiology of ESBL-producing K. pneumoniae.
METHODS
Over a 12-month period (June 2017-May 2018), a prospective study [3] (ClinicalTrials.gov identifier: NCT03465683) designed to study transmission of extended-spectrum beta-lactamase-producing Enterobacterales (ESBL-PE) was carried out in Basel, Switzerland. ESBL-PE were systematically recovered from samples collected during routine clinical care at the University Hospital Basel, while wastewater and foodstuff samples were collected monthly at predefined locations throughout the city. Whole-genome sequencing was performed on all ESBL-PE isolates collected from wastewater, foodstuff, and clinical samples (1 isolate per species from each body site per hospital stay) on an Illumina NextSeq500/550, and genetic relatedness was assessed via core genome multi-locus sequence typing (cgMLST). During the study period, 1 mixed cluster of ESBL-producing K. pneumoniae, isolated from 1 food sample and 2 clinical samples, was identified, applying the definition of fewer than 15 allelic differences across the 2358 genes analyzed. The isolates belonging to this cluster were additionally sequenced with Oxford Nanopore technology. The chromosomes and plasmids were compared, and the resistance genes and replicons were identified. The Supplementary Material further details the methodological approaches.
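A minimal sketch of the cgMLST distance computation underlying the <15-allele cluster definition; the profiles below are randomly generated stand-ins for real allele calls, with 0 treated as a missing locus (an assumption about the encoding):

```python
import numpy as np

def allelic_distance(profile_a, profile_b):
    """Count loci with differing allele calls, ignoring loci
    missing (encoded as 0) in either cgMLST profile."""
    a, b = np.asarray(profile_a), np.asarray(profile_b)
    called = (a > 0) & (b > 0)
    return int(np.sum(a[called] != b[called]))

# Toy 2358-locus profiles: the 'food' profile differs at 6 loci
rng = np.random.default_rng(0)
clinical = rng.integers(1, 50, size=2358)
food = clinical.copy()
food[rng.choice(2358, size=6, replace=False)] += 1

d = allelic_distance(clinical, food)
print(d, "-> same cluster" if d < 15 else "-> different cluster")
```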
RESULTS
ESBL-producing K. pneumoniae ST101 was recovered from 1 chicken meat sample that originated from Switzerland and was bought from a supermarket in Basel on 23 June 2017 and from 2 rectal swab samples collected on 9 July and 13 July 2017 from 2 patients admitted to the University Hospital Basel. During the study period, 31 chicken meat samples (7 from the same brand) were collected from the same supermarket from which the contaminated chicken meat sample was bought, and none of the remaining 30 samples revealed ESBL-producing K. pneumoniae ST101. Phenotypic resistance profiles are shown in Supplementary Table S1. As per institutional protocol, both patients were routinely screened at hospital admission for rectal carriage of multidrug-resistant bacteria after being repatriated from a hospital in Thailand on 9 July 2017, where they were hospitalized for 11 days after a motorcycle accident. Both patients were treated for polytrauma with multiple fractures. No infections or microbiological results were documented on arrival, and no infections were recorded during hospitalization at our institution.
Comparison of the 3 genomes revealed 0 allelic differences in the core genome between the 2 clinical isolates and 6 allelic differences between both clinical isolates and the food isolate. A whole-genome single nucleotide polymorphism (SNP) analysis showed 0 SNPs between the 2 clinical isolates (after filtering out low-quality variants) and 13 SNPs scattered along the chromosome between the food and the clinical isolates. The 3 isolates have a capsule locus type KL2 and an O locus O1v2; no known Klebsiella virulence genes were identified. Whole-genome assembly based on long-read sequencing revealed a circular chromosome of about 5.1 Mb, an ESBL plasmid of 241-253 kb, and 2 small plasmids of 3 kb and 4 kb in all 3 isolates. The 3 ESBL plasmids showed at least 99.98% identity at the DNA level (Figure 1B). They share the same ESBL gene (bla CTX-M-15), resistance genes to other beta-lactams (bla TEM-1B), sulphonamides (sul2), quinolones (qnrS1), and aminoglycosides (aph(3″)-Ib, aph(6)-Id), and the plasmid replicons IncFIB K and IncFII. Only 2 small regions (approximately 5 and 7 kb) present in the food isolate were absent from the plasmids of the clinical isolates; 1 of these regions contains the gene tetA, conferring resistance to tetracycline, and the regulatory gene tetR. Mobile genetic elements, such as phages, insertion sequences, and transposons of the Tn3 family, were identified in the flanking regions, suggesting potential involvement in the mobilization of DNA fragments. In the 3 ESBL plasmids, we found additional features such as metal binding and transport genes, including whole operons for copper resistance (copA, copB, copC, copD, cusA, cusB, cusF, and cusC), for Fe(3+) dicitrate transport (fecA, fecB, fecC, and fecD), and the ars operon (arsR, arsD, arsA, arsB, and arsC), which confers resistance to arsenicals and antimonials. Additional antimicrobial resistance genes to quinolones (oqxA and oqxB), trimethoprim (dfrA14), and fosfomycin (fosA), and a bla SHV-28-like gene, were identified in the 3 chromosomes.
All 3 sequences were compared to 256 K. pneumoniae ST101 genomes from 32 countries (including Switzerland), retrieved from the Genome database of the National Center for Biotechnology Information (NCBI) on 19 April 2020. According to cgMLST, all samples were distributed into 24 clusters, and the cluster detailed in this report remains unique, with a distance of more than 200 allelic differences to other ST101 genomes (Figure 1A; Supplementary Figure S1). The ESBL plasmids appear to be exclusive to our sequences. A search of the PLSDB database [9] revealed 5 hits (Figure 1B; Supplementary Table S2). Of these, only 2 plasmids (NZ_CP025457 and NZ_CP025577) show comparable size and structure, but they lack the multidrug-resistance genes.
DISCUSSION
Figure 1 (legend) [8]: pKP101-F is the plasmid of the food isolate; pKP101-P1 and pKP101-P2 are the plasmids of the clinical isolates; NZ_CP025457, NZ_CP025577, CP042872, NZ_CP011334, and NZ_CP046945 are hits of the pKP101-F plasmid in the PLSDB database. The color intensity of the concentric rings represents the percent identity against the reference used (pKP101-F). Replicons and resistance genes are marked in red. Abbreviations: cgMLST, core genome multi-locus sequence typing; ESBL, extended-spectrum beta-lactamase; ST, sequence type; BRIG, BLAST ring image generator; GC content, guanine-cytosine content.

The genetically distinct chromosomes and plasmids of the food and clinical isolates reported here differ from all other K. pneumoniae ST101 sequences published so far, suggesting that they are part of a possible regional transmission cluster.
Based on the SNP analysis, we speculate that both patients may have been exposed to the same contaminated food source of ESBL-producing K. pneumoniae in Switzerland prior to travelling to Thailand. Our study design, which does not collect detailed information on links between patient and foodstuff samples, does not allow us to conclude that this specific food isolate was the direct transmission source, but based on the number of SNPs, it is likely that the 3 isolates have a common and recent ancestor. Hence, our results suggest that foodstuffs may be an important, neglected source for the transmission of important outbreak clones to humans. Once introduced into health-care settings, further institutional spread by direct contact between health-care workers and patients or indirect contact with contaminated environments may facilitate the establishment of specific clones within health-care systems [1]. Even though no known virulence genes were found in these isolates, their multidrug resistance represents a potential risk for their carriers. Additionally, they host a set of metal resistance genes, which could favor the persistence of these microorganisms in the environment even in unfavorable conditions, facilitating their spread and eventually reaching areas of close contact with humans.
It is noteworthy that both patients were only identified as carriers because the institutional infection prevention and control guidelines require screening of all patients repatriated from institutions abroad, in line with recommendations by public health authorities [10]. Unaware of the sequencing results, one would have considered colonization with ESBL-producing K. pneumoniae likely to have been acquired during the journey or the hospitalization in Thailand, based on reports of high rates of ESBL colonization in travelers returning from Southeast Asia [11] and after hospitalization there [12]. The distinct chromosomes and plasmids identified in this cluster, however, question the assumption that acquisition of these strains occurred in Thailand, a conclusion further supported by published K. pneumoniae ST101 genomes from Thailand being genetically very distant from the Swiss genomes, with 302 to 322 allelic differences. Screening policies based merely on the known epidemiology of ESBL producers may thus fail to detect potentially relevant sources, entertaining further spread.
Over the last decade, ESBL-PE, mainly Escherichia coli, have been increasingly identified in livestock, in the food chain, and in companion animals. However, the importance of these reservoirs in entertaining ongoing transmission in humans remains controversial [13,14]. CTX-M-15-producing K. pneumoniae of other STs has been identified in companion animals [15] and retail chicken samples [16,17], yet clustering with clinical samples has not been reported so far. Applying a "One Health" approach has revealed distinguishable ESBL transmission cycles in different hosts and, for E. coli, failed to demonstrate a close epidemiological linkage of ESBL genes and plasmid replicon types between livestock farms and people in the general population [18]. Yet transmission to and from nonhuman sources may be required to maintain spread among humans [19]: a recently published modeling study revealed that human-to-human transmission within the open community alone might not be self-maintaining without transmission to and from nonhuman sources. Our findings support these results and point to the need for continuous surveillance, including detailed whole-genome sequencing of epidemiologically important strains, to enhance our knowledge of the epidemiology of ESBL-PE and derive effective infection prevention and control measures.
Supplementary Data
Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
Remote Ischemic Preconditioning Neither Improves Survival nor Reduces Myocardial or Kidney Injury in Patients Undergoing Transcatheter Aortic Valve Implantation (TAVI)
Background: Peri-interventional myocardial injury occurs frequently during transcatheter aortic valve implantation (TAVI). We assessed the effect of remote ischemic preconditioning (RIPC) on myocardial injury, acute kidney injury (AKIN) and 6-month mortality in patients undergoing TAVI. Methods: We performed a prospective single-center controlled trial. Sixty-six patients treated with RIPC prior to TAVI were enrolled in the study and were matched to a control group by propensity score. RIPC was applied to the upper extremity using a conventional tourniquet. Myocardial injury was assessed using high-sensitive troponin-T (hsTnT), and kidney injury was assessed using serum creatinine levels. Data were compared with the Wilcoxon rank and McNemar tests. Mortality was analysed with the log-rank test. Results: TAVI led to a significant rise of hsTnT across all patients (p < 0.001). No significant inter-group difference in maximum troponin release or areas under the curve was detected. Medtronic CoreValve and Edwards Sapien valves showed similar peri-interventional troponin kinetics, and patients receiving either valve type did not benefit from RIPC. AKIN occurred in one RIPC patient and four non-RIPC patients (p = 0.250). No significant difference in 6-month mortality was observed. No adverse events related to RIPC were recorded. Conclusion: Our data do not show a beneficial role of RIPC in TAVI patients for cardio- or renoprotection, or improved survival.
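A minimal sketch of the three test families named in the Methods, run on synthetic data (all values below are randomly generated placeholders, not trial data; `lifelines` is assumed to be available for the log-rank test):

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Paired continuous endpoint (e.g., peak hsTnT, RIPC vs. matched control)
ripc = rng.lognormal(3.0, 0.5, 66)
control = rng.lognormal(3.1, 0.5, 66)
print("Wilcoxon p =", wilcoxon(ripc, control).pvalue)

# Paired binary endpoint (e.g., AKIN yes/no within matched pairs)
pairs = [[55, 4],   # rows/cols: control outcome x RIPC outcome;
         [1, 6]]    # the discordant cells (4 and 1) drive the test
print("McNemar p =", mcnemar(pairs, exact=True).pvalue)

# 6-month mortality: follow-up days and event indicators per group
t_a, t_b = rng.uniform(0, 180, 66), rng.uniform(0, 180, 66)
e_a, e_b = rng.integers(0, 2, 66), rng.integers(0, 2, 66)
res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print("log-rank p =", res.p_value)
```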
Introduction
Physicians are still seeking a tool that can improve the outcome of patients with myocardial injury. In this regard, remote ischemic preconditioning (RIPC) has attracted broad awareness lately. In RIPC, brief, reversible episodes of ischemia followed by reperfusion are applied in a peripheral tissue or organ. The peripheral stimulus can be chemical, electrical or mechanical and renders protective effects on the heart or another distant organ through neuronal and humoral signaling [1]. Mechanical RIPC by repeated inflation and deflation of a blood pressure cuff is a non-invasive and inexpensive method, which is easily applicable in the clinical routine. Inconsistent results have been published to date concerning the cardioprotective value of RIPC. In patients suffering from myocardial infarction who were treated with percutaneous coronary intervention (PCI), decreased infarct sizes and reduced mortality were reported [2][3][4]. In contrast to promising results in smaller studies [5,6], large surgical trials did not confirm a benefit of RIPC in patients undergoing heart surgery [7,8].
In patients with symptomatic severe aortic stenosis, transcatheter aortic valve implantation (TAVI) has evolved from a bail-out procedure in patients unsuitable for cardiac surgery to the method of choice in patients at high, intermediate and, more recently, low risk for cardiac surgery [9][10][11][12]. However, myocardial injury occurs frequently in patients receiving TAVI and is associated with worse outcomes [13,14]. It can be caused by several conditions [14,15], including (1) micro-embolism into the coronaries, (2) hypoperfusion due to rapid pacing, (3) direct mechanical myocardial injury due to balloon dilatation and prosthesis implantation, and (4) complications during the procedure. Moreover, acute kidney injury (AKIN) is regularly observed as a complication following TAVI, being partly related to the amount of iodine contrast agent injected during the procedure. Knowledge about RIPC in patients undergoing TAVI is sparse. Until now, only one study, by Kahlert et al. [16], has examined the influence of RIPC in TAVI patients; it did not provide evidence for a protective effect on the heart, kidney or brain, or for an improved outcome.
In this study, we sought to elucidate the effect of RIPC prior to TAVI on acute myocardial and renal injury, as well as on mortality, after six months.
Study Design and Patient Enrolment
We performed a prospective single-center controlled trial with patient enrolment from February 2014 to December 2016. The decision for TAVI was made on an individual basis by the interdisciplinary Heart Team in accordance with current guideline recommendations [17]. Exclusion criteria were participation in other trials, second intervention ("valve-in-valve"), active malignancy with life-expectancy less than one year, systemic inflammatory response syndrome or sepsis, cardiogenic shock, dependency on inotropes, symptomatic peripheral artery disease, thrombosis and chronic renal failure with need for dialysis ( Figure 1). Written informed consent was obtained from all patients prior to TAVI. Data on control group patients (non-RIPC group), who received TAVI between February 2014 and April 2015, were acquired retrospectively. The study was approved by the local ethics committee (reference number #73032013).
Procedure-TAVI and RIPC
Aortic valves were implanted through the transfemoral approach and under general anesthesia as previously described [9]. Application of RIPC started with the induction of anesthesia. We performed three cycles of ischemia for five minutes followed by reperfusion for five minutes (Figure 2). To induce ischemia, the cuff of a standard blood-pressure-manometer (Boso, Jungingen, Germany) was inflated 20-30 mmHg above the systolic arterial pressure. Efficacy was assessed clinically by pulselessness of radial artery and acrocyanosis followed by reactive hyperemia. The time interval between the end of the RIPC procedure and the start of the TAVI intervention was less than 30 min. Patients of each group received either a self-expandable CoreValve Evolut R (Medtronic, Minneapolis, Minnesota, USA) (n = 44) or a balloon-expandable Sapien XT/Sapien 3 (Edwards Lifesciences Inc., Irvine, CA, USA) (n = 22) valve. We used Imeron 350 (Bracco S.p.A., Milan, Italy) as contrast agent.
Figure 1. Study design and patient enrolment. Left column shows the RIPC group, right column shows the control group. Exclusion criteria were participation in other trials, body mass index (BMI) > 40 kg/m², second intervention because of bioprosthesis degeneration ("valve-in-valve"), active malignancy with life-expectancy less than 1 year, SIRS or sepsis, cardiogenic shock, dependency on inotropes, peripheral artery disease, thrombosis, chronic renal failure with need for dialysis, and dialysis fistula at the forearm. BMI = body mass index; SAVR = surgical aortic valve replacement; RIPC = remote ischemic preconditioning.
Figure 2. RIPC scheme. Three cycles of ischemia (I) and reperfusion (R) of 5 min each were applied, resulting in a total RIPC duration of 30 min. Efficacy of RIPC was assessed clinically (pulselessness, acrocyanosis, reactive hyperemia). The blood pressure cuff was regularly applied to the right upper arm.
Blood Sampling and Analysis
Venous blood samples were collected at hospitalization and routinely each morning at rest on the first five days after the TAVI procedure (Figure 3). Parameters were measured with a Cobas 6000 analyzer (Roche, Rotkreuz, Switzerland). High-sensitive troponin-T (hsTnT) was measured by a sandwich-technique immunoassay (cut-off 14 ng/L). Concentrations of creatinine and urea were determined photometrically. Glomerular filtration rate was calculated using the CKD-EPI formula.
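The text does not specify which CKD-EPI variant was applied; for orientation, and as an assumption on our part, the standard 2009 creatinine-based CKD-EPI equation reads

\text{eGFR} = 141 \times \min(\text{SCr}/\kappa, 1)^{\alpha} \times \max(\text{SCr}/\kappa, 1)^{-1.209} \times 0.993^{\text{Age}} \times 1.018\ [\text{if female}] \times 1.159\ [\text{if black}],

where SCr is serum creatinine in mg/dL, κ = 0.7 (females) or 0.9 (males), α = −0.329 (females) or −0.411 (males), and eGFR is reported in mL/min/1.73 m².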
Endpoints
The primary study endpoint was the characterization of myocardial injury as reflected by hsTnT kinetics in both groups (RIPC, non-RIPC). We calculated the respective area under the curve (AUC) by the trapezoid method. Myocardial injury was assessed according to the current Valve Academic Research Consortium (VARC)-2 recommendations [18]. Patients with baseline hsTnT below the upper limit of normal (ULN) (14 ng/L), which rose above the ULN after the TAVI procedure, were classified as having myocardial injury, as were patients with baseline hsTnT concentrations above the ULN and an additional increase of 20% [19]. Secondary endpoints were events of acute kidney injury (AKIN) according to KDIGO, echocardiographic changes and 6-month mortality. Moreover, other peri-procedural complications were also recorded. Transthoracic echocardiography was performed prior to the intervention, post-procedure and at follow up visits after 3 and 6 months, respectively. Patients unable to participate in our institution's follow-up program were instead followed up by their local cardiologists.
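As a minimal sketch of the trapezoid-method AUC computation, the following R snippet integrates a series of hsTnT measurements over the sampling days; the time points and values are hypothetical and serve only to illustrate the calculation.

```r
# Trapezoid-rule AUC for serial hsTnT measurements (hypothetical data).
t_days <- c(0, 1, 2, 3, 4, 5)          # sampling times in days
hstnt  <- c(25, 180, 150, 110, 80, 60) # hsTnT in ng/L

# Sum of trapezoid areas between consecutive time points.
auc <- sum(diff(t_days) * (head(hstnt, -1) + tail(hstnt, -1)) / 2)
auc  # AUC in ng/L x days
```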
Statistical Analyses
RIPC group patients were matched to control group patients by propensity score. The method used was nearest-neighbour matching with the MatchIt package [20] in the statistics software "R" (The R Foundation for Statistical Computing, Vienna, Austria). Patients were matched according to the type of valve implanted. Possible influencing factors of troponin elevation were considered with the variables sex, pre-procedural ejection fraction, frequency of relevant coronary artery disease with more than 50% stenosis, frequency of peri-procedural pacing runs, volume of contrast agent used during the procedure and pre-procedural creatinine concentration as a marker of pre-existing kidney injury. To obtain these variables as matching covariates, we compared the baseline variables of both groups with standardized differences [21]. All tests were run in the Statistical Package for the Social Sciences, version 23 (SPSS Inc., IBM, Armonk, NY, USA). Data are presented as means and interquartile ranges (IQRs), unless stated otherwise. p-values < 0.05 were considered statistically significant. Normally distributed continuous data were compared by the paired-samples t-test, and non-normally distributed data by the Wilcoxon signed-rank test. Normality was assessed by the Shapiro-Wilk and Kolmogorov-Smirnov tests. Dichotomous data were analyzed with McNemar's test, and survival was analyzed by the Kaplan-Meier procedure and the log-rank test. Missing hsTnT concentrations were imputed with a linear mixed model, modelling a quadratic development over the baseline troponin concentrations [22].
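As a sketch of the matching step described above, the following R snippet performs 1:1 nearest-neighbour propensity-score matching with the MatchIt package; the data frame and all variable names are hypothetical stand-ins for the covariates listed in the text.

```r
library(MatchIt)

# 'tavi' is a hypothetical data frame with one row per patient;
# 'ripc' is the treatment indicator (1 = RIPC, 0 = control).
m <- matchit(
  ripc ~ sex + ejection_fraction + cad_over_50_pct +
         pacing_runs + contrast_volume + baseline_creatinine,
  data   = tavi,
  method = "nearest",    # 1:1 nearest-neighbour on the propensity score
  exact  = "valve_type"  # match exactly on the type of valve implanted
)

summary(m)                # balance diagnostics (standardized differences)
matched <- match.data(m)  # matched cohort used for the paired tests
```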
Results
In total, 358 patients were screened for eligibility to participate in our study, i.e. 131 RIPC and 227 non-RIPC patients (Figure 1). Patients not meeting the inclusion criteria were excluded. Eventually, 66 RIPC patients and 66 matched control subjects were available for statistical analysis. One patient was lost to follow up in the control group.
The baseline characteristics of the unmatched cohorts are given in Table S1, as are the standardized differences of the matching variables in Table S2. Following propensity matching, both groups (RIPC and matched non-RIPC groups) showed no statistically significant differences in baseline and procedural parameters (Table 1). We observed no adverse events related to RIPC.
Myocardial Injury
Myocardial injury was common, occurring in 63 cases of the RIPC group (97%) and in 64 cases of the control group (99%). Increased baseline hsTnT levels occurred in 78% of all patients. Peri-procedural myocardial infarction was not registered in either group (Table 2).
Kidney Injury
Creatinine peak, creatinine AUC at five days and change of GFR did not differ significantly between both groups (Table 3). AKIN occurred in one RIPC patient and four non-RIPC patients (p = 0.250) following TAVI. None of the patients needed dialysis.
Table 3. Kidney-specific parameters for the RIPC and matched control groups.
Echocardiography
Echocardiographic parameters differed neither before nor after TAVI between the two groups (Table S3). In both groups, aortic stenosis was successfully treated, as reflected by highly significant decreases of the maximum transvalvular velocity (Vmax) and the maximum and mean pressure gradients (dPmax, dPmean) (p < 0.001). Left ventricular ejection fraction and aortic regurgitation were similar in both groups post-intervention.
Mortality
All-cause mortality did not significantly differ at 30 days, 3 months and 6 months after TAVI ( Figure 6). Within 6 months, 7 RIPC patients and 9 control patients died (p = 0.559). All deceased patients (n = 16; 12%) showed a significantly higher hsTnT concentration at baseline compared to all other patients (42.
Complications According to VARC-2
Atrial flutter/fibrillation, atrioventricular and branch blocks were common after TAVI in both groups. New pacemaker implantations were not significantly different. Patients from the RIPC group showed three cases of valve thromboembolism, making implantation of another bioprosthesis (valve-in-valve-procedure) or conversion to open heart surgery necessary. Other complication rates were not statistically different between both groups (Table 2).
Discussion
The objective of the current study was to explore the effects of RIPC on myocardial injury, kidney injury and mortality in patients undergoing TAVI for severe aortic stenosis. Our data show a significant rise of hsTnT across all patients following TAVI, indicating myocardial injury. However, myocardial injury was not mitigated by RIPC across all valves implanted, and when considering Medtronic CoreValve and Edwards Sapien bioprostheses separately. Patients receiving RIPC prior to TAVI did not benefit in terms of kidney injury or failure. Neither could we show a benefit of RIPC in TAVI patients with respect to mortality after one, three and six months. The RIPC procedure, however, was well tolerated without any related adverse events.
Baseline Characteristics
With respect to baseline criteria, our study population is comparable to other studies [12,23]. Up to this point, the effect of RIPC in TAVI patients has been investigated only by Kahlert et al., 2017 [16]. Neither in their study nor in the present one did RIPC patients benefit in terms of cardioprotection, renoprotection or mortality.
Type of Bioprosthesis
One central part of our study was the investigation of the RIPC effect in different types of transcatheter biological aortic valves. The self-expandable CoreValve bioprosthesis and the balloon-expandable Sapien bioprosthesis are the two main devices in clinical use. The choice of valve type is mainly dictated by the size of the native annulus and the cardiac anatomy. Although device-specific complications are evident, there are no differences in clinical outcomes in terms of short- or long-term survival with use of either valve [24]. Due to the use of CoreValve prostheses, the incidence of permanent pacemaker implantation was 33% in this study as compared to 8% in the study by Kahlert et al., who only included Edwards prostheses. From today's point of view, one major limitation of this previous study is the exclusive use of Sapien XT prostheses. Our study did not reveal a valve-specific impact on myocardial injury. Troponin-T peak and AUC were similar for both valve types, which is reasonable as both valve types showed comparable numbers of implantations, sizes, pacing runs and additional dilatations in the subgroup analysis.
Mortality and Myocardial Injury
For a period of up to 6 months, RIPC did not achieve a significant difference in mortality compared to the control group (n = 7 RIPC group vs. n = 9 control group, p = 0.559). Our study participants represented the common patient population undergoing TAVI, with an average age of 82 years and multiple co-morbidities. An increased baseline hsTnT is a sign of multimorbidity and a predictive parameter of patient outcomes [25]. The baseline hsTnT was significantly higher in the 16 patients who died compared to the survivors. At present, the only established surrogate parameter to assess the extent of peri-interventional myocardial injury is the measurement of troponin, and there is evidence for the correlation of higher troponin levels with higher mortality [19,[26][27][28]. The majority of patients (98%) in this study suffered myocardial injury following TAVI, confirming previous study results [19,29]. However, this did not influence left ventricular function, as the echocardiographic evaluation of ejection fraction showed. To clearly differentiate between myocardial injury and infarction, we followed current VARC-2 recommendations [18] and reference levels from other studies [19,29] and defined peri-procedural infarction as an increase of troponin above 15 times the upper reference limit within 72 h plus the presence of clinical manifestations (ischemia in the electrocardiogram, abnormal myocardial motion in echocardiography), in contrast to injury, where the troponin increase is no more than 20%. No patient in the current study suffered peri-interventional myocardial infarction.
Renoprotection by RIPC
The amount of contrast agent used during the TAVI procedure is an important determinant of possible kidney injury. The amount used in the current study population (142 mL RIPC group vs. 137 mL control group) is comparable with numbers from "The German Aortic Valve Registry" (GARY; 27 participating centers, 165 mL contrast agent [30]). At 3.8%, the occurrence of renal injury without need for renal replacement therapy was below average [13].
In two large clinical trials, RIPHeart [31] and ERICCA [8], RIPC did not reduce major perioperative adverse cardiac, renal and cerebral events in bypass surgery patients with or without valve replacement. In contrast, Zarbock et al. showed that RIPC reduced the rate of acute kidney injury and the use of renal replacement therapy in patients undergoing bypass surgery; unlike the RIPHeart and ERICCA trials, there were differences in the anesthetic regimen (no propofol) and prior medication (no sulfonylurea), and only patients at a high risk of acute kidney injury were included. Of note, our study showed a small trend towards renal protection (acute kidney injury: n = 1 RIPC group vs. n = 4 control group, p = 0.250), but this was not statistically significant. Comparing these previous studies with our present one, the following points and differences need to be considered: (1) a different cohort (cardiac surgery), (2) preconditioning may be less effective in patients with infarct-remodeled hearts, (3) cardiopulmonary bypass, hypothermia and cardioplegia are themselves known to be protective (perhaps further protection is impossible to achieve), and (4) concomitant medications may interfere with remote ischemic preconditioning.
Pharmacological Confounding Factors
RIPC is believed to convey its positive effects on ischemia/reperfusion injury through neural and hormonal pathways. Multiple pharmaceuticals, such as statins, platelet inhibitors and agents of general anaesthesia, including opioids, hypnotics and sedatives, might consequently influence the outcome of a RIPC trial. For instance, the influence of anesthesia is a matter of debate, particularly the lack of a RIPC effect in some studies where propofol was applied [6][7][8]. On this basis, the prominent trials ERICCA and RIPHeart received some criticism [32]. An alternative drug that has been used to avoid propofol is midazolam, which, however, did not unmask an effect of RIPC either [16]. On the other hand, publications also exist that confirm an effect of RIPC despite the use of propofol [5,33,34]. Conversely, a variety of medications may possess cardioprotective effects in the sense of pharmacological preconditioning. In summary, the confounding effect of various drugs has not been conclusively resolved. While we cannot exclude a general effect of pharmaceuticals on the outcome of RIPC, we can safely assume that there were no inter-group differences regarding common cardiac medications or the type and dose of anesthesia.
Best Practice of RIPC
There are several studies, animal as well as clinical trials, examining the potential beneficial effect of ischemic preconditioning [1,5,6,[35][36][37]. While there is still a lack of evidence as to whether pre-, per-, or post-conditioning is superior to the others [38][39][40], we decided to perform preconditioning because application of RIPC before the TAVI procedure was the most convenient time point to implement into the TAVI routine, assuring the best consistency in performance. While there is still no sufficient proof for the optimal regimen regarding RIPC duration [3,41,42] or number of cycles [43], we adopted the most commonly used protocol (three cycles of 5 min ischemia), derived from studies in cardiac surgery as well as cardiology [2,[4][5][6]44]. It remains unresolved whether expansion of the ischemic area, such as to both arms or both legs, would amplify the RIPC effect.
Limitations
Our study has limitations. First, this is not a randomised controlled trial (RCT). While data in the comparison group were collected prospectively, patients were not randomized into the comparison group or the control group. Instead, we chose propensity score matching to match the comparison cohort to a control cohort, for which data were acquired retrospectively. Especially in small trials, this has the theoretical upside that the outcomes of matched treated and untreated subjects are likely more similar to one another than the outcomes of randomly selected treated and untreated subjects [45]. However, although potential influencing factors are minimized through matching by reducing measured covariates, we cannot rule out that potential unmeasured confounders between the groups might influence the outcome, especially as data were collected differently (prospectively vs. retrospectively) and during different time frames (2014 to 2015 in the control group vs. 2015 to 2016 in the RIPC group). Nonetheless, during the time frame covered by the study, the standards of data collection, the implantation methods and the post-interventional monitoring did not significantly change. Hence, we estimate the risk of relevant confounding to be low. Second, the study is underpowered and the follow-up too short to identify a change in mortality. For instance, to detect a significant reduction of the mortality rate by 3% with a statistical power of 80%, a total sample size of 3710 patients would have been necessary. Considering both the limitations to the study design and to the interpretability of mortality, our study is hypothesis generating. For future studies, a randomized controlled trial will be the next logical step. The optimum timing and amount of ischemic conditioning should be further evaluated.
Third, selecting cardiac enzymes to describe the endpoint for myocardial injury is debatable. As an example, cardiac magnetic resonance imaging (CMRI) is more valid for detecting myocardial ischemia and/or edema, thus providing more valuable information on the effect of RIPC on myocardial injury. However, conducting CMRI shortly after a TAVI procedure is not recommended by the prosthesis manufacturer. Cardiac enzymes are commonly used and, furthermore, they are defined by the VARC-2 consortium, thus providing the most easily accessible and comparable data, with evidence correlating to mortality and morbidity [26][27][28].
Finally, the efficacy of ischemia under RIPC was controlled by clinical methods only, albeit with objective criteria. Measurement of serum lactate was proposed as an objective read-out, but was not feasible due to its short half-life.
Conclusions
Our data do not lend support to a beneficial role of RIPC in TAVI patients for cardio- or renoprotection, or for improved survival. However, RIPC is a non-invasive, drug-free method without adverse effects. An increase in the total ischemic area or a longer duration of RIPC could be additional aspects for further studies.
On the analogies between gravitational and electromagnetic radiative energy

We give a conceptual exposition of aspects of gravitational radiation, especially in relation to energy. Our motive for doing so is that the strong analogies with electromagnetic radiation seem not to be widely enough appreciated. In particular, we reply to some recent papers in the philosophy of physics literature that seem to deny that gravitational waves carry energy. Our argument is based on two points: (i) that for both electromagnetism and gravity, in the presence of material sources, radiation is an effective concept, unambiguously emerging only in certain regimes or solutions of the theory; and (ii) similarly, energy conservation is only unambiguous in certain regimes or solutions of general relativity. Crucially, the domain of (i), in which radiation is meaningful, has a significant overlap with the domain of (ii), in which energy conservation is meaningful. Conceptually, the overlap of regimes is no coincidence: the long-standing question about the existence of gravitational waves was settled precisely by finding a consistent way to articulate their energy and momentum.
We give a conceptual exposition of aspects of gravitational radiation, especially in relation to energy. Our motive for doing so is that the strong analogies with electromagnetic radiation seem not to be widely enough appreciated. In particular, we reply to some recent papers in the philosophy of physics literature that seem to deny that gravitational waves carry energy. Our argument is based on two points: (i) that for both electromagnetism and gravity, in the presence of material sources, radiation is an effective concept, unambiguously emerging only in certain regimes or solutions of the theory; and (ii) similarly, energy conservation is only unambiguous in certain regimes or solutions of general relativity. Crucially, the domain of (i), in which radiation is meaningful, has a significant overlap with the domain of (ii), in which energy conservation is meaningful. Conceptually, the overlap of regimes is no coincidence: the long-standing question about the existence of gravitational waves was settled precisely by finding a consistent way to articulate their energy and momentum.
Introduction
Heuristically, gravitational waves are propagating ripples in the fabric of spacetime; ripples that we can now detect as originating in some of the most energetic events in the universe, such as the merger of two black holes. In recent years, instruments such as LIGO (the Laser Interferometer Gravitational-Wave Observatory) have made it possible to measure these remarkably elusive waves. Since the first direct detection in 2015, there have been numerous detections, including the merger of two neutron stars and the collision of two black holes. It is not hyperbole to say we have developed a new type of sensor with which to observe the cosmos.
But in earlier decades, gravitational waves were controversial. The theory governing their behaviour had a turbulent origin. Proposed by Albert Einstein soon after the discovery of general relativity, gravitational waves were shortly thereafter confused with mere artefacts of a bad coordinate choice; and so their existence was denied. But Einstein himself was thereafter convinced they were not just coordinate artefacts, and so came to realise the waves are real (Kennefick, 2007). But these controversies are now long past. The work of Penrose, Bondi, Metzner, van der Burg and Sachs in the 1960s provided an invariant formulation with which to describe gravitational radiation at asymptotic distances from their sources. Crucially, gravitational waves were shown to carry energy in an unambiguous, fully covariant manner.
But scepticism about whether these waves carry energy lingers on in some of the philosophy of physics literature (see e.g. Duerr (2019); Fletcher (2023); Hoefer (2000); Lam (2011)). Their sceptical arguments have two main foci: they attempt to (i) show germane dissimilarities between gravitational and other types of physical interactions, and to (ii) weaken the significance of asymptotic conservation laws. In the course of the paper, we will-one might say, unduly-focus on Duerr (2019). But there is a simple reason: it is the most recent and complete illustration of (i) and (ii).
Thus we plan to review the established work from the 1960s (and some later developments), including the analogies between gravitational radiation and electromagnetic radiation, with its much better understood energy transfer, and thereby reply to these sceptical arguments.
There is a general philosophical debate, which we do not intend to address here, about (i) the nature of idealizations and approximations and (ii) the nature of the contrast between emergent or effective regimes, and fundamentals. That is why we use electromagnetic radiation as a foil: reasonable views on (i) and (ii) should apply equally to electromagnetic and gravitational radiation.
In the rest of this Section, we briefly introduce the paper's themes, and give a prospectus. One argument against gravitational radiation carrying energy is that it can be sourced by objects that are following geodesics, and are thus 'force-free'. But the generation of gravitational waves depends on the quadrupole moment of the source's motion: that is a statement about relative motion between different bodies, or parts of a body. Whether or not the individual bodies or parts of a body follow 'force-free' motion, that motion produces real tension, and thus work, within an extended body. Thus, being force-free is compatible with the emission or absorption of energy.
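To make the dependence on relative motion concrete, recall the standard weak-field quadrupole formula for the radiated power, quoted here schematically (conventions vary across textbooks):

P = \frac{G}{5 c^5} \left\langle \dddot{Q}_{ij}\, \dddot{Q}^{ij} \right\rangle,

where Q_{ij} is the trace-free mass quadrupole moment of the source, the triple dot denotes the third time derivative, and the brackets denote an average over several wave periods. The radiated power tracks a time-varying property of the relative configuration of the bodies, which is why it is compatible with each body individually following a geodesic.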
We can see this very clearly in the electromagnetic analogy. Take the question of whether a freely falling charge radiates or not. This was satisfactorily answered in the 1960s by Rohrlich and Fulton (Fulton & Rohrlich, 1960). Their answer was that the freely falling particle can be taken to radiate, for us, in Earth's frame, but not for a local, comoving observer.
The reason this answer is (perhaps surprisingly) not in conflict with the equivalence principle is that, generically, we can't unambiguously extract the wave part of the electromagnetic field from its electro-magneto-static part within spatial regions that are close to the source of the wave. Even in a Minkowski spacetime, this extraction is only unambiguous at large distances: the so-called wave-zone (see appendix A).
And there is a very analogous picture for gravitational waves. In a weak-field approximation, it is only in the wave-zone of the source that we can unambiguously discern the part of the linearised field that is radiative. To accomplish something similar in full general relativity, we also need to invoke something akin to the wave-zone: asymptotic infinity. Generically, it is only asymptotically that we have an objective split between the "Newtonian" (or Poisson, or Coulombic) part and the radiative part of the field.
At this point it is worthwhile to disambiguate between two different meanings of 'radiation'. For one could ascribe to 'radiation' a very weak, and even vague, meaning: the propagating effect of a given change in initial conditions (an effect that by all accounts travels within the light-cone). 1

1 Both general relativity and electromagnetism respect relativistic causality, which means that two solutions of the equations of motion that agree on a proper subset of a Cauchy initial slice Σ, i.e. that agree on Σ₀ ⊂ Σ, will agree on D⁺(Σ₀) (the domain of dependence of Σ₀). See (Choquet-Bruhat, 2008, Appendix III, Theorem 2.15) and (Landsman, 2021, Appendix B) for a proof, which is based on the fact that both the Einstein and the Maxwell equations are quasi-linear hyperbolic equations. By the same token, differences in initial data constrained to a subset will evolve into differences within the domain of dependence of that subset. There is of course the tricky question of how to actually build solutions that match in some subset but differ elsewhere, which we discuss in footnote 4.
Thus we see that, properly understood, 'radiation' is a derivative concept; it is a property of the field that in limited regimes emerges out of the fundamental ontology of each theory. In both the gravitational and electromagnetic cases, generically, it is only only at asymptotic distances that we can unambiguously distinguish radiative and 'Coulombic' components. And in both cases there are similarly special circumstances-that are again similar in the electromagnetic and gravitational cases-in which we can distinguish these derivative or emergent properties everywhere.
In sum, singling out the 'waving' part of the field is subtle both for electromagnetism and gravity, and in similar ways. We will thus argue that dissimilarities between electromagnetic and gravitational waves are irrelevant to the issue at hand. That is, we can happily admit that spacetime curvature and matter fields are fundamentally distinct-"marble and wood", as Einstein called them. 2 Nonetheless, in both theories, we can only objectively and unambiguously characterise the waves in similarly special circumstances. Both the electromagnetic and the gravitational fields come to us as whole, and we must carve out the part that corresponds to radiation; but the joints are, in both cases, only apparent at very large distances.
As to the energy carried by the waves, we find a happy overlap of regimes. First, it is important to state upfront that an energy-momentum tensor is well-defined for electromagnetism, but it is not, in general, for the gravitational field. Therefore we can always define the local energy of the electromagnetic field at a spacetime point and in a given frame, while in general we cannot do so for the gravitational field. 3 Moreover, our intuitions for electromagnetism are mostly based on its applications in Minkowski spacetime, where the energy-momentum tensor of the electromagnetic field is not only well-defined but conserved. In gravity, we can associate energy to the gravitational field only in certain regimes. For instance, we can do so in the weak-field regime of general relativity. 2 In this regime the analogy between gravity and electromagnetism is very strong: we can treat perturbations of the gravitational field in much the same way as we treat the electromagnetic field: as a kind of matter field on a rigid background geometric structure. For these gravitational perturbations, we can infer the same conclusions about radiation and energy conservation as we do for electromagnetism.

2 The labels are usually taken to mean that the geometric side of the equations (the Einstein tensor and cosmological constant) was smooth and pristine - like marble - whereas the right-hand side of the Einstein equations (the energy momentum tensor) was of a different, knottier, or rougher nature, 'like low-grade wood'. But as discussed in (Lehmkuhl, 2019, Sec 3), it is not quite what Einstein meant: Einstein seeing the left-hand side of the Einstein equations as fine marble and the right-hand side as low-grade wood has nothing to do with geometry. It is about quanta. He believed that the left-hand side of the Einstein equations gave an accurate picture of the gravitational field, but that the right-hand side of the equations did not give an accurate picture of matter, for it does not account for the quantum features of matter. It is only a docking station for results of theories like classical hydrodynamics and electrodynamics, which do not do justice to the quantum nature of matter either. Thus, T_{µν} in GR is a place-holder for a theory of matter not yet delivered. (Lehmkuhl, 2019, p. 180) Here we seek only to invoke a distinction between the natures of the gravitational and electromagnetic fields: a meaning closer to the folklore than that intended by Einstein.
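The weak-field analogy can be displayed explicitly (schematically, suppressing conventions): writing g_{μν} = η_{μν} + h_{μν} and imposing the Lorenz gauge on the trace-reversed perturbation \bar{h}_{μν}, the linearised Einstein equations reduce to a flat-space wave equation,

\Box \bar{h}_{\mu\nu} = -\frac{16 \pi G}{c^4} T_{\mu\nu},

formally parallel to the sourced Maxwell equation in Lorenz gauge, \Box A_\mu = -\mu_0 J_\mu; the transverse-traceless part of h_{μν} then carries the two radiative polarizations.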
In a generically curved spacetime, as is well known, the notion of conserved energy is, to say the least, complicated. Generically, energy conservation is not locally meaningful, irrespective of whether that energy refers to gravitational or electromagnetic waves. If we allow geometries to vary arbitrarily in some region, we can still make sense of the energy of the gravitational field by assuming spacetime to be asymptotically flat. In that context, we can interpret the spacetime as representing an isolated subsystem. The energy of the entire spacetime is interpreted as the total energy of the subsystem in the fullness of time; and we can also interpret the flux of radiative energy 'carried away' to infinity from the system as time passes. As Bondi, Penrose and others showed, gravitational waves objectively carry energy away to asymptotic null infinity; as Arnowitt et al. (1962) showed, the entire energy of a spacetime is registered at asymptotic spatial infinity; and finally, as Geroch, Ashtekar and others showed, the energy carried away to null infinity can be seen as part of the energy contained in the entire spacetime. 4

But asymptotic infinity is a hard place to get to. So why is it so useful in general relativity, even for experimental predictions? Because it is how we represent isolated subsystems in theories with long-range forces, like electromagnetism and gravity. Fortunately, in some circumstances, given a frame and appropriately separated relative scales, we can also identify gravitational energy and its transfer to bodies, even without going 'all the way out' to infinity. For instance, the scale separations between the size of our LIGO detectors and the distance to what they are observing suffice for us to identify a component of the gravitational field here that represents the outgoing radiation emitted by the astrophysical sources. Or, in the sticky-bead example (see Section 3.1), the scale of the bar, the beads, and the curvature already allow us to distinguish a radiative component that does work: we can verify that the gravitational wave will have less energy at infinity than if it had not encountered the bar.
In sum, we will present two main arguments in favour of the received view about the energy of gravitational radiation. First, whether or not spacetime is asymptotically flat, radiative energy transfer - a redistribution of the total conserved energy into identifiably distinct components of a solution of the theory - occurs, or fails to occur, in conceptually similar circumstances for the gravitational field as for the electromagnetic field. Thus we will argue that sceptics who maintain that we don't understand energy transfer for gravitational radiation must also claim we don't understand it for electromagnetic radiation, even if the energy-momentum tensor is generically well-defined for electromagnetism but not for gravity. Second, in the case of gravitational radiation, because of the compounded subtleties of coordinate invariance and nonlinear field equations, the question of carving out the objectively radiative components of the gravitational field was more explicitly tied to the question of whether these components carried energy. Heuristically, defining a wave requires a rigid background, and this same background can be used to define energy conservation.
We thus conclude that though there are many dissimilarities between gravitational radiation and electromagnetic radiation, they do not license a relevant distinction with respect to energy transfer. The fact that the energy of the gravitational field is not generally well-defined is consistent with both points of our conclusions: that the notion of energy carried by a wave is valid in a (very relevant) regime of general relativity; and that gravity and electromagnetism swing together in this respect: the effective notion of gravitational radiation that emerges in this regime is as well-defined, and as much a part of the basic furniture of the world, as electromagnetic radiation.
Here is how we plan to proceed. In Section 2 we give a succinct list of technical results that we will invoke in this paper (we give a more detailed account in the appendix). In Sections 3 and 4 we address the recent philosophers' scepticism: in Section 3 we will compare gravitational and electromagnetic radiation and in Section 4 we discuss energy conservation and isolated subsystems.
Technical results
Here we remind the reader of the following crucial points about electromagnetism and gravitational waves (labelled for later reference). First, regarding the wave behavior of the fields:

R(i) Both the gravitational and the electromagnetic field obey constraints: these are equations that are not dynamical, but must be satisfied by any valid initial data for the theory. For instance, one way of parsing the electromagnetic field according to constraints and dynamics is to say that the field has a component that is determined by the simultaneous distribution of charges and one component that has its own dynamics, loosely called 'radiative'.
In vacuum, in a contractible space, due to its linearity, the entire electromagnetic field can be unambiguously characterised as radiative. But in general, e.g. when compact sources are present, there is no single local decomposition of either the gravitational or the electromagnetic fields into a radiative and a 'Coulombic' part: for a given region of a generic spacetime, different decompositions can lead to different conclusions about radiation.
R(ii) The 'waving part' of an electromagnetic field is unambiguously identifiable far away from the source, as the 'component of the field' that falls off as 1/r; the region in which this behavior occurs is called the wave-zone.
In asymptotically flat spacetimes, we can use the Penrose-Newman null tetrad formalism to directly characterise the different components of the electromagnetic field that fall off at different rates: this is the electromagnetic Peeling theorem. This theorem allows us to identify the components of the field that 'become more and more' representative of electromagnetic radiation as one moves away from the source: these are the scalars Φ₂ (the fall-off rates are collected in the display following this list).
R(iii) Similarly to the electromagnetic case, in the weak-field limit of vacuum general relativity, in which we ignore everything but linear perturbations of the Minkowski metric, and in a particular choice of gauge (transverse-traceless), the metric perturbations satisfy the usual wave equations on the Minkowski background and thus are entirely radiative. In this gauge, it is clear that there are two propagating degrees of freedom of the perturbation, as expected from the canonical counting of the physical degrees of freedom of the gravitational field. If the perturbations are sourced by matter, precisely as in the electromagnetic case, we can identify the radiative parts of the perturbation as those components with the appropriate 1/r fall-off, in the wave-zone. These components depend on the quadrupole moment of the sources.

R(iv) Moving away from the weak-field limit we have to deal with the non-linearities of the Einstein field equations head-on. But there are no constants with dimensions of length in the theory that could characterize the onset of the strong-field regime in general relativity. This implies the strong-field regime is not associated with a particular length scale, and can instead be reached at any scale if some characteristic radius representing curvature becomes "small". It is this comparison of length scales that justifies the extrapolation of idealised asymptotic features of general solutions to finite distances in individual solutions.
R(v) As in the electromagnetic case, in asymptotically flat spacetimes, we can use the Penrose-Newman null tetrad formalism to directly characterise the different components of the gravitational field that correspond to gravitational radiation. The Peeling theorem for the curvature also lets us identify components that fall off as 1/r, and that, asymptotically, become 'more and more' like the radiative modes of the weak-field approximation: these are the Weyl scalars Ψ₄. In the bulk of the spacetime, the scalars Ψ₄ strongly depend on a choice of null tetrad basis. In certain algebraically special spacetimes these choices can be physically constrained and the Ψ₄ can be taken to correspond locally to gravitational radiation in a strict sense. But generically, an unambiguous notion of radiation is only available at asymptotic infinity (see (D'Ambrosio et al., 2022, Ch. 6-8)). 5

Next, regarding the conservation of energy:

E(i) Given the background Minkowski metric and its associated Killing vector fields, one has meaningful notions of energy conservation for matter fields (i.e. the right-hand side of the Einstein field equations), which apply equally to the electromagnetic field and to the linearised gravitational degrees of freedom. More generally, conservation laws can be deduced for spacetimes that are suitably algebraically special. But for a generically curved spacetime, no covariant, quasi-local conservation laws exist without the introduction of some background structure. 6

E(ii) Energy transfer is not solely radiative. In the case of electromagnetism in a Minkowski spacetime, energy transfer through a surface is given by the flux of the Poynting vector. But the Poynting vector can be non-zero for a non-radiative source: a Lorentz boost of a purely Coulombic field gives rise to both an electric and a magnetic field and to a non-zero Poynting vector; but that field is not radiative. In accord with item R(ii), the magnitude of such a non-radiative Poynting vector falls off with distance faster than a radiative Poynting vector; indeed, the former's flux vanishes at asymptotic infinity, unlike the latter's.

E(iii) In the asymptotically flat case, using the Penrose-Newman null tetrad formalism, we can again identify different fluxes at infinity. We have conserved charges at asymptotic spatial infinity that correspond to the integral of non-radiative components. For instance, for electromagnetism and gravity we get the total electric charge and the ADM energy-momentum, respectively. And we have an energy flux that corresponds to an integral over null asymptotic infinity whose arguments include only radiative components. One can interpret the difference between the ADM energy and the energy of the radiation up to a certain (retarded) time as the 'leftover' energy of that spacetime at that time (Ashtekar & Magnon-Ashtekar, 1979).
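For orientation, the fall-off rates invoked in R(ii), R(v) and E(iii) can be collected in one schematic statement, suppressing the smoothness assumptions the Peeling theorems require. With r an affine parameter along outgoing null geodesics,

\Phi_k = O(r^{k-3}) \ (k = 0, 1, 2), \qquad \Psi_k = O(r^{k-5}) \ (k = 0, \dots, 4),

so the scalars surviving at order 1/r - Φ₂ and Ψ₄ - are precisely the radiative components singled out above.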
Gravitational radiation in thought and in reality
We will now deploy Section 2's similarities between gravitational and electromagnetic radiation in discussing two iconic, and historically significant, "experiments". One is a thought-experiment (Section 3.1); the other is a decades-long observational programme (Section 3.2).
(For historical details, cf. again e.g. Kennefick (2007)). In the course of this, our disagreements with the sceptics about radiative gravitational energy will become clear, since they also focus on these two cases.
The sticky bead
After disavowing his theoretical discovery of gravitational waves in the 1930's, Einstein was set right by Infeld and (indirectly) Robertson. With their help, he became convinced that he had misinterpreted the coordinate artefacts of his and Rosen's construction. Rosen, his collaborator on the original disavowal, was less convinced, alongside many others. The state of play remained relatively inconclusive until the famous 1957 conference in Chapel Hill, in which Feynman gave his famous "Sticky Bead" thought-experiment. Leading up to the conference, Pirani and Robinson had been emphasising the role of tidal effects and of the Riemann curvature in providing a coordinate-independent description of gravitational radiation. Using these ideas, Feynman pictured a rigid rod with two ring-like beads, free to slide with friction on the rod, placed in the path of a gravitational wave. Thus, since the beads would slide back and forth on the bar, and through the action of friction heat up the bar, Feynman concluded that the bar can only heat up if the gravitational waves transfer energy to it. Rovelli (1997, p. 197) expresses the idea forcefully: A strong burst of gravitational waves could come from the sky and knock down the rock of Gibraltar, precisely as a strong burst of electromagnetic radiation could. 7 But many philosophers demur. For instance, (Duerr, 2019, p. 30) writes: Therefore, even if [...] we did register an increase in thermal energy of a Sticky Bead detector, we wouldn't be licenced to infer a transfer of energy from the GW, so as to restore energy balance. Rather, it would seem more natural to accept an alternative stance: Energy conservation simply ceases to hold in GR. The detector would just heat up -without there being a causal story about it that would allow us to track the lost energy. Energy conservation is just violated (quantifiably!), when a GW hits a detector.
Other arguments, in recent years common among philosophers, proceed in a similar spirit. As is often pointed out, one can realize the relevant sliding motion of the beads through any geodesic deviation; e.g. in the exterior Schwarzschild metric, as the beads fall towards the center of the planet.
Before criticising these arguments, let us first make a concession. It is true that, had we placed the beads close to the source, it would be impossible to univocally distinguish the energy they obtained solely from gravitational radiation, since, by item R(i) (see also R(iii)), at that distance, there is no unique, unambiguous decomposition of the curvature into an outgoing radiative and a non-radiative component. This is analogous to placing an electromagnetic antenna very close to an oscillator, in the near-field zone (see R(ii)): there is definitely a changing electromagnetic field that does work on the antenna, but what part of that work is solely due to radiation? 8 In the same manner, we believe all parties would agree that talk about energy transfer from gravity to a sticky bead, or to a glass of water, only requires a regime where the sticky bead or the glass of water itself can be well approximated in standard non relativistic terms (in their frame). Then they have a well defined local energy, which is measurable, and if this energy grows and their only interaction is gravitational, it is legitimate to say that there was transfer of energy from gravity to the object.
But the question is whether that transfer is due to an objectively defined gravitational wave, and whether, when it is, the approximations required also allow us to attribute energy to that wave.
We contend that the assumption underlying Feynman's picture of a discernible gravitational wave hitting the sticky bead is equivalent to an assumption of the weak-field regime or of sufficient distance from the source. That is, the beads are at a distance such that, according to R(iv), the Newman-Penrose components Ψ₄ - which are defined everywhere but retain some frame dependence that is irrelevant under the intended interpretation of asymptotic infinity - provide good approximate notions of outgoing radiation (as in (Sachs & Bondi, 1961); see (D'Ambrosio et al., 2022, Ch. 8) for a pedagogical review). These notions are approximate in the sense that the difference between their values where the beads are and their asymptotic values is smaller than some quantity - e.g. the experimental error bars - and this difference decreases monotonically with distance. In other words, for each particular spacetime model of the scenario, we can compute Ψ₄ at ever farther geodesic distances from the source, in a frame that has a well-defined physical interpretation, and verify that its difference to the value taken asymptotically is bounded by some relevant (e.g. experimental) limit.
Of course, spacetime could fail to admit the relevant notion of "increasing distances", or otherwise fail to admit discernible, frame-independent gravitational waves. The possibilities for spacetime geometries are enormous after all, and have little respect for the geometric intuitions we get from our tame surroundings. It would be absurd to require that generic spacetimes should admit clearly discernible gravitational waves; and yet it is virtually guaranteed that extended bodies placed in those spacetimes would be subject to changing tidal effects, and thus to tension and work. As for the beads, they could heat up the bar via tidal forces related to any arbitrarily varying (in their frame) Riemann tensor; this is guaranteed by the geodesic deviation equation (3.1), and no more. 9 Absent any background structure, including asymptotic conditions, we would not know how to define the spacetime's energy (as per item E(i)). In these cases we can agree that the beads' gain or loss of energy - however it is defined in the beads' frame - is due to tidal forces, even if it is meaningless to say that the energy encoded in the spacetime curvature has diminished or increased.

8 Indeed, as the excellent Wikipedia entry on Electromagnetic Radiation correctly states: the term "radiation" applies only to the parts of the electromagnetic field that radiate into infinite space and decrease in intensity by an inverse-square law of power, so that the total radiation energy that crosses through an imaginary spherical surface is the same, no matter how far away from the antenna the spherical surface is drawn. Electromagnetic radiation thus includes the far-field part of the electromagnetic field around a transmitter. A part of the "near-field" close to the transmitter forms part of the changing electromagnetic field, but does not count as electromagnetic radiation.
Nonetheless, in either of the two regimes where we can clearly discern outgoing radiation, Duerr is mistaken to say that we cannot tell a causal story about transfer of energy from the wave to the detector. In these regimes, the accounting of energy is explicit, just as it is with electromagnetic radiation. Thus a gravitational wave will have less energy at infinity than if it had not encountered the bar. The point here is that if one assumes outgoing gravitational radiation has been emitted from a body and can be discerned, one is bound to give an account of that radiation. And the cases in which we know how to do that either require algebraically special spacetimes, with associated conservation laws; or asymptotically flat spacetimes, where the phenomenon takes place very far from the source, where asymptotic conservation laws are approximately observed.
Turning to the second argument: it is true that beads that are freely falling in a Schwarzschild background would slide - not necessarily back and forth, but creating friction nonetheless. That is, as described above, the gravitational field would impart energy to the bar even in a stationary spacetime, such as Schwarzschild. Since it is agreed by all that such vacuum spacetimes do not carry gravitational waves, the sticky bead argument, by itself, gives us no reason to believe that it is the gravitational waves that are transferring energy.
So we submit that this appraisal gets things backwards. The sticky bead argument never claims that radiation is the sole purveyor of gravitational energy transfer: as described in item E(ii), the fact that energy transfer occurs non-radiatively is no mystery, and it is equally true in the case of electromagnetism, where we can obtain a non-zero Poynting flux from electro-magneto-stationary sources. If we assume no gravitational wave is present, obviously its causal powers must also be absent. But here we are assuming the existence of a discernible gravitational wave to begin with, and that it is the only source of tidal effects.
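To make the electromagnetic side of this point explicit (a schematic counting in flat spacetime): the energy flux is the Poynting vector

\mathbf{S} = \frac{1}{\mu_0} \mathbf{E} \times \mathbf{B}.

For a boosted Coulomb field both E and B fall off as 1/r², so |S| ~ 1/r⁴ and the flux through a large sphere, scaling as r² · r⁻⁴, vanishes asymptotically; for radiation the fields fall off as 1/r, |S| ~ 1/r², and the asymptotic flux is finite.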
In sum, so long as we picture the sticky bead in the weak-field approximation of general relativity - as it was meant - gravitational energy transfer is solely radiative. Leaving the weak-field approximation, the same conclusion holds to ever higher degrees of approximation as we place the sticky beads farther and farther from the source. As described above, given the relative scales, we need not even involve the idealisation of asymptotic infinity directly (see item R(iv)).
The binary pulsar
In 1974, almost two decades after the introduction of the sticky bead argument, Russell Hulse and Joseph Taylor discovered a binary star system (now known as the Hulse-Taylor binary) consisting of a neutron star and a pulsar, which emitted regular pulses detectable on Earth. Theoretically, that system would be a source of gravitational waves. The question then was: would the quadrupole formula give a reasonable approximation of the source strength of this system? Between then and the early 1980's, joint efforts by many theoreticians - most notably Clifford Will and Thibault Damour, who introduced a new method especially tailored for computing the third-post-Minkowskian gravitational field outside two compact bodies - culminated in a successful calculation, making precise predictions in a post-Newtonian approximation up to fifth order (c⁻⁵). The precise mathematical results about the orbital decay agreed with exquisite precision both with the standard quadrupole formula for gravitational radiation in the weak-field limit and with the observations of the Hulse-Taylor pulsar. The evidence pointed directly to radiative energy transfer. 10 Still, in recent conversations and in print, some philosophers demur. One argument, the more easily countered, is that: the pulsars (modelled as dust particles) are in free-fall. Hence they move inertially. Shouldn't their kinematic state therefore remain unaltered? Duerr (2019, p. 26) But the more common and (superficially) convincing argument against the received wisdom about the binary pulsar is that we can bypass explanations employing energy transfer by resorting to numerical simulations for solutions of the Einstein field equations directly.
Let us take these comments in turn. For a freely-falling cloud of pressureless dust particles, one can find an adapted coordinate system in which the positions of the particles don't change. Without any calculation (Duerr, 2019, Sec. 2.1), we can conclude that the naive notion of kinetic energy adapted to this coordinate system cannot change.
First, we point out that under the motion induced by a passing gravitational wave what changes are the relative positions of particles, not their individual velocities. This relative motion is best described by the geodesic deviation equation. For v^a the tangent vector to the time-like geodesics, and r^a the transversal displacement vector, we get an entirely covariant, non-zero relative acceleration:

a^a = v^b \nabla_b \left( v^c \nabla_c r^a \right) = -R^a{}_{bcd}\, v^b r^c v^d. \qquad (3.1)

So under a frame that is adapted to the time-like geodesics, we can easily associate a non-trivial notion of kinetic energy to the accelerated relative motion. Second, the fact that motion is geodesic requires further analysis in order to conclude that it emits or absorbs, or fails to emit or absorb, radiation, or whether that provides a suitable explanation. 11 Indeed, as mentioned in the introduction, Rohrlich and Fulton showed in the 1960's that a freely falling charge could be taken to radiate, in Earth's frame, but it would have no detectable radiation to a comoving observer. But the matter here is subtle, and involves global properties of the accelerated observer in Earth's frame and of the inertial charge. 12 Thus, in their review article, (de Almeida & Saa, 2006, p. 2) write: We need to recognize that the concept of radiation has no absolute meaning and depends both on the radiation field and the state of motion of the observer.
The second argument, about the explanatory usefulness of radiation, can be illuminated by the question of "radiation reaction", originally discussed by Dirac and by DeWitt and Brehme. Let us elaborate.
A charged particle or an extended or spinning body-like the components of the binary pulsar-can't be taken to follow the paths of neutral point-like particles, namely, the geodesics of a background spacetime. For unlike neutral point-like particles, these objects necessarily contribute to the energy-momentum tensor, and thus change the spacetime geometry. The approximation schemes used to find their true trajectory given a background geometry are precisely what we mean here by radiation reaction. The idea is that the background geometry scatters the electromagnetic, or the gravitational, field sourced by the particle whose motion we are trying to determine. In more detail, an electromagnetic field originates on the charge in the past, is scattered by a gravitational field some distance away, and then produces a non-zero force in the present. Though the equation determining the deviation from the comparative motion of the uncharged particle will involve non-local contributions-through the value of Green's functions acting on the (retarded) past of the particle-there is nothing mysterious in the non-local character of the force. It is the result of reducing the interaction between fields (gravitational and electromagnetic) to a finite-dimensional description in terms of the source's motion alone (see Quinn & Wald (1999) and Poisson et al. (2011) for a comprehensive review).
10 But these computations do not give precise predictions for the gravitational wave-forms: for that, we need numerical methods in full general relativity. This is a distinct and enormously complicated task, whose breakthrough moment came much later, in Pretorius (2005).
11 Here is a proof of principle for doubting a necessary connection: in the Kaluza-Klein framework electromagnetic forces are geometrised. In that framework, charged particles undergoing motion in a background electromagnetic field are interpreted as following geodesics, and nonetheless absorb radiation.
12 The main difficulty in locally analyzing the radiation emitted by an inertial charge in the context Rohrlich and Fulton discussed is the fact that, in general, because the current associated with such a charge does not have a compact support, it cannot be completely confined in any Rindler wedge. A different definition of inertial charges-that takes the infinite limit of acceleration and thus confines these sources to the Rindler wedge-concludes that these accelerated observers would find no radiation.
And again, in certain contexts in which we can define suitable local conservation laws, namely, in a globally hyperbolic, stationary spacetime, we can use the radiation reaction to give a precise account of quasi-local conservation laws. 13 As described in (Quinn & Wald, 1999, p. 3): "This provides justification for the use of energy and angular momentum conservation to compute the decay of orbits due to radiation reaction." Reference to radiation and scattering is crucial to this explanation. How could we delegitimize the use of such effective terms within a theory without condemning the vast majority of physical concepts to the same fate?
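For orientation, here is the electromagnetic self-force equation in curved spacetime in roughly the form reviewed by Poisson et al. (2011); we supply it here for illustration, transcribed from the review literature from memory, and signs and factors vary between conventions:
$$m\, a^\mu = q^2 \left( \delta^\mu{}_\nu + u^\mu u_\nu \right) \left( \frac{2}{3} \frac{D a^\nu}{d\tau} + \frac{1}{3} R^\nu{}_\lambda u^\lambda \right) + 2 q^2\, u_\nu \int_{-\infty}^{\tau^-} \nabla^{[\mu} G^{\nu]}_{+\,\lambda'}\big(z(\tau), z(\tau')\big)\, u^{\lambda'}\, d\tau' .$$
The tail integral over the retarded Green's function $G_+$ is exactly the non-local, scattering-off-the-background contribution described above: the force at proper time $\tau$ depends on the entire past history of the worldline $z(\tau')$.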
We will come back to this topic about the concepts of physics in the Conclusions. But for the main message of this paper we don't need to be so general: the explanatory utility of the concept of radiation is again analogous to the electromagnetic case; the strength of the analogy-and the ubiquitous use of electromagnetic radiation in physics-suffices to shift the onus of explanation to the sceptic about gravitational radiation and its energy transfer.
Being explicit: Yes, it is true that the motion of the LIGO detector plates can in principle be described without ever using the notion of energy transfer. Such motion could be entirely accounted for by free fall and violations of free fall due to non-gravitational forces. But the same formal manoeuvres are available in electromagnetism. So, if we discard an "explanation" because there is in principle an account that is more general, then we must discard energy transfer as an explanation equally for gravity and for electromagnetism. Conversely, if we count the use of regularities that hold in special regimes as explanatory, then energy transfer by waves is equally explanatory in general relativity and in electromagnetism.
Asymptotic flatness and dynamical isolation
Lastly, we turn to the role of asymptotic flatness in considerations of energy conservation. This is a last resort for the sceptic, who may try to bite the bullet and deflate the ontic significance of any wave's energy transfer on generic curved spacetimes. Duerr (2019) writes: [p. 30] Asymptotic flatness would have to be shown to be a "working posit" of (i.e. essential for) relativistic astrophysics. But this is questionable. ...
[p. 34... asymptotic flatness is] an idealisation in Norton's sense: The embedding spacetime is an unrealistic, surrogate spacetime. Consequently, realism about notions of gravitational energy based on asymptotic flatness isn't straightforward.
The argument here is that, since asymptotic flatness is not fundamental in general relativity, any concept or quantity that depends on this assumption must also be less than fundamental in the theory. This is a blunt argument, condemning our understanding of energy transfer, simpliciter, in generic curved backgrounds.
And, amongst philosophers, we have witnessed a distinct, frequent misunderstanding of the meaning of the asymptotic integrals that are involved in the definitions of the relevant energies. Namely, that different notions of energy are 'holistic', that they cannot distinguish between the energies due to gravitational waves and due to other, non-radiative contributions. For instance, in a recent talk, Fletcher describes the situation thus: 14 It is extremely tempting, on the story that I have given, to say that because we find that the Bondi energy decreases with time, that gravitational radiation carries away positive energy from a radiating system. But [...] we should resist getting carried away, because strictly speaking gravitational waves don't have any Bondi energy of their own. [...] these global notions of energy are assigned to whole spacetimes, so we can't divide the energy content into one part which is associated with one part of a spacetime and another [...] with [...] the gravitational waves. [...] therefore we can't say that the gravitational waves have Bondi energy that is carried away. All we can say is that the Bondi energy decreases.
Are they right? Let us once again take their claims in turn: first Duerr and then Fletcher. Is asymptotic flatness a working posit for astrophysics? Agreed: not in full generality, but it surely is a working posit to study gravitationally isolated subsystems. What undergirds the assumption of asymptotic flatness is just dynamical isolation of subsystems; conceptually, dynamical isolation is what grounds both an unambiguous separation of radiative and Coulombic modes and conservation of energy. To talk about energy transfer, we need to be able to clearly distinguish subsystems within the theory. And again, this condition (of dynamical isolation) is necessary to discuss energy conservation in general -even in the familiar case of Newtonian mechanics-not just in general relativity.
In general relativity we cannot set the gravitational field to zero at a supposed boundary between subsystem and environment. What we can do is demand that gravitational (tidal) forces become less and less pronounced at far enough distances from the subsystem. Once again, there is no decree in the theory that every spacetime should have subsystems that are sufficiently isolated: the very idea of removing oneself farther and farther from a subsystem can fail. But if we want to talk about concepts that require dynamical isolation, such as radiation and energy, we have no other recourse. Here is Penrose (1982, p. 182) making this exact point: "Asymptotically flat spacetimes are interesting, not because they are thought to be realistic models for the entire universe, but because they describe the gravitational fields of isolated systems, and because it is only with asymptotic flatness that general relativity begins to relate in a clear way to many of the important aspects of the rest of physics, such as energy, momentum, radiation [...]". Indeed, this kind of assumption extends even to Newtonian mechanics. There, to apply the laws and obtain conservation of energy, we must describe the system in an inertial reference frame. But how do we ensure our description is in an inertial frame?
Corollary IV in Newton's Principia says that, if we can ignore external influences on some subsystem, the center of mass of said subsystem will move uniformly with respect to absolute space (or be at rest). 15 Jointly with Corollary V, 16 we conclude that, if we can ignore external influences and are not in circular motion, we can for all practical purposes treat the center of mass of our subsystem as being at rest with respect to absolute space. The underlying assumption of 'dynamical isolation' here is that our subsystem is sufficiently far removed from other, external, bodies. Is this a fully general presupposition of Newtonian mechanics? Once obtained, will it obtain for all time? 'No' is a conceivable answer to both questions. Nonetheless, the presupposition is necessary for conservation of energy, along with most other practical applications of the theory. 17 Let us now turn to the second type of misunderstanding, about the holistic nature of asymptotic notions of energy. A common mistake is to take the relevant notions of energy to arise from integrals over entire Cauchy surfaces-for ADM energy-momentum-or even 'from slices that don't intersect radiation escaping out to infinity'. In truth, the relevant quantities are strictly integrals over either asymptotic null infinity or over asymptotic spatial infinity. 18 And so the relevant integrals are already calculated over a surface where it is possible to uniquely distinguish the radiative from the non-radiative components.
The integrated Bondi energy flux is given in (B.13) in Appendix B; the ADM energy is better known, but would require the introduction of terminology that is beside the point of this paper. The ADM energy is usually interpreted as the total energy available in the spacetime. As it is a quantity calculated at spatial asymptotic infinity, it is 'static', and does not evolve, unlike the Bondi energy. So what is the relation between the Bondi and ADM energies? Consistently, the Bondi energy can be interpreted as the energy remaining in the spacetime at the "retarded time" after the emission of gravitational radiation. For, as shown by Ashtekar & Magnon-Ashtekar (1979), the Bondi energy at a certain cross-section of I differs from the ADM energy by the integral (B.13), up to the retarded time given by that cross-section. So Fletcher may be right that the Bondi energy does not distinguish the energy due to radiation, but it is the Bondi-energy flux that can be seen as 'subtracting energy' from the spacetime. Thus the difference of Bondi energies at two different times is unambiguously associated to the energy of the radiation that leaves spacetime in that interval.
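In symbols (with the normalisation of the flux formula (B.13) of Appendix B; in other conventions a prefactor such as $1/32\pi$ appears), the Bondi mass-loss formula reads
$$E_B(u_2) - E_B(u_1) = -\int_{u_1}^{u_2} du \oint_{S^2} N_{ab}\, N_{cd}\, q^{ca} q^{bd}\, d^2\omega \;\leq\; 0 ,$$
so the decrease of the Bondi energy between two retarded times is a manifestly non-negative flux built from the News tensor alone, that is, from the radiative data.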
In sum, the radiated energy depends only on the components that encode gravitational waves, as understood both in the linearized and asymptotic limit. And these different notions of energy are remarkably consistent: if we understand the energy at spatial infinity as the energy of the entire spacetime, we can understand a difference between the total energy at spatial infinity and the energy radiated away along null infinity up to a given retarded time as the energy left in the spacetime at that given (retarded) time. Thus, if a part of the gravitational wave is absorbed and turned into e.g. thermal energy, we will find a corresponding subtraction in the energy radiated away to infinity.
Again, an analogy applies to electromagnetism: the Gauss law gives us the total charge in the spacetime; this is an integral over spatial infinity. A different integral over different components gives us the radiated electromagnetic energy from the spacetime. 19 Thus, contrary to a straightforward interpretation of Fletcher's passages, asymptotically, we can precisely separate the energy of the system into one part which is associated with the gravitational waves and one part that is related to other charges of the isolated subsystem.
16 Corollary V reads: "The motions of bodies included in a given space are the same among themselves, whether that space is at rest, or moves uniformly forward in a right line without any circular motion."
17 'Ignoring external influences' is subtle business: it does not necessarily mean that all external forces on a subsystem have to vanish. As argued in Saunders (2013), to empirically apply the laws, Newton has to implicitly resort to Cor. VI, which says that: "If bodies, any how moved among themselves, are urged in the direction of parallel lines by equal accelerative forces; they will all continue to move among themselves, after the same manner as if they had been urged by no such forces." So we can empirically apply the laws if our subsystem is sufficiently distant from external sources so that they would act equally (in parallel) on all of its components.
18 Of course, one could use Stokes' theorem to convert these integrals into an integral of different dimension, but the details of the fields on the further dimensions would be irrelevant to the value of the integral.
19 The analogy here is only limited by the fact that the total charge is constant, since it does not track a loss of energy and there is no flux of charges at infinity.
And while it is true that constants such as the ADM mass of a spacetime may not discern e.g. whether a given spherically symmetric solution has singular behavior (i.e. vacuum Schwarzschild) or a star at its centre, there is no reason to think this is problematic for any of the concepts discussed here. 20

4.1 Why can't we define gravitational waves in the bulk of a generic spacetime?
Finally, let us address a possible point of confusion: why couldn't we take $\Psi_4$, described in Appendix B, to describe gravitational radiation, for any region of a generic spacetime? The short answer is that the choice of null tetrad is arbitrary, and different choices can change the values of the Weyl scalars. But in certain spacetimes there are physically significant choices that can uniquely determine the values of (some of) the Weyl scalars. This is the case of algebraically special spacetimes; those that characterise gravitational radiation are of Type N. In this case, there is a particular choice of null direction $k$, representing the direction of the wave, such that
$$C_{abcd}\, k^d = 0 . \qquad (4.1)$$
For these types of spacetimes, we find, for some compatible choice of null tetrad:
$$\Psi_0 = \Psi_1 = \Psi_2 = \Psi_3 = 0, \qquad \Psi_4 \neq 0 .$$
See Szekeres (1965) for a derivation of the explicit geometric relation between $\Psi_4$ and gravitational radiation in Type N spacetimes (and for the geometric interpretations of the other Weyl scalars discussed in Appendix B as well). One important point for this paper is that, since these solutions are algebraically special, they will come with some background structure which can be used to define conservation laws (see e.g. Aksteiner et al. (2021) for a recent review of conserved quantities in algebraically special spacetimes). 21 Going back to the generic case of asymptotically Minkowski spacetime, using the smooth limit of the Schouten tensor to I and its relation to the Weyl tensor, we straightforwardly obtain, in that limit, a constraint equation of precisely the form (4.1), saying that asymptotically we approach a spacetime that (can) include a gravitational wave as understood in the algebraically special case. 22 Consistently, as remarked after equation (B.11), the entire, coordinate-independent conformal geometry of I is encoded in the conformally invariant limit of the shear, i.e. in the conformally invariant limit of $\nabla_a \ell_b$ as it approaches I +. Since it is encoded by a varying shear, radiation acquires a geometric-coordinate-independent-gloss. In sum, the conformal geometry of I + is entirely determined by the radiative degrees of freedom; and conversely, by construing radiation geometrically we limit the role that different choices of frame can have on its value. 23 The moral is that asymptotic infinity gives us enough structure to unambiguously define gravitational waves because it is associated with a "direction" that is infinitely far away from compact sources: $n$ rules I-it gives a choice of retarded times at I-and $\ell$ is the radial direction away from I towards the bulk. In the bulk, even if we can physically characterize $\ell$ and $n$, we would not be able to characterise radiation independently of the remaining choices of the frame/coordinate system. 24
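An exact example of the algebraically special case just discussed may help; it is a standard one, supplied by us rather than taken from the original text. Vacuum pp-waves in Brinkmann coordinates (in one common convention),
$$ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2, \qquad \left(\partial_x^2 + \partial_y^2\right) H = 0 ,$$
are of Petrov Type N with repeated principal null direction $k = \partial_v$, which is covariantly constant; this $k$ supplies exactly the kind of background structure that grounds conservation laws in the algebraically special setting.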
Conclusions
Even the philosophers that are sceptical about gravitational radiation will concede that a gravitational wave is special in that it has a long range: how could they not? But what these sceptics fail to realise is that this is a constitutive property of radiation, and that it is shared by electromagnetic waves. Generically, both types of fields come to us as a whole; and at close range there are no apparent joints to be carved. Nonetheless, there are conditions under which the joints become apparent-at great distances from the source for example-and these are the conditions under which we understand both radiation and energy transfer.
In this paper, we have not touched on topics in the philosophy of science that may be relevant to this issue. But one seems unavoidable, and indeed was briefly touched on at the end of Section 3.2. That is the idea that the only notions that are well defined are those that can be defined in all regimes encompassed by a theory; and that only such notions can claim to be part of the furniture of the world (according to the theory). As we have emphasised, the "local energy of gravity" doesn't fit that bill, since it is not defined in all regimes. Thus, the sceptic will say, "the local energy of gravity does not exist". But physics rarely trades in fundamental ontology: it wouldn't have gotten much done if it did! Physics, and indeed science more generally, trades in effective notions-like the energy of the wave, the horizon, the orbit, the black hole-that are perfectly well-defined for specific solutions, or for specific regimes, and that have exceptional explanatory value. A philosophy of science that denies any ontological status to these notions leads to an impoverished picture of science, that we must reject.
In the case of general relativity, gravitational waves had particularly turbulent origins, and were only accepted with the introduction of a fully invariant account. This account required either algebraically special spacetimes and perturbations therein, or asymptotic infinity. In both cases, these further structures could be related to notions of conservation, which further established the reality of the waves. This is reassuring, and another example of the great unity of theoretical physics. But it would be a mistake to elevate this reassurance to a form of sanctioning: our theoretical commitment to robust wave patterns is not conditional on their being subject to energy conservation. Nor should we seek such sanction: 'energeticism'-the nineteenth-century hope of reducing all natural phenomena to 'manifestations of energy'-is long dead in theoretical physics, and for very good reasons.
We finish with two quotes: both confirm our main message, even if the second aimed to sum up the very scepticism we are here rebuffing. (D'Ambrosio et al., 2022, p. 100) write: The fluxes [see Eq. (B.13)] represent a landmark in the discussion on the existence of gravitational waves, which culminated in the nineteen-sixties. Since the inception of gravitational waves in 1916 by Einstein, there has been much debate about whether they are a real physical phenomenon, or whether they are a mere coordinate artifact. Eventually, this dispute was settled by the mathematically rigorous framework presented here, as it provides a gauge-invariant description of gravitational waves. In particular, it provides a gauge-invariant description of the flux of energy and momentum carried by gravitational waves.
And towards the end of his paper, (Duerr, 2019, p. 35) writes: Three considerations bear upon the choice between failure of energy conservation and energy transfer: 1. the contingency of energy conservation on symmetries, 2. the existence of a satisfactory formal account/representation of the energy transport, and 3. the explanatory value of postulating energy transport rather than energy decrease simpliciter, respectively.
We have here shown that gravitational radiation does as well as electromagnetic radiation on all accounts.
A Electromagnetic radiation
Wave equations for the electromagnetic field are easily derived from Maxwell's equations, and so are formal solutions to these equations in the presence of sources. Generally, the radiation field should have three characteristic properties: it should oscillate, it should be transversal, and it should decay as $1/r$ as we move away from the source. The difficult question is whether we can determine, locally, whether a given source, $J^\mu := (\rho, \vec{j})$, generates a radiation field. And the answer to the difficult question is that we can only determine whether a given source generates radiation asymptotically far away from the source.
Consider an electromagnetic source $J^\mu$ confined to a finite spatial region at the scale $d$. Given this source, the vector potential that satisfies the Maxwell equations is the retarded solution:
$$A^\mu(t, \vec{x}) = \int d^3x'\, \frac{J^\mu\big(t - |\vec{x} - \vec{x}'|,\ \vec{x}'\big)}{|\vec{x} - \vec{x}'|} . \qquad \text{(A.1)}$$
The difficulty mentioned above arises from the fact that the source may have static parts which only produce Coulombic fields and it may have radiating contributions. But the fields come to us as a whole: we do not yet know how to disentangle the different contributions.
Thus, assume there is radiation and that it has a wavelength $\lambda = 2\pi/\omega$. Moreover, assume an observer is located at the radial distance $r$ from the source (we will compare $r$ with $d$ later).
We can decompose our source into static contributions, $J^\mu_{\rm stat}(\vec{x})$, and radiative contributions, $J^\mu_{\rm rad}(t, \vec{x})$, where the radiative contributions are assumed, without loss of generality, to oscillate like $e^{-i\omega t} J^\mu_0(\vec{x})$. That is:
$$J^\mu(t, \vec{x}) = J^\mu_{\rm stat}(\vec{x}) + e^{-i\omega t}\, J^\mu_0(\vec{x}) . \qquad \text{(A.2)}$$
By inserting this ansatz into the formal solution (A.1), we obtain
$$A^\mu(t, \vec{x}) = A^\mu_{\rm stat}(\vec{x}) + e^{-i\omega t} \int d^3x'\, J^\mu_0(\vec{x}')\, \frac{e^{i\omega |\vec{x} - \vec{x}'|}}{|\vec{x} - \vec{x}'|} , \qquad \text{(A.3)}$$
where $A^\mu_{\rm stat}(\vec{x})$ contains the static contributions. Finally, we must take into account the position of the observer relative to the source, which can be organised into three different zones:
1. The near zone: $d \ll r \ll \lambda$
2. The transition zone: $d \ll r \simeq \lambda$
3. The far/radiation zone: $d \ll \lambda \ll r$
The behavior of the vector potential is different in the three zones, and this directly impacts the observer's ability to locally infer the existence of electromagnetic radiation.
Let us analyse the near and the far zone. In the former case, the condition $r \ll \lambda$ allows us to expand the exponential in (A.3), and by also applying an expansion of (A.3) in spherical harmonics we find
$$A^\mu(t, \vec{x}) \simeq A^\mu_{\rm stat}(\vec{x}) + e^{-i\omega t} \sum_{l,m} \frac{4\pi}{2l+1}\, \frac{Y_{lm}(\theta, \phi)}{r^{l+1}} \int d^3x'\, J^\mu_0(\vec{x}')\, r'^{\,l}\, Y^*_{lm}(\theta', \phi') .$$
Although this expression is time-dependent, the dependence is not the one expected of radiation, since the oscillation does not depend on the distance to the source. Fields that oscillate in this way are called quasi-static (Jackson, 1975). Moreover, the fall-off is not $1/r$, but rather a sum over terms with $1/r^{l+1}$ as coefficients. Again, this is not the behavior of a radiation field, and so an observer would not be able to unambiguously infer the existence of electromagnetic waves in this region.
In the far zone, we implement the condition $\lambda \ll r$ and expand $|\vec{x} - \vec{x}'|$ as
$$|\vec{x} - \vec{x}'| \simeq r - \vec{n} \cdot \vec{x}' ,$$
where $\vec{n}$ is a unit vector in the direction of $\vec{x}$. Using this approximation in (A.3), we obtain:
$$A^\mu(t, \vec{x}) \simeq e^{-i\omega t}\, \frac{e^{i\omega r}}{r} \int d^3x'\, J^\mu_0(\vec{x}')\, e^{-i\omega\, \vec{n} \cdot \vec{x}'} + A^\mu_{\rm stat}(\vec{x}) .$$
The first term in the above expression has the expected properties: it oscillates, it decays like $1/r$, and it is transversal: this is a genuine radiation field. The conclusion of this argument is that the observer has to be far enough away from the source to effectively detect radiation. Too close, and the vector potential is quasi-static.
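A toy numerical illustration of this near-zone/far-zone crossover (our addition, not from the appendix): the magnitude of the magnetic field of an ideal oscillating (Hertzian) dipole is, up to constant prefactors, $|H_\phi| \propto \sqrt{k^2/r^2 + 1/r^4}$, so only for $kr \gg 1$ does $r\,|H|$ approach a constant, the $1/r$ radiation behaviour.

```python
import numpy as np

k = 1.0  # wave number, so the wavelength scale is 2*pi
for kr in [0.01, 0.1, 1.0, 10.0, 100.0]:
    r = kr / k
    H = np.sqrt((k / r)**2 + 1.0 / r**4)   # |H_phi| up to constant prefactors
    print(f"kr = {kr:6.2f}   r*|H| = {r * H:10.4f}")
# The printout shows r*|H| blowing up like 1/r in the near zone
# and flattening to the constant k in the far zone.
```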
One could attempt to characterise radiation by the energy and momentum that it carries. But while the flux carries energy and momentum, a non-zero flux may have no associated electromagnetic radiation. As mentioned in the main text, the Poynting vector alone cannot parse radiation and other field contributions, except asymptotically.
For consider the Poynting vector $\vec{S} := \vec{E} \times \vec{B}$ with its associated Poynting flux, $\oint_{S^2} \vec{S} \cdot d^2\vec{\sigma}$, where $S^2$ is a 2-sphere and $d^2\sigma = r^2 \sin\theta\, d\theta\, d\phi$. The Poynting vector is not a Lorentz-invariant quantity and it therefore depends on a choice of reference frame. As an example, consider the Coulomb solution, i.e., the field of a point charge for an observer in the rest frame of the particle. Clearly, for such an observer the magnetic field is zero and consequently the Poynting vector vanishes. But a boosted observer will see a current, rather than a static charge, and thus an electric and a magnetic field, and so:
$$\vec{S} = \vec{E} \times \vec{B} \neq 0 .$$
But the Poynting flux carries information about electromagnetic radiation in the asymptotic limit. An explicit computation for the above example shows that the boosted observer sees a Poynting vector which decays like $1/r^4$, and thus the Poynting flux of the boosted observer vanishes at infinity. That is, we obtain
$$\lim_{r \to \infty} \oint_{S^2} \vec{S} \cdot d^2\vec{\sigma} = 0 .$$
Both observers now agree that there is no electromagnetic radiation. Going to infinity filters all but the radiative parts of a field. More precisely, the Poynting flux of static contributions vanishes at infinity while the Poynting flux of electromagnetic waves is non-zero. The standard way to disentangle different components according to their fall-off conditions is to use a conformal compactification of asymptotically flat spacetimes, and a Penrose-Newman null tetrad decomposition, to which we now turn. (See D'Ambrosio et al., 2022, Chs. 3 and 4.)
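Spelling out the boosted-charge estimate in formulas (a standard order-of-magnitude argument, supplied here for concreteness; Gaussian-type units with $c = 1$): the boosted Coulomb field has $|\vec{E}| \sim q/r^2$ and $\vec{B} = \vec{v} \times \vec{E}$, so
$$|\vec{S}| = |\vec{E} \times \vec{B}| \sim \frac{q^2 v}{r^4}, \qquad \oint_{S^2_r} \vec{S} \cdot d^2\vec{\sigma} \sim \frac{q^2 v}{r^2} \xrightarrow{\; r \to \infty \;} 0 ,$$
whereas a radiation field with $|\vec{E}|, |\vec{B}| \sim 1/r$ gives a flux independent of $r$.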
B Gravitational radiation
Let $(\hat{M}, \hat{g}_{ab})$ be a physical spacetime which satisfies Einstein's field equations with vanishing cosmological constant, $\hat{R}_{ab} - \tfrac{1}{2} \hat{R}\, \hat{g}_{ab} = 8\pi\, \hat{T}_{ab}$. 25 We call this spacetime asymptotically Minkowski if it satisfies the three following conditions:
1) There exists a conformal completion $(M, g_{ab}, \Omega)$ such that $M := \hat{M} \cup$ I is a manifold with a boundary and the boundary has the topology I $\simeq S^2 \times \mathbb{R}$. Moreover, the conformally rescaled metric and the physical metric are related by $g_{ab} = \Omega^2\, \hat{g}_{ab}$. The conformal factor is assumed to satisfy $\Omega \,\hat{=}\, 0$ and $\nabla_a \Omega \,\hat{\neq}\, 0$, where hatted equalities are equalities on I.
2) $\Omega^{-2}\, \hat{T}_{ab}$ has a smooth limit to I.
3) The normal vector field to I, $n^a := g^{ab} \nabla_b \Omega \big|_{\Omega=0}$, is complete.
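As a concrete check of this definition (a standard computation, supplied here for illustration): for Minkowski spacetime in retarded coordinates $u = t - r$,
$$\hat{g} = -du^2 - 2\, du\, dr + r^2\, d\omega^2, \qquad \Omega = \frac{1}{r} \;\Rightarrow\; g = \Omega^2 \hat{g} = -\Omega^2\, du^2 + 2\, du\, d\Omega + d\omega^2 ,$$
which extends smoothly to $\Omega = 0$; there $g$ restricts to the degenerate metric $d\omega^2$ on I$^+ \simeq S^2 \times \mathbb{R}$, with null normal $n^a = g^{ab} \nabla_b \Omega$.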
This definition still allows full conformal freedom at I; a canonical partial fixing of this freedom is given by a divergence-free conformal frame, for which $\nabla_a n^a \,\hat{=}\, 0$. The choice of such a frame still allows a conformal rescaling on $S^2$ (that is dragged along by $n$). Moreover, using the asymptotic limit of the Einstein equations in the conformally completed spacetime, we obtain:
$$\nabla_a n_b \,\hat{=}\, 0 . \qquad \text{(B.1)}$$
The conditions $\Omega \,\hat{=}\, 0$ and $\nabla_a \Omega \,\hat{\neq}\, 0$ tell us that $\Omega$ is a good coordinate near I, that I has a well-defined normal $n_a := \nabla_a \Omega |_{\Omega=0}$, and that $\Omega$ is heuristically the same as $1/r$. 26 Thus the condition that $\Omega^{-2}\, \hat{T}_{ab}$ has a smooth limit to I tells us that $\hat{T}_{ab}$ falls off at least like $1/r^2$. One finds that this is a condition which is satisfied by all reasonable compact sources.
25 Hatted quantities are the physical quantities, so why encumber notation with hats? Because in this subject the corresponding asymptotically completed notions are more used, and so they are honoured with the unhatted symbols.
26 Were we to take a stronger convergence for the conformal completion, say $\Omega = 1/r^2$, then we wouldn't obtain a null generator from the conformal factor, since then $\nabla_a \Omega \,\hat{=}\, 0$.
The null tetrad is given, in the physical metric, by null vectors $n, \ell, m, \bar{m}$, satisfying the following conditions:
$$n^a \ell_a = -1, \qquad m^a \bar{m}_a = 1, \qquad \text{(B.2)}$$
and all other inner products vanishing. This definition implies we can write the metric as:
$$g_{ab} = -2\, \ell_{(a} n_{b)} + 2\, m_{(a} \bar{m}_{b)} . \qquad \text{(B.3)}$$
Given a conformal compactification, there exists a corresponding conformally rescaled null tetrad $\ell^a$ and $m, \bar{m}$ that is well defined at I (with $\ell^a = \Omega^{-2}\, \hat{\ell}^a$ and $m^a = \Omega^{-1}\, \hat{m}^a$). 27 And we can then use these to decompose the electromagnetic field tensor as:
$$\Phi_0 := F_{ab}\, \ell^a m^b, \qquad \Phi_1 := \tfrac{1}{2}\, F_{ab} \left( \ell^a n^b + \bar{m}^a m^b \right), \qquad \Phi_2 := F_{ab}\, n^a \bar{m}^b . \qquad \text{(B.4)}$$
Given the relationship between $\Omega$ and $1/r$, and the relationship between the physical tetrads and the conformally completed one, we obtain a Peeling theorem for the electromagnetic tensor: a definite fall-off rate for each of the scalars above. The only component that falls off as $1/r$ is $\Phi_2$: that is the component that we asymptotically associate with radiation.
In the gravitational case, we apply a similar treatment to the Weyl curvature. Although, unlike the Faraday tensor, the Weyl curvature vanishes at I (see D'Ambrosio et al., 2022, Sec. 3.D), one applies the decomposition to the conformally rescaled Weyl tensor, $K_{abcd} = \Omega^{-1} C_{abcd}$:
$$\Psi_4 := K_{abcd}\, n^a \bar{m}^b n^c \bar{m}^d, \quad \Psi_3 := K_{abcd}\, \ell^a n^b \bar{m}^c n^d, \quad \Psi_2 := K_{abcd}\, \ell^a m^b \bar{m}^c n^d, \quad \Psi_1 := K_{abcd}\, \ell^a n^b \ell^c m^d, \quad \Psi_0 := K_{abcd}\, \ell^a m^b \ell^c m^d . \qquad \text{(B.5)}$$
From this we can find again a Peeling theorem that leads, just as in the case of electromagnetism, to a neat separation of the different components. We find that $\Psi_4$ encodes the radiation field, since it decays like $1/r$; and $\Psi_2$ encodes the "Coulombic" information of the gravitational field (i.e., the mass of the source which generates the field). That is, denoting the limit of the Weyl scalars at I by $\Psi^\circ$, one can write:
$$\Psi^\circ_4 = -\ddot{\bar{\sigma}}^\circ ,$$
which, in the linearized regime, reduces to $\Psi^\circ_4 = \ddot{h}_+ - i\, \ddot{h}_\times$, where $\sigma^\circ$ is the asymptotic shear, defined as
$$\sigma^\circ := \big( m^a m^b \nabla_a \ell_b \big) \big|_{\mathrm{I}^+} ,$$
where we have extended all fields into the bulk of spacetime. 27
27 In fact, it is convenient to go in the opposite direction: defining a null tetrad on I + and then dragging it back into the bulk (see D'Ambrosio et al., 2022, Sec. 3.C). First, we choose $n^a$ as the first null normal to I. This vector field is defined on all of I and it allows us to introduce an affine parameter $u$ which foliates I. We then introduce $(\theta, \phi)$ coordinates on the $u = const.$ leaves of the foliation. Put together, $(u, \theta, \phi)$ provides us with a globally defined coordinate system for I. Next, we introduce a Newman-Penrose null tetrad $\{l^a, n^a, m^a, \bar{m}^a\}$ on a cross-section $S^2$. This "reference" null tetrad is normalized in the usual way and it serves as "generator" of a null tetrad on all of I. In fact, we can generate such a null tetrad by Lie dragging (or parallel transporting, which is the same in this context) the reference tetrad along $n^a$ (or along its integral lines). Finally, we extend the null tetrad from I into a neighborhood of I by Lie dragging it along $\ell^a$ into the bulk of spacetime.
Thus, we have established a connection between the Newman-Penrose scalar $\Psi^\circ_4$ and the strains of the gravitational wave we use in the linearized theory. The label "radiation field" is thus well-justified for $\Psi^\circ_4$. It is worth remarking that this links theory to observations and data analysis. In fact, $\Psi^\circ_4$ is a key quantity which is computed in Numerical Relativity, and integrating it twice over $du$ isolates the strains. This is what is ultimately used in waveform models and plotted in the famous waveform plots.
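As an illustration of that last remark (a minimal sketch of standard numerical relativity practice, not code from this paper): the double time integration is usually done in the frequency domain with a low-frequency clamp, the "fixed-frequency integration" of Reisswig & Pollney (2011). The cutoff f0 below is an assumption of the sketch and must sit below the lowest physical frequency in the signal.

```python
import numpy as np

def psi4_to_strain(t, psi4, f0):
    """Recover the complex strain from a uniformly sampled Psi4 time series."""
    dt = t[1] - t[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(len(t), dt)
    omega0 = 2.0 * np.pi * f0
    # Dividing by (i*omega)^2 = -omega^2 integrates twice in time;
    # clamping omega^2 from below suppresses spurious low-frequency drifts.
    omega2 = np.maximum(omega**2, omega0**2)
    h_tilde = -np.fft.fft(psi4) / omega2
    return np.fft.ifft(h_tilde)  # h = h_plus - i * h_cross, up to conventions
```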
There is, of course, much more to be said, about conservation laws-which refer to the symmetry group of I -and, equally important, about how σ • (u, θ, φ) (and its complex conjugate) encodes the entire conformal geometry of I , with a vanishing shear corresponding to no radiation.
Indeed, the Bondi News tensor is constructed entirely from geometric properties of I: namely, it is uniquely determined by the trace-free part of the Schouten tensor at I. And it is related to the asymptotic shear as follows:
$$N_{ab} = 2\, \mathcal{L}_n \sigma^\circ_{ab} =: \dot{\sigma}^\circ_{ab} . \qquad \text{(B.12)}$$
In terms of conservation laws, the energy flux across a portion of I can be expressed as
$$F_E(\Delta \mathrm{I}) = \int_{\Delta \mathrm{I}} N_{ab}\, N_{cd}\, q^{ca} q^{bd}\, du\, d^2\omega . \qquad \text{(B.13)}$$
This is as much as we can fit into this small appendix. It is the trace-freeness that guarantees that the end result is fully conformally invariant. Since the shear tensor $\sigma_{ab}$ is transverse, trace-less, and symmetric (that is, it satisfies $\sigma_{ab} n^b = 0$, $q^{ab} \sigma_{ab} = 0$, and $\sigma_{[ab]} = 0$), the shear is of the form
$$\sigma_{ab} = -\left( \sigma^\circ\, m_a m_b + \bar{\sigma}^\circ\, \bar{m}_a \bar{m}_b \right) .$$
From this equation we obtain (B.7).
Metabolic dysfunction-associated fatty liver disease increases the risk of complications after radical resection in patients with hepatocellular carcinoma
Background and aims The prevalence of metabolic dysfunction-associated fatty liver disease (MAFLD) in hepatocellular carcinoma (HCC) patients is increasing, yet its association with postoperative complications of HCC remains unclear. The aim of this study was to investigate the impact of MAFLD on complications after radical resection in HCC patients. Methods Patients with HCC who underwent radical resection were included. Patients were stratified into MAFLD group and non-MAFLD group. Clinical features and post-hepatectomy complications were compared between the two groups, and logistic regression analysis was used to determine independent risk factors associated with post-hepatectomy complications. Results Among the 936 eligible patients with HCC who underwent radical resection, concurrent MAFLD was diagnosed in 201 (21.5%) patients. Compared to the non-MAFLD group, the MAFLD group exhibited a higher incidence of complications, including infectious and major complications after radical resection in HCC patients. The logistic regression analysis found that MAFLD was an independent risk factor for complications, including infectious and major complications in HCC patients following radical resection (OR 1.565, 95%CI 1.109–2.343, P = 0.012; OR 2.092, 95%CI 1.386–3.156, P < 0.001; OR 1.859, 95% CI 1.106–3.124, P = 0.019; respectively). Subgroup analysis of HBV-related HCC patients yielded similar findings, and MAFLD patients with type 2 diabetes mellitus (T2DM) exhibited a higher incidence of postoperative complications compared to those without T2DM (all P < 0.05). Conclusions Concurrent MAFLD was associated with an increased incidence of complications after radical resection in patients with HCC, especially MAFLD with T2DM.
Introduction
The prevalence of nonalcoholic fatty liver disease (NAFLD) has progressively increased over the past few decades, reaching a level almost equivalent to that of obesity, and NAFLD has emerged as the foremost chronic liver disease of our time, threatening the health of 25% of the global population [1]. With the deepening understanding of the etiology and pathogenesis of NAFLD, it was renamed metabolic dysfunction-associated fatty liver disease (MAFLD) by an international panel of experts from 22 countries in 2020. The diagnosis of MAFLD is etiologically oriented and recognizes the coexistence of MAFLD with other liver diseases, thereby providing a more comprehensive understanding of its pathogenesis and facilitating patient classification and management [2]. Compared to NAFLD, MAFLD has a higher prevalence and poses an elevated risk of overall mortality [3].
Primary liver cancer (PLC) ranks sixth in incidence and third in mortality among the 36 types of cancer tracked across 185 countries worldwide [4]. It is estimated that there were approximately 906,000 new cases and nearly 830,000 deaths from PLC globally in 2020. Hepatocellular carcinoma (HCC) is the most prevalent histological subtype of PLC, accounting for approximately 80%-90% [4]. HBV infection is the predominant risk factor for HCC in China, accounting for about 90% of cases [5,6]. Currently, hepatectomy remains the most efficacious treatment option for early-stage HCC [7,8]. However, the incidence of postoperative complications remains high, particularly ascites, infectious complications, and major complications, which exert detrimental effects on patient prognosis [9][10][11][12].
The prevalence of MAFLD in the global population is gradually increasing, leading to an increased number of HCC patients being diagnosed with MAFLD. An Italian Liver Cancer Center study showed that out of 6882 patients diagnosed with HCC, 4706 (68.4%) were found to have MAFLD [13]. A Chinese study showed that among 514 HBV-HCC patients who underwent radical resection, MAFLD was detected in 117 (22.8%) patients [14]. MAFLD is a significant risk factor for the development of HCC and warrants careful consideration from clinicians regarding its potential impact on post-hepatectomy complications. However, the relationship between MAFLD and post-hepatectomy complications in patients with HCC remains unclear. The aim of this study was to evaluate the predictive value of MAFLD for complications after radical resection in HCC patients.
Study population
All HCC patients who underwent radical resection at Mengchao Hepatobiliary Hospital of Fujian Medical University from January 2015 to December 2020 were retrospectively collected. The inclusion criteria were: HCC confirmed through pathological examination following the initial radical resection, and favorable liver function reserve (Child-Pugh grade A or B). The exclusion criteria were as follows: combined hepatocellular-cholangiocarcinoma (HCC-ICC); other concurrent malignant tumors; invasive treatment before operation [transcatheter hepatic arterial chemoembolization (TACE) or radiofrequency ablation (RFA)]; multiple intrahepatic metastases, adjacent organ invasion, or distant metastases; incomplete clinical data.
Definition
The diagnosis of MAFLD was confirmed by hepatic histology, which revealed the presence of hepatic steatosis together with at least one of the following criteria: BMI ≥ 23 kg/m², T2DM, or metabolic dysregulation (MD) [2]. Lean MAFLD referred to patients with a BMI < 23 kg/m² who also met the diagnostic criteria for MAFLD [15][16][17]. The criteria for radical resection of HCC were as follows: the liver resection margin should be ≥ 1 cm from the tumor boundary; in patients where the resection margin was less than 1 cm, histological examination of the liver resection section should reveal no residual tumor cells [18]. Excessive alcohol consumption was defined as alcohol intake ≥ 30 g/day for men and ≥ 20 g/day for women [19]. Postoperative complications were defined as conditions causing discomfort or abnormal auxiliary examination results secondary to radical resection. The severity of postoperative complications was evaluated using the comprehensive complication index (CCI) [20]. A CCI ≥ 26.2 indicates major complications, while a CCI < 26.2 suggests general complications [21,22].
Statistical analysis
SPSS 22.0 was utilized for statistical analysis. Continuous variables were described using the median (interquartile range, IQR), and inter-group comparisons were performed using either the t-test or the Mann-Whitney U test. Categorical variables were presented as frequencies with corresponding percentages (%), and inter-group comparisons were conducted using either the χ² test or Fisher's exact test. Univariate and multivariate logistic regression analyses were conducted to examine the risk factors associated with complications after radical resection in HCC patients. Variables with P < 0.05 in the univariate analysis were considered candidate variables for inclusion in the multivariate logistic analysis. The odds ratio (OR) and its corresponding 95% CI were calculated. The forest plot illustrating the influencing factors of complications after radical resection in HCC patients was generated using GraphPad Prism 8. P values < 0.05 indicated statistical significance.
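For readers who want to reproduce this kind of analysis, here is a minimal sketch (not the authors' code; the study used SPSS) of a multivariate logistic regression with odds ratios and 95% confidence intervals in Python. The file name and column names are hypothetical placeholders for a one-row-per-patient dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hcc_cohort.csv")  # hypothetical file: one row per patient

# Hypothetical 0/1 indicator columns mirroring the candidate predictors
predictors = ["MAFLD", "age_ge_60", "male", "T2DM", "tumor_ge_5cm",
              "multiple_tumors", "MVI", "child_pugh_B", "open_surgery"]
X = sm.add_constant(df[predictors].astype(float))
y = df["complication"].astype(float)  # 1 = any postoperative complication

fit = sm.Logit(y, X).fit(disp=0)

# Exponentiate coefficients to obtain ORs and their 95% CIs
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(or_table.loc[predictors])
```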
The HCC patients were classified into the MAFLD group (201, 21.5%) and the non-MAFLD group (735, 78.5%) based on the presence or absence of MAFLD. In comparison to the non-MAFLD group, the MAFLD group exhibited a higher median BMI (24.2 vs 22.3 kg/m², P < 0.001) and a greater proportion of patients with BMI ≥ 23 kg/m² (78.6% vs 43.4%, P < 0.001). Moreover, the prevalence rates of T2DM and MD in the MAFLD group were significantly higher than in the non-MAFLD group (27.9% vs 11.4%, P < 0.001; 50.2% vs 30.4%, P < 0.001; respectively). Additionally, ALT levels were significantly higher in the MAFLD group than in the non-MAFLD group (36.0 vs 32.0 IU/L, P = 0.012). No significant differences were observed between the two groups in terms of other characteristics (all P > 0.05) (Table 1).
Complications after radical resection in HCC patients
The overall morbidity rate of complications after radical resection in HCC patients was 21.0% (Tables 2 and 3).
The overall incidence of postoperative complications in the MAFLD group was higher than in the non-MAFLD group (27.4% vs 19.3%, P = 0.013). Moreover, the MAFLD group exhibited a higher occurrence of postoperative infectious and major complications (CCI ≥ 26.2) compared to the non-MAFLD group (23.4% vs 13.5%, P = 0.001; 12.4% vs 7.5%, P = 0.026, respectively). Further analysis found that the MAFLD group exhibited a higher incidence of postoperative complications including pleural effusion, intra-abdominal infection, liver failure, wound infection, and death within 30 days (all P < 0.05). However, there were no statistically significant differences observed in other complications between the two groups (Table 4).
Influencing factors of complications after radical resection in HCC patients
Univariate logistic regression analysis identified MAFLD as a significant risk factor for complications after radical resection in HCC patients (OR 1.573, 95%CI 1.097-2.255, P = 0.014). Additionally, age ≥ 60 years, male sex, T2DM, tumor diameter ≥ 5 cm, number of tumors ≥ 2, MVI, Child-Pugh grade B, and open surgery were significantly associated with post-hepatectomy complications in HCC patients (all P < 0.05) (Table 5).
Multivariate logistic regression analysis revealed that MAFLD was an independent risk factor for complications after radical resection in HCC patients (OR 1.565, 95%CI 1.109-2.343, P = 0.012). Additionally, age ≥ 60 years, number of tumors ≥ 2, MVI, Child-Pugh grade B, and open surgery were also identified as significant independent risk factors for post-hepatectomy complications (all P < 0.05) (Table 5).
Influencing factors of infectious complications after radical resection in HCC patients
Univariate logistic regression analysis identified MAFLD as a risk factor for infectious complications after radical resection in HCC patients (OR 1.961, 95%CI 1.328-2.894, P = 0.001). Additionally, age ≥ 60 years, T2DM, HBV DNA ≥ 500 IU/mL, tumor diameter ≥ 5 cm, tumor number ≥ 2, MVI, Child-Pugh grade B, and open surgery were also found to be associated with an increased risk of infectious complications after radical resection in HCC patients (all P < 0.05) (Table 6). Multivariate logistic regression analysis revealed that MAFLD was an independent risk factor for infectious complications after radical resection in HCC patients (OR 2.092, 95%CI 1.386-3.156, P < 0.001) (Table 6).
Influencing factors of major complications after radical resection in HCC patients
Univariate logistic regression analysis identified MAFLD as a risk factor for major complications (CCI ≥ 26.2) after radical resection in HCC patients (OR 1.756, 95%CI 1.064-2.898, P = 0.028). Additionally, age ≥ 60 years, BMI ≥ 23 kg/m², T2DM, tumor diameter ≥ 5 cm, MVI, Child-Pugh grade B, and open surgery were also found to be associated with an increased risk of major complications after radical resection in HCC patients (all P < 0.05) (Table 7).
Multivariate logistic regression analysis revealed that MAFLD independently increased the risk of major complications after radical resection in HCC patients (OR 1.859, 95% CI 1.106-3.124, P = 0.019). The other independent risk factors were: age ≥ 60 years, tumor diameter ≥ 5 cm, Child-Pugh grade B, and open surgery (all P < 0.05) (Table 7).
Subgroup analysis of HBV-HCC
The HBV-HCC subgroup was analyzed because 91.1% (853/936) of the HCC patients were diagnosed with HBV-HCC. The HBV-HCC patients had a median age of 57 years (49.0-64.0 years) and included 698 males (81.8%) and 155 females (18.2%). The proportions of patients with BMI ≥ 23 kg/m², T2DM, and MD were 51.1% (436), 14.1% (120), and 34.1% (291), respectively. They were divided into two groups based on the presence or absence of MAFLD: 178 (20.9%) patients in the MAFLD group and 675 (79.1%) patients in the non-MAFLD group. The baseline characteristics of patients in the HBV-HCC subgroup and the comparison of baseline characteristics between the MAFLD group and the non-MAFLD group are presented in Table 8.
Complications after radical resection in the subgroup of HBV-HCC patients
The overall morbidity rate of complications after radical resection in HBV-HCC patients was 20.9% (Tables 9 and 10).
The incidence of postoperative complications in the MAFLD group was higher than in the non-MAFLD group (P = 0.08). Moreover, the MAFLD group also exhibited a higher occurrence of infectious and major complications (CCI ≥ 26.2) compared to the non-MAFLD group (all P < 0.05) (Table 11).
Influencing factors of complications after radical resection in the subgroup of HBV-HCC patients
Univariate logistic regression analysis identified MAFLD as a risk factor for complications after radical resection in HBV-HCC patients (OR 1.669, 95%CI 1.142-2.439, P = 0.008). Multivariate logistic regression analysis showed that MAFLD was an independent risk factor for complications after radical resection in HBV-HCC patients (OR 1.674, 95%CI 1.127-2.487, P = 0.011) (Fig. 2). In addition, we also analyzed the influencing factors of infectious and major complications after radical resection in HBV-HCC patients. We found that MAFLD was an independent risk factor for infectious and major complications after radical resection in HBV-HCC patients (OR 2.111, 95%CI 1.375-3.241, P = 0.001; OR 1.770, 95% CI 1.006-3.116, P = 0.048; respectively) (Figs. 3 and 4).
Discussion
In this study, we retrospectively evaluated the impact of MAFLD on complications after radical resection in HCC patients. The results revealed that MAFLD significantly increased the incidence of complications, including infectious and major complications, after radical resection in HCC patients. Furthermore, MAFLD was identified as an independent risk factor for complications. Notably, HBV-HCC patients with coexisting MAFLD and T2DM were particularly prone to developing postoperative complications. With the escalating global prevalence of obesity and metabolic syndrome, the burden of MAFLD is rapidly increasing, particularly in the Asia-Pacific region [23]. The co-occurrence of HCC and MAFLD is increasingly prevalent due to the rising incidence of MAFLD. A considerable proportion of HCC patients were also found to have MAFLD in this study, specifically 21.5% (201/936) of HCC patients and 20.9% (178/853) of HBV-HCC patients. We also observed that the primary disparity in baseline characteristics was that the MAFLD group exhibited a higher prevalence of metabolic disorders and elevated ALT levels compared to the non-MAFLD group. However, the presence of MAFLD did not impact the pathological characteristics of patients with HCC. Similar findings were also noted in HBV-HCC patients. Previous studies [6,15] have reported similar results; nevertheless, one of the studies found that patients with MAFLD demonstrated better histological differentiation and lower rates of MVI compared to those without MAFLD, indicating earlier detection of HCC in patients with MAFLD. However, our study did not find any influence of MAFLD on histological differentiation or MVI. The reason may be that certain countries actively monitor MAFLD as a risk factor for HCC, leading to earlier detection of HCC in patients with concurrent MAFLD. In contrast, the recognition and surveillance of MAFLD in our country are still insufficient, resulting in no such disparity. Therefore, the impact of MAFLD on the pathological characteristics of HCC requires further validation through multi-center, large-scale clinical as well as basic studies.
Hepatectomy has been extensively utilized for the treatment of various liver diseases. However, postoperative complication rates remain relatively high, at approximately 20% to 56% [24]. This study found that the overall incidence of complications after radical resection in HCC and HBV-HCC patients was 21.0% and 20.9%, respectively. Therefore, the persistently high incidence of postoperative complications in patients with HCC is a challenging issue for surgeons in clinical practice [25]. Our study also found that the incidence of complications after radical resection in the MAFLD group was higher than in the non-MAFLD group. Moreover, the presence of MAFLD independently contributed to an increased risk of postoperative complications in patients with HCC who underwent radical resection, suggesting that coexisting MAFLD is associated with an increased incidence of postoperative complications in patients with HCC. This association can be attributed not only to the presence of hepatic steatosis in MAFLD patients but also to their higher susceptibility to metabolic disorders such as T2DM. Extensive evidence has consistently demonstrated that T2DM, as a metabolic disorder, significantly increases the incidence of complications following hepatectomy [26]. Infectious complications are the most common post-hepatectomy complications in HCC patients; their incidence ranges from 4 to 25% and is significantly associated with mortality risk [27,28]. Therefore, it is crucial to identify and intervene in the risk factors associated with infectious complications following radical resection in order to effectively prevent infections and enhance clinical outcomes. In this study, a high prevalence of post-hepatectomy infectious complications was observed among HCC and HBV-HCC patients, with rates of 15.6% and 15.9%, respectively. The present study employed the CCI to assess the severity of complications after radical resection in patients with HCC. The CCI has been extensively utilized in assessing complications following abdominal surgery and is also widely referenced for evaluating complications after hepatectomy [29,30]. The incidence of major complications (CCI ≥ 26.2) following radical resection in patients with HCC and HBV-HCC was relatively low (8.5% and 8.1%, respectively). We also found that MAFLD independently contributed to the risk of infectious and major complications after radical resection in HCC and HBV-HCC patients. These findings suggest that MAFLD may significantly increase the occurrence of infectious and major complications following radical resection in HCC patients.
In this study, we also observed that HBV-HCC patients in the T2DM-MAFLD group exhibited a higher occurrence rate of complications, including infectious and major complications, compared to those in the non-T2DM-MAFLD group. This suggests that patients with HBV-HCC combined with T2DM-MAFLD are more susceptible to complications after radical resection. The likely reason is that hyperglycemia augments the oxidative stress response, enhances the inflammatory response, and impairs liver regeneration capacity [31]. Therefore, it is crucial to enhance the understanding of MAFLD in patients undergoing radical resection for HCC and HBV-HCC, particularly MAFLD with T2DM. This will greatly contribute towards comprehensive preoperative evaluation and reduction in the incidence of postoperative complications.
Additionally, we found that age ≥ 60 years, Child-Pugh grade B, tumor diameter ≥ 5 cm, and open hepatectomy were risk factors for post-hepatectomy complications, including infectious and major complications, in HCC and HBV-HCC patients, which is consistent with previous research findings [32][33][34][35][36]. This is because elderly patients may present with multiple comorbidities and experience gradual decline in organ function, resulting in compromised compensatory capacity of the liver and impaired regeneration ability of hepatocytes after radical resection [32]. Research has demonstrated that patients classified as Child-Pugh grade B (7 to 9 points) exhibit higher rates of postoperative complications and perioperative mortality compared to those of Child-Pugh grade A (5 to 6 points) [33]. The prevailing view, both domestically and internationally, is that the larger the diameter of a liver tumor, the broader the resection scope and, consequently, the more challenging the surgical procedure becomes, with an increased likelihood of postoperative complications [34]. Compared to open surgery, laparoscopic surgery offers the advantages of reduced surgical trauma and faster postoperative recovery. A study of 3,876 HCC patients who underwent hepatectomy found that laparoscopic surgery was independently associated with lower incidences of postoperative infectious complications following hepatectomy for HCC compared with open surgery [35]. A meta-analysis also revealed that laparoscopic hepatectomy in HCC patients was significantly associated with decreased blood loss, successful R0 resection, wider scope of liver resection, shorter hospital stays, lower complication rates, and lower 30-day mortality [36]. Although BMI is an important criterion for diagnosing MAFLD, this study found no significant correlation between BMI and postoperative complications after hepatectomy for HCC. This may be because high-BMI patients with HCC can have good nutritional and physiological reserves, leading to an enhanced inflammatory response to injury, which can potentially counteract postoperative complications in high-BMI patients undergoing hepatectomy [37,38].
There are inherent limitations to this study. Firstly, it is important to note that this study was conducted at a single center; however, the large sample size we collected helps mitigate potential selection bias to some extent. Secondly, our study population primarily consisted of HBV-HCC patients, accounting for over 90%. Further investigation is needed to determine the impact of MAFLD on complications after radical resection in HCC patients of different etiologies; however, this study demonstrates the detrimental effect of MAFLD on complications after radical resection in HBV-HCC patients. Thirdly, it should be acknowledged that the present study is a retrospective analysis, wherein certain parameters such as waist circumference and HOMA-IR could not be extracted from electronic medical records, potentially resulting in a reduced diagnostic rate of MAFLD.
In conclusion, concurrent MAFLD was associated with a higher risk of complications, including infectious and major complications, after radical resection in HCC patients, especially MAFLD with T2DM. This indicates that management of MAFLD may confer benefits in reducing complications after radical resection in HCC patients.
Fig. 1 Flow chart for the selection of the study population
Table 2
Incidence of postoperative complications in HCC patients
Table 4
Comparison of complications between the MAFLD group and the non-MAFLD group. MAFLD, metabolic dysfunction-associated fatty liver disease; CCI, comprehensive complication index
Table 5
Univariate and multivariate analysis of complications after radical resection in HCC patients. HCC, hepatocellular carcinoma; MAFLD, metabolic dysfunction-associated fatty liver disease; BMI, body mass index; T2DM, type 2 diabetes mellitus; MD, metabolic dysregulation; AFP, alpha-fetoprotein; BCLC, Barcelona Clinic Liver Cancer. a The diagnostic criteria for MAFLD include T2DM
Table 6
Univariate and multivariate analysis of infectious complications after radical resection in HCC patients. HCC, hepatocellular carcinoma; MAFLD, metabolic dysfunction-associated fatty liver disease; BMI, body mass index; T2DM, type 2 diabetes mellitus; MD, metabolic dysregulation; AFP, alpha-fetoprotein; BCLC, Barcelona Clinic Liver Cancer. a The diagnostic criteria for MAFLD include T2DM
Table 7
Univariate and multivariate analysis of major complications after radical resection in HCC patients. HCC, hepatocellular carcinoma; MAFLD, metabolic dysfunction-associated fatty liver disease; BMI, body mass index; T2DM, type 2 diabetes mellitus; MD, metabolic dysregulation; AFP, alpha-fetoprotein; BCLC, Barcelona Clinic Liver Cancer. a The diagnostic criteria for MAFLD include BMI and T2DM
Table 8
Baseline characteristics of patients in the HBV-HCC subgroup. HBV-HCC, hepatitis B virus-related hepatocellular carcinoma; MAFLD, metabolic dysfunction-associated fatty liver disease; BMI, body mass index; T2DM, type 2 diabetes mellitus; MD, metabolic dysregulation; ALT, alanine aminotransferase; AFP, alpha-fetoprotein; BCLC, Barcelona Clinic Liver Cancer
Table 9
Incidence of postoperative complications in HBV-HCC patients
Table 11
Comparison of complications between MAFLD group and non-MAFLD groupMAFLD metabolic dysfunction-associated fatty liver disease, CCI comprehensive complication index
Author Correction: Brain aging is faithfully modelled in organotypic brain slices and accelerated by prions
Mammalian models are essential for brain aging research. However, the long lifespan and poor amenability to genetic and pharmacological perturbations have hindered the use of mammals for dissecting aging-regulatory molecular networks and discovering new anti-aging interventions. To circumvent these limitations, we developed an ex vivo model system that faithfully mimics the aging process of the mammalian brain using cultured mouse brain slices. Genome-wide gene expression analyses showed that cultured brain slices spontaneously upregulated senescence-associated genes over time and reproduced many of the transcriptional characteristics of aged brains. Treatment with rapamycin, a classical anti-aging compound, largely abolished the time-dependent transcriptional changes in naturally aged brain slice cultures. Using this model system, we discovered that prions drastically accelerated the development of age-related molecular signatures and the pace of brain aging. We confirmed this finding in mouse models and human victims of Creutzfeldt-Jakob disease. These data establish an innovative, eminently tractable mammalian model of brain aging, and uncover a surprising acceleration of brain aging in prion diseases.
Advanced age is a strong risk factor for several chronic disorders that affect multiple organs, including the brain 1 . Therefore, treatments targeting the biological aging process may represent promising therapeutic interventions against many of these diseases. However, current mammalian models for aging research suffer from long lifespans and poor amenability to genetic and pharmacological perturbations 2 , which limits their usefulness for dissecting aging-regulatory molecular networks and discovering new anti-aging interventions. Consequently, only a small number of molecular pathways relevant to human aging have been identified 3 . More versatile model systems are needed to advance our knowledge on aging of complex tissues in mammals including humans.
Previous studies have highlighted several hallmarks of biological aging, including inflammation, loss of proteostasis, dysregulated nutrient sensing, mitochondrial malfunction, and altered intercellular communication 4 . Many of these aging hallmarks are also critical pathological events observed in brain disorders characterized by abnormal protein aggregation, such as prion diseases, a group of neurodegenerative disorders caused by misfolding and aggregation of the cellular prion protein (PrP C ) into pathological isoforms called prions 5 . Therefore, studying the aging process in protein-aggregation disorders of the brain may not only enhance our understanding of disease progression but also offer a unique opportunity to identify key drivers of biological brain aging.
All major pathological characteristics of prion diseases can be faithfully mimicked in both prion-inoculated animals and cultured organotypic cerebellar slices (COCS) 6,7 . Such cultures preserve the sophisticated cell-cell interactions present in vivo while gaining simplicity and experimental amenability, and represent a tantalizing biological system for investigating complex physiological and pathological processes of the brain. It is, however, unclear whether COCS experience biological aging akin to brains in vivo.
Here, we evaluated the suitability of COCS for modeling in vivo brain aging through genome-wide gene expression profiling, bioinformatics and machine learning, approaches widely used to measure and predict biological ages of various tissues [8][9][10][11] . We found that COCS faithfully mimicked the in vivo molecular brain-aging process, and demonstrated the physiological relevance of these findings through rapamycin treatment, establishing an innovative, convenient, mammalian model system for brainaging research. Furthermore, we discovered that prions drastically accelerated the pace of brain aging in COCS, mouse models and human patients, providing a theoretical basis for exploiting rejuvenating therapies against prion diseases.
Results
Natural aging of COCS. We used RNA sequencing (RNAseq) to obtain genome-wide gene expression profiles of COCS maintained ex vivo for 12, 28, 42, or 56 days in the presence or absence of prions (Fig. 1a). To compare the cultures with the in vivo situation, a similar RNAseq analysis was performed on the cerebellum of adult C57BL/6 J mice at 56 or 182 days post-intracerebral inoculation with normal brain homogenate (NBH) or brain homogenate containing prions (Fig. 1a). The global gene expression in the control COCS was highly similar to that in the cerebellum and hippocampus (GSE144738 12 ) in vivo, but substantially different from that in adult muscle (GSE145480 13 ) (Fig. 1b), indicating that brain-specific gene expression was retained in COCS. The correlation of transcriptional patterns between the control COCS and adult brain tissues underwent a subtle monotonic decrease over time (Fig. 1b). Indeed, a comparison of gene expression in the control COCS between 56 and 12 days identified 2736 differentially expressed genes (DEG) (Supplementary Fig. 1a; Supplementary Data 1).
To examine whether the transcriptional changes in the control COCS might be caused by the transient exposure to NBH, we analyzed ten randomly picked DEG (five upregulated and five downregulated) and ten genes related to cellular senescence by quantitative RT-PCR. However, none of them showed any difference between the control and naïve (no NBH exposure) COCS at 12 and 56 days ( Supplementary Fig. 3a, b), indicating that the time-dependent gene expression changes in the control COCS were mainly induced by the natural aging process.
Enrichment of DEG in aged COCS and in vivo for KEGG pathways identified 41 commonly altered biological processes (Supplementary Data 3). These included antigen processing and presentation, p53 signaling, apoptosis, natural killer cell-mediated cell toxicity, Mapk signaling, ErbB signaling, Hif1 signaling, estrogen signaling, Rap1 signaling, and ECM-receptor interaction (Fig. 1f), all of which had been found to be overrepresented in aged brains [16][17][18][19][20] . Additional aging-related pathways, such as NF-kB signaling, mTOR signaling, autophagy, cellular senescence and VEGF signaling, were found to be more profoundly altered in long-term cultured COCS than in 9-month cerebella (Fig. 1g), suggesting that the 56-day-old COCS represent a more advanced in vivo brain-aging stage.
Aging-modulatory effects of rapamycin in COCS. To further validate the physiological relevance of the time-dependent transcriptional changes in COCS to in vivo brain aging, we treated naïve COCS with rapamycin, which is often used in anti-aging paradigms, or with DMSO from day 12 to day 56, and examined the expression levels of cellular senescence genes by quantitative RT-PCR. Strikingly, we found that the transcriptional induction of all the cellular senescence-associated genes in aged COCS was drastically suppressed by rapamycin treatment (Fig. 2a). In addition, by applying elastic-net-based machine learning 21 to our time-course RNAseq dataset of the control COCS, we identified 322 age-predicting genes (Supplementary Data 4). Quantitative RT-PCR results demonstrated that rapamycin substantially inhibited the expression changes of all eight randomly picked upregulated age-predicting genes (Fig. 2b), and five out of eight randomly picked downregulated age-predicting genes (Fig. 2c), in the 56-day-old naïve COCS, indicating a global slowdown of the biological aging process upon rapamycin treatment. These data confirm that a biological process similar to that underlying in vivo brain aging drove the progressive transcriptional changes in COCS over time.
Accelerated aging in prion-exposed COCS. To evaluate the power of our model system for identifying novel modifiers of brain aging, we next studied how the biological aging process develops in COCS exposed to prions. We compared the transcriptional changes in the prion-exposed COCS at 56 days with those induced by natural aging in the control COCS. Surprisingly, we found that a large number of genes dysregulated in the aged COCS further changed their expression levels in the presence of prions (Fig. 3a). These results suggest that prions may alter the biological aging process.
To test this hypothesis, we selected 3104 aging-related genes by combining DEG identified by comparing control COCS at any two time points, and examined their expression changes in the presence and absence of prions. Using fuzzy c-means clustering 22 , we classified the aging-related genes into six clusters (Fig. 3b). Genes in clusters 2 and 6 showed downregulation over time (Fig. 3b), and were enriched for neuronal genes (Supplementary Fig. 4a) and pathways associated with the synaptic function (Supplementary Fig. 4b). Genes in clusters 1 and 3 showed the opposite trend, upregulation over time (Supplementary Fig. 4a). We examined how prion exposure influenced the transcriptional evolution of these aging-associated genes over time at the cluster level. Strikingly, we found that the temporal signatures of all the six aging-associated clusters were largely preserved after prion exposure; however, the overall temporal kinetics of both the upregulated and downregulated clusters were strongly accelerated in the presence of prions (Fig. 3b).
To further investigate the prion-dependent aging acceleration, we trained an age-predictive machine-learning algorithm based on ridge regression 23 using the time-course gene expression profiles in the control COCS, and calculated the biological ages of COCS exposed or not exposed to prions. The algorithm-predicted biological ages for the control COCS were similar to their chronological ages at all time points (Fig. 3c); however, the algorithm-predicted biological ages for the prion-exposed COCS were notably older compared to their chronological ages at three out of the four time points (Fig. 3c). Specifically, we found that the biological ages of the prion-exposed COCS were ~3, 8, and 13 days older compared to the control COCS at day 28, day 42, and day 56, respectively (Fig. 3c). These data further support the conclusion that the biological aging process is accelerated in COCS after prion exposure.
To explore the mechanisms behind the accelerated aging in the prion-exposed COCS, we did a gene co-expression network analysis across all experimental conditions and identified five distinct co-expression modules ( Supplementary Fig. 5). We evaluated the module activities across samples using gene set enrichment analysis, and found all the five modules showed strong associations with both aging and prion exposure (Fig. 3d). Genes in modules 2, 4, and 5 showed time-dependent upregulation while genes in modules 1 and 3 showed time-dependent downregulation (Fig. 3d), all of which were drastically enhanced in the presence of prions (Fig. 3d). These results suggest that prions and aging essentially activate the same molecular programs.
Anti-PrP antibody POM2 abolishes accelerated aging in prion-exposed COCS. To further examine the accelerating effects of prions on the aging process, we treated prion-exposed COCS and control COCS with the anti-PrP antibody fragment FabPOM2 24 , and sequenced their transcriptomes at 12 and 56 days. FabPOM2 treatment strongly reduced prion levels in the prion-exposed COCS (Fig. 4a, b). Strikingly, we found that although FabPOM2 did not alter the expression of aging-related genes in the absence of prions (Supplementary Fig. 6a), it completely abolished the accelerated aging signatures in the prion-exposed COCS (Fig. 4c, d). We then calculated the biological ages of the FabPOM2-treated COCS with our machine-learning algorithm, and found there was no longer any difference between the control and the prion-exposed COCS (Fig. 4e). Crucially, the biological ages of FabPOM2-treated COCS were similar to their chronological ages no matter whether they had been exposed to prions or not (Fig. 4e). These data indicate that acceleration of biological aging in the prion-exposed COCS was strictly dependent on prion replication. We then examined how FabPOM2 treatment affected prion-induced neurotoxicity. By examining the RNAseq data, we found that FabPOM2 treatment completely abrogated all prion-induced molecular changes in COCS (Fig. 4f, g; Supplementary Fig. 6b). Quantification of the neuronal marker NeuN through immunofluorescence and western blotting suggested that FabPOM2 treatment also abolished prion-induced neurodegeneration (Fig. 4h-k). In addition, we found that the progressive loss of the synaptic marker synaptophysin after prion exposure was fully prevented by FabPOM2 (Fig. 4j, k). These data suggest that aging acceleration by prions may be a driver of neurotoxicity.

Accelerated biological brain aging in mouse models and patients of prion diseases. Since the progression of prion diseases is much faster than natural aging in humans and animal models, it is difficult to directly study the influence of prions on the aging process in vivo. However, if prions also accelerate brain aging in vivo, genes whose expression changes in advanced age may be altered prematurely in young prion-inoculated animals or exhibit stronger changes in prion disease patients compared to age-matched control subjects.
To test this, we extracted a set of genes significantly dysregulated in the 24-month mouse hippocampus compared to 3-month (GSE61915 14 ), and examined their expression changes in the hippocampus of 8-month compared to 3-month in the presence and absence of prions (GSE144738 12 ). In agreement with our prediction, we found many of the genes dysregulated in the 24-month hippocampus during normal aging had already shown significant changes in the 8-month hippocampus compared to 3-month in prion-exposed mice (Fig. 5a, b). In contrast, changes of the same genes were barely detectable in the 8-month hippocampus compared to the 3-month hippocampus in the absence of prions (Fig. 5a, b). To further validate these findings, we analyzed a publicly available RNAseq dataset (GSE168137 25 ) examining genome-wide gene expression in the hippocampus of wild type (WT) and 5xFAD mice across different ages. Using the same approach as in Fig. 3b, we identified the aging-associated genes in the WT hippocampus and classified them into six clusters using fuzzy c-means clustering (Fig. 5c). We then examined how the expression of genes in clusters 3 and 6, the two most age-predictive clusters, changed in the prion-exposed mice and AD mice compared to their respective controls. As expected, we found genes in both clusters changed much more strongly in the hippocampus of prion-exposed mice compared to the NBH control group between 8 and 3 months of age (Fig. 5d). In contrast, very similar expression changes were observed in the hippocampus of WT and AD mice within a similar period of time (Fig. 5d). Furthermore, we found a drastic upregulation of senescence-related genes, including the cellular senescence markers and members of the senescence-associated secretory phenotype, in the brains of prion-infected mice (Fig. 5e). These data suggest that prions also accelerate brain aging in the mouse model of prion diseases.
To investigate whether similar biological processes are present in the brains of human prion disease patients, we examined the transcriptional changes of human brain-aging-signature genes 26 in the brains of Creutzfeldt-Jakob disease (CJD) patients and age-matched control subjects (GSE124571 27 ). Strikingly, we found that most of the upregulated human brain-aging-signature genes exhibited higher expression levels in the CJD brains than in the age-matched controls (Fig. 5f). Similarly, most of the downregulated human brain-aging-signature genes exhibited lower expression levels in the CJD brains compared to the age-matched controls (Fig. 5f). In addition, by gene set enrichment analysis, we found the senescence-inducer genes 28 , but not the senescence-inhibitor genes 28 , were strongly upregulated in the CJD brains (Fig. 5g), indicating induction of cellular senescence in the brains of prion disease patients. These observations suggest that acceleration of brain aging may also be present in human prion disease patients.
Discussion
In this study, we developed an innovative ex vivo model system to investigate the biological brain-aging process in mammals based on COCS. This model system successfully captured the agingmodulatory effects of rapamycin, a classical anti-aging compound, validating its physiological relevance. Compared to the currently available mammalian models of brain aging, the experimental system described here is much less time-consuming and highly amenable to genetic and pharmacological perturbations. Therefore, our model system would be very useful for dissecting the complex molecular networks underlying biological brain aging in evolutionarily advanced organisms, especially when combined with high-throughput screening technologies.
Aging is a complex biological process strongly influenced by evolution. Although previous studies have identified conserved aging-related pathways across species, the pace and physiological mechanisms of aging in different species are not entirely the same 29 . In principle, our COCS-based aging-modeling approach can be adapted to other mammalian species including primates and humans, thus providing an invaluable experimental paradigm to study species-specific characteristics of biological brain aging.
Advanced age poses strong risks for developing neurodegenerative diseases associated with pathological protein aggregation 30 . However, there is little consensus on why this is the case. Using our COCS-based model system, we discovered that prions strongly accelerated biological brain aging, a finding that we then corroborated in brains of experimental animals and human CJD victims. These findings not only advance our understanding of the molecular pathology underlying prion-induced neurodegeneration but also point to potential therapeutic interventions against prion diseases.
In essence, our data suggest that certain phenotypic manifestations of prion diseases may directly result from the accelerated changes of pathways operative in brain aging. If that is the case, it may be important to explore whether interventions that reduce certain aspects of aging might be beneficial (perhaps in combination with anti-prion therapies) in the treatment of prion diseases. Senolytic therapies and rejuvenating interventions may represent such candidates. Indeed, the anti-aging compound rapamycin notably suppressed prion disease pathogenesis in animal models 31,32 . In addition, long-term injection of young blood serum alleviated the clinical symptoms of prion-inoculated mice 12 .

Fig. 4 FabPOM2 treatment abolishes prion-induced aging acceleration and neurotoxicity. a, b Representative western blots (a) and quantification (b) of PrP Sc in prion-inoculated COCS with or without FabPOM2 treatment (n = 5). ***p < 0.001. c Heatmap showing the expression levels of aging-associated genes in NBH-(Ctrl) and prion-exposed COCS with or without FabPOM2 treatment at day 12 and day 56. Values are normalized row wise. d Quantifications of normalized expression changes of the aging-associated genes shown in c between day 12 and day 56 in NBH-(Ctrl) and prion-exposed COCS with or without FabPOM2 treatment. ****p-value < 0.0001. e Chronological and predicted biological ages of COCS in NBH-(Ctrl) and prion-exposed COCS treated with FabPOM2. n.s not significant. f Heatmap showing the expression levels of differentially expressed genes in prion-exposed COCS at day 56 with or without FabPOM2 treatment. Data are normalized row wise. g Quantifications of normalized expression levels of genes shown in f. Left panel: upregulated genes in prion-exposed COCS. Right panel: downregulated genes in prion-exposed COCS. ****p < 0.0001. n.s not significant. h Representative immunofluorescent images of NeuN staining in NBH-(Ctrl) and prion-exposed COCS at day 56 with or without FabPOM2 treatment. i Quantifications of NeuN positive areas shown in h (n ≥ 15 brain slices for each group). ***p < 0.001. n.s not significant. j Representative western blots showing synaptophysin and NeuN protein levels in NBH-(Ctrl) and prion-exposed COCS at day 56 with or without FabPOM2 treatment. k Quantifications of synaptophysin and NeuN protein levels shown in j (n = 4 for Ctrl group; n = 5 for prion group). ***p < 0.001. n.s not significant. Data are presented as mean ± SEM.
Several questions surrounding our findings are still open and need to be addressed in future studies. Firstly, previous studies have observed different effects of aging on prion disease development in animal models inoculated with prions through different routes 12,33 , and prions accumulate in the peripheral organs of animal models and human patients suffering from sporadic and variant CJD [34][35][36][37] . It would be interesting to investigate whether prions influence the biological aging process of tissues other than the brain. Secondly, since different prion strains have been found to be responsible for different prion disease subtypes 38 , future studies may determine whether the observed brain-aging acceleration is a general feature of all prion strains or is restricted to the prion strains investigated in the current study.
Thirdly, we observed strong induction of cellular senescence in prion-infected COCS, mouse brains and human brains of CJD patients; however, the functional relevance of these changes to prion disease development is still unknown. In addition, it is unclear whether the induction of cellular senescence happens cell-autonomously as a result of prion infection or non-cell-autonomously due to changes in the molecular environment in the prion-infected brain induced by prion replication. Clarification of these aspects in follow-up studies might help devise senescent-cell-targeting therapies against prion diseases.
Fig. 5 Prions accelerate biological brain aging in vivo. a Heatmap showing the normalized fold changes of in vivo brain-aging-signature genes across conditions. ctrl_8M_3M: 8-month vs. 3-month hippocampus in control (NBH-exposed) mice; prion_8M_3M: 8-month vs. 3-month hippocampus in prion-infected mice; aging_24M_3M: 24-month vs. 3-month hippocampus in normal mice. b Quantifications of normalized expression changes of aging-signature genes shown in a. Left panel: upregulated brain-aging-signature genes shown in a. Right panel: downregulated brain-aging-signature genes shown in a. ****p-value < 0.0001. c In vivo brain-aging-associated genes clustered by fuzzy c-means. Only those genes with cluster membership values >0.5 are shown. d Normalized expression changes of genes in clusters 3 and 6 shown in c in prion-inoculated mice, AD mice and their respective controls. ****p-value < 0.0001. n.s not significant. e Heatmap showing the expression levels of cellular senescence genes, including senescence markers and members of the senescence-associated secretory phenotype, in the cerebellum of NBH-(ctrl) and prion-inoculated mice. f Fold changes of human brain-aging-signature genes in the brains of Creutzfeldt-Jakob disease (CJD) patients compared to age-matched controls. Significant: p-value < 0.05. n.s not significant. g Graphs showing results of gene set enrichment analysis of senescence-inducer and inhibitor genes in the differentially expressed genes (DEG) of CJD. Statistical significance of the analysis is given as false discovery rate (fdr). DEG were ranked according to their fold changes in the CJD brains compared to age-matched controls.

Materials and methods

Animal experiment. For in vivo studies, adult male C57BL/6 J mice purchased from Charles River Germany were used. These mice were part of the mouse cohort that had been used for our previous investigations focusing on gene expression of the hippocampus in prion diseases 12 . For studies using brain slice cultures, C57BL/6 J pups from the Laboratory Animal Services Center, University of Zurich were used. All animal experiments were performed according to Swiss federal guidelines and approved by the Animal Experimentation Committee of the Canton of Zurich.
Prion inoculation. Intracerebral inoculation of adult C57BL/6 J mice was performed as previously described 12 . Briefly, 30 µl of 0.1% normal brain homogenate (NBH) derived from the whole brain of healthy adult CD-1 mice or the same amount of prion-containing brain homogenate derived from the whole brain of terminally sick Rocky Mountain Laboratory strain of scrapie, passage 6 (RML6) infected mice were injected into the brain with 0.3 ml syringes under deep anesthesia. Health and prion disease symptoms of the inoculated mice were monitored according to a protocol approved by Animal Experimentation Committee of the Canton of Zurich.
Organotypic brain slice culture. Brain slice cultures were prepared using cerebellar tissues from 10-12-day-old C57BL/6 J mouse pups according to a previously published protocol 39 . Briefly, 350 μm thick cerebellar slices were produced using a Leica vibratome and temporarily kept in ice-cold Gey's balanced salt solution (GBSS) supplemented with kynurenic acid (1 mM) and glucose. Slices with intact morphology were then collected and exposed to RML6, NBH or no brain homogenate for 1 h at 4°C. After extensive washes, six to eight slices were put on a Millicell-CM Biopore PTFE membrane insert (Millipore) and kept on slice culture medium (50% vol/vol MEM, 25% vol/vol basal medium Eagle, and 25% vol/vol horse serum supplemented with 0.65% glucose (w/vol), penicillin/streptomycin and glutamax (Invitrogen)) at 37°C in a cell culture incubator. The culture medium for the brain slices was changed three times per week.
Rapamycin treatment. Rapamycin (HY-10219, MedChemExpress, USA) was dissolved in dimethyl sulfoxide (DMSO, Sigma, 472301, Switzerland), aliquoted, and stored at −80°C. Freshly thawed rapamycin was added to the brain slice culture medium during medium changes from day 12 to day 56 at a final concentration of 500 nM. The same amount of DMSO was used as control.
FabPOM2 treatment. Homemade anti-PrP antibody FabPOM2 was added to the brain slice culture medium at a concentration of 500 nM from the second day after the cultures were established. The treatment continued until the end of the experiment, with fresh FabPOM2 added at every medium change.
Immunofluorescence. Immunofluorescent staining of cultured brain slices was performed as described previously 41 . Brain slices were fixed in 4% PFA for 30 min, permeabilized with 0.1% Triton X-100 in PBS and blocked with 5% goat serum in PBS overnight. After blocking, brain slices were incubated with anti-NeuN antibody (1:1000, Millipore, MAB377) for 3 days at 4°C. After intensive washes in PBST, slices were incubated overnight with Alexa488 conjugated goat anti-mouse secondary antibody (1:3000, Jackson ImmunoResearch) at 4°C. Stained slices were mounted on glass slides and imaged with a fluorescent microscope (Leica Biosystems).
RNA sequencing. High-throughput RNA sequencing (RNAseq) was performed as previously described 12,40 , and the data were analyzed with edgeR 43 . Genes with low read counts were filtered out with the default parameters in edgeR. Trimmed Mean of M-values (TMM) normalization was applied to account for compositional biases introduced by the sequencing depth and effective library sizes. The significance of differential expression for each gene was tested using the QL F-test. Genes with absolute log2 (fold change) > 0.5 and false discovery rate (FDR) < 0.05 were considered differentially expressed genes (DEG). KEGG pathway enrichment analysis of DEG was performed using R package pathfindR 44 .
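As an illustration of this pipeline, a minimal edgeR sketch in R is given below. This is our reconstruction of the steps described above, not the authors' published code; counts and group are hypothetical placeholders for the COCS count matrix and the condition labels.

```r
library(edgeR)

# counts: gene-by-sample matrix of raw read counts (hypothetical placeholder)
# group:  factor giving the condition/time point of each sample
y <- DGEList(counts = counts, group = group)

# Filter out genes with low read counts using edgeR's default criteria
keep <- filterByExpr(y)
y <- y[keep, , keep.lib.sizes = FALSE]

# TMM normalization for compositional biases and effective library sizes
y <- calcNormFactors(y, method = "TMM")

# Quasi-likelihood (QL) F-test for differential expression
design <- model.matrix(~ group)
y <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = 2)

# DEG: |log2 fold change| > 0.5 and FDR < 0.05
tab <- topTags(qlf, n = Inf)$table
deg <- tab[abs(tab$logFC) > 0.5 & tab$FDR < 0.05, ]
```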
Clustering of aging-associated genes. Aging-associated genes were clustered according to their temporal profiles using the fuzzy c-means algorithm with the R package Mfuzz 45 . Mean gene expression values (normalized counts) for each time point were normalized through logarithmic transformation and standardized with default settings. The fuzzifier parameter m and the number of clusters c were estimated with the default programs in Mfuzz. Cluster membership value >0.5 was used for identifying the core genes in each cluster. Only the core genes in the clusters were plotted in the temporal profiles and used for downstream analyses. Pathway enrichment analysis of the clustered genes was performed using enrichR 46 .
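A corresponding Mfuzz sketch, again ours rather than the authors' code; expr stands for a hypothetical genes-by-time-points matrix of mean normalized counts:

```r
library(Mfuzz)
library(Biobase)

# expr: genes x time points matrix of mean normalized counts (placeholder)
eset <- ExpressionSet(assayData = log2(as.matrix(expr) + 1))

# Standardize each gene's temporal profile to mean 0 and sd 1
eset.s <- standardise(eset)

# Estimate the fuzzifier m, then cluster into c = 6 clusters
m  <- mestimate(eset.s)
cl <- mfuzz(eset.s, c = 6, m = m)

# Core genes of each cluster: membership value > 0.5
core <- acore(eset.s, cl, min.acore = 0.5)
```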
Elastic-net and ridge regression. Age-predictive machine-learning models were trained using the R package Caret 47 , based on elastic-net 21 , or ridge regression 23 . Logarithmic transformed gene expression levels (normalized counts) in the control (NBH-exposed) brain slice cultures and their corresponding chronological ages were used for training the models with leave-one-out cross-validation.
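The age-prediction step might look roughly like the following caret sketch (our reconstruction; x, age, and x_prion are hypothetical placeholders for the training expression matrix, the chronological ages, and a matrix of prion-exposed samples). Setting alpha = 0 gives ridge regression; an intermediate alpha gives the elastic net used for selecting age-predictive genes:

```r
library(caret)

# Leave-one-out cross-validation, as described in the text
ctrl <- trainControl(method = "LOOCV")

# x:   samples x genes matrix of log-transformed normalized counts (placeholder)
# age: chronological ages of the control COCS in days (placeholder)
fit <- train(x = x, y = age,
             method    = "glmnet",
             trControl = ctrl,
             tuneGrid  = expand.grid(alpha  = 0,  # 0 = ridge; (0,1) = elastic net
                                     lambda = 10^seq(-2, 2, length.out = 20)))

# Predicted "biological ages" for, e.g., prion-exposed cultures
predicted_age <- predict(fit, newdata = x_prion)
```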
Cell type enrichment of aging-associated clusters. Enrichment of core genes in each aging-associated cluster for major cerebellar cell types was performed by gene set enrichment analysis (GSEA) using the R package fgsea 48 . The core genes of each cluster were used as gene sets. The ranking list of genes for each cell type was generated based on the expression fold changes of a given gene identified in a given cell type against the rest of the cerebellum 49 .
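The fgsea step could be sketched as follows (our illustration; cluster_core_genes and cell_type_ranks are hypothetical placeholders for the cluster gene sets and the fold-change ranking of genes in one cell type versus the rest of the cerebellum):

```r
library(fgsea)

# cluster_core_genes: named list of gene sets (core genes of each cluster)
# cell_type_ranks:    named numeric vector ranking all genes for one cell type
res <- fgsea(pathways = cluster_core_genes,
             stats    = cell_type_ranks,
             minSize  = 10,
             maxSize  = 2000)

# Clusters significantly enriched in this cell type
subset(res, padj < 0.05)
```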
Gene co-expression network analysis. Gene co-expression network analysis was performed using the R package CEMiTool 50 . Expression levels (normalized counts) of genes detected in all experimental groups were normalized through logarithmic transformation and filtered according to variance with the default parameters. Genes in the filtered list were used for identifying gene co-expression networks with the soft-thresholding parameter beta = 20. The activity of identified coexpression modules across experimental conditions was evaluated by performing a GSEA using the genes within modules as gene sets and the median z-score values of each phenotype as rank.
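And a minimal CEMiTool sketch (ours; expr_df and sample_annot are hypothetical placeholders for the log-normalized expression table and a sample annotation with SampleName and Class columns):

```r
library(CEMiTool)

# expr_df:      data.frame of log-normalized expression, genes as rows
# sample_annot: data.frame with SampleName and Class columns (placeholders)
cem <- cemitool(expr_df,
                annot    = sample_annot,
                filter   = TRUE,   # variance-based gene filtering
                set_beta = 20)     # fixed soft-thresholding power, as in the text

# GSEA of module activity across the experimental conditions
cem <- mod_gsea(cem)
nmodules(cem)
```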
Gene expression analyses of human brain aging and prion disease patients. Human brain-aging-related gene expression data were obtained from a previously published study 26 , where the authors examined the genome-wide gene expression profiles in the post-mortem samples of the frontal pole of 30 individuals ranging in age from 26 to 106 by Affymetrix gene chips. Genes differentially expressed in the autopsied frontal cortex of 10 sporadic CJD patients and 10 age-matched control subjects were obtained from a previously published study, and are available at GEO with the accession number GSE124571 27 . Information regarding the age, sex, neuropathological diagnosis, cause of death, post-mortem interval of the human subjects, as well as the ethical statements on the use of human samples, can be found in the original publications.
Statistics and reproducibility. Unless otherwise mentioned, an unpaired, two-tailed Student's t-test was used for comparing data from two groups, which were presented as mean ± SEM. Statistical analysis and data visualization were done using R or GraphPad Prism 8.0.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Raw RNAseq data are available on ArrayExpress with the accession number E-MTAB-11635 and E-MTAB-11742. Raw images of western blots shown in the main figures are included in Supplementary Fig. 7. The source data for graphs in the main figures are included in Supplementary Data 5.
Observation of Chiral Heat Transport in the Quantum Hall Regime
Heat transport in the quantum Hall regime is investigated using micron-scale heaters and thermometers positioned along the edge of a millimeter-scale two dimensional electron system (2DES). The heaters rely on localized current injection into the 2DES, while the thermometers are based on the thermoelectric effect. In the ν = 1 integer quantized Hall state, a thermoelectric signal appears at an edge thermometer only when it is "downstream," in the sense of electronic edge transport, from the heater. When the distance between the heater and the thermometer is increased, the thermoelectric signal is reduced, showing that the electrons cool as they propagate along the edge.

In the quantized Hall effect (QHE) the interior of the two dimensional electron system (2DES) is incompressible; an energy gap separates the ground state from its charged excitations. Gapless charged excitations do exist, but they are confined to the edges of the 2DES. These edge excitations are largely responsible for electrical transport through the system.
Ignoring electron-electron interactions, the gapless edge excitations in integer quantum Hall systems are easy to visualize. Near the physical edge of the sample the discrete Landau energy levels created by the magnetic field B move up in energy and eventually cross the Fermi level. Near these intersections the Landau orbitals are unidirectional current-carrying states analogous to classical skipping orbits. Arbitrarily low energy excitations are possible within each Landau band. In effect, the 2DES is encircled by a set of chiral, one-dimensional metals, one for each Landau level piercing the Fermi level [1].
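To make the chirality in this picture explicit, we add a textbook step here (ours, not part of the original text): in a Landau gauge, the confining potential V(x) near the edge shifts each Landau level according to the guiding-center position of the orbital,

$$ E_n(k) \simeq \hbar\omega_c\!\left(n + \tfrac{1}{2}\right) + V(x_k), \qquad x_k = \frac{\hbar k}{eB}, \qquad v_n = \frac{1}{\hbar}\frac{\partial E_n}{\partial k}, $$

so every mode at a given edge inherits the same sign of group velocity from the monotonic rise of V toward that edge; the modes are therefore chiral.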
The theory of edge channels in the fractional QHE regime is more complex [2]. Wen [3] concluded that the chiral edge states encircling fractional quantum Hall droplets are Luttinger liquids, as opposed to Fermi liquids. For the primitive fractions, ν = 1/m with m an odd integer, there is a single edge mode propagating in the direction expected for particles of charge q = −|e|/m. For more complex states, such as ν = 2/3, multiple edge modes are expected, some of which propagate upstream [3,4,5].
Understanding the edge of quantum Hall systems is complicated by uncertainty over the sharpness of the edge; i.e. how quickly the electron density falls from its bulk value to zero. It is widely appreciated that as the edge is softened reconstruction can occur whereby additional pairs of counterpropagating modes appear. Remarkably, even the ν = 1 integer quantized Hall state is expected to undergo such an edge reconstruction [6].
The existence of backward moving modes has yet to be demonstrated experimentally. Experiments designed to detect backward charged modes [7] have so far found no evidence for them. This motivated us to develop a new means for studying the edge modes of quantum Hall systems, a means not dependent on those modes being charged. In this paper we report the observation of edge heat transport in the quantum Hall regime. Our results demonstrate that at ν = 1 heat transport is strongly chiral, with heat propagating along the edge of the sample in the same direction as negatively charged excitations. However, we also find that hot electrons in the ν = 1 edge channel cool significantly as they propagate.

The 2DES samples employed here are conventional GaAs/AlGaAs heterostructures. The density N and mobility µ of the 2DES in these samples range from N = 1.1 to 1.6 × 10^11 cm^-2 and µ = 1.6 to 3 × 10^6 cm^2/Vs at low temperature. A schematic illustration, not to scale, of the device geometry is presented in Fig. 1a. Diffused NiAuGe ohmic contacts are placed along three of the edges of a large rectangular 2DES. On the remaining edge (top edge in Fig. 1a) three narrow constrictions (C1, C2, and C3) separate the main rectangular 2D region from smaller, but still macroscopic, 2D regions. Each of these smaller 2D regions has a single ohmic contact. Devices with two types of constrictions have been studied. In one case the constrictions are narrow (10 µm wide, 20 µm long) channels (NCs) covered by surface gates which control their conductance. In the other they are quantum point contacts (QPCs) whose conductance is controlled by surface split-gates. Four NC devices and one QPC device, from two different wafers, have all revealed the same qualitative results. The center-to-center distance between adjacent constrictions, measured along the edge of the main rectangle, is 30 µm in the NC devices and 20 µm in the QPC devices. These constrictions provide a means of locally heating and locally measuring the temperature along the edge of the main 2DES. The efficacy of this approach was first demonstrated in QPC devices at zero magnetic field by Molenkamp et al. [8].
In a typical measurement a low frequency ac excitation current (I_ex ~ 1-50 nA at f ~ 5 Hz) is driven between ohmic contacts 2 and 6 (see Fig. 1a) and thus through the center constriction (C2) of the device. If the conductance of this "heater" constriction is adjusted (via its associated gates) to be sufficiently small, localized Joule heating of the 2DES in the main rectangle will occur in its vicinity. The resulting temperature rise in the electron gas will extend outward from the constriction a distance determined by various energy relaxation and heat transfer processes. At low temperatures cooling to the lattice via electron-phonon coupling is weak and this distance can become relatively long. The existence of chiral edge states in the quantum Hall regime can be expected to significantly impact the extent and directionality of the temperature profile.
Temperature differences within the 2DES are detected by measuring the voltage difference V between two ohmic contacts, 3 and 4 in this typical example. Contact 3 is attached to the small 2DES region behind constriction C3 (the "detector") adjacent to the heater, while contact 4 is attached directly to the main 2DES rectangle. The voltage difference between these contacts (which, by assumption, are in thermal equilibrium with the lattice) will contain two terms: an ordinary resistive voltage drop and a thermoelectric contribution arising from any temperature drop ∆T which exists along the constriction. The existence of this thermoelectric voltage requires only that the thermoelectric power S (Seebeck coefficient) of the detector constriction differ from that of the bulk 2DESs it connects. In order to distinguish the resistive and thermal contributions to V, lock-in detection at both the fundamental frequency f and the second harmonic at 2f is performed. Since Joule heating is proportional to I_ex^2, we expect the 2f component of V to reflect its thermoelectric component.
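To spell out why the thermal signal appears at the second harmonic (a standard step added here for clarity; it is not in the original text): with a sinusoidal excitation current,

$$ I_{ex}(t) = I_0 \sin(2\pi f t), \qquad P(t) = R\,I_{ex}^2(t) = \frac{R I_0^2}{2}\left[1 - \cos(4\pi f t)\right], $$

so the dissipated power, and with it the local temperature rise ∆T and the thermoelectric voltage S ∆T, contains a dc term plus a component at exactly 2f. The second-harmonic lock-in therefore isolates the thermal contribution while rejecting the ordinary resistive response at f.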
We have validated this measurement scheme via experiments performed at zero magnetic field on devices with both NC and QPC constrictions. With C2 used as the heater, clear 2f thermoelectric voltages are observed at both C1 and C3. As observed by Molenkamp et al. [8], the thermoelectric signal in our QPC device is maximized when the conductance of the QPC detector is on a riser between adjacent quantized conductance plateaus. That comparable signals are observed with detectors on each side of the heater demonstrates that heat transport at B = 0 is isotropic as expected and not chiral.

Figure 1b shows typical results obtained in the vicinity of the bulk ν = 1 QHE around B = 4-5 T using a NC device. A current of I_ex = 15 nA at f = 5 Hz is driven between contacts 2 and 6 while both the f and 2f components of the voltage difference V between contacts 3 and 4 are recorded. Constrictions C2 and C3 are adjusted to have conductances of ~0.5 e^2/h at B = 4.25 T. Ignoring electron heating, these constrictions would merely add series resistances to the current and voltage pick-up pathways; no effect on the 4-terminal resistivity of the QHE would be expected. Thus it is not surprising that the resistive component V_f of V (dashed trace in Fig. 1b) shows the deep minimum characteristic of the quantized Hall effect. At the same time, however, a small but non-zero voltage V_2f is detected at 2f (solid trace). Although the magnetic field dependence of V_2f is fairly complex on the flanks of the ν = 1 QHE, we focus here on the center of the state where V_2f is roughly constant.
We interpret the 2f signal seen within the ν = 1 QHE state as a thermoelectric voltage arising from a temperature drop along C3 induced by heating at C2. Support for this interpretation is presented in Fig. 2. Figure 2a shows how the observed V_2f signal at B = 4.25 T depends on the net two-terminal resistance R_sd ≡ V_sd/I_ex (in units of R_Q = h/e^2) of the heater circuit. To obtain these data, the gate voltage controlling C2 is adjusted, and the two-terminal voltage V_sd between the source and drain ohmic contacts (2 and 6 in this case) is recorded along with V_2f. The excitation current is held constant at I_ex = 15 nA. The figure shows that while V_2f is nonlinear in R_sd, it appears to vanish as R_sd → R_Q. This is the expected result. If the entire heater circuit, including C2, is within the ν = 1 QHE, then R_sd = R_Q. Heat will be generated, in the amount P = R_Q I_ex^2, but only at hot spots very near the ohmic contacts. If, as we assume, the ohmic contacts are thermal reservoirs in equilibrium with the crystal lattice, this heat will be absorbed by the contacts. As the constriction conductance is reduced by gating, R_sd starts to exceed R_Q. Additional heating, now in the constriction, begins to occur. As there is no nearby thermal reservoir to absorb this heat, it propagates away from the constriction and is ultimately detected at C3, the detector constriction. Hence, the data in Fig. 2a demonstrate that V_2f depends not on R_sd alone, but rather upon the difference R_h = R_sd − R_Q, which we may regard as the relevant heater resistance [9].
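For a sense of scale, a back-of-the-envelope estimate (ours; these numbers are not quoted in the text): with R_Q = h/e^2 ≈ 25.8 kΩ and I_ex = 15 nA,

$$ P = R_Q I_{ex}^2 \approx (2.58\times 10^{4}\,\Omega)\,(1.5\times 10^{-8}\,\mathrm{A})^2 \approx 5.8\times 10^{-12}\,\mathrm{W} \approx 6\,\mathrm{pW}, $$

so the relevant heater powers are at the picowatt level, and only the excess dissipation P_h = R_h I_ex^2 beyond the contact-absorbed part heats the edge electrons.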
We stress that the data in Fig. 2a are obtained at fixed excitation current I_ex = 15 nA. If V_2f were a parasitic effect (e.g. harmonic distortion) tied, ultimately, to the resistivity of the 2DES, no dependence on the heater resistance would be expected. To explore this further, Fig. 2b shows the dependence of V_2f on heater power dissipation, defined as P_h = R_h I_ex^2, at both T = 0.1 and 0.6 K. For both temperatures, the solid dots are obtained by changing the heater resistance at fixed I_ex, while for the open dots R_h is kept (nearly) constant and I_ex is varied (from I_ex = 0.04 to 15 nA). To a good approximation, the solid and open dots lie on a single curve. This shows that V_2f is a function of heater power P_h rather than another combination of R_h and I_ex. This is strong evidence in support of our assertion that V_2f reflects the heating of the 2DES at C2.
The sub-linear power dependence of V_2f at T = 0.1 K evident in Fig. 2b contrasts with the linear dependence seen at T = 0.6 K. This may indicate that at T = 0.1 K the electrons in this NC device are being heated well out of equilibrium with the lattice. Interestingly, we find that in QPC devices comparable V_2f signals are detectable in the linear regime, even at T = 0.1 K [10].

Figure 2c compares the magnitude of V_2f at B = 4.25 T with the conductance G_C3 of C3, the detector constriction, as functions of the dc voltage V_g applied to the gate across it. Note that near V_g = 0, where G_C3 = e^2/h, V_2f ≈ 0. This is again the expected result since there the 2DES in both C3 and the bulk of the device are within the ν = 1 QHE state. The thermopower is therefore uniform along a path connecting contacts 3 and 4 (passing through C3) and thus no thermoelectric voltage can develop [11]. As |V_g| is increased, G_C3 falls below e^2/h and V_2f becomes non-zero. The 2DES in C3 now has (in general) a different thermopower than the bulk 2DES and hence a thermoelectric voltage appears. We emphasize that the sign of this voltage is consistent with the expected sign (negative) of the thermopower S of C3 and that the electron temperature is higher at the end of the constriction where it meets the large rectangular 2DES than at its other end.

Up to this point the detector, C3, has been downstream (clockwise in Fig. 1a), in the sense of electronic edge transport, from the heater, C2. What signals, if any, are observed upstream (i.e. at C1) from the heater? Figure 3 summarizes our findings. Panels (a) and (b) show the resistive, V_f, and thermoelectric, V_2f, components of the voltages [12] at C3 and C1, respectively, for clockwise edge transport. Panels (c) and (d) show the same, but for counter-clockwise edge transport (obtained by reversing the magnetic field direction). The results are unambiguous: while in all four cases the resistive component of the voltage displays the expected QHE minimum, a significant thermoelectric component is only observed downstream from the heater. As expected, therefore, heat transport in the ν = 1 QHE is chiral. Electrons arriving at an upstream constriction have recently been thermalized at an ohmic contact; those arriving at a downstream constriction have apparently been unable to release the thermal energy they gained in the vicinity of the heater.
How far can hot edge state electrons propagate before they cool appreciably? To investigate this we compared the V_2f signal observed at C3 when C2 is used as the heater with the C3 signal when C1 is the heater. (In the latter case C2 is completely closed off by fully depleting the 2DES within it.) In this way we can compare V_2f signals at a single detector at two different distances from the heater; 20 vs. 40 µm in the QPC devices and 30 vs. 60 µm in the NC devices. In the NC devices only extremely weak thermoelectric voltages could be detected at 60 µm, suggesting that electrons have almost completely thermalized. In the QPC devices, a clear signal is observed at 40 µm, although in the middle of the ν = 1 QHE it is typically 3 to 5 times smaller than the signal at 20 µm. We estimate that the thermal decay length λ for hot edge state electrons at ν = 1 is in the range of λ ~ 20 µm at T = 0.1 K.

The 40 µm V_2f signal at C3 is reduced if the intermediate constriction, C2, is partially opened. This behavior is displayed in Fig. 4 where we plot the ratio of the 40 µm V_2f signal to the 20 µm signal (measured separately at C3 with C2 as the heater) as a function of the conductance G_C2 of C2. As C2 is opened, a fraction of the hot electrons are diverted away from the edge and are replaced by cold electrons from ohmic contact 2. The result is a reduced V_2f signal downstream at C3. That the signal vanishes at G_C2 ≈ 0.8 e^2/h instead of e^2/h is puzzling. Nonetheless, these data provide strong evidence that in addition to being chiral, heat transport at ν = 1 is in fact concentrated at the edge of the 2DES. No analogous quenching of the C3 signal is observed at B = 0.
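A rough consistency check on the decay length quoted above, under the simple assumption (ours) that the hot-electron signal decays exponentially with distance d from the heater, V_2f(d) ∝ exp(−d/λ):

$$ \frac{V_{2f}(40\,\mu\mathrm{m})}{V_{2f}(20\,\mu\mathrm{m})} = e^{-20\,\mu\mathrm{m}/\lambda} \approx \tfrac{1}{3}\ \text{to}\ \tfrac{1}{5} \;\;\Rightarrow\;\; \lambda = \frac{20\,\mu\mathrm{m}}{\ln(3\ \text{to}\ 5)} \approx 12\text{--}18\,\mu\mathrm{m}, $$

in reasonable agreement with the quoted λ ~ 20 µm.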
The mechanism responsible for the observed cooling of edge electrons at ν = 1 is so far unknown. Cooling by acoustic phonon emission is possible, but simple estimates suggest that it is too weak to account for the micron-scale thermal decay length our measurements imply [13]. Since the conductivity σ_xx is vanishingly small in the QHE, naive application of the Wiedemann-Franz law would suggest heat cannot leave the edge and enter the bulk of the 2DES. However, this ignores the possibility of energy transport, mediated by long-range Coulomb interactions, between localized electronic states in the bulk [14]. At ν = 1 such a mechanism might be especially probable given the known existence of low energy neutral collective modes in the spin sector [15]. Another possibility is that there are additional collective modes at the edge itself, due to edge reconstruction [6] or the formation of a compressible strip [16]. For example, a backward moving mode could remove energy from the dominant chiral mode and thus thermalize it. If the backward mode velocity were much less than that of the chiral mode, little if any heating would be detected upstream.
In conclusion, we have employed local heaters and thermometers to explore heat transport at the edge of the ν = 1 QHE. Our results demonstrate that heat transport is strongly chiral but that significant cooling occurs as electrons propagate along the edge.
Infectious Etiologies of Chronic Diseases: Focus on Women
Infections can directly or indirectly cause chronic conditions through progressive pathology (e.g., chronic infection, inflammation, immunity, malignant transformation), sudden permanent insults (e.g., West Nile virus poliomyelitis paralysis), or by predisposing people to noninfectious sequelae (e.g., neurologic consequences of preterm birth). Bacteria, parasites, prions, viruses, and fungi may be the single or one of several factors contributing to chronic disease; one organism can cause more than one syndrome, and diverse pathogens produce similar syndromes as pathways to disease converge (1). Certain potential outcomes disproportionately affect women (e.g., autoimmune diseases), and in some settings, detection, prevention, or treatment efforts (e.g., ocular trachoma, underdiagnosed genital infections) may marginalize women. Women's activities can also increase exposures to chronic disease pathogens (e.g., schistosomiasis attributable to chores or agriculture), and gender can affect transmission (e.g., increased male-to-female transmission of human T-cell leukemia virus–1). Preventing maternal infections may further minimize chronic disease and neurodevelopmental disorders in offspring.
Are Women's Autoimmune Diseases Really Autoimmune?
Systemic and organ-specific autoimmune diseases, such as rheumatoid arthritis and myocarditis, are the leading cause of death in women >65 years of age (2). They affect 14-22 million people (5%-8% of the population) in the United States (3) and millions more worldwide. In autoimmunity, the immune system may attack or damage self-tissues with autoantibodies and autoreactive T and B cells. However, the indolent nature of most autoimmune diseases makes determining infectious triggers difficult. Animal models help to understand such links. For example, transfer of disease by autoantibodies and immune cells from affected animals indicates the immune-mediated nature of these syndromes (4-6). Toll-like receptors and the innate immune system, critical components of the normal human response to infection, are essential to naturally and experimentally induced autoimmunity. Genetic and other factors affect susceptibility to both infection and autoimmune disease. For example, coxsackievirus B3 induces viral myocarditis in susceptible mice. Certain cytokines (interleukin [IL]-1 and tumor necrosis factor [TNF]-α), but not viral replication, correlate with cardiac inflammation and can overcome resistance to chronic myocarditis (7-9). These findings suggest that, while infection may trigger autoimmunity, immune processes drive disease progression. Estrogen amplifies the immune response to coxsackievirus B3 in susceptible mice, increasing TNF-α and IL-4 levels (unpub. data), which is perhaps consistent with women's predisposition to autoimmune disease. Identifying triggers, including infection, and early markers of autoimmunity are important goals for preventing onset of or disrupting progression to autoimmune disease.
Infection Connection in Neurodevelopmental Disorders
Intrauterine infections are known causes of congenital defects worldwide. Infections during the time of fetal brain development might also contribute to neuropsychiatric disorders, including schizophrenia. Studies linking various gestational insults (including infections) and subtle premorbid behavioral alterations to adult schizophrenia implicate a neurodevelopmental origin. However, the long latency between putative infection or insult and the emergence of psychotic symptoms complicates establishing direct links. While most reports have been ecologic studies without confirmed maternal infection, Brown et al. (10) found that 20.4% of persons with a documented in utero exposure to rubella developed an adult schizophrenia spectrum disorder. Experimentally, lymphocytic choriomeningitis virus infection in a neonatal rat model produces some latent changes similar to those of schizophrenia, e.g., hippocampal atrophy and impaired inhibitory GABA neurotransmission (11); blocking IL-1 partially attenuates the hippocampal cell loss. Inflammatory cytokine responses, perhaps amplified by immunogenetic abnormalities, may be a common thread linking intrapartum infections and noninfectious gestational and obstetric complications to neurodevelopmental disorders (12).
Keys to the Future
A continuum from acute infection to chronic disease exists, and each stage is an opportunity to prevent or minimize an avoidable fraction of chronic disease: that resulting from infectious disease. Crucial steps include identifying infectious etiologies and cofactors, determining persons (including women) at risk for infection or outcome, and implementing measures that minimize chronic sequelae. Research incorporating longitudinal studies that precede clinical disease must support evidence-based conclusions and actions. The benefits to women could be substantial.
*Centers for Disease Control and Prevention, Atlanta, Georgia, USA; †Johns Hopkins University, Baltimore, Maryland, USA; and ‡Emory University School of Medicine, Atlanta, Georgia, USA
The Effects of Being an Only Child, Family Cohesion, and Family Conflict on Behavioral Problems among Adolescents with Physically Ill Parents
Background: This study aimed to examine the effect of parental physical illness on behavioral problems among adolescents, and the effects of being an only child, family cohesion, and family conflict on behavioral problems among adolescents with physically ill parents in Liaoning province, China. Methods: This cross-sectional study was performed in 2009. A questionnaire including two dimensions of the Family Environment Scale (family cohesion and family conflict), the self-reported Strengths and Difficulties Questionnaire (SDQ), and demographic factors was distributed to the subjects. Results: Among the 5220 adolescents, 308 lived with physically ill parents. The adolescents with physically ill parents had more behavioral problems than adolescents with healthy parents. Among the girls who lived in families with physically ill parents, the SDQ score and the prevalence of SDQ syndromes were higher in the girls with siblings than in the girls without siblings after adjusting for variables; the effect of family cohesion on SDQ was significant after adjusting for variables. Conclusion: Interventions targeting family cohesion may be effective in reducing behavioral problems of adolescents with physically ill parents.
Introduction
It is common for adolescents to live with a physically ill parent: 5%-15% of children and adolescents may live with a parent who suffers from a physical illness (US National Center for Health Statistics) [1]. Living with a physically ill parent is challenging for adolescents. They need to take on additional family responsibilities during parental illness [2,3]. Families with physically ill parents may also suffer the loss of financial resources [4]. Due to these factors, adolescents with physically ill parents may be at high risk of behavioral problems [5][6][7]. Studies have reported that adolescents with physically ill parents may have more internalizing problems (such as anxiety and depressed mood) and externalizing problems (such as aggressive behavior and delinquent behavior) compared to adolescents with healthy parents [5][6][7]. Adolescents who have behavioral problems are vulnerable to psychiatric disorders in their adulthood [8,9]. Therefore, it is important to focus on behavioral problems among adolescents with physically ill parents. However, most research on adolescents with physically ill parents to date has been performed in Europe and America, and few studies have been conducted in China. In this study, we hypothesized that Chinese adolescents with physically ill parents would have more behavioral problems.
Previous studies have demonstrated that being an only child is associated with behavioral problems. The Chinese government implemented the "one child policy" in early 1979, so there is now a large proportion of adolescents without siblings in China. Some studies from Britain, Korea, and the Netherlands have shown that children without siblings are overprotected and self-centered, which may have a negative effect on their psychological development [10][11][12]. In China, researchers have found that children without siblings might have more behavioral problems than children with siblings [13,14]. However, Liu et al. found that, compared to adolescents without siblings, adolescents with siblings had worse mental health in China, based on resource dilution theory (family resources are divided by the number of children, so an increasing number of children may lower the quality of the output) [15]. The quality of output refers to the resources, such as economic and interpersonal resources, that children obtain from the family. Compared to children with siblings, children without siblings obtain more economic and interpersonal resources (such as attention, time, and energy), which may be conducive to their well-being [15]. Moreover, Li et al. reported no significant differences in behavioral problems between children with and without siblings [16]. Although many studies have explored the effects of being an only child on adolescents' mental health, most were limited to the general population, and few focused on adolescents with physically ill parents. Parental physical illness may lead to the loss of financial resources and less attention to adolescents from family members [4,17]. Loss of family resources may enhance the negative effect of sibship size on children's development [18]. In addition, under the "one child policy" in China, families with two or more children would suffer financial penalties while families with only one child were given incentives, such as more financial aid for medical problems [19]. All of these factors could worsen the mental health of children with siblings living in families with physically ill parents. Therefore, we hypothesized that Chinese children with siblings living in families with physically ill parents would have more behavioral problems.
Family cohesion is defined as "the degree of commitment and support family members provide for one another"; family conflict is defined as "the amount of openly expressed anger and conflict among family members" [20]. Kissane et al. reported that family cohesion is an important element of family coping [21]. Previous studies have suggested that high levels of support and low levels of conflict may help family members better cope with physical illness [22][23][24][25]. Parental physical illness may have a negative effect on adolescents' behavioral problems [5][6][7]. Adolescents from highly cohesive and low-conflict families may adjust better psychologically to parental physical illness [23][24][25].
To date, most studies exploring the effects of family cohesion and conflict on adolescents' behavioral problems have been conducted in Western societies. A study of 68 countries demonstrated that China is more collectivistic than Western countries [26]. Collectivism focuses on community and emphasizes the importance of cohesion [27]. In China, the family is considered a community. Family members are more likely to support each other and less likely to express their anger [28]. Greenberger et al. found that the relationship between the quality of family relationships and depressive symptoms was stronger among Chinese adolescents than among US adolescents [29]. Therefore, we hypothesized that family cohesion and family conflict would affect behavioral problems of adolescents with physically ill parents in China.
There are three aims in this study. First, we examined whether adolescents with physically ill parents had more behavioral problems compared to adolescents with healthy parents in China. Second, we assessed the effects of being an only child on adolescents' behavioral problems in the families with physically ill parents. Third, we explored the effects of family cohesion and family conflict on adolescents' behavioral problems in families with physically ill parents.
Sample
A survey was conducted in Liaoning province, China in 2009. According to population size, Liaoning province comprises three metropolitan cities (≥1,000,000), seven medium-size cities (500,000-1,000,000), and four small cities (200,000-500,000). Three cities (one metropolitan, one medium-size, and one small city) were randomly selected in 2009. Three urban areas from each city and two rural areas from the metropolitan and medium-size cities were randomly selected. Rural areas in the small city, which had few public schools, were not included. Six public schools (two primary schools (grades 5-6) and four middle schools (grades 7-12)) were randomly selected from each area by age range (11-18 years), and two or three classes were randomly selected from each grade. We randomly selected 30 students (15 boys and 15 girls) in each selected class using a pre-prepared list of random numbers; a sketch of this sampling scheme is given below. Parental illness was reported by the parents. Parents with mental illness were excluded. Among the 5220 adolescents (boys: 2277 (43.6%); mean age: 13.74 ± 2.10 years; age range: 11-18 years), 308 adolescents lived with physically ill parents. Among these 308 adolescents, 172 had fathers with physical illness; 175 had mothers with physical illness; 39 had both parents with physical illness. The details of parental physical illness are presented in Table 1. Four thousand one hundred and three (78.6%) were only children; 4147 (79.4%) lived in urban areas; 4777 (91.5%) lived with both biological parents. As to family income, the proportions of "<500 RMB", "500-1000 RMB", "1001-1500 RMB", "1501-2500 RMB", and ">2500 RMB" were 7.9%, 19.9%, 23.3%, 22.7%, and 26.1%, respectively. As to father's educational level, the proportions of junior high school level, senior high school level, and college level or higher were 41.2%, 40.9%, and 17.9%, respectively; for mother's educational level, the corresponding proportions were 43.1%, 40.8%, and 16.1%. The average age of the fathers was 40.50 ± 4.10 years; the average age of the mothers was 39.02 ± 3.84 years. All the participants were informed about the purpose of the survey before their participation. The procedures were approved by the Ethics Committee on Human Experimentation of China Medical University. Written informed consent was obtained from all participating adolescents and their parents (CMU62083004).
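To make the multistage design concrete, the following minimal sketch mirrors the sampling steps for one already-selected city (areas, then schools, then classes, then 15 boys and 15 girls per class). All frame sizes and rosters here are illustrative assumptions, not the study's actual sampling frame.

```python
import random

random.seed(2009)  # illustrative seed; the study did not publish one

def sample_class(boys_roster, girls_roster, n_boys=15, n_girls=15):
    """Draw 15 boys and 15 girls from a class roster using random numbers."""
    return random.sample(boys_roster, n_boys) + random.sample(girls_roster, n_girls)

# Hypothetical frame for one already-selected city: 3 urban areas,
# 6 schools per area, 2 classes shown per school (the study drew 2-3 per grade).
students = []
for area in range(3):
    for school in range(6):
        for cls in range(2):
            boys = [f"a{area}-s{school}-c{cls}-boy{i}" for i in range(40)]
            girls = [f"a{area}-s{school}-c{cls}-girl{i}" for i in range(40)]
            students += sample_class(boys, girls)

print(len(students))  # 3 * 6 * 2 * 30 = 1080 students for this city
```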
The Self-Report Version of the Strengths and Difficulties Questionnaire (SDQ)
Behavioral problems were measured with the self-reported SDQ. The SDQ contains 25 items, each rated on a three-point Likert scale with response categories ranging from "not true" (0 points) through "somewhat true" (1 point) to "certainly true" (2 points) [30]. The self-reported SDQ includes five factors: emotional symptoms (5 items, e.g., "I get a lot of headaches, stomach-aches or sickness"; "I worry a lot"), conduct problems (5 items, e.g., "I get very angry and often lose my temper"; "I usually do as I am told"), hyperactivity/inattention (5 items, e.g., "I am restless, I cannot stay still for long"; "I am constantly fidgeting or squirming"), peer problems (5 items, e.g., "I am usually on my own. I generally play alone or keep to myself"; "I have one good friend or more"), and prosocial behavior (5 items, e.g., "I try to be nice to other people. I care about their feelings"; "I usually share with others (food, games, pens, etc.)") [31]. All factors except prosocial behavior are summed to assess behavioral problems [31]. For behavioral problems, we applied a cut-off score of 18 to define the "behavioral problems" group [32]. The Cronbach's alphas for the whole questionnaire were 0.69 and 0.71 in adolescents with and without physically ill parents, respectively.
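As a concrete illustration of this scoring scheme, here is a minimal sketch. The item-to-subscale mapping and the respondent data are placeholders (the real allocation follows the published SDQ key), and treating the cutoff of 18 as inclusive is an assumption.

```python
SUBSCALES = ["emotional", "conduct", "hyperactivity", "peer", "prosocial"]

def score_sdq(responses, cutoff=18):
    """responses: dict mapping each subscale to its 5 item scores,
    each scored 0 ("not true"), 1 ("somewhat true"), or 2 ("certainly true").
    Returns the total difficulties score and the behavioral-problems flag."""
    for name in SUBSCALES:
        items = responses[name]
        assert len(items) == 5 and all(v in (0, 1, 2) for v in items)
    # Prosocial behavior is a strength scale and is excluded from the total.
    total = sum(sum(responses[s]) for s in SUBSCALES if s != "prosocial")
    return total, total >= cutoff  # inclusive cutoff is an assumption

# Example: scoring 1 on every problem item gives 4 subscales * 5 = 20 >= 18.
example = {s: [1] * 5 for s in SUBSCALES}
print(score_sdq(example))  # (20, True)
```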
Family Environment Scale (FES)
Family cohesion and family conflict were measured with the FES, which was filled in by adolescents. The FES is a 90-item true-false measure which forms 10 factors [33]. In this study, we focused on two factors, cohesion and conflict. Each of these factors contains nine items. The sample items of family cohesion included "Household members really help and support one another", "We put a lot of energy into what we do at home" and so on. The sample items of family conflict included "We fight a lot in our household", "Household members often criticize each other" and so on.
The Cronbach's alphas for family cohesion were 0.74 and 0.78 in adolescents with and without physically ill parents, respectively; the Cronbach's alphas for family conflict were 0.63 and 0.72 in adolescents with and without physically ill parents, respectively.
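For reference, the internal-consistency statistic reported here and for the SDQ above is Cronbach's alpha, which can be computed from an items-by-respondents matrix as in this minimal sketch (the toy data are illustrative):

```python
def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, aligned across respondents.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    total_scores = [sum(item[j] for item in item_scores) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in item_scores) / var(total_scores))

# Three true-false (0/1) items answered by four respondents (toy data).
print(round(cronbach_alpha([[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0]]), 2))  # 0.89
```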
Demographic Factors
Demographic factors included adolescent's age, adolescent's gender, being an only child, living area (urban/rural), family structure, family income (<500 RMB/500-1000 RMB/1001-1500 RMB/1501-2500 RMB/>2500 RMB), father's/mother's age and father's/mother's educational levels (junior high school level/senior high school level/college level or higher). "Family structure" was divided into two groups of "intact family" (living with both biological parents) and "non-intact family" (living with single parent, step-parent, or foster parent) [34]. Adolescent's age, adolescent's gender, being an only child, family structure and living area were completed by adolescents. Father's/mother's educational levels and family income were completed by parents. Previous studies reported that adolescent age, adolescent gender, living area, family structure, family income, father's/mother's age, father's/mother's educational levels were associated with behavioral problems [34][35][36][37]. Therefore, in this study, we adjusted for these variables.
Statistical Analysis
Chi-square analyses for dichotomous variables, and independent-sample t-tests and one-way ANOVA analyses for continuous variables, were used to examine differences in demographic variables and behavioral problems between adolescents with and without physically ill parents, and among the adolescents with a physically ill father, a physically ill mother, and both parents with physical illness. The distributions of behavioral problems across categorical variables were examined by independent-sample t-tests, one-way ANOVA analyses, and Chi-square analyses. Correlations among behavioral problems and all continuous variables were examined by Pearson correlation. In the total sample, the effects of parental physical illness and being an only child on adolescents' behavioral problems were examined using analyses of covariance; we then performed logistic regression analyses to examine the effects of parental physical illness and being an only child on adolescents' behavioral problems. The interaction terms of only child with parental physical illness on behavioral problems were examined by logistic regression analyses and analyses of covariance. In the families with and without physically ill parents, after adjusting for adolescent's age, adolescent's gender, family structure, living area, father's/mother's educational levels, father's/mother's age, and family income, the effect of being an only child on adolescents' behavioral problems was examined using analyses of covariance; we then performed logistic regression analyses to examine the effect of being an only child on adolescents' behavioral problems. The interaction terms of family cohesion and family conflict with parental physical illness on behavioral problems were examined by multiple linear regression analyses. In the families with and without physically ill parents, we used multiple linear regression analyses to examine the effects of family cohesion and family conflict on adolescents' behavioral problems.
The analyses were performed with SPSS 13.0, with a two-tailed probability value of <0.05 considered statistically significant.
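To make the modeling concrete, here is a minimal sketch of the central analysis described above, a logistic regression with an only-child-by-parental-illness interaction term adjusted for the listed covariates, written in Python with statsmodels. The file and variable names are placeholder assumptions rather than the authors' SPSS syntax.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per adolescent; file and column names are placeholders.
df = pd.read_csv("adolescents.csv")

# Logistic regression with an only-child-by-parental-illness interaction,
# adjusted for the covariates listed in the text. In the paper the models
# were also fit separately within boys and girls, dropping the gender term.
model = smf.logit(
    "behavioral_problems ~ only_child * parent_ill + age + C(gender)"
    " + C(family_structure) + C(living_area) + C(father_edu)"
    " + C(mother_edu) + father_age + mother_age + C(family_income)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, as reported in the Results.
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```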
Results
There were no differences in demographic characteristics (including adolescent's gender, adolescent's age, being an only child, living area, family structure, family income, father's education level, father's age, and mother's age) or behavioral problems, except mother's education level, among adolescents with a physically ill father, a physically ill mother, and both parents with physical illness (p > 0.05). There were also no differences between the adolescents with and without physically ill parents in terms of adolescent's gender, adolescent's age, being an only child, and living area. The distributions of family income, family structure, and father's/mother's education level were significantly different between the adolescents with and without physically ill parents (p < 0.05). In families with physically ill parents, parents reported older age, lower family income, and lower education levels, and a smaller proportion of adolescents lived with both biological parents compared to children with healthy parents (p < 0.05).
Based on the results from Chi-square analyses, independent-sample t-tests and one-way ANOVA analyses, being an only child was associated with behavioral problems among adolescents with physically ill parents; living area and father's/mother's education were associated with behavioral problems among the adolescents without physically ill parents; gender was associated with behavioral problems among the adolescents without physically ill parents; family structure was associated with behavioral problems among the girls without physically ill parents; adolescents with physically ill parents had more behavioral problems (p < 0.05). Based on independent-sample t-tests and one-way ANOVA analyses, we also found that family structure was correlated with behavioral problems among the boys without physically ill parents; family income was correlated with behavioral problems among the girls without physically ill parents; being an only child was related to behavioral problems among the girls without physically ill parents (p < 0.05).
Adolescent's age was associated with hyperactivity/inattention among the boys with physically ill parents; adolescent's age was associated with behavioral problems and emotional symptoms among the girls with physically ill parents; adolescent's age, father's age and mother's age were related to behavioral problems, emotional symptoms, conduct problems, and hyperactivity/inattention among the adolescents without physically ill parents; father's age was also related to peer problems among the girls without physically ill parents (p < 0.05). Family cohesion was associated with behavioral problems, emotional symptoms, conduct problems, and hyperactivity/inattention among the adolescents with physically ill parents; family cohesion was also related to prosocial behavior and peer problems among the boys with physically ill parents and the girls with physically ill parents, respectively; family conflict was associated with behavioral problems, emotional symptoms, conduct problems, and hyperactivity/inattention among adolescents with physically ill parents; family conflict was also related to peer problems and prosocial behavior among boys with physically ill parents; family cohesion and conflict were associated with the total scale and each sub-scale of behavioral problems among the adolescents without physically ill parents (p < 0.05) (shown in Supplementary Materials and Research Data).
The effect of parental physical illness on behavioral problems in the total study population is presented in Table 2. In the total study population, the effect of parental physical illness on behavioral problems was significant after adjusting for variables: adolescents with physically ill parents had more behavioral problems compared to the adolescents without physically ill parents. (Table 2 notes: * p < 0.05; a: adjusted for adolescent's age, adolescent's gender, only child, family structure, living area, father's/mother's educational level, father's/mother's age, and family income in analyses of covariance among the total sample, with all these variables excluding adolescent gender controlled in analyses of covariance among boys and girls; b: the same adjustments applied in the logistic regression analyses.)
The interaction term of being an only child with parental physical illness on behavioral problems was significant among the girls after adjusting for adolescent's age, family structure, living area, father's/mother's educational levels, father's/mother's age, and family income (F = 4.06, p < 0.05; OR = 2.87, 95% CI: 1.15 to 7.12), but not among the boys (F = 1.34, p = 0.25; OR = 1.03, 95% CI: 0.36 to 2.97). Compared to the girls without physically ill parents, being an only child had a stronger effect on behavioral problems among the girls with physically ill parents.
The effect of being an only child on behavioral problems in the total study population and in the adolescents with and without physically ill parents is presented in Table 3. In the total study population and in the adolescents without physically ill parents, the effect of being an only child on behavioral problems was not significant after adjusting for adolescent's age, family structure, living area, father's/mother's educational levels, father's/mother's age, and family income. Among girls with physically ill parents, the effect of being an only child on behavioral problems was significant after the same adjustments (p < 0.05): compared to girls without siblings, girls with siblings had more behavioral problems. (Table 3 notes: * p < 0.05; a: in the total sample, adjusted for adolescent's age, adolescent's gender, family structure, living area, father's/mother's educational level, father's/mother's age, parental physical illness, and family income, with all these variables excluding adolescent gender controlled among boys and girls; b and e: among adolescents with and without physically ill parents, adjusted for adolescent's age, adolescent's gender, family structure, living area, father's/mother's educational level, father's/mother's age, and family income, with all these variables excluding adolescent gender controlled among boys and girls; c: analyses of covariance; d: logistic regression analyses.)
The interaction terms of family cohesion and family conflict with parental physical illness on behavioral problems were not significant among boys or girls after adjusting for adolescent's age, family structure, being an only child, living area, father's/mother's educational levels, father's/mother's age, and family income (boys: family cohesion × parental physical illness, beta = −0.08, p = 0.82; family conflict × parental physical illness, beta = −0.23, p = 0.48; girls: family cohesion × parental physical illness, beta = 0.25, p = 0.28; family conflict × parental physical illness, beta = −0.25, p = 0.26). There was no difference in the effects of family cohesion and conflict on behavioral problems between the adolescents with and without physically ill parents.
In the families with physically ill parents, after adjusting for adolescent's age, family structure, living area, father's/mother's educational level, father's/mother's age and family income, the effect of family cohesion on behavioral problems was significant among boys and girls (boys: beta = −1.01,
Discussion
In this study, we found that adolescents with physically ill parents had more behavioral problems than adolescents with healthy parents in China. These results are consistent with previous studies that reported mental health problems in adolescents whose parents were affected by mixed or specific physical illnesses [5][6][7]. Adolescents living with physically ill parents may need to take on more responsibilities, such as caring for family members, and have less leisure time to play with friends, which may deteriorate their mental health [2,3]. In addition, parental physical illness may give rise to depletion of financial resources [29]. Compared to Western countries, investment in health care in China is lower, which may aggravate the financial burden [38,39]. Therefore, Chinese adolescents with physically ill parents may have more behavioral problems.
In families with physically ill parents, the girls with siblings were found to have more behavioral problems than the girls without siblings. However, we did not find a significant effect of being an only child on behavioral problems in the whole sample or among boys with physically ill parents. Compared to the girls without physically ill parents, the effect of being an only child on behavioral problems was stronger among the girls with physically ill parents. Our results are inconsistent with those of Visser et al., who showed that children with no or few siblings were more vulnerable to behavioral problems than children with more siblings in families with parents affected by cancer [40].
Multiple factors may contribute to these results. Families with physically ill parents may suffer a depletion of financial resources [4]. In families with ill parents, family members' attention to adolescents may be less than in families with healthy parents [17]. According to resource dilution theory, more economic and interpersonal family resources (such as attention, time, and energy) may help only children better adjust to stress and challenges [15]. Additionally, girls tend to rely on their family as a source of emotional support [41]. Moreover, as part of traditional Chinese culture, gender discrimination against girls is still prevalent, so boys may get more attention than their sisters [19]. Therefore, in families with physically ill parents, the girls with siblings are more susceptible to behavioral problems. This increased risk may also exist in other Asian developing countries that share similar cultures. Furthermore, under the "one child policy" in China, parents who violated it could lose their work and pay a fine to compensate the "cost to society", while the Chinese government designed policies to encourage one-child families, such as more financial aid for medical problems, which may also contribute to our results [19]. Therefore, these findings need to be confirmed in future studies outside China.
In this study, family cohesion was found to be associated with behavioral problems among the adolescents with physically ill parents. These results are consistent with previous studies conducted in cancer and multiple sclerosis samples [23][24][25]. They indicate that the quality of the family environment, especially family cohesion, can help adolescents adjust better psychologically to parental physical illness, while adolescents in less cohesive families are at higher risk of mental health problems. Collectivism focuses on community and stresses the importance of cohesion [27]. In China, the family is considered a community, so family members tend to support each other, which may reduce behavioral problems of the adolescents with physically ill parents [28]. Developing programs to increase family cohesion, and thereby improve adolescents' mental health in families with physically ill parents, may be a positive and feasible long-term strategy. For example, the Triple P Positive Parenting Program has been shown to be effective in enhancing family relationships and attenuating adolescents' psychological distress [42,43].
There are some limitations in this study. Firstly, we were unable to draw any causal conclusions because of the cross-sectional design, so all these findings need to be confirmed in future longitudinal studies. Secondly, this research was preliminary; although we considered the effects of some characteristics of the child and family, future research could consider other aspects such as illness-related characteristics (e.g., illness duration and illness severity). Thirdly, the investigation was based on self-reported behavioral problems. Data obtained from parents and teachers should be considered in future studies because data from multiple informants are more comprehensive.
Conclusions
To our knowledge, this is the first study focusing on adolescents with physically ill parents in China. Our findings showed that Chinese adolescents with physically ill parents had more behavioral problems than those with healthy parents. In families with physically ill parents, the girls with siblings may have more behavioral problems than the girls without siblings, and family cohesion was associated with behavioral problems. Hence, in families with physically ill parents, interventions to enhance family cohesion may be positive and effective in reducing Chinese adolescents' behavioral problems.
Racial differences in treatment and outcomes in multiple myeloma: a Multiple Myeloma Research Foundation analysis
Findings on racial differences in survival in multiple myeloma (MM) have been inconclusive. We assessed differences in outcomes between White and Black individuals among 639 newly diagnosed MM patients in the MM Research Foundation CoMMpass registry with baseline cytogenetic data. Survival curves were constructed using the Kaplan–Meier method. Hazard ratios and 95% confidence intervals were derived from Cox proportional hazard regression models. Age, gender, and stage were similar between Whites (n = 526) and Blacks (n = 113). Blacks had inferior overall survival (OS) compared with Whites and were less likely to receive triplet therapies or frontline autologous stem cell transplant (ASCT). The following factors were significantly associated with inferior OS in multivariate analysis: higher international staging system (ISS) score, ≥1 or ≥2 high-risk cytogenetic abnormalities (HRCA), high-risk gene expression profile (GEP), and lack of ASCT. Multivariate analysis in the Black subset found that only lack of ASCT was significantly associated with inferior OS. The receipt of both triplet induction and ASCT only partly abrogated the effect of race on survival. HRCA did not track with survival in Blacks, emphasizing the need for race-specific risk prognostication schema to guide optimal MM therapy.
Introduction
Multiple myeloma (MM) is part of a spectrum of monoclonal plasma cell disorders, with an age-adjusted incidence of 7.0/100,000 in the United States, comprising 1.8% of all new cancer diagnoses in 2020 1 . Unlike the well-recognized two-to-threefold higher incidence rate of MM among Black individuals compared with Whites 2-6 , findings on racial differences in mortality and treatment outcomes have been inconclusive. Population-based studies using the Surveillance, Epidemiology, and End Results (SEER) registry and studies using trial data have suggested either similar or superior relative survival for Blacks compared to Whites with MM 2,7-11 .
These findings are surprising in light of the fact that Blacks face barriers that may lead to inferior survival, including lower socioeconomic status and a lower likelihood of receiving contemporary MM agents or undergoing autologous stem cell transplant (ASCT) 9,12 . In a retrospective analysis of 15,717 patients with MM in the Veterans Affairs (VA) health care system with equal access to care between 2000 and 2017, Fillmore et al. 13 found that Blacks had better overall survival (OS) compared with Whites, even after adjusting for age, sex, rurality, income, stage, transplantation, and induction therapies. A similar superior survival in Black individuals with MM after ASCT was also reported by Sweiss et al. 14 .
In contrast, several studies have shown similar OS between Blacks and Whites, though this is despite later access to novel therapies or ASCT 9,15-18 .
One potential explanation for the racial differences in outcomes may lie in the distribution and impact of cytogenetic or molecular mutations that have prognostic significance. One multi-institutional study reported that the cytogenetic abnormalities t(11;14), t(4;14), monosomy 13, and monosomy 17 were less common in Blacks 19 . Analysis of the Multiple Myeloma Research Foundation (MMRF) CoMMpass data set found that Blacks had a higher frequency of BCL7A, BRWD3, and AUTS2 mutations, and a lower frequency of TP53 and IRF4 mutations compared with Whites 16 . Despite examining these differences, no study has holistically evaluated the racial differences in outcomes according to the complex interplay of prognostic indices, cytogenetics, and modern treatment approaches. To address this quandary, we investigated outcomes between Blacks and Whites in a cohort of 639 MM patients receiving modern treatment approaches.
Study population
We obtained the data on newly diagnosed MM patients from the Multiple Myeloma Research Foundation (MMRF) CoMMpass registry (NCT01454297, version IA13). The CoMMpass study was initiated in 2011 as a large-scale prospective observational study in MM that has collected tissue samples, genetic information, quality of life, and clinical outcomes from over 1,100 patients with newly diagnosed MM at 90 different sites worldwide. Each patient is followed every 6 months for a total of 8 years. Bone marrow samples were collected at enrollment, during response to therapy, and at relapse.
From an initial 1,154 patients with accessible data in the CoMMpass registry, 515 were excluded due to incomplete cytogenetic data (n = 274), missing demographic data (n = 172), or self-identified race other than Black or White (n = 69). This resulted in a total of 639 evaluable patients that made up the study population. Fifty patients reported being of Hispanic/Latino ethnicity, all of whom reported to be of White race and all of whom had >60% European ancestry according to the calculated ancestries by Manojlovic et al. 16 . These patients were included in the current report given that their exclusion did not materially change point estimates and overall findings.
Cytogenetics and treatment data
The CoMMpass registry inferred cytogenetic changes from the next-generation sequencing (NGS) data: a deletion required that ≥21% of cells have at least a one-copy deletion, a gain required that ≥23% of cells have a one-copy gain, and translocations required at least 30% of cells having the event. Abstracted data included pre-treatment demographics, International Staging System (ISS) stage, baseline MM parameters, cytogenetics, induction regimen, autologous stem cell transplant (ASCT) and maintenance therapy use, progression-free survival (PFS), and OS. Race was determined based on self-reported race. High-risk cytogenetic abnormalities (HRCA) were defined according to the International Myeloma Working Group classification as any of the following: deletion 17p/TP53, 1q gain or amplification, t(4;14), t(14;16), and t(14;20) 20 . High risk by UAMS70 gene expression profiling from the CoMMpass data set was determined using an independent cutoff in a manner similar to what was previously done 21 .
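These calling thresholds translate directly into a small classifier. The sketch below is illustrative: the event records are hypothetical stand-ins for the registry's NGS output, and the label set mirrors the high-risk list quoted above.

```python
HIGH_RISK = {"del(17p)", "gain(1q)", "amp(1q)", "t(4;14)", "t(14;16)", "t(14;20)"}

def event_called(kind, fraction_of_cells):
    """Apply the CoMMpass clonal-fraction thresholds described above:
    deletions at 21% of cells, one-copy gains at 23%, translocations at 30%."""
    threshold = {"deletion": 0.21, "gain": 0.23, "translocation": 0.30}[kind]
    return fraction_of_cells >= threshold

def count_hrca(events):
    """events: iterable of (label, kind, fraction_of_cells) tuples.
    Returns the number of called high-risk cytogenetic abnormalities."""
    return sum(1 for label, kind, frac in events
               if label in HIGH_RISK and event_called(kind, frac))

sample = [("del(17p)", "deletion", 0.35),
          ("t(4;14)", "translocation", 0.12),   # below 30%: not called
          ("gain(1q)", "gain", 0.40)]
print(count_hrca(sample))  # 2, i.e., "double-hit" in the later terminology
```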
Statistical analysis
Chi-square or Fisher's exact tests were used for comparisons of categorical variables, and the t test for continuous variables. We defined PFS as the time from diagnosis until progression or death. OS was defined as the time from diagnosis until death from any cause. Survival curves were constructed using the Kaplan-Meier method and compared with the log-rank test. Cox proportional hazard models were computed to estimate hazard ratios (HR) and 95% confidence intervals (CI) for associations between pre-treatment variables and outcomes. Age was evaluated as both a continuous and a categorical variable for age-adjusted Cox analysis, and the two methods generated similar findings; therefore, age was treated as a categorical variable for the multivariate analysis. Multivariate analysis was performed using all variables that were significantly associated (P < 0.05) with PFS and OS by univariate analysis within each group, in addition to HRCA and high risk by UAMS70, given clinical interest in and the biologic plausibility of these variables. Data analysis was carried out in Stata V15.0 (StataCorp).
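A minimal sketch of this survival workflow (Kaplan-Meier curves by race, a log-rank test, and an age-adjusted Cox model) using the Python lifelines package is shown below. The analysis in the paper was run in Stata; the file and column names here are assumptions about a registry export, not the actual CoMMpass schema.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("commpass_cohort.csv")  # hypothetical registry export

# Kaplan-Meier OS curves by self-reported race, compared by log-rank test.
km = KaplanMeierFitter()
for race, grp in df.groupby("race"):
    km.fit(grp["os_months"], grp["death"], label=race)
    km.plot_survival_function()

black, white = df[df["race"] == "Black"], df[df["race"] == "White"]
result = logrank_test(black["os_months"], white["os_months"],
                      black["death"], white["death"])
print(result.p_value)

# Age-adjusted Cox model for OS; age enters as the categorical >=65 flag,
# mirroring the paper's choice to treat age categorically.
df["age_ge_65"] = (df["age"] >= 65).astype(int)
df["black_race"] = (df["race"] == "Black").astype(int)
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "age_ge_65", "black_race"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```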
Results
A total of 639 MM patients (113 Blacks and 526 Whites) were identified in the MMRF CoMMpass registry with complete baseline cytogenetic data available. Median age was 65 years for Whites and 63 years for Blacks (P = 0.2); 319/526 (61%) Whites and 69/113 (61%) Blacks were male (P = 0.9). There was a similar distribution of HRCA and of the number of HRCA between Blacks and Whites (Table 1). There were also no between-race differences in ECOG performance status, ISS/Revised-ISS stage, or bone marrow monotypic plasmacytosis percentage.
In both Blacks and Whites, age ≥65 was associated with both inferior PFS and OS (Supplementary Table 1). Age-adjusted univariate analysis in the whole cohort showed that both inferior PFS and OS were associated with male gender, Black race, ECOG PS ≥ 2, increasing ISS stage, eGFR ≤60, presence of ≥1 HRCA (OS only), presence of ≥2 HRCA, high risk by UAMS70, no triplet induction, and no ASCT (Supplementary Table 1). As shown in Fig. 1, OS was shorter for Blacks compared with Whites (age-adjusted hazard ratio (HR) 1.7, 95% confidence interval (CI) 1.2-2.4, P = 0.003). However, the difference in OS was attenuated in patients receiving triplet therapy and autologous transplant (Fig. 2). Multivariate analysis showed that increasing ISS stage, increasing number of HRCA, high risk by UAMS70, and no ASCT remained significantly associated with worse OS and PFS in Whites; male gender was also associated with inferior OS in Whites (Table 2). However, in Blacks, only the lack of frontline ASCT was associated with worse PFS and OS.
Given the persistent effect of triplet induction therapy and ASCT (triplet + ASCT) on OS, we performed univariate and multivariate analysis on this subgroup of patients. The effect of Black race on OS appears to have been only partly mitigated by the receipt of triplet + ASCT (age-adjusted HR 2.3, 95% CI 0.9-5.8, P = 0.08) (Supplementary Table 1). When controlling for age (categorical), gender, ECOG PS, ISS stage, eGFR, and receipt of triplet + ASCT, Cox modeling again showed that the presence of 1 or 2+ HRCA had an impact on OS for Whites, but not for Blacks (Table 3).
Discussion
In this large longitudinal cohort of newly diagnosed MM patients receiving modern treatment approaches, we show that Blacks had inferior OS compared with Whites and that this risk was only partly abrogated by receipt of triplet therapy and ASCT. Our findings of worse OS in Blacks than Whites are not consistent with previous studies, many of which were conducted in eras where state-of-the-art therapy approaches such as proteasome inhibitors (PIs) and immunomodulatory imide drugs (IMiDs) were nonexistent or underutilized. The largest study to date, a VA study conducted by Fillmore et al. 13 , showed superior OS for Blacks compared with Whites, with 1400 patients having received a PI and IMiD as frontline therapy, but this is not directly comparable to our study because: (1) the percentage of patients who received novel induction regimens was much lower in the VA study, (2) 98% of patients in the VA study were males, which we show to be an adverse prognostic factor, and (3) there was a lack of clinical annotation with cytogenetic data. In addition, the VA study found that the OS benefit for Black race was limited to those <65 years old at MM diagnosis (no racial difference in OS for those ≥65 years old). Indeed, nearly all population-based studies or those using administrative data (e.g., SEER-Medicare-linked data) lack prognostic information such as disease severity or cytogenetic risk stratification that could have contributed to treatment outcomes. It is also important to note that the patients included in our analysis have had access to improved therapeutic modalities for later lines of treatment (including the monoclonal antibodies elotuzumab and daratumumab) compared with older cohorts.
In the current report, we found that the frequency of the number and type of HRCA were similar between races, which has also been confirmed in an analysis of the Cancer Outcomes Tracking and Analysis (COTA) real-world database 22 . In contrast, a prior study found that Blacks were less likely to harbor t(11;14), t(4;14), monosomy 13, and monosomy 17 determined by fluorescent in situ hybridization (FISH) 19 . However, that analysis was limited by the restriction to only four cytogenetic abnormalities, the heterogeneity of FISH probes, and the lack of uniform CD138+ selection for FISH analysis, which likely led to false negatives and underreported cytogenetic abnormalities. This current report circumvented these issues, as cytogenetic abnormalities were inferred from NGS.
Increasing numbers of HRCA have been associated with inferior outcomes 23 , giving rise to terminology such as "single-hit" to describe the presence of one HRCA and "double-hit" when two HRCAs are present. While the presence of HRCA (single-hit or double-hit) in our study had a significant impact on survival in White patients, this was not the case for Black patients, even after accounting for access to optimal frontline therapy. This discrepancy is likely not accounted for by superior responses among Black patients with HRCA, as the ≥VGPR rate was 22% for Black patients compared with 49% for White patients with HRCA. Alternatively, this may be due partly to our finding of differences in receiving ASCT or triplet therapy across HRCA groups between Blacks and Whites. We found that Blacks with 0 or 1 HRCA were less likely to receive ASCT or ASCT + triplet, whereas Blacks with 2+ HRCA were more likely to receive ASCT and ASCT + triplet, compared with Whites. This disparity may be attributed to implicit bias among physicians against ASCT in Blacks and requires further investigation. This may also reflect the fact that prior studies of cytogenetics in MM have used pooled data from clinical trials, which comprise patients predominantly of Caucasian backgrounds 24 . This study also suggests that a high-risk gene expression profile by UAMS70 may be associated with PFS and OS in Blacks, though the confidence interval was wide. Overall, our findings show that whereas conventional HRCA have been used to determine the intensity of frontline therapy, this needs to be separately considered and tailored for Black patients. Gene expression profiling may also be an important prognostic tool for Black patients, but this requires further validation in a larger cohort.
We found that many baseline MM characteristics were similar between Blacks and Whites, including the presence of renal dysfunction. Renal dysfunction at diagnosis of MM may be associated with lower relative OS, in particular when renal recovery does not occur with treatment, but prior data also suggest that Blacks may experience greater renal recovery than Whites 25-27 . Our findings of no difference between Black and White individuals on several clinical features are not entirely consistent with previous reports 2,11 . Given that patients included in this analysis were part of a prospective data collection research effort, it is possible that the characteristics and treatments received by these patients are more representative of the centers that participated in the CoMMpass study than of the entire MM patient population at large. Importantly, the proportion of Black patients in this study (18%) reflects the proportion of newly diagnosed Black MM patients (18-24%) in the United States 9,28 .
Though the use of frontline triplet induction therapy with a PI and IMiD was higher in Whites than Blacks (46% vs. 35%, P = 0.05), these rates are much higher than previously reported, such as in a study of VA patients (12.7% Whites vs. 8.8% Blacks, P < 0.001) 13 . Similarly, the rate of frontline transplant utilization was higher in Whites than Blacks (49% vs. 39%, P = 0.05), but this also exceeds previously reported data (as low as 9.7% in Whites and 9.3% in Blacks, and as high as 37.8% in Whites and 20.5% in Blacks) 9,15 . This suggests that our study population represents a modern real-world one that is enriched for patients who received standard-of-care frontline therapy, including triplet induction and ASCT.
This study has several strengths, including the use of MMRF data that were prospectively collected with highly annotated clinical indices to allow for an in-depth analysis of clinical outcomes. Moreover, a substantial number of Black patients were included in the study. The study's limitations include the lack of cytogenetic information for all participants in the whole MMRF registry, and the fact that cytogenetic abnormalities were inferred from NGS in the CoMMpass database. However, this method was standardized across all patients, and prior studies have shown that using NGS in this manner achieves accuracy comparable to FISH 29,30 .
We have shown that Blacks had inferior OS compared with Whites, and this effect was not completely abrogated by controlling for access to standard-of-care regimens such as triplet induction and ASCT. That these surrogates of socioeconomic status do not explain the differences in OS suggests there may be a yet undescribed interplay of socioeconomic or biologic underpinnings to racial disparities in MM. Attributing racial differences to biology must be approached with care, as socioeconomic differences can be mistaken for biologic ones 31 . Deep response rates were lower among Black patients, regardless of HRCA status; however, HRCA did not track with survival outcomes in Blacks, underscoring that the lack of a race-specific risk prognostication schema for Blacks may be a key limitation toward achieving equal access to tailored therapy. Further investigation of racial differences in gene expression, including changes at the epigenetic level, serves as a promising lead to identify potential reasons for these disparities. This serves once again as a clarion call to narrow the barriers toward ensuring Black patients have access to and are offered optimal MM therapy. (Table note: "-" indicates variables that were not included in the multivariate analysis; P values were computed using Cox proportional hazard models. ASCT, autologous stem cell transplant; HRCA, high-risk cytogenetic abnormality; triplet, combination therapy involving three drugs including corticosteroids, a proteasome inhibitor, and either an alkylator or immunomodulatory imide drug.)